Monday, July 12, 2010

Requirements management for Task Coach

As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The first process area I am looking at is Requirements Management (REQM). REQM is an Engineering process area at level two of the CMMI for Development. Because REQM is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at REQM from a development perspective.

According to the CMMI, "The purpose of Requirements Management (REQM) is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products." (emphasis mine). The purpose has four interesting concepts: (a) a project, (b) the project's products and product components, (c) requirements of the project's products and product components that need to be managed and (d) the project's plans and work products that need to be kept consistent with the requirements.

(a) We need to decide what the project is. Since we're using Task Coach as a guinea pig, the project must be the Task Coach project. The Task Coach project is staffed by two developers (Jérôme and myself) and supported by dozens of translators, beta testers, bug reporters, etc.

(b) The Task Coach project has two main products: the desktop version of Task Coach and the iPhone/iPod/iPad version of Task Coach. Because Jérôme is the only one working on the iPhone/iPod/iPad version at the moment, I'll limit the scope of this investigation to the desktop version of Task Coach, currently at release 1.0.7.

(c) Now on to the requirements. CMMI defines requirements as "(1) A condition or capability needed by a user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a product or product component to satisfy a contract, standard, specification, or other formally imposed documents. (3) A documented representation of a condition or capability as in (1) or (2)." Obviously, (2) doesn't apply to Task Coach since there are no formally imposed documents. (1) does apply. We collect requirements on UserVoice, a website that allows users to request features and vote for existing feature requests.

(d) The final concept in the purpose of REQM is the project's plans and work products that need to be kept consistent with the requirements. Since this is an open source project, there is not much planning going on. Progress is mostly determined by the amount of time the developers have available and are willing to spend on the project. Work products are defined as "In the CMMI Product Suite, a useful result of a process. This can include files, documents, products, parts of a product, services, process descriptions, specifications, and invoices. A key distinction between a work product and a product component is that a work product is not necessarily part of the product." So, work products for Task Coach are obviously product components like source code and tests, but also include things like the Task Coach website and announcement emails.

Having determined how to interpret the main concepts from REQM in the light of Task Coach, we move on to the goals and practices of REQM. REQM has one specific goal and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.

The specific goal (SG1) of REQM is quite similar to the purpose and reads "Requirements are managed and inconsistencies with project plans and work products are identified." CMMI expects organisations to perform five practices to achieve this goal.

The first specific practice (SP1.1) is "Develop an understanding with the requirements providers on the meaning of the requirements." As mentioned before, Task Coach uses UserVoice to collect feature requests. UserVoice also allows for discussing feature requests. In the subpractices of SP1.1, CMMI suggests establishing criteria for determining appropriate requirements providers and criteria for evaluating and accepting requirements. The first we don't need at this time; everybody is allowed to submit feature requests. The second might be good to have so that people submit feature requests that have a higher chance of getting implemented. However, since the subpractices are informative material we don't have to implement them. My conclusion is that the Task Coach project has implemented this practice.

The second practice (SP1.2) of REQM is "Obtain commitment to the requirements from the project participants." Since there are only two developers, volunteering to work on a feature request is the main form of commitment. By starting to work on a specific requirement, a developer shows commitment to that requirement. Practice implemented, I conclude.

The third practice (SP1.3) is "Manage changes to the requirements as they evolve during the project." New and changed requirements are tracked on UserVoice by means of feature requests. The status of requirements (started, completed, declined) is tracked there as well. Released features are listed in the change history of the product, with links to the originating requests on UserVoice. Again, practice implemented, I think.

The fourth practice (SP1.4) of REQM is "Maintain bidirectional traceability among the requirements and work products." The idea of this practice is that by maintaining bidirectional traceability, the project is able to check that all requirements have been properly addressed in work products and that all work products can be traced to a valid source requirement. This is particularly important when doing impact analyses of changing requirements. Task Coach doesn't maintain requirements traceability, other than links from the change history to the feature requests on UserVoice. But that limited traceability doesn't help with impact analysis. So this practice is not implemented.
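For illustration, here is a minimal sketch of what lightweight bidirectional traceability could look like in Python (the language Task Coach is written in). The `covers` decorator, the function names, and the request IDs are all invented for this example; this is not how Task Coach actually works.

```python
def covers(*request_ids):
    """Decorator that records which feature requests a test covers."""
    def decorate(test_func):
        test_func.covered_requests = request_ids
        return test_func
    return decorate

# Hypothetical tests, each tagged with the UserVoice request(s) it covers:
@covers("uservoice-123")
def test_recurring_tasks():
    pass  # actual test code would go here

@covers("uservoice-123", "uservoice-456")
def test_task_priorities():
    pass

def traceability_matrix(tests):
    """Map each request ID to the tests that cover it (requirement -> work products)."""
    matrix = {}
    for test in tests:
        for request_id in getattr(test, "covered_requests", ()):
            matrix.setdefault(request_id, []).append(test.__name__)
    return matrix

matrix = traceability_matrix([test_recurring_tasks, test_task_priorities])
# A request missing from the matrix has no covering test (forward gap);
# a test without a covers tag has no traced requirement (backward gap).
```

With such a matrix, an impact analysis of a changing requirement reduces to looking up which tests (and, by extension, which code) would be affected.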

The fifth and last specific practice (SP1.5) is "Identify inconsistencies between the project plans and work products and the requirements." Task Coach doesn't have much of a project plan, so there can't be many inconsistencies there. Inconsistencies between requirements and work products are identified via the feature request submission "process". When people submit a feature request, we are notified by email and check whether the requested feature has already been implemented. We also check for duplicate requests. When a feature is completed, the corresponding feature request is marked as completed. Is this enough to identify inconsistencies between work products and requirements? I think for us it is.
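The consistency check described above boils down to comparing two sets: the requests marked completed on the tracker and the requests referenced in the change history. A sketch, with invented request IDs:

```python
# Requests marked completed on the tracker (hypothetical data):
completed_requests = {"uservoice-123", "uservoice-456"}
# Requests referenced in the product's change history (hypothetical data):
changelog_requests = {"uservoice-123", "uservoice-789"}

# Completed on the tracker but never announced in the change history:
missing_from_changelog = completed_requests - changelog_requests
# Announced in the change history but never marked completed on the tracker:
missing_from_tracker = changelog_requests - completed_requests
```

Both difference sets being empty would mean the change history and the tracker agree; any leftover entry points at an inconsistency to resolve.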

To summarize, SP1.1, SP1.2 and SP1.3 have been implemented by the Task Coach project. SP1.4 has not. SP1.5 has been implemented partly, for the part that is relevant. Now, does the Task Coach project achieve the specific goal of REQM: "Requirements are managed and inconsistencies with project plans and work products are identified."? My judgement is that yes, we do. But, IANAL (I Am Not A Lead Appraiser) so comments are welcome.


Unknown said...

On gathering requirements (c): you state that you do that using User Voice. Does that contain all requirements? Or just those that your users give you - leaving out those requirements that you as developers come up with?

SP1.1 I agree with your view, this seems fully implemented. One small side note on the appropriate requirements providers. You state that everybody can submit feature requests. That is true, but are these requests real REQUIREMENTS? I think these requests are just WISHES, and you, as developers, decide which ones you will implement. So the real requirements providers are you as developers.

SP1.2 Fully agree. Life is really simple in this setup.

SP1.3 I agree again. Again a small side note. The concept of "change" can only exist when there is a stable basis. So changing requirements is always with respect to an agreed baseline. In your case, the requirements baseline is everything that is already implemented in the product. All new wishes in User Voice are changes with respect to this baseline.

SP1.4 You might be right here, but maybe you are too negative. Let's try.
I know that you do maintain lots of automated tests (mostly unit tests). Do these refer to requirements? To a User Voice number, maybe somewhere in the comments?
I guess that before you release a new version, you will do some tests to ensure that you deliver everything you promise in the release notes. Is there no traceability between these tests and the release notes and the requirements?

SP1.5 To me, the intent of this practice is:
1. To ensure that all stated requirements are being implemented during the project (so nothing is forgotten)
2. To ensure that nothing more is implemented, to prevent gold plating.
In your case #1 seems to be OK. Using the tracking mechanisms you describe you ensure that you implement everything that you want to implement.
#2 does not seem to be relevant. The traditional problem with gold plating - cost overruns - is not relevant to you.

Well, that's it for now. After I see your response to my remarks, I will do a goal rating. Or better, a goal estimate. A formal goal rating is only allowed in a SCAMPI A appraisal, and that is not what I am doing at the moment.

Frank said...

On gathering requirements (c): you're correct in that UserVoice contains only the user supplied requirements. When the developers need something they just add it :-) These changes can be recognized as new features in the change history without a link to a feature request.

SP1.1: The way I see it is that because people can vote on feature requests, these feature requests gradually change into requirements as they receive more votes.

SP1.3: You're right about the requirements baseline being the requirements implemented in the latest version.

SP1.4: We do have lots of automated tests, but they are not explicitly linked to requirements or feature requests.

Unknown said...

Thanks for the additional explanations Frank.
That helps me to finalise the rating:

SP1.1 - Fully Implemented
SP1.2 - Fully Implemented
SP1.3 - Fully Implemented
SP1.4 - Not Implemented
SP1.5 - Fully Implemented

SG1 - Not satisfied

Now the question remains: should you be doing something about this? There is no customer requirement for a formal CMMI level, so fortunately there is no pressure in that direction. We can use CMMI here in a way that helps the project.
Would your project benefit from better traceability?
I doubt it. If you interpret the intention of traceability as supporting the completeness of your implementation, i.e. that all requirements have been properly implemented, then I think this is not really an issue for you.
But maybe you have other views?