Monday, September 06, 2010
Measurement and Analysis for Task Coach
As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The fifth process area I am looking at is Measurement and Analysis (MA). MA is a support process area at level two of the CMMI for Development. Because MA is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at MA from a development perspective.
According to the CMMI, "The purpose of Measurement and Analysis (MA) is to develop and sustain a measurement capability that is used to support management information needs." The first question is who "management" is in an open source project. CMMI doesn't define management. It does define "manager" however, as "... a person who provides technical and administrative direction and control to those performing tasks or activities within the manager’s area of responsibility." I guess management in the case of the Task Coach project equals the developers. So the purpose of MA in our case is to develop and sustain a measurement capability that is used to support the information needs of the developers. Well, let's see.
MA has two specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The first specific goal of MA states "Measurement objectives and activities are aligned with identified information needs and objectives." To achieve this goal, CMMI expects us to perform four specific practices.
The first specific practice (SP1.1) of MA is to "Establish and maintain measurement objectives that are derived from identified information needs and objectives." We as a project don't have any formally identified and documented information needs and objectives. I guess one implicit objective is to have as many happy Task Coach users as possible, but we have no measurement objectives derived from this objective that are "established and maintained".
The second specific practice (SP1.2) of MA reads "Specify measures to address the measurement objectives." We have no specified measures.
The third specific practice (SP1.3) states "Specify how measurement data will be obtained and stored." Again, we have no specifications on how to obtain and store measurement data.
The fourth specific practice (SP1.4) of MA expects us to "Specify how measurement data will be analyzed and reported." It gets boring, but we don't have specifications on how to analyze and report measurement data.
The conclusion is clear: since we don't do any of the practices, the first goal of MA is not satisfied.
The second goal of MA states "Measurement results, which address identified information needs and objectives, are provided." To achieve this goal, CMMI expects us to perform four specific practices.
The first specific practice (SP2.1) of the second goal of MA is to "Obtain specified measurement data." Measurement data we obtain include the number of downloads, build status, test coverage, source code volume, and translation completeness. However, which measures we collect is partly determined by what different tools and websites collect for us automatically. I think the intent of this practice is to obtain measurement data that fulfills the specified information needs as discussed in SG1 of MA. Even if we had made our information needs explicit, I think the measurement data we collect would still be in line with those needs, so I'd judge this practice as largely implemented.
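For illustration, here is a minimal sketch of how one of these measures, source code volume, could be collected; the package name is an assumption for the example, not part of our actual tooling:

import os

def count_python_loc(root):
    """Count non-blank, non-comment lines in all Python files under root."""
    total = 0
    for dirpath, _subdirs, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            with open(os.path.join(dirpath, name)) as source_file:
                for line in source_file:
                    stripped = line.strip()
                    if stripped and not stripped.startswith("#"):
                        total += 1
    return total

print(count_python_loc("taskcoachlib"))  # package name assumed for the example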
The second specific practice (SP2.2) expects us to "Analyze and interpret measurement data." We do analyze some of the data, but mostly on an ad hoc basis. For example, we changed the coverage measurement when analysis showed that third party software included in the Task Coach source code repository was included in the coverage measurement. One analysis is always done, and that is when the build bot fails at building one or more of the distributions. However, other measurements like the number of downloads and source code volume are not analyzed at all. Conclusion: this practice is only marginally implemented.
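For what it's worth, with today's coverage.py, excluding bundled third party code from the measurement could look like this sketch; the omit path is illustrative, not necessarily our actual layout:

import coverage

# Exclude bundled third party code from the coverage measurement;
# the omit path below is illustrative.
cov = coverage.Coverage(omit=["taskcoachlib/thirdparty/*"])
cov.start()
# ... run the test suite here ...
cov.stop()
cov.report()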
The third specific practice of SG2 of MA says that we should "Manage and store measurement data, measurement specifications, and analysis results." Due to the open nature of the project and the tools used, all measurement data is stored and available. However, measurement specifications and analysis results are not explicitly stored, except maybe in emails between the developers. Practice partly implemented.
The fourth and last specific practice of SG2 reads "Report results of measurement and analysis activities to all relevant stakeholders." Some of the measurements are reported (emailed) automatically, such as build failures. Other measurements are available via the web on demand. Since we report (actively or passively) all measurement activities that we do, I guess that makes this practice largely implemented. By the way, it is interesting to note that the intent of this practice is "to support decision making and assist in taking corrective action", but that this expectation is not part of the mandatory or expected content of the model.
Conclusion, the second goal is only partly achieved since most of the practices are only partly implemented.
Sunday, August 15, 2010
Project Monitoring and Control for Task Coach
As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The fourth process area I am looking at is Project Monitoring and Control (PMC). PMC is a project management process area at level two of the CMMI for Development. Because PMC is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at PMC from a development perspective.
According to the CMMI, "The purpose of Project Monitoring and Control (PMC) is to provide an understanding of the project’s progress so that appropriate corrective actions can be taken when the project’s performance deviates significantly from the plan." When we were investigating Project Planning (PP), we saw that the Task Coach project doesn't really do much project planning and that it doesn't have an established project plan. That probably means the PMC goals and practices won't be implemented either, but let's investigate that more closely before we draw conclusions.
PMC has two specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The first specific goal (SG1) of PMC states "Actual performance and progress of the project are monitored against the project plan." To achieve this first goal, CMMI expects us to implement seven specific practices.
The first specific practice (SP1.1) of PMC reads "Monitor the actual values of the project planning parameters against the project plan." Since there is no project plan and there are no project planning parameters, this practice cannot be, and is not, implemented.
The second specific practice (SP1.2) of PMC expects us to "Monitor commitments against those identified in the project plan." Since we don't have a project plan, don't have documented commitments and we don't monitor (implicit) commitments, this practice is not implemented.
The third specific practice (SP1.3) of PMC says "Monitor risks against those identified in the project plan." Again, we don't have a project plan and we don't explicitly identify risks, so we cannot and do not monitor risks.
The fourth specific practice (SP1.4) of PMC wants us to "Monitor the management of project data against the project plan." Almost all project data is in some online repository (Subversion repository, bug tracker, feature request tracker, etc.) and these are monitored by means of automated emails when something gets changed. However, CMMI expects us to monitor the management of project data. I'm not sure monitoring the data itself is the same as monitoring the management of the data. Probably not, but then I don't know how to interpret this practice in the context of this project.
The fifth specific practice (SP1.5) of SG1 of PMC reads "Monitor stakeholder involvement against the project plan." As explained in the discussion of Project Planning, we do involve stakeholders (users mostly) but this is not monitored against a plan.
The sixth specific practice (SP1.6) of SG1 of PMC expects us to "Periodically review the project's progress, performance, and issues." This is something that is not done on a periodic basis, but could be a nice additional practice for the project.
The last specific practice (SP1.7) of SG1 of PMC is "Review the accomplishments and results of the project at selected project milestones." Like SP1.6, this is not done, but would be good to do, probably in the form of post-release retrospectives.
Well, the conclusion is simple. The Task Coach project does not achieve the first specific goal of PMC.
The second specific goal (SG2) of PMC reads "Corrective actions are managed to closure when the project's performance or results deviate significantly from the plan." Again, without a plan, this goal is hard to achieve. But let's look briefly at the practices anyway.
The first specific practice (SP2.1) of SG2 expects us to "Collect and analyze the issues and determine the corrective actions necessary to address the issues." We don't collect issues other than bugs. That doesn't mean there are no issues besides bugs of course, but these don't get collected and analyzed in a repeatable manner. It could be a good idea to keep track of other issues somewhere. Sourceforge allows for creating additional "trackers", so it wouldn't be hard to create a separate "issue tracker".
The second specific practice (SP2.2) of SG2 wants us to "Take corrective action on identified issues." Since we don't explicitly identify issues we also don't explicitly take corrective actions. At least not in a structured manner.
The third specific practice (SP2.3) of SG2 then expects us to "Manage corrective actions to closure." Again, we don't do this. Having an issue tracker would greatly help us do this.
Again, the conclusion is simple. This second goal of PMC hasn't been achieved either.
Friday, August 13, 2010
Tricks for debugging a wxPython GUI
The other day, it took me quite some time to debug an issue that would have taken much less time had I used these two tools sooner:
- The wxPython Widget Inspection Tool that shows how GUI elements are related, and,
- The traceback module that makes it easy to see who is calling a method by putting this line in a method: import traceback; traceback.print_stack().
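For reference, a minimal sketch showing both tricks in one toy wxPython application; the frame and button are made up for the example:

import traceback
import wx
import wx.lib.inspection

class MainFrame(wx.Frame):
    """Toy frame demonstrating both debugging tricks."""
    def __init__(self):
        super(MainFrame, self).__init__(None, title="Debug tricks demo")
        button = wx.Button(self, label="Who calls me?")
        button.Bind(wx.EVT_BUTTON, self.on_button)

    def on_button(self, event):
        # Trick 2: print the current call stack to see who is calling
        # this method.
        traceback.print_stack()

if __name__ == "__main__":
    app = wx.App(False)
    MainFrame().Show()
    # Trick 1: open the Widget Inspection Tool to browse the widget
    # hierarchy of the running application.
    wx.lib.inspection.InspectionTool().Show()
    app.MainLoop()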
Wednesday, July 14, 2010
Project Planning for Task Coach
As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The third process area I am looking at is Project Planning (PP). PP is a Project Management process area at level two of the CMMI for Development. Because PP is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at PP from a development perspective.
According to the CMMI, "The purpose of Project Planning (PP) is to establish and maintain plans that define project activities." Unfortunately, CMMI doesn't define the word plan. However, CMMI defines project plan as "A plan that provides the basis for performing and controlling the project’s activities, which addresses the commitments to the project’s customer." We'll assume that the plans mentioned in the purpose statement of PP are meant to be project plans.
PP has three specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The first specific goal (SG1) of PP reads "Estimates of project planning parameters are established and maintained." In the explanation of this goal, CMMI states that "Project planning parameters include all information needed by the project to perform the necessary planning, organizing, staffing, directing, coordinating, reporting, and budgeting." For a small open source project like Task Coach, the amount of necessary planning, organizing, staffing, etc., is probably much lower than is needed for commercial software development projects. To achieve this first PP goal, CMMI expects organisations to implement four specific practices. Let's see how these apply to the Task Coach project.
The first specific practice (SP1.1) of PP says "Establish a top-level work breakdown structure (WBS) to estimate the scope of the project." CMMI, in the informative part of the practice, explains that the WBS typically provides a scheme for organizing the work in logical units, called work packages. The Task Coach project does not have a WBS. I guess a very abstract version of a WBS would contain top-level work packages like: develop new features, fix bugs, release software, write documentation, and provide user support. However, we haven't documented this, so I guess this practice isn't implemented.
The second specific practice (SP1.2) of PP reads "Establish and maintain estimates of the attributes of the work products and tasks." The idea here is to estimate the size of work products and tasks, and then use those size estimates as a basis for estimating resource requirements (described in PP SP1.4). However, since this is a project with only volunteers (committed volunteers, but volunteers still), our resources are basically fixed. We also have no fixed deadlines, so there is no immediate need to make these estimates. Anyhow, this practice is not implemented.
The third specific practice (SP1.3) says "Define the project lifecycle phases on which to scope the planning effort." I guess the lifecycle phases of Task Coach are: developing new features, followed by bug fixing (we're not producing bug-free software, yet :-). These phases usually overlap, i.e. while bugs are fixed in the latest x.y version, resulting in release x.y.1, x.y.2, etc., work starts on x.y+1. This is not explicitly documented. Again, this practice is not implemented.
The fourth specific practice (SP1.4) of the first PP goal reads "Estimate the project effort and cost for the work products and tasks based on estimation rationale." We don't estimate effort and cost. Practice not implemented.
Since none of the four practices of SG1 of PP is implemented, the goal is not achieved.
The second specific goal (SG2) of PP requires that "A project plan is established and maintained as the basis for managing the project." There are seven specific practices that CMMI expects us to implement to reach this goal.
The first specific practice (SP2.1) of SG2 is "Establish and maintain the project’s budget and schedule." The Task Coach project has no budget, other than the time its developers and other volunteers put into it. We do have an implicit schedule of frequent releases. This is evidenced by the 117 releases in 5.5 years (assuming I counted correctly). However, I am not sure this counts as an established schedule. Practice partly implemented at most.
The second specific practice (SP2.2) is "Identify and analyze project risks." We don't do this.
The third practice (SP2.3) reads "Plan for the management of project data." Like most open source projects, we keep almost all project data in on-line repositories, see the discussion of Configuration Management. Practice implemented.
SP2.4 says "Plan for necessary resources to perform the project." Project resources include labor and equipment, materials and methods. Labor is a given. Other resources such as our Subversion repository, bug tracker, email lists, etc., all are present, but haven't really been planned explicitly. Practice not implemented.
In SP2.5 of PP, CMMI expects you to "Plan for knowledge and skills needed to perform the project." We work with the knowledge and skills that are available and do not explicitly plan to acquire knowledge and skills.
The sixth specific practice (SP2.6) of SG2 reads "Plan the involvement of identified stakeholders." We do involve stakeholders, users mostly, via the UserVoice website and the users mailinglist, but there is no explicit plan for this.
The last specific practice (SP2.7) of SG2 expects us to "Establish and maintain the overall project plan content." We don't have a documented project plan so this practice is not implemented.
Of the seven practices of SG2, we haven't implemented the majority, so this goal is clearly not achieved.
The third specific goal (SG3) of PP is "Commitments to the project plan are established and maintained." We have already established that the Task Coach project doesn't have a project plan, so it doesn't seem this goal can be satisfied. We look at the three specific practices for this goal anyway.
The first specific practice (SP3.1) for this goal reads "Review all plans that affect the project to understand project commitments." The idea here is that plans for other process areas (e.g. for Configuration Management, Requirements Management, etc.) may affect the project plan. Since we don't have a project plan, this practice cannot be, and is not, performed.
The second specific practice (SP3.2) states that we should "Reconcile the project plan to reflect available and estimated resources." This is the basis for how we work: adapting all activities to the amount of time the developers have, or make, available for the project. However, we do not capture the reconciliation in a project plan.
The third specific practice (SP3.3) of the third goal of PP reads "Obtain commitment from relevant stakeholders responsible for performing and supporting plan execution." We do have commitment from the two developers as shown by their involvement in the project for more than five years. Commitment of other stakeholders, like translators or documentation writers, is not secured. This is evidenced by incomplete translations and slow progress on writing a Task Coach user manual. Practice partly implemented.
The practices of the third specific PP goal are only partly implemented, so this goal is not achieved.
According to the CMMI, "The purpose of Project Planning (PP) is to establish and maintain plans that define project activities." Unfortunately, CMMI doesn't define the word plan. However, CMMI defines project plan as "A plan that provides the basis for performing and controlling the project’s activities, which addresses the commitments to the project’s customer." We'll assume that the plans mentioned in the purpose statement of PP are meant to be project plans.
PP has three specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The first specific goal (SG1) of PP reads "Estimates of project planning parameters are established and maintained." In the explanation of this goal, CMMI states that "Project planning parameters include all information needed by the project to perform the necessary planning, organizing, staffing, directing, coordinating, reporting, and budgeting." For a small open source project like Task Coach, the amount of necessary planning, organizing, staffing, etc., is probably much lower than is needed for commercial software development projects. To achieve this first PP goal, CMMI expects organisations to implement four specific practices. Let's see how these apply to the Task Coach project.
The first specific practice (SP1.1) of PP says "Establish a top-level work breakdown structure (WBS) to estimate the scope of the project." CMMI, in the informative part of the practice, explains that the WBS typically provides a scheme for organizing the work in logical units, called work packages. The Task Coach project does not have a WBS. I guess a very abstract version of a WBS would contain top-level work packages like: develop new features, fix bugs, release software, write documentation, and provide user support. However, we haven't documented this, so I guess this practice isn't implemented.
The second specific practice (SP1.2) of PP reads "Establish and maintain estimates of the attributes of the work products and tasks." The idea here is to estimate the size of work products and tasks, and then use those size estimates as a basis for estimating resource requirements (described in PP SP1.4). However, since this is a project with only volunteers (committed volunteers, but volunteers still), our resources are basically fixed. We also have no fixed deadlines, so there is no immediate need to make these estimates. Anyhow, this practice is not implemented.
The third specific practice (SP1.3) says "Define the project lifecycle phases on which to scope the planning effort." I guess the lifecycle phases of Task Coach are: developing new features, followed by bug fixing (we're not producing bug free software, yet :-) These phases usually overlap, i.e. while bugs are fixed in the latest x.y version, resulting in release x.y.1, x.y.2, etc., work starts on x.y+1. This is not explicitly documented. Again, this practice is not implemented.
The fourth specific practice (SP1.4) of the first PP goal reads "Estimate the project effort and cost for the work products and tasks based on estimation rationale." We don't estimate effort and cost. Practice not implemented.
Since all four practices of the SG1 of PP are not implemented, the goal is not achieved.
The second specific goal (SG2) of PP requires that "A project plan is established and maintained as the basis for managing the project." There are seven specific practices that CMMI expects us to implement to reach this goal.
The first specific practice (SP2.1) of SG2 is "Establish and maintain the project’s budget and schedule." The Task Coach project has no budget, other than the time its developers and other volunteers put into it. We do have an implicit schedule of frequent releases. This is evidenced by the 117 releases in 5,5 years (assuming I counted correctly). However, I am not sure this counts as an established schedule. Practice partly implemented at most.
The second specific practice (SP2.2) is "Identify and analyze project risks." We don't do this.
The third practice (SP2.3) reads "Plan for the management of project data." Like most open source projects, we keep almost all project data in on-line repositories, see the discussion of Configuration Management. Practice implemented.
SP2.4 says "Plan for necessary resources to perform the project." Project resources include labor and equipment, materials and methods. Labor is a given. Other resources such as our Subversion repository, bug tracker, email lists, etc., all are present, but haven't really been planned explicitly. Practice not implemented.
In SP2.5 of PP, CMMI expects you to "Plan for knowledge and skills needed to perform the project." We work with the knowledge and skills that are available and do not explicitly plan to acquire knowledge and skills.
The sixth specific practice (SP2.6) of SG2 reads "Plan the involvement of identified stakeholders." We do involve stakeholders, users mostly, via the UserVoice website and the users mailinglist, but there is no explicit plan for this.
The last specific practice (SP2.7) of SG2 expects us to "Establish and maintain the overall project plan content." We don't have a documented project plan so this practice is not implemented.
Of the seven practice of SG2, we haven't implemented the majority so this goal is clearly not achieved.
The third specific goal (SG3) of PP is "Commitments to the project plan are established and maintained." We have already established that the Task Coach project doesn't have a project plan, so it doesn't seem this goal can be satisfied. We look at the three specific practices for this goal anyway.
The first specific practice (SP3.1) for this goal reads "Review all plans that affect the project to understand project commitments." The idea here is that plans for other process areas (e.g. for Configuration Management, Requirements Management, etc.) may affect the project plan. Since we don't have a project plan, this practice cannot be, and is not, performed.
The second specific practice (SP3.2) states that we should "Reconcile the project plan to reflect available and estimated resources." This is the basis for how we work; adapting all activities to the amount of time the developers have/make available for the project. However, we do not capture the reconciliation in a project plan.
The third specific practice (SP3.3) of the third goal of PP reads "Obtain commitment from relevant stakeholders responsible for performing and supporting plan execution." We do have commitment from the two developers as shown by their involvement in the project for more than five years. Commitment of other stakeholders, like translators or documentation writers, is not secured. This is evidenced by incomplete translations and slow progress on writing a Task Coach user manual. Practice partly implemented.
The practices of the third specific PP goal are only partly implemented, so this goal is not achieved.
Tuesday, July 13, 2010
Configuration Management for Task Coach
As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The second process area I am looking at is Configuration Management (CM). CM is a Support process area at level two of the CMMI for Development. Because CM is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at CM from a development perspective.
According to the CMMI, "The purpose of Configuration Management (CM) is to establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits." Work products are defined as "In the CMMI Product Suite, a useful result of a process. This can include files, documents, products, parts of a product, services, process descriptions, specifications, and invoices. A key distinction between a work product and a product component is that a work product is not necessarily part of the product." So, work products for Task Coach are obviously product components like source code and tests, but also include things like the Task Coach website and announcement emails. The purpose of CM is to establish and maintain the integrity of these work products.
CM has three specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The first specific goal (SG1) of CM reads "Baselines of identified work products are established." CMMI defines a baseline as "A set of specifications or work products that has been formally reviewed and agreed on, which thereafter serves as the basis for further development, and which can be changed only through change control procedures." CMMI expects organisations to perform three practices to achieve this goal.
The first specific practice (SP1.1) is to "Identify the configuration items, components, and related work products that will be placed under configuration management." Almost all Task Coach work products are placed under some form of configuration management:
- Source code, including automated tests, third party sources, and release scripts, are put in the Sourceforge Subversion repository.
- The website sources are in the Subversion repository as well.
- Bug reports are kept in the Sourceforge bug tracker.
- Feature requests are tracked using the UserVoice website.
- Support requests are tracked with the Sourceforge support request tracker.
- Translations are both in the Subversion repository and Launchpad so that translators can edit them.
- Mailinglist discussions are archived.
- Old releases of Task Coach are archived.
The second specific practice (SP1.2) for SG1 is "Establish and maintain a configuration management and change management system for controlling work products." Subversion is the configuration management system for source code and the website. For bug reports and support requests we use the Sourceforge trackers. For feature requests we use UserVoice. For translations we use Launchpad. For mailinglists we use Yahoo Groups and its archives. Old releases of Task Coach are archived at Sourceforge. I think we've got this one covered as well.
The third specific practice (SP1.3) is "Create or release baselines for internal use and for delivery to the customer." As specified in our developer info, under the Subversion usage conventions heading, we create branches in Subversion for each feature (x.y) release and tag each bug fix (x.y.z) release. The change history details, for each release (i.e. each baseline), which features were added and which bugs were fixed. CMMI says that baselines have to be formally reviewed. That's not something we do. Instead, we make sure Task Coach is always ready for release: the head of a release branch is always ready for release, and the trunk is most of the time. The reason for this approach is that users can benefit from fixed bugs and new features as soon as possible. I guess that makes this practice largely implemented.
Since SP1.1 and SP1.2 are fully implemented and SP1.3 largely, I would say that the first specific goal of CM is achieved.
The second specific goal of CM is "Changes to the work products under configuration management are tracked and controlled." CMMI expects organisations to implement two practices to achieve this goal.
The first practice of the second goal (SP2.1) of CM says "Track change requests for the configuration items." Change requests include changed or new requirements as well as bug reports. As mentioned above, we track change requests in the UserVoice system and the Sourceforge bug tracker. For each change request we keep track of the status, closing it when a new release of Task Coach is available that includes the new feature or fix. Conclusion, this practice is implemented by the Task Coach project.
The second practice of the second goal (SP2.2) is "Control changes to the configuration items." CMMI explains that control means: "Control is maintained over the configuration of the work product baseline. This control includes tracking the configuration of each of the configuration items, approving a new configuration if necessary, and updating the baseline." We track changes to the Subversion repository by means of a commit-message mailinglist. Whenever a developer checks in changes to the repository, an email is sent to that mailinglist, notifying the other developer (and possibly other interested parties) of the changes. Only developers are allowed to make changes to the source code. Users are allowed to submit bug reports and feature requests, but the status is monitored and updated by the developers. The baseline is updated as part of the release process. Practice fully implemented, I'd say.
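How the commit emails are generated is a hosting detail, but for illustration, a Subversion post-commit hook that produces such notifications could look like the sketch below; the addresses and SMTP setup are made up and our actual setup may differ:

#!/usr/bin/env python
# Hypothetical post-commit hook: Subversion invokes it with the
# repository path and the new revision number as arguments.
import smtplib
import subprocess
import sys
from email.mime.text import MIMEText

def svnlook(subcommand, repository, revision):
    """Return the output of `svnlook <subcommand> <repository> -r <revision>`."""
    return subprocess.check_output(
        ["svnlook", subcommand, repository, "-r", revision]).decode("utf-8")

def main():
    repository, revision = sys.argv[1], sys.argv[2]
    author = svnlook("author", repository, revision).strip()
    log_message = svnlook("log", repository, revision)
    changed_paths = svnlook("changed", repository, revision)
    email = MIMEText("Author: %s\n\n%s\n%s" % (author, log_message, changed_paths))
    email["Subject"] = "Commit r%s" % revision
    email["From"] = "noreply@example.org"  # made-up address
    email["To"] = "commits@example.org"    # made-up mailinglist address
    server = smtplib.SMTP("localhost")
    server.sendmail(email["From"], [email["To"]], email.as_string())
    server.quit()

if __name__ == "__main__":
    main()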
Since both specific practices of SG2 are implemented, this must mean SG2 is achieved.
The third specific goal (SG3) of CM says that "Integrity of baselines is established and maintained." To achieve this goal, CMMI expects organisations to implement another two specific practices.
The first specific practice (SP3.1) of the third goal is "Establish and maintain records describing configuration items." CMMI suggests, in the subpractices of SP3.1, recording configuration management actions and ensuring that relevant stakeholders have access to and knowledge of the configuration status of the configuration items. We do this via Subversion and the commit-messages mailinglist mentioned above for the source code, and via the Sourceforge bug tracker, UserVoice feature requests, and the change history. CMMI also suggests specifying the latest version of the baselines. This is done quite prominently on the Task Coach website, by means of announcements via the Task Coach users mailinglist, and via Twitter. CMMI also suggests identifying the versions of configuration items that constitute a particular baseline. We do this by tagging each release in Subversion. Differences between baselines are described in the change history mentioned before. And finally, CMMI advises us to revise the status and history of configuration items as necessary. Again, Subversion supports this for source code. For other configuration items, such as bug reports and feature requests, the status is updated by the developers using the administrative user interface provided by the Sourceforge and UserVoice websites. Conclusion, practice fully implemented.
The second specific practice (SP3.2) of SG3 reads "Perform configuration audits to maintain integrity of the configuration baselines." CMMI defines configuration audit as "An audit conducted to verify that a configuration item, or a collection of configuration items that make up a baseline, conforms to a specified standard or requirement." This is something we do not do on a regular basis. We probably should, because we often run into bug reports that should be closed because the reported bug has been fixed, or feature requests that should be closed because the feature has been implemented. Often, this is caused by duplicate bug reports and feature requests. Anyhow, this practice is not implemented by the Task Coach project.
Since one practice of SG3 is fully implemented and one is not, SG3 is not satisfied.
According to the CMMI, "The purpose of Configuration Management (CM) is to establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits." Work products are defined as "In the CMMI Product Suite, a useful result of a process. This can include files, documents, products, parts of a product, services, process descriptions, specifications, and invoices. A key distinction between a work product and a product component is that a work product is not necessarily part of the product." So, work products for Task Coach are obviously product components like source code and tests, but also include things like the Task Coach website and announcement emails. The purpose of CM is to establish and maintain the integrity of these work products.
CM has three specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The first specific goal (SG1) of CM reads "Baselines of identified work products are established." CMMI defines a baseline as "A set of specifications or work products that has been formally reviewed and agreed on, which thereafter serves as the basis for further development, and which can be changed only through change control procedures." CMMI expects organisations to perform three practices to achieve this goal.
The first specific practice (SP1.1) is to "Identify the configuration items, components, and related work products that will be placed under configuration management." Almost all Task Coach work products are placed under some form of configuration management:
- Source code, including automated tests, third party sources, and release scripts, are put in the Sourceforge Subversion repository.
- The website sources are in the Subversion repository as well.
- Bug reports are kept in the Sourceforge bug tracker.
- Feature requests are tracked using the UserVoice website.
- Support requests are tracked with the Sourceforge support request tracker.
- Translations are both in the Subversion repository and Launchpad so that translators can edit them.
- Mailinglist discussions are archived.
- Old releases of Task Coach are archived.
The second specific practice (SP1.2) for SG1 is "Establish and maintain a configuration management and change management system for controlling work products." Subversion is the configuration management system for source code and the website. For bug reports and support requests we use the Sourceforge trackers. For feature requests we use UserVoice. For translations we use Launchpad. For mailinglists we use Yahoo Groups and its archives. Old releases of Task Coach are archived at Sourceforge. I think we've got this one covered as well.
The third specific practice (SP1.3) is "Create or release baselines for internal use and for delivery to the customer." As specified in our developer info, under the Subversion usage conventions heading, we create branches in Subversion for each feature (x.y) release and tag each bug fix (x.y.z) release. The change history details for each release (=baseline) which features are added and which bugs are fixed. CMMI says that baselines have to be formally reviewed. That's not something we do. Instead, we make sure Task Coach is always ready for release. The head of a release branch is always ready for release. The trunk most of the time. The reason for this approach is that users can benefit from fixed bugs and new features as soon as possible. I guess that makes this practice largely implemented.
Since SP1.1 and SP1.2 are fully implemented and SP1.3 largely, I would say that the first specific goal of CM is achieved.
The second specific goal of CM is "Changes to the work products under configuration management are
tracked and controlled." CMMI expects organisations to implement two practices to achieve this goal.
The first practice of the second goal (SP2.1) of CM says "Track change requests for the configuration items." Change requests include changed or new requirements as well as bug reports. As mentioned above, we track change requests in the UserVoice system and the Sourceforge bug tracker. For each change request we keep track of the status, closing it when a new release of Task Coach is available that includes the new feature or fix. Conclusion, this practice is implemented by the Task Coach project.
The second practice of the second goal (SP2.2) is "Control changes to the configuration items." CMMI explains that control means: "Control is maintained over the configuration of the work product baseline. This control includes tracking the configuration of each of the configuration items, approving a new configuration if necessary, and updating the baseline." We track changes to the Subversion repository by means of a commit-message mailinglist. Whenever a developer checks in changes to the repository, an email message is mailed to that mailinglist, notifying the other developer (and possible other interested parties) of the changes. Only developers are allowed to make changes to the source code. Users are allowed to submit bug reports and feature requests, but the status is monitored and updated by the developers. The baseline is updated as part of the release process. Practice fully implemented, I'd say.
Since both specific practices of SG2 are implemented, this must mean SG2 is achieved.
The third specific goal (SG3) of CM says that "Integrity of baselines is established and maintained." To achieve this goal, CMMI expects organisations to implement another two specific practices.
The first specific practice (SP3.1) of the third goal is "Establish and maintain records describing configuration items." CMMI suggests, in the subpractices of SP3.1, to record configuration management actions and ensure that relevant stakeholders have access to and knowledge of the configuration status of the configuration items. We do this via Subversion and the commit-messages mailinglist mentioned above for the source code and via the Sourceforge bug tracker, UserVoice feature requests, and the change history. CMMI also suggests to specify the latest version of the baselines. This is done quite prominently on the Task Coach website, by means of announcements via the Task Coach users mailinglist and via twitter. CMMI also suggests to identify the versions of configuration items that constitute a particular baseline. We do this by tagging each release in Subversion. Differences between baselines are described in the change history mentioned before. And finally, CMMI advices us to revise the status and history of configuration items as necessary. Again, Subversion supports this for source code. For other configuration items, such as bug reports and feature requests, the status is updated by the developers using the administrative user interface provided by the Sourceforge and UserVoice websites. Conclusion, practice fully implemented.
The second specific practice (SP3.2) of SG3 reads "Perform configuration audits to maintain integrity of the configuration baselines." CMMI defines configuration audit as "An audit conducted to verify that a configuration item, or a collection of configuration items that make up a baseline, conforms to a specified standard or requirement." This something we do not do on a regular basis. We probably should, because we often run into bug reports that should be closed because the reported bug has been fixed, or feature requests that should be closed because the feature has been implemented. Often, this is caused by duplicate bug reports and feature requests. Anyhow, this practice is not implemented by the Task Coach project.
Since one practice of SG3 is fully implemented and one is not, SG3 is not satisfied.
Monday, July 12, 2010
Requirements management for Task Coach
As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The first process area I am looking at is Requirements Management (REQM). REQM is an Engineering process area at level two of the CMMI for Development. Because REQM is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at REQM from a development perspective.
According to the CMMI, "The purpose of Requirements Management (REQM) is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products." (emphasis mine). The purpose has four interesting concepts: (a) a project, (b) the project's products and product components, (c) requirements of the project's products and product components that need to be managed and (d) the project's plans and work products that need to be kept consistent with the requirements.
(a) We need to decide what the project is. Since we're using Task Coach as a guinea pig, the project must be the Task Coach project. The Task Coach project is staffed by two developers (Jérôme and myself) and supported by dozens of translators, beta testers, bug reporters, etc.
(b) The Task Coach project has two main products: the desktop version of Task Coach and the iPhone/iPod/iPad version of Task Coach. Because Jérôme is the only one working on the iPhone/iPod/iPad version at the moment, I'll limit the scope of this investigation to the desktop version of Task Coach, currently at release 1.0.7.
(c) Now on to the requirements. CMMI defines requirements as "(1) A condition or capability needed by a user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a product or product component to satisfy a contract, standard, specification, or other formally imposed documents. (3) A documented representation of a condition or capability as in (1) or (2)." Obviously, (2) doesn't apply to Task Coach since there are no formally imposed documents. (1) does apply. We collect requirements on UserVoice, a website that allows users to request features and vote for existing feature requests.
(d) The final concept in the purpose of REQM is the project's plans and work products that need to be kept consistent with the requirements. Since this is an open source project, there is not much planning going on. Progress is mostly determined by the amount of time the developers have available and are willing to spend on the project. Work products are defined as "In the CMMI Product Suite, a useful result of a process. This can include files, documents, products, parts of a product, services, process descriptions, specifications, and invoices. A key distinction between a work product and a product component is that a work product is not necessarily part of the product." So, work products for Task Coach are obviously product components like source code and tests, but also include things like the Task Coach website and announcement emails.
Having determined how to interpret the main concepts from REQM in the light of Task Coach, we move on to the goals and practices of REQM. REQM has one specific goal and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The specific goal (SG1) of REQM is quite similar to the purpose and reads "Requirements are managed and inconsistencies with project plans and work products are identified." CMMI expects organisations to perform five practices to achieve this goal.
The first specific practice (SP1.1) is "Develop an understanding with the requirements providers on the meaning of the requirements." As mentioned before, Task Coach uses UserVoice to collect feature requests. UserVoice also allows for discussing feature requests. In the subpractices of SP1.1, CMMI suggests establishing criteria for determining appropriate requirements providers and criteria for evaluating and accepting requirements. The first we don't need at this time; everybody is allowed to submit feature requests. The second might be good to have so that people submit feature requests that have a higher chance of getting implemented. However, since the subpractices are informative material we don't have to implement them. My conclusion is that the Task Coach project has implemented this practice.
The second practice (SP1.2) of REQM is "Obtain commitment to the requirements from the project participants." Since there are only two developers, volunteering to work on a feature request is the main form of commitment. By starting to work on a specific requirement, a developer shows commitment to that requirement. Practice implemented, I conclude.
The third practice (SP1.3) is "Manage changes to the requirements as they evolve during the project." New and changed requirements are tracked on UserVoice by means of feature requests. The status of requirements (started, completed, declined) is tracked there as well. Released features are listed in the change history of the product, with links to the originating requests on UserVoice. Again, practice implemented, I think.
The fourth practice (SP1.4) of REQM is "Maintain bidirectional traceability among the requirements and work products." The idea of this practice is that by maintaining bidirectional traceability, the project is able to check that all requirements have been properly addressed in work products and that all work products can be traced to a valid source requirement. This is particularly important when doing impact analyses of changing requirements. Task Coach doesn't maintain requirements traceability, other than links from the change history to the feature requests on UserVoice. But that limited traceability doesn't help with impact analysis. So this practice is not implemented.
The fifth and last specific practice (SP1.5) is "Identify inconsistencies between the project plans and work products and the requirements." Task Coach doesn't have much of a project plan, so there can't be many inconsistencies there. Inconsistencies between requirements and work products are identified via the feature request submission "process". When people submit a feature request, we are notified by email and check whether the requested feature has already been implemented. We also check for duplicate requests. When a feature is completed, the corresponding feature request is marked as completed. Is this enough to identify inconsistencies between work products and requirements? I think for us it is.
To summarize, SP1.1, SP1.2 and SP1.3 have been implemented by the Task Coach project. SP1.4 has not. SP1.5 has been implemented partly, for the part that is relevant. Now, does the Task Coach project achieve the specific goal of REQM: "Requirements are managed and inconsistencies with project plans and work products are identified."? My judgement is that yes, we do. But, IANAL (I Am Not A Lead Appraiser) so comments are welcome.
According to the CMMI, "The purpose of Requirements Management (REQM) is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products." (emphasis mine). The purpose has four interesting concepts: (a) a project, (b) the project's products and product components, (c) requirements of the project's products and product components that need to be managed and (d) the project's plans and work products that need to be kept consistent with the requirements.
(a) We need to decide what the project is. Since we're using Task Coach as a guinea pig, the project must be the Task Coach project. The Task Coach project is staffed by two developers (Jérôme and myself) and supported by dozens of translators, beta testers, bug reporters, etc.
(b) The Task Coach project has two main products: the desktop version of Task Coach and the iPhone/iPod/iPad version of Task Coach. Because Jérôme is the only one working on the iPhone/iPod/iPad version at the moment, I'll limit the scope of this investigation to the desktop version of Task Coach, currently at release 1.0.7.
(c) Now on to the requirements. CMMI defines requirements as "(1) A condition or capability needed by a user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a product or product component to satisfy a contract, standard, specification, or other formally imposed documents. (3) A documented representation of a condition or capability as in (1) or (2)." Obviously, (2) doesn't apply for Task Coach since there are no formally imposed documents. (1) does apply. We collect requirements on UserVoice, a website that allows users to request features and vote for existing feature requests.
(d) The final concept in the purpose of REQM is the project's plans and work products that need to be kept consistent with the requirements. Since this is an open source project, there is not much planning going on. Progress is mostly determined by the amount of time the developers have available and are willing to spend on the project. Work products are defined as "In the CMMI Product Suite, a useful result of a process. This can include files, documents, products, parts of a product, services, process descriptions, specifications, and invoices. A key distinction between a work product and a product component is that a work product is not necessarily part of the product." So, work products for Task Coach are obviously product components like source code and tests, but also include things like the Task Coach website and announcement emails.
Having determined how to interpret the main concepts from REQM in the light of Task Coach, we move on to the goals and practices of REQM. REQM has one specific goal and, like all CMMI process areas, five generic goals. For now, I'll only be looking at specific goals.
The specific goal (SG1) of REQM is quite similar to the purpose and reads "Requirements are managed and inconsistencies with project plans and work products are identified." CMMI expects organisations to perform five practices to achieve this goal.
The first specific practice (SP1.1) is "Develop an understanding with the requirements providers on the meaning of the requirements." As mentioned before, Task Coach uses UserVoice to collect feature requests. UserVoice also allows for discussing feature requests. In the subpractices of SP1.1, CMMI suggests establishing criteria for determining appropriate requirements providers and criteria for evaluating and accepting requirements. The first we don't need at this time; everybody is allowed to submit feature requests. The second might be good to have so that people submit feature requests that have a higher chance of getting implemented. However, since the subpractices are informative material we don't have to implement them. My conclusion is that the Task Coach project has implemented this practice.
The second practice (SP1.2) of REQM is "Obtain commitment to the requirements from the project participants." Since there are only two developers, volunteering to work on a feature request is the main form of commitment. By starting to work on a specific requirement, a developer shows commitment to that requirement. Practice implemented, I conclude.
The third practice (SP1.3) is "Manage changes to the requirements as they evolve during the project." New and changed requirements are tracked on UserVoice by means of feature requests. The status of requirements (started, completed, declined) is tracked there as well. Released features are listed in the change history of the product, with links to the originating requests on UserVoice. Again, practice implemented, I think.
The fourth practice (SP1.4) of REQM is "Maintain bidirectional traceability among the requirements and work products." The idea of this practice is that by maintaining bidirectional traceability, the project is able to check that all requirements have been properly addressed in work products and that all work products can be traced to a valid source requirement. This is particularly important when doing impact analyses of changing requirements. Task Coach doesn't maintain requirements traceability, other than links from the change history to the feature requests on UserVoice. But that limited traceability doesn't help with impact analysis. So this practice is not implemented.
The fifth and last specific practice (SP1.5) is "Identify inconsistencies between the project plans and work products and the requirements." Task Coach doesn't have much of a project plan, so there can't be many inconsistencies there. Inconsistencies between requirements and work products are identified via the feature request submission "process". When people submit a feature request, we are notified by email and check whether the requested feature hasn't been implemented already. We also check for duplicate requests. When a features is completed, the corresponding feature request is marked as completed. Is this enough to identify inconsistencies between work products and requirements? I think for us it is.
To summarize, SP1.1, SP1.2 and SP1.3 have been implemented by the Task Coach project. SP1.4 has not. SP1.5 has been implemented partly, for the part that is relevant. Now, does the Task Coach project achieve the specific goal of REQM: "Requirements are managed and inconsistencies with project plans and work products are identified."? My judgement is that yes, we do. But, IANAL (I Am Not A Lead Appraiser) so comments are welcome.
Maturity of open source projects
In my work as an IT consultant I sometimes use open source projects as a reference for judging the quality and maturity of in-house or commercial software projects. Task Coach, for example, has more than 3,500 automated unit tests that cover 63% of its 100,000 lines of Python code. Since Task Coach is just a hobby project, this in my mind makes it a lower bound for assessing the amount and coverage of unit tests in other projects.
Last week I was attending the official Introduction to the CMMI course taught by André Heijstek (of Improvement Focus). While we were discussing the different process areas in the CMMI for Development, I started wondering if and how CMMI would apply to open source organisations and projects. Maybe the CMMI doesn't apply to open source projects at all. However, if a project like Task Coach does achieve a significant portion of the CMMI goals, then that would be another stick in the ground to compare other organisations and projects against.
So, the plan is to investigate which of the CMMI for Development v1.2 goals (and probably CMMI for Services too; user support is an integral part of open source projects) are met by the Task Coach organization and project. Since there are many CMMI process areas, I will assess each one in a separate posting. As I'm obviously biased, I'll invite André, who is a certified lead appraiser for CMMI, to review my assessments.
Sunday, November 22, 2009
A bug caused by "make clean"
A Task Coach user reported today that one of the translations (Simplified Chinese) was not working. "Not working" meaning that the original English texts were shown in the user interface instead of the translated ones. A little investigation showed that most translations were OK, but a few were not.
Now, it is necessary to know how Task Coach deals with translations. Task Coach uses Launchpad for translations. Launchpad provides the translations as .po files. These .po files are transformed into Python source files that are in turn bundled with the different Task Coach installers/packages.
I noticed that a few of these generated Python source files were missing from the folder where the translations are stored. Asking myself how some of these could be missing while others were not, I decided that one possible explanation would be the Makefile not removing all files. So I checked the "clean" target in the Makefile and indeed, it would only remove the "??_??.py" files and not the "??.py" files. That means zh_CN.py would get removed, but nl.py would not. This explains why some translations, such as Simplified Chinese (zh_CN) and Brazilian Portuguese (pt_BR), were missing and others, such as Dutch (nl) and French (fr), were not.
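A quick way to see the difference between the two glob patterns (each ? matches exactly one character):

import fnmatch

# '??_??.py' matches exactly two characters, an underscore, and two more
# characters, so it matches zh_CN.py but not nl.py.
for filename in ["zh_CN.py", "pt_BR.py", "nl.py", "fr.py"]:
    print(filename, fnmatch.fnmatch(filename, "??_??.py"))
# Prints: zh_CN.py True, pt_BR.py True, nl.py False, fr.py False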
The final question is, of course, how to prevent this from happening ever again. We already have a set of release tests. I guess that adding a release test that checks whether all translations are included in the Task Coach installers and packages should do it.
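A minimal sketch of what such a release test could look like; the directory names are assumptions for the example, not necessarily our actual layout:

import glob
import os
import unittest

PO_DIR = "i18n.in"                   # where the .po files live (assumed)
GENERATED_DIR = "taskcoachlib/i18n"  # where the generated .py files go (assumed)

class TranslationsIncluded(unittest.TestCase):
    def test_every_po_file_has_a_generated_python_module(self):
        for po_file in glob.glob(os.path.join(PO_DIR, "*.po")):
            language = os.path.splitext(os.path.basename(po_file))[0]
            generated = os.path.join(GENERATED_DIR, language + ".py")
            self.assertTrue(os.path.exists(generated),
                            "Missing generated translation: %s" % generated)

if __name__ == "__main__":
    unittest.main()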