Monday, September 06, 2010

Measurement and Analysis for Task Coach

As announced in a previous posting, I am investigating which of the CMMI goals are achieved by the Task Coach project. The fifth process area I am looking at is Measurement and Analysis (MA). MA is a support process area at level two of the CMMI for Development. Because MA is part of the core model foundation, it is also part of the other two CMMI constellations, CMMI for Services and CMMI for Acquisition. But here, I'll be looking at MA from a development perspective.


According to the CMMI, "The purpose of Measurement and Analysis (MA) is to develop and sustain a measurement capability that is used to support management information needs." The first question is who "management" is in an open source project. CMMI doesn't define management. It does define "manager", however, as "... a person who provides technical and administrative direction and control to those performing tasks or activities within the manager’s area of responsibility." I guess that in the case of the Task Coach project, management equals the developers. So the purpose of MA in our case is to develop and sustain a measurement capability that is used to support the information needs of the developers. Well, let's see.


MA has two specific goals and, like all CMMI process areas, five generic goals. For now, I'll only be looking at the specific goals.


The first specific goal of MA states "Measurement objectives and activities are aligned with identified information needs and objectives." To achieve this goal, CMMI expects us to perform four specific practices. 


The first specific practice (SP1.1) of MA is to "Establish and maintain measurement objectives that are derived from identified information needs and objectives." We as a project don't have any formally identified and documented information needs and objectives. I guess one implicit objective is to have as many happy Task Coach users as possible, but we have no measurement objectives derived from this objective that are "established and maintained".


The second specific practice (SP1.2) of MA reads "Specify measures to address the measurement objectives." We have no specified measures. 
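

To illustrate what this practice asks for: a measure specification need not be heavyweight. A minimal sketch in Python, where the field names and values are entirely my own invention and not anything the project actually maintains:

    # Hypothetical measure specifications, per MA SP1.2 (illustrative only):
    MEASURES = {
        "downloads": {
            "unit": "count per week",
            "source": "download statistics of the project website",
            "collection": "automatic, weekly",
        },
        "test coverage": {
            "unit": "percent of statements",
            "source": "coverage report produced by the build",
            "collection": "automatic, per build",
        },
        "source code volume": {
            "unit": "lines of code",
            "source": "source code repository",
            "collection": "per release",
        },
    }

Even a small table like this would tie each measure to a unit, a data source, and a collection moment, which is the essence of the practice.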


The third specific practice (SP1.3) states "Specify how measurement data will be obtained and stored." Again, we have no specifications on how to obtain and store measurement data.


The fourth specific practice (SP1.4) of MA expects us to "Specify how measurement data will be analyzed and reported." It gets boring, but we don't have specifications on how to analyze and report measurement data.


The conclusion is clear: since we don't do any of the practices, the first goal of MA is not satisfied.


The second goal of MA states "Measurement results, which address identified information needs and objectives, are provided." To achieve this goal, CMMI expects us to perform four specific practices.


The first specific practice (SP2.1) of the second goal of MA is to "Obtain specified measurement data." Measurement data we obtain include number of downloads, build status, test coverage, source code volume and translation completeness. However, which measures we collect is partly determined by what different tools and websites collect for us automatically. I think the intent of this practice is to obtain measurement data that fulfills the specified information needs as discussed in SG1 of MA. However, even if we had made our information needs explicit, I think these measurement data would still be in line with them, so I'd judge this practice as largely implemented.
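

As a sketch of what obtaining such data could look like, here is how one of these measures, source code volume, could be collected in Python. The directory names and function names are assumptions on my part, not the actual Task Coach build code:

    import datetime
    import os

    def count_source_lines(root, exclude=("thirdparty",)):
        """Count lines in Python files, skipping excluded directories."""
        total = 0
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d not in exclude]
            for filename in filenames:
                if filename.endswith(".py"):
                    with open(os.path.join(dirpath, filename)) as f:
                        total += sum(1 for line in f)
        return total

    def obtain_measurements(source_root):
        """Collect one timestamped snapshot of the measures we can compute locally."""
        return {
            "timestamp": datetime.datetime.now().isoformat(),
            "source code volume": count_source_lines(source_root),
            # downloads, build status and translation completeness come from
            # external tools and websites, not from this script
        }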


The second specific practice (SP2.2) expects us to "Analyze and interpret measurement data." We do analyze some of the data, but mostly on an ad hoc basis. For example, we changed the coverage measurement when analysis showed that it also counted third party software shipped in the Task Coach source code repository. One analysis is always done, and that is when the build bot fails at building one or more of the distributions. However, other measurements, like number of downloads and source code volume, are not analyzed at all. Conclusion: this practice is only implemented a little bit.
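

The coverage fix mentioned above comes down to excluding the bundled third party code from the measurement. With coverage.py, for instance, that is a single omit pattern; the pattern and the test suite stand-in below are assumptions, not the project's actual setup:

    import coverage

    def run_test_suite():
        """Stand-in for the real Task Coach test runner."""
        pass

    # Exclude bundled third party code from the coverage measurement;
    # the omit pattern assumes a layout with a thirdparty directory:
    cov = coverage.Coverage(omit=["*/thirdparty/*"])
    cov.start()
    run_test_suite()
    cov.stop()
    cov.report()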


The third specific practice of SG2 of MA says that we should "Manage and store measurement data, measurement specifications, and analysis results." Due to the open nature of the project and the tools used, all measurement data is stored and available. However, measurement specifications and analysis results are not explicitly stored, except maybe in emails between the developers. Practice partly implemented.
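

Keeping the analysis results next to the data would not take much infrastructure. A sketch, where the file name and record layout are made up for the occasion:

    import json

    def store_measurement(snapshot, analysis="", path="measurements.json"):
        """Append one measurement snapshot, with its analysis, to a JSON log."""
        try:
            with open(path) as f:
                records = json.load(f)
        except (OSError, ValueError):
            records = []  # no log yet, or an unreadable one: start fresh
        records.append({"data": snapshot, "analysis": analysis})
        with open(path, "w") as f:
            json.dump(records, f, indent=2)

Checking a file like this into the repository would give the measurement history the same openness as the rest of the project.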


The fourth and last specific practice of SG2 reads "Report results of measurement and analysis activities to all relevant stakeholders." Some of the measurements are reported (emailed) automatically, such as build failures. Other measurements are available via the web on demand. Since we report (actively or passively) all measurement activities that we do, I guess that makes this practice largely implemented. By the way, it is interesting to note that the intent of this practice is "to support decision making and assist in taking corrective action", but that this expectation is not part of the mandatory or expected content of the model.
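

For the automatic reports, the build bot does the mailing for us, but the mechanism amounts to something like this sketch; all addresses and the mail server are placeholders:

    import smtplib
    from email.mime.text import MIMEText

    def report_build_failure(distribution, log_excerpt):
        """Mail a build failure report to the developers (placeholder addresses)."""
        msg = MIMEText("Building %s failed:\n\n%s" % (distribution, log_excerpt))
        msg["Subject"] = "[Task Coach] build failure: %s" % distribution
        msg["From"] = "buildbot@example.org"
        msg["To"] = "developers@example.org"
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)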


Conclusion: the second goal is only partly achieved, since most of the practices are only partly implemented.
