This Issue's Contents


Project Management

Weighing the Options

Lu Lahodynskyj proposes a way of bringing much-needed objective measurement to IT projects and system options - and to whether the outcome really meets users' needs.

Projects raise some common issues that highlight the need for objectivity in areas which are often subjective. One is deciding between options, including how to complete the project. Another is keeping people focused on what is important; too often the emphasis falls on whoever shouts loudest. A third is measuring success: IT says everything was implemented as signed off, so the project was a success - but the users are not happy.

Product evaluations measure different factors with a degree of objectivity, and adapting this to IT projects creates the rubric approach. The first step is to establish quantitative measurements for each area. For example, if one objective is to have a Web page, we might establish measures for the amount of useful information on the page: excellent - title, index, search bar and special offers shown; satisfactory - title, index and search bar; poor - title and index only.

We might also establish measures for the loading speed: excellent - loading in less than two seconds; satisfactory - two to four seconds; poor - more than four seconds. Information presentation might be another area, where excellent means the whole page is viewable; satisfactory means at least half the page is viewable; and poor means less than half is viewable. We now give a rating to each quality classification: for example we might count excellent as five, satisfactory as three, and poor as zero.

The next step is to establish some weighting for each area being measured. For the Web page we might assign 50% to showing useful information, 30% to loading in a reasonable time, and 20% to information presentation. Multiplying the rating by the weighting, and then adding up the values for all objectives, produces a total score.

So if it is highly important to have useful information on the page, we multiply 50% (the weighting given to showing useful information) by five (the 'excellent' rating), to get 2.5. The fastest loading means 30% multiplied by five, giving 1.5. We might be able to compromise on displaying just over half the page, so here we can multiply 20% by three, giving 0.6 - a total score of 4.6 out of a possible five.
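The worked example above can be sketched in a few lines of code (a minimal sketch; the function and variable names are illustrative, while the weightings and ratings come from the article):

```python
# Map each quality classification to its rating value, as in the article.
RATINGS = {"poor": 0, "satisfactory": 3, "excellent": 5}

def rubric_score(factors):
    """Sum weighting x rating value over all factors.

    factors: list of (weighting, quality) pairs, weightings summing to 1.0.
    """
    return sum(weight * RATINGS[quality] for weight, quality in factors)

# 50% useful information (excellent), 30% loading time (excellent),
# 20% presentation (satisfactory - just over half the page viewable).
score = rubric_score([(0.50, "excellent"),
                      (0.30, "excellent"),
                      (0.20, "satisfactory")])
print(round(score, 2))  # 2.5 + 1.5 + 0.6 = 4.6
```

Rounding the printed total sidesteps binary floating-point noise in the percentage arithmetic.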

Used in this way, rubrics generate discussion and highlight the important and less important factors of a project. They thus help the project team reach their goal - or help them realise that they are on an impossible mission. It is important to keep records of discussions and the reasoning that went into the decisions. These will help answer questions from other people who are not directly involved but who have influence or an interest.

The rubric steps

  • List the important factors
  • Assign measurable targets
  • Give each factor a weighting, typically a percentage figure
  • Assign values to the measurements, such as five for excellent, three for satisfactory
  • Create a score for each factor by multiplying the weighting by the selected value
  • Add up the scores to get a total
  • Make comparisons
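The steps above can be run end to end as a comparison of two candidate options (a hypothetical sketch: the option names and their assessed ratings are illustrative, not from the article; the weightings and rating values are the article's Web page example):

```python
# Compare two candidate options scored against the same rubric.
RATING_VALUES = {"poor": 0, "satisfactory": 3, "excellent": 5}

# Factor -> weighting; the weightings total 100%.
WEIGHTS = {"useful information": 0.50,
           "loading time": 0.30,
           "presentation": 0.20}

# Hypothetical assessments of two ways of completing the project.
OPTION_A = {"useful information": "excellent",
            "loading time": "satisfactory",
            "presentation": "poor"}
OPTION_B = {"useful information": "satisfactory",
            "loading time": "excellent",
            "presentation": "excellent"}

def total_score(assessment):
    """Multiply each factor's weighting by its rating value and add up."""
    return sum(WEIGHTS[f] * RATING_VALUES[q] for f, q in assessment.items())

scores = {"Option A": total_score(OPTION_A),
          "Option B": total_score(OPTION_B)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

With these illustrative figures Option A scores 3.4 and Option B scores 4.0, so the comparison favours Option B.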

It is useful to set out a rubric in a grid, with the first column headed 'factor to be measured'; the second 'weighting'; the next three headed 'measurement rating poor', 'measurement rating satisfactory' and 'measurement rating excellent', with their appropriate marks (zero, three, five); and the last column headed 'total score'.
Each factor to be measured then gets a line across the grid. In the example there would be a line for the factor Web page loading time. This would have 30% in the weighting column. The three ratings columns would respectively be filled in with 'more than four seconds', 'two to four seconds' and 'under two seconds'.
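The grid just described can be printed as a plain-text table (a minimal sketch; the column headings and the loading-time row follow the article, and `format_grid` is a hypothetical helper):

```python
# Lay out the rubric grid as a plain-text table.
HEADERS = ["factor to be measured", "weighting",
           "poor (0)", "satisfactory (3)", "excellent (5)", "total score"]

# One line per factor; the total score column is filled in once rated.
ROWS = [
    ["Web page loading time", "30%",
     "more than four seconds", "two to four seconds", "under two seconds", ""],
]

def format_grid(headers, rows):
    """Pad every column to its widest cell and join cells with ' | '."""
    widths = [max(len(cell) for cell in col)
              for col in zip(headers, *rows)]
    return "\n".join(" | ".join(cell.ljust(w) for cell, w in zip(row, widths))
                     for row in [headers] + rows)

print(format_grid(HEADERS, ROWS))
```

Each further factor - useful information, presentation, and so on - becomes one more entry in `ROWS`.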

Lu Lahodynskyj is a professional Member of the BCS and a Chartered Engineer. He works in Canada, where he has used the rubric approach and now also teaches it to students.


Copyright British Computer Society 2001