Wednesday, April 6, 2011

Why I should choose (not chase) a vision

I have always had a vision - professionally. And I have always tried very hard to chase that vision. And as visions go, mine were lofty. In chasing my vision, it now seems that I have constantly ignored the smaller problems in front of me. This is a critical understanding that has to sink in.

I can't become a Calculus guru unless I learn addition first. Just like how I learnt math at school: it was only when I became good at basic math that the next level was exposed. I didn't even know Calculus existed until high school. You can only peel the outer-most layer of an onion. And when you do that, the inner layer is exposed. And so on. You can't peel the inner layers first.

I do have to choose the right onion, of course.

Tuesday, August 25, 2009

States don't run businesses

If we could think of software development as an economy, team members would be businesses and the project manager the state.

Let me explain the analogy.
1. Businesses demand laissez faire - they don't want state intervention in the running of the economy. Much the same as software development teams.
2. Over the past years, we have seen how businesses, left to themselves, can run the system aground. It's not that businesses like to run systems aground; it's just that businesses tend to focus much more on their profits than the welfare of the system. Same with software development teams.
3. Enter macroeconomics. There are times when the state has to intervene. This could be reactive, when faced with a recession. Or preventive. In either case, it is necessary that the state monitors the economy and businesses and intervenes in a timely and appropriate manner not to suppress enterprise (which is key to a free market economy), but to foster economic stability and growth. Can't state the responsibility of project managers better than that.
4. As somebody said, the state should not do more (or less) of what businesses are supposed to do, but do what businesses don't do. States aren't expected to run businesses. States, however, are expected to understand the overall economy. Project managers don't need to write narratives or code. They should, however, have a sound understanding of the overall software development ecosystem.

Note - I am not criticizing software development teams as much as enunciating the need for continuous monitoring and timely, appropriate intervention by project managers.

Tuesday, July 28, 2009

Measuring Value of Automation Tests

Value and purpose of test automation

The value of test automation is often described in terms of the cost savings from reduced manual testing effort (and the resources it requires), and in terms of fast feedback. However, this is based on a key assumption: that the automated tests are serving their primary purpose - to repeatedly, consistently, and quickly validate that the application is within the threshold of acceptable defects.

Since it is impossible to know most of the defects in an application without using it over a period of time (either by a manual testing team or by users in production), we will need statistical concepts and models to help us design and confirm that the automated tests are indeed serving their primary purpose.

Definitions

                          | Manual Confirmation of Defects
Automation Test Results   | Is a defect                           | Is not a defect
--------------------------+---------------------------------------+---------------------------------------
Failure / Positive        | Defective code correctly identified   | Good code wrongly identified as
                          | as defective - Caught Defects (CD)    | defective - Not A Defect (NAD)
                          |                                       | (aka Type I Error / False Positive)
Pass / Negative           | Defective code wrongly identified     | Good code correctly identified as
                          | as good - Missed Defects (MD)         | good - Eureka! (E)
                          | (aka Type II Error / False Negative)  |

Positive Predictive Value (across the Failure row) = CD / (CD + NAD)
Sensitivity (down the "Is a defect" column) = CD / (CD + MD)

The sensitivity of a test is the probability that it will identify a defect when run against a defective component. A sensitivity of 100% means that the tests recognize all defects as such. Thus, with a high-sensitivity test suite, a pass result can be used to rule out defects.

The positive predictive value of a test is the probability that a component is indeed defective when the test fails. Predictive values are inherently dependent upon the prevalence of defects.
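These two measures fall straight out of the table above. A minimal sketch in code (the function name and argument names are my own illustration, not from any particular tool):

```python
def automation_test_metrics(caught_defects, not_a_defect, missed_defects):
    """Compute confusion-matrix metrics for an automated test suite.

    caught_defects  (CD): defective code correctly flagged by a failing test
    not_a_defect   (NAD): good code wrongly flagged (Type I / false positive)
    missed_defects  (MD): defective code that passed (Type II / false negative)
    """
    sensitivity = caught_defects / (caught_defects + missed_defects)
    positive_predictive_value = caught_defects / (caught_defects + not_a_defect)
    return sensitivity, positive_predictive_value
```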

The threshold of acceptable defects has to be traded off against the cost of achieving it - test development costs, test maintenance costs, higher test run times, etc.

Tests will involve a trade-off between the acceptable number of defects missed (false negatives) and the acceptable number of "Not a Defect" results (false positives).

E.g. in order to prevent hijacking, airport security has to screen all baggage for weapons being carried onto the airplane. This can be done by manually checking all the cabin baggage, which was briefly done for domestic flights in India. However, this is prone to human error, increasing the probability of Missed Defects / false negatives. Note that NAD / false positives would be low in this case. How would this change if the manual check were replaced with metal detectors?

Hypothesis

The efficacy of automated tests should be measured by their sensitivity and the probability of Missed Defects / false negatives when the application is subjected to these tests.

Data from a project

                          | Manual Confirmation of Defects
Automation Test Results   | Is a defect | Is not a defect             |
--------------------------+-------------+-----------------------------+---------------------------------
Failure / Positive        | 58 (CD)     | 20 (NAD)                    | Positive Predictive Value = 74%
Pass / Negative           | 113 (MD)    | Eureka! (E) - no count      |
                          |             | recorded                    |

Sensitivity = 34%
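Plugging these numbers into the earlier sketch reproduces the percentages:

```python
sensitivity, ppv = automation_test_metrics(
    caught_defects=58, not_a_defect=20, missed_defects=113)
print(f"Sensitivity: {sensitivity:.0%}")                # 34%
print(f"Positive Predictive Value: {ppv:.0%}")          # 74%
```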

Monday, July 13, 2009

Re-defining Agile concepts in a non-agile context

The metrics I suggested for use in an agile project will be equally valuable for a non-agile project. The terms and concepts used therein have to be redefined, though.

0. Story - A work component; could be a use case, a functional requirement, etc.
1. Value estimates - Value of the work component (story, use case, etc.) towards enhancing the product. If this is not defined for the work components, it could be temporarily substituted with their effort estimates.
2. Complexity estimates - Relative estimate of the complexity of the work component; relates to the effort needed for delivering it. This could be the effort estimate for the work component.
3. Iteration - Time between two successive status reports (in projects that have a fortnightly status report, an iteration will be a fortnight).
4. Status - Status of the story, e.g. Analysis Complete, Coding Complete, Testing Complete, etc.
5. Done status for stories - The last tracked status in the life cycle of the work component. In agile projects, this is often "Showcase Complete" / "Customer Accepted".
6. Velocity - Sum of the value / complexity estimates of all "Done" stories in an iteration (see the sketch below).
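A minimal sketch of that velocity calculation under these definitions (the story fields, status names, and numbers are my own illustration):

```python
# Illustrative story records; a tracking tool's export could be mapped to this shape
stories = [
    {"value": 5, "status": "Customer Accepted", "iteration": 3},
    {"value": 3, "status": "Coding Complete",   "iteration": 3},
    {"value": 8, "status": "Customer Accepted", "iteration": 3},
    {"value": 2, "status": "Customer Accepted", "iteration": 2},
]

def velocity(stories, iteration, done_status="Customer Accepted"):
    """Sum of value estimates of all 'Done' stories in the given iteration."""
    return sum(s["value"] for s in stories
               if s["iteration"] == iteration and s["status"] == done_status)

print(velocity(stories, iteration=3))  # 13
```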

Thursday, July 9, 2009

Metrics for an Agile project

Q. How are we doing on delivering the agreed scope of the current release?
A. Burn-up chart by iteration for the release. Below is a burn-up chart Manju created for reporting status on one of our large programs.



Among other things, this graph shows:
1. Scope changes (demonstrated by fluctuations in the "Total Scope" line)
2. The gaps between succeeding status lines reflect in-process / wait stories. Larger-than-normal gaps indicate bottlenecks, e.g. Dev is a bottleneck when there is a huge gap between "Analysis Complete" and "Dev Complete"
3. Inventory of stories that are ready to go live (demonstrated by the "Showcase Passed" line)
4. Actual completion status (demonstrated by the "Showcase Passed" status line)
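A minimal sketch of how such a burn-up chart could be generated, assuming a hypothetical export of cumulative story counts per iteration (the numbers and status names below are illustrative, not the program's actual data):

```python
import matplotlib.pyplot as plt

iterations = [1, 2, 3, 4, 5, 6]
# Cumulative story counts at the end of each iteration, one series per status
lines = {
    "Total Scope":       [40, 42, 45, 44, 48, 50],
    "Analysis Complete": [10, 18, 26, 33, 40, 46],
    "Dev Complete":      [ 4,  9, 15, 20, 27, 33],
    "Showcase Passed":   [ 2,  6, 11, 16, 22, 28],
}

for label, counts in lines.items():
    plt.plot(iterations, counts, marker="o", label=label)

plt.xlabel("Iteration")
plt.ylabel("Cumulative stories")
plt.title("Release burn-up")
plt.legend()
plt.show()
```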

Q. How are we doing on throughput? How much value are we delivering? What is the trend - running faster, slowing down?
A. Velocity graph by iteration for the project. Only "Done" stories are considered for velocity calculations. Below is a velocity graph Manju created for tracking velocity on one of our large programs. The 3-iteration average was first brought to my notice by Santosh, who was using it in one of his projects. I find it extremely valuable, as it smooths the ups and downs into something like a trend line.
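A minimal sketch of that 3-iteration average (the velocity numbers are illustrative):

```python
def moving_average(values, window=3):
    """Average of the last `window` values at each point (shorter at the start)."""
    result = []
    for i in range(len(values)):
        recent = values[max(0, i - window + 1):i + 1]
        result.append(sum(recent) / len(recent))
    return result

velocities = [12, 18, 9, 15, 14, 20]    # per-iteration velocity
print(moving_average(velocities))       # smooths the ups and downs into a trend
```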

Why iterative development?

"Until you have seen some of the rest, you can't make sense of any part" - Marvin Minsky.

Minsky says this in the context of describing complex systems. It applies as much to software systems as to intelligence. How can we help users describe a complex system? Wouldn't building some of the rest help them make sense of the parts?

Monday, July 6, 2009

Bottleneck - Cont.

Below is some data from my previous project:

Wait stages (number of stories):
Ready for Dev: 78
Ready for BA Acceptance: 25
Ready for QA: 14
Ready for Showcase: 12
Ready for SAT: 50

In-process stages (number of stories):
In Analysis: 71
In Dev: 92
In QA: 10

It's clear that Development is the bottleneck. Development takes the longest among all the stages; things just don't move as fast here. So we push more work into this stage. That is the reason for the high in-process count. And more work means multi-tasking for the developers and, consequently, diluted focus. That further adds to the time stories take to move out of this stage. And of course, you just can't push enough through the Development stage, so the inventory piles up. This could be the situation in most software development projects. The symptoms of this bottleneck sometimes showed up as high inventory in other stages, but they could be traced back to the Development stage in most cases.
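A minimal sketch of how the bottleneck pops out of this data (the stage names and counts are the ones above; treating a stage's load as its wait queue plus its in-process work is my own simplification):

```python
# Wait queue feeding each stage, and work in process within it (data from above)
wait = {"Dev": 78, "BA Acceptance": 25, "QA": 14, "Showcase": 12, "SAT": 50}
in_process = {"Analysis": 71, "Dev": 92, "QA": 10}

# A stage's total load: stories queued in front of it + stories inside it
load = {stage: wait.get(stage, 0) + in_process.get(stage, 0)
        for stage in set(wait) | set(in_process)}

bottleneck = max(load, key=load.get)
print(bottleneck, load[bottleneck])  # Dev 170
```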

I wonder why we didn't look into the Development stage itself and see what was happening WITHIN the stage. That could have helped us understand how to speed up the Development process.