Wednesday, December 2, 2009

Black Screen of Death

While the "blue screen of death" has made it into the IT failure and quality vocabulary, could we see a new addition - a colour change perhaps!

In the Computerworld article "Microsoft denies blame for black screens of death" (2 Dec 2009), users claim that the November Windows updates have locked them out of their PCs.

Microsoft has denied the claims; its investigation of failure reports found no evidence that the November Windows updates are causing a widespread "black screen" lock-out of users' PCs. Furthermore, its technical support teams have not seen this as a broad customer issue.

Microsoft has some extraordinary approaches to configuration and patch testing. It is a huge challenge when you consider the myriad of configurations and application dependencies that exist. In the whole scheme of things I think they have done a good job in recent years, and it is a testament to that work that we don't see these kinds of problems proliferating every day.

Most of us have experienced the "blue screen of death" and enjoyed the quality awareness that this colourful metaphor has raised. From the article, it sounds like a widespread mutant of the term is not yet ready to surface.

Tuesday, November 17, 2009

Computer glitch delays Qantas flights

I thought I should find a place to log interesting software failures. So here is the first one, which hit the news last night and this morning.

"Computer glitch delays Qantas flights"


Amadeus (http://en.wikipedia.org/wiki/Amadeus_CRS), a reservation system used by multiple airlines, failed, and Qantas reportedly had to switch to manual processes. The failure was reported to have affected many other airlines internationally.

Tuesday, July 21, 2009

Immaturity in Using Metrics to Support Investment in Software Testing

In the current economic environment, one would think that we would be using all means to secure investment in software testing. However, at a recent industry forum it became apparent to me that the level of maturity in using metrics is somewhat limited.

At a recent ACS Testing SIG meeting on metrics, we discussed what metrics could be used to support the business case for investment in testing. The meeting was largely an open forum discussion seeking opinions and contributions rather than a presentation.

Of the participants, many admitted to using defect metrics, but few used other metrics such as time and budget allocation, effort, coverage, and test outputs. It is somewhat disappointing that we measure the work we undertake so poorly. It leaves us vulnerable to cutbacks, as other managers who are more verbally gifted may take away our budget. Metrics provide a useful way of demonstrating our benefit.

The meeting went on to review the kinds of metrics that testers use. The following list provides some suggested metrics:
  • Defects
    • Type
    • Cost
    • Age
    • Status
    • Proportion found in each stage
  • Coverage
    • Automated vs manual
    • Planned vs executed
    • Requirements
    • Code
  • Effort, time, cost, schedule
  • Test outputs
    • Tests designed
    • Tests executed
  • Test inputs
    • Size
    • Complexity
    • Developer hours
    • Project budget
  • Risk
Some comments were made that metrics can be broadly grouped into two categories:
  • Efficiency - can we do more testing using less effort?
  • Effectiveness - can we achieve a better outcome from our testing effort?
While the discussion focused on supporting the business case for software testing in tighter economic times, it is important to note the different uses of metrics. Test managers use metrics for the following:
  • Managing Progress - producing estimates, how complete is testing, how much more to go, ...
  • Quality Decisions - is the product good, bad or indifferent, are we ready for release, ...
  • Process Improvement - what are the areas that future improvement should target, how do we know our process has improved, ...
  • Business Value - what benefit has testing produced for the business in terms of reduced costs, increased revenue, reduced risk, ...
The choice of what metric to use can be daunting. It is not a good idea to collect everything, as you become overwhelmed deciding which data to use to make a management recommendation. It is worth looking at the Goal-Question-Metric (GQM) approach promoted by Victor Basili to help choose appropriate metrics, as illustrated below.
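
As a quick illustration of the GQM approach (the goal, question and metric below are my own example, not Basili's):
  • Goal: reduce the cost of defects reaching production
  • Question: how many defects escape testing into production?
  • Metric: defect detection percentage - defects found in testing divided by total defects found in testing and production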

There was a lot of discussion that metrics assessing productivity can be dangerous. My personal view is that we should use metrics to influence productivity, but that the following points need to be kept in mind:
  • Productivity or performance measures do change behaviour. People will start aligning behaviour towards meeting the metric target. A poorly chosen metric or target could create behaviour that you never intended.
  • One metric won't tell the whole story. Measuring productivity in terms of tests completed per hour may mean that people run poor tests that are simply quick to run. You may need several metrics to get a better perspective, for instance collecting information on defect yields, defect removal ratios from earlier phases and so on, to get a better picture.
  • Given that metrics will change behaviour, you may change your metrics from time to time to place emphasis on improving or changing other parts of your process or performance.
  • Metrics should be used by the manager to ask more questions, not as a hard and fast rule for making a decision. A metric may lead you to make deeper enquiries with individual testers or developers.
  • Managers need to build trust with the team in how they use metrics. If the team don't trust how you will use the metrics, they will likely subvert the metrics process.
When discussing metrics to justify the business case for testing, it is very easy to get caught up in the technical metrics that are important to the test manager. However, when talking with other business stakeholders, you need to speak their language. You may need to explain what the metric means in terms of cost savings or increased revenue for it to have an impact. Don't explain it as how many more test cases are executed per hour; instead explain it as dollars saved per hour. Other businesses may need it explained in other terms, such as risk reduction or satisfying compliance.
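
As a rough illustration (the figures here are purely assumed): if improved automation lets the team run an extra 50 regression tests per hour, that number means little to a financial stakeholder; but if each of those tests would otherwise take around $2 of manual tester time, reporting it as roughly $100 per hour saved is something they can weigh against other investments.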

It appears that we still have a way to go with software testing metrics. As a profession we need some clarity about how, as Test Managers, we can use metrics to gain support and provide evidence of our benefit. Let's hope that as we mature we will start stepping up to sell our return-on-investment more effectively.

Thursday, May 14, 2009

Certification - Should be more than a piece of paper

I just saw a link to "Expert Software Testing Certification" for only US$9.95.  http://softwaretestingwinners.blogspot.com/2009/05/expert-rating-certification-in-software.html

It cracked me up!!!  Will 40 multiple choice questions ensure that I am an expert?

There is a real danger of certification becoming meaningless, and if it does it WILL damage our profession.  The key elements of a profession, I believe, include the following: 
  • Common terminology and standards
  • Shared body of knowledge
  • Competency development programmes and recognition schemes (e.g. certification)
  • Representative bodies
  • Recognition by employers and legislators

Yes, I created a Certification program a long time ago and we continue to offer it widely (http://www.kjross.com.au/page/Training/Certified_Software_Test_Professional/).  And I believe Certification can add value, but if Certification doesn't help to develop and assess competency then it will be doomed.  If people become certified and they are not competent, then the recognition of the profession will be damaged.

I think focusing purely on the exam is a big problem.  For individuals this is the main driver: they want the piece of paper!  I remember as a university student the endless swotting & cramming, with many students not caring about the subject they were studying, avoiding lectures, and doing whatever it took to pass the exam and get the credit. 

For the most part, participants in our certification are funded by their employer.  Employers want to see skills development and recognition.  If their employees sit in a training course, they want them coming out more competent.  They want the certification process to ensure that recipients exhibit and develop competency in specific areas.

The exam should be considered a secondary goal, just an assessment method that measures competency.  Yet for many it is the primary goal.  The primary goal should be competency development.

I don't like multiple choice exams, but any exam on its own is not enough.  There need to be multiple methods to assess competency, which should include:
  • participation in practical exercises, understanding and demonstrating skills
  • demonstration of workplace competency
  • proof of understanding through completion of individual assignments
  • engagement with experts where they can demonstrate understanding of key concepts
It is costly to build these into a certification programme, but that should be the goal.  We need to make sure people don't slip through; those that do will devalue the certification for everyone else who has completed it.

It really frustrates me when I hear people say "let's do the 1 day course and then sit the exam".  I would be happy if they felt they were already competent and just wanted the recognition.  But most do not have the competency in the specified areas, and are just looking for the fastest way to swot for the piece of paper.  Will they be competent???

Thursday, April 30, 2009

SOA Challenges

Service Oriented Architecture (SOA) places significant priority on software quality and testing.

I just read a vendor whitepaper about SOA.  The paper emphasised a number of points about SOA and software quality, as indicated by the quote:

"Developing an SOA that guarantees service performance, scalable throughput, high availability, and reliability is both a critical imperative and a huge challenge for today’s large enterprises."

This whitepaper from Oracle confirms my experience that SOA places greater emphasis on the importance of software quality and testing.

Testing plays the pivotal role in integration activity where SOA is used to glue together larger systems.  It is during testing that we see all the components, and the fabric that binds them, come together into a working solution.

Not only must testing support functional, end-to-end, and acceptance testing of the SOA integration, but, as this paper highlights, other quality attributes are critical to the success of SOA:
  • Performance
  • Scalability
  • High availability
  • Reliability
These are difficult areas to test and evaluate, particularly where SOA integrates many large, separate applications.

My experience on testing projects involving SOA is:
  • Building end-to-end test environments is difficult, and we must use stubbing and harnesses to contain the scope of testing.
  • Performance testing is a must, as SOA communication between applications impacts user response times significantly.
  • SOA is distributed, which means the failure of a single component can bring down other interdependent systems and applications.
  • Performance monitoring should run tests of user experience periodically.  Many of these tests can be drawn from automated tests used for performance or functional testing.
  • Testing is more technical, as tests are constructed around SOA messages.  Tool support (either commercial or open-source) is essential.  Frequently manual / business testers aren't suited to this area, as there is no user interface; instead tests must be coded to send and retrieve messages (a minimal sketch appears after this list).
  • Asynchronous behaviour of messaging makes functional testing more complex.  We must deal with messages coming back in different orders; it is not always send then receive, and other messages may be interleaved.
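
To make the last two points more concrete, below is a minimal sketch (in Python, standard library only) of what a coded, message-level test can look like.  The endpoint, XML message format and correlation-id scheme are illustrative assumptions rather than any particular product's API: a small stub stands in for a downstream service to contain the scope of the environment, and the test polls for the asynchronous reply by correlation id rather than assuming a send-then-receive order.

# A minimal sketch of message-level SOA testing (Python standard library only).
# The endpoint, XML format and correlation-id scheme below are illustrative
# assumptions, not any specific product's API.
import threading
import time
import urllib.error
import urllib.request
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderServiceStub(BaseHTTPRequestHandler):
    """Stands in for a downstream service so the end-to-end scope stays contained."""
    replies = {}  # correlation id -> canned reply, filled after a simulated delay

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"])).decode()
        corr_id = body.split("<correlationId>")[1].split("</correlationId>")[0]

        def reply_later():  # the "real" service answers asynchronously
            time.sleep(0.5)
            OrderServiceStub.replies[corr_id] = "<status>CONFIRMED</status>"

        threading.Thread(target=reply_later, daemon=True).start()
        self.send_response(202)  # accepted; the reply will arrive later
        self.end_headers()

    def do_GET(self):
        corr_id = self.path.rsplit("/", 1)[-1]
        reply = OrderServiceStub.replies.get(corr_id)
        self.send_response(200 if reply else 404)
        self.end_headers()
        if reply:
            self.wfile.write(reply.encode())

    def log_message(self, *args):  # keep test output quiet
        pass

def test_order_confirmation():
    # No user interface: the test is coded directly against the message layer.
    corr_id = str(uuid.uuid4())
    message = f"<order><correlationId>{corr_id}</correlationId><qty>1</qty></order>"
    req = urllib.request.Request("http://localhost:8000/orders",
                                 data=message.encode(),
                                 headers={"Content-Type": "text/xml"})
    urllib.request.urlopen(req)  # send the request message

    # Asynchronous reply: poll by correlation id instead of assuming
    # responses come back immediately or in order.
    deadline = time.time() + 5
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(f"http://localhost:8000/replies/{corr_id}") as resp:
                assert b"CONFIRMED" in resp.read()
                return
        except urllib.error.HTTPError:
            time.sleep(0.2)
    raise AssertionError("no reply received for correlation id " + corr_id)

if __name__ == "__main__":
    server = HTTPServer(("localhost", 8000), OrderServiceStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    test_order_confirmation()
    print("order confirmation test passed")

In practice the polling, correlation and reporting plumbing would usually come from a commercial or open-source SOA testing tool rather than being hand-rolled, but the shape of the test - build a message, send it, wait for and assert on the reply - stays the same.
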
When adopting SOA, testing teams will need to evolve their strategy regarding how they evaluate and test.  New methodology and technology will need to be incorporated to address the points above.


Saturday, April 25, 2009

7th Australian Test Managers Forum

The 7th Australian Test Managers Forum concluded on Friday 24 April.


Numbers were down on previous years, which have always been a sellout 5 or 6 weeks before the event.  This was clearly a sign of the current economic conditions, with frequent attendees citing tight travel, training and conference budgets, as well as a reluctance among staff to be out of the office in uncertain times.  However, we had pretty close to the same number of companies represented, just fewer multiple attendees.  Nevertheless, the interaction was the same as in the past, and a rewarding time was had by all.

Particularly enjoyable for me each year is the ideas exchange with people who are equally passionate.  What I like best is the information interchange and sharing of what has and hasn't worked for each other.


The most critical challenges this year (from the participant challenges survey) were:
  1. Requirements - still number 1 from last year.  We often experience poor quality requirements leading into testing, and no doubt the same is happening for development.  Testing needs to work more in the requirements phase to establish clear expectations of what is being tested.  Similarly, our role in this phase means that we test the requirements specifications and find defects earlier.
  2. Return-On-Investment: the global economic crisis is putting more pressure on us to demonstrate business value and to justify costs.  With ROI, if we don't show the "R" (the return to business), all they see is the "I" (investment) as a "C" (cost).
  3. Environments: with the introduction of larger integrations and technologies such as SOA, environment management is becoming more complex and downtime wastes testing effort.  While virtualisation (like VMWare and others) is changing test environment management significantly, there are still significant challenges in specifying and controlling testing environments.
  4. Early involvement: the biggest cost savings that testing can make will come from working earlier in the lifecycle.  Yet it is a challenge for test teams to gain engagement in early phases.  We need to use test results to show defects coming out of early phases, such as requirements bugs, then offer more cost-effective ways that testing can find these bugs, e.g. through requirements and specification evaluation (such as inspections, modelling, walkthroughs).
  5. People: while the pressure has come off staffing with the economic downturn, selecting the right candidates and having succession plans for existing staff leaving remain critical issues.  Here a focus on skills definition, evaluation and staff development techniques is required.
  6. Scheduling: the schedule squeeze on testing at the end of the lifecycle remains.  Development still slips, yet the release date doesn't.  Many organisations discussed the greater move towards agile, which has its own challenges for testing, to help alleviate this.  Also discussed were techniques such as sourcing strategies, flexible resourcing, and risk-based prioritisation of testing.
There were other challenges discussed, but the survey showed these as less critical to the audience as a whole:
  • Resourcing
  • Estimation
  • Automation
  • Process Improvement
  • Methodology
  • Coordination
  • Stakeholder Engagement
  • Governance
  • Agile
  • Release Management
Much of the information, surveys and presentations will be collated and progressively added to the website at http://www.kjross.com.au/page/News_and_Events/Australian_Test_Managers_Forum/Test_Managers_Forum_2009_