Monday, November 17, 2014

Testing Contracts In Integration

For continuous integration and delivery, test automation is key to speed, repeatability and reliability.  We all dream of a testing nirvana with push-button automation that gives immediate feedback the moment we have broken the build.

Reality is different though.  Many test automation efforts take a system-wide approach, which leaves you with lots of unwieldy test environment and test data issues to manage, and you spend a lot of time trying to get all of this under control.  Before long you find that it takes a long time to get a stable environment, automation ends up focused on an old regression set, and there is no immediate feedback to the development team because it takes a specialist squad to decipher the results and clean up the false negatives.

In more recent times, there has been greater emphasis on "shift left".  Get more of the automation at the unit level, make developers responsible for automation, make it part of the build process.  In my view, this is all heading in the right direction.

Since getting involved in development in my own startup, I have been reflecting on my own experiences in this space.  I am spending more of my time writing automated unit tests, as opposed to automated system tests.  I am still not quite sure that Test Driven Development (TDD) delivers as much ROI as believed, but I feel my unit tests are adding value and speeding up my delivery.

A big focus of my unit tests recently has been using Test Doubles to mock out underlying object behaviour.  This leads to interesting object design: dependencies are injected so they can be replaced with mocks, which would otherwise drag external influences into the test.  The mocks give you good control over the piece of code you are testing.  But then there remains the question: when I take away the mock and integrate with the real implementation, how do I know I don't have defects?
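
To make this concrete, here is a minimal sketch of the kind of unit test I mean, written in PHP with PHPUnit since that is my platform.  The PaymentGateway interface, CheckoutService class and the amounts are all hypothetical; the point is that the collaborator is injected and then replaced with a mock in the test.

```php
<?php
use PHPUnit\Framework\TestCase;

// Hypothetical collaborator: in production this would talk to an external service.
interface PaymentGateway
{
    public function charge(int $amountInCents): bool;
}

// Class under test: the gateway is injected, so a test double can stand in for it.
class CheckoutService
{
    private $gateway;

    public function __construct(PaymentGateway $gateway)
    {
        $this->gateway = $gateway;
    }

    public function placeOrder(int $amountInCents): string
    {
        return $this->gateway->charge($amountInCents) ? 'confirmed' : 'declined';
    }
}

class CheckoutServiceTest extends TestCase
{
    public function testOrderIsConfirmedWhenChargeSucceeds()
    {
        // The mock gives full control over the collaborator's behaviour,
        // with no network calls or external state involved.
        $gateway = $this->createMock(PaymentGateway::class);
        $gateway->expects($this->once())
                ->method('charge')
                ->with(2500)
                ->willReturn(true);

        $service = new CheckoutService($gateway);

        $this->assertSame('confirmed', $service->placeOrder(2500));
    }
}
```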

My thoughts now are turning to the idea that we need tests of the actual behaviour that match up with the behaviour programmed into the mock.  Ultimately the expected behaviour in the mock must match the actual behaviour of the implementation.  We need assertions on both sides to match up.  We need to ensure the contract behaviour of the mock and the implementation are equivalent.
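
As a sketch of what I mean, staying with the hypothetical PaymentGateway interface from above, the provider-side test below asserts the same behaviour that the consumer-side mock was programmed with: a charge of 2500 cents succeeds.  The RealPaymentGateway class and sandbox configuration are placeholders for whatever the actual implementation is.

```php
<?php
use PHPUnit\Framework\TestCase;

// Contract test against the real implementation (names are illustrative).
// It must assert the same behaviour the consumer-side mock assumed:
// charge(2500) returns true.
class PaymentGatewayContractTest extends TestCase
{
    public function testChargeOfValidAmountSucceeds()
    {
        // Run against a sandbox or stubbed backend rather than production.
        $gateway = new RealPaymentGateway(getenv('PAYMENT_SANDBOX_KEY'));

        $this->assertTrue($gateway->charge(2500));
    }
}
```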

If we aren't satisfied that we have this covered, we will need to invest in integration testing where we combine the two components.  But automation becomes much more challenging in that domain, coverage is much more difficult, and manual testing is frustrating.

Check out J.B. Rainsberger's Integrated Tests Are A Scam for a good overview of the challenge and its resolution.  Unfortunately "jbrains" keeps the secret sauce of the solution to himself, but as you can see from the presentation, collaboration/contract test automation is key to reducing the integration challenge.

Some months ago I was at a presentation at SEEK given by REA, where they discussed Pact, which provides consumer-driven contract testing.  Pact addresses the challenge of ensuring that when mocks are used to help test a consumer of an interface or service, tests of the actual interface or service are also created so that the contract is preserved.  This is what I need, but Pact is in Ruby, so now I am searching for a similar process for PHP without learning a new platform - doh!

In the meantime I will provide traceability between my mocks and my contract tests.  I will search out other frameworks which will assist me in synchronising the mocks and contract tests.
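
One low-tech way to keep that traceability, sketched below with the same hypothetical PaymentGateway example, is to put the expected interactions into a shared contract fixture: consumer tests configure their mocks from it, and a provider test replays every interaction against the real implementation.  This is a hand-rolled, poor man's Pact; all names here are illustrative.

```php
<?php
// contracts/payment_gateway_contract.php
// The shared "contract": every interaction the consumer's mocks rely on.
return [
    'charge succeeds for a valid amount' => [
        'method' => 'charge',
        'args'   => [2500],
        'result' => true,
    ],
    'charge is declined for a zero amount' => [
        'method' => 'charge',
        'args'   => [0],
        'result' => false,
    ],
];
```

```php
<?php
use PHPUnit\Framework\TestCase;

// Provider side: replay every interaction in the shared contract
// against the real implementation.
class PaymentGatewayContractReplayTest extends TestCase
{
    public function testRealGatewayHonoursTheContract()
    {
        $contract = require __DIR__ . '/contracts/payment_gateway_contract.php';
        $gateway  = new RealPaymentGateway(getenv('PAYMENT_SANDBOX_KEY'));

        foreach ($contract as $case => $interaction) {
            $actual = call_user_func_array(
                [$gateway, $interaction['method']],
                $interaction['args']
            );
            $this->assertSame($interaction['result'], $actual, $case);
        }
    }
}
```

The consumer-side tests then build their mock expectations from the same fixture, so if the contract changes, both sides change together.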

My experiences in this space are leading me to believe that this approach can address the larger scale integration challenge.  If we use it to test service interfaces between systems, we reduce the accumulation of system integration testing effort.  That benefits both product quality and delivery schedules: less time waiting for system delivery schedules to align, and more ability to get each component right in isolation before integration.  No doubt we will have some challenges agreeing the service contracts between systems, getting communication flowing between teams, and so on.

Perhaps, however, our mocks and unit tests are the basis for specifying that behaviour and defining the contract agreement?

Tuesday, September 23, 2014

Independent Testers Are Like Parents Of Drug Addicts



Ever heard of "codependence"?

Codependence is a type of dysfunctional helping relationship where one person supports or enables another person’s addiction, poor mental health, immaturity, irresponsibility, or under-achievement [Wikipedia].

It is like being a parent of a drug addict. You are trying to help them beat the addiction, but in helping you can actually feed the addiction. It can be destructive for both parties.

Having independent software testing often leads into the same trap of codependence.

Back in the "good old days", software engineers tested their own code; there weren't dedicated software testers, and sometimes as a software engineer you got assigned to testing. In my first job in avionics systems you would spend most of your time testing, but you were still a software engineer.

Then in the late 90s, influenced by Y2K, independent testing really got traction. Independent testers look at things differently to developers, and they provide a quality gate. All these things are true and beneficial.

But then software engineers stopped testing! 

That is the independent tester's job. They own quality. They will find the bugs because they are good at it. The nit-picking detail checking is better done by anybody other than me!

The result is that development no longer owns quality. Testers find lots of defects that should have been found more efficiently earlier. Buckets of time and schedule are lost as testing is passed over the fence and down the line. Developers are not told how to improve quality and avoid putting the bugs in in the first place. And we just beat them over the head with lots of public failures, which frustrates everyone and puts developers and testers in conflict.

There are seeds of change though. Many agile and continuous quality approaches are putting ownership of testing back with development and delivery teams, rather than sitting only with testing. Developers are taking on greater testing responsibilities. Testing is becoming more of a coach and a safety net, providing input into how development processes can be improved internally to deliver software faster and more reliably.

Last week I reviewed two great pieces of information which made me reflect more on correcting this codependence; I highly recommend you check them out:
So let's take on a role where we both engage in quality. Developers own quality. Testers provide improvement feedback and act as a safety net. But let's not feed the addiction to poor quality by passing the buck.

And finally, a plug for our upcoming conference, iqnite Australia, where we are leading the thinking to reshape how organisations approach their testing. In 2014 we have a big focus on DevOps, Agile and Continuous Quality, including the ideas discussed here.

Saturday, May 31, 2014

Feature Toggles

In Continuous Delivery, Feature Toggles provide a useful way to turn behaviour within your application on or off.  They provide tremendous value to QA by allowing progressive deployment of new features, enabling early customer testing while minimising the impact of failure.

While most QA and test managers want to delay release until they are certain there are no major defects in the system, this can significantly stymie delivery and inhibit innovation.  Feature Toggles allow new features to be deployed but only exposed to limited users.  In some cases this may be named users, internal user groups, or a limited slice or percentage of the customer base.  In this way, defects in new features have limited exposure and impact on the user base.  This enables new features to be tested on production environments, and also allows feedback from alpha and beta users on the effectiveness of new features.

Simplistically, a Feature Toggle is an if statement surrounding a new feature that determines whether the feature is exposed to the user or not (a minimal sketch follows the list below).  More powerful Feature Toggle libraries can use various aspects to determine whether the feature is exposed, such as:

  • user credentials and groups
  • employees
  • geo-location
  • browser user-agent
  • percentage of users
  • server load
  • etc.
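
As a minimal sketch (the names and configuration shape here are my own invention, not from any particular library), a toggle check might combine a named-user allow list with a percentage rollout:

```php
<?php
// Minimal feature toggle sketch: named users plus a percentage rollout.
class FeatureToggles
{
    private $config;

    public function __construct(array $config)
    {
        $this->config = $config;
    }

    public function isEnabled(string $feature, string $userId): bool
    {
        if (!isset($this->config[$feature])) {
            return false;
        }
        $toggle = $this->config[$feature];

        // Named users (e.g. internal staff, alpha customers) always see the feature.
        if (in_array($userId, $toggle['allow_users'] ?? [], true)) {
            return true;
        }

        // Percentage rollout: hash the user id into a 0-99 bucket so each
        // user gets a stable decision between visits.
        $bucket = abs(crc32($userId)) % 100;
        return $bucket < ($toggle['percentage'] ?? 0);
    }
}

// Usage: the new feature sits behind a simple if statement.
$toggles = new FeatureToggles([
    'new_checkout' => ['allow_users' => ['alice@example.com'], 'percentage' => 10],
]);

$currentUserId = 'bob@example.com';

if ($toggles->isEnabled('new_checkout', $currentUserId)) {
    // render the new checkout flow
} else {
    // fall back to the existing flow
}
```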

Enabling features earlier in a controlled way speeds up feedback from customers.  For startups, and for applications requiring innovation, the greater risk is often that we are building the wrong product, which frequently matters more than whether we are building the product right.  In these situations we want to prioritise testing with real customers and getting feedback on the effectiveness of the features; this can be a higher priority than whether there are defects in the feature construction itself.  Getting user feedback early allows the design of the feature to pivot, and we can return to other system tests once the feature's user value has been fully realised.

Feature Toggles can be key to the A/B Testing process.  Toggles can partition features according to the A/B or multivariate segments, and performance measurements can then be compared between users with the feature exposed and users with it hidden.

Adopting Feature Toggles has its gotchas, which must be carefully managed.  At some stage, successful features should be mainlined and the toggle taken out of the code; where a feature is unsuccessful and toggled off, it should be removed from the code base.

Testing feature toggles also requires care.  Integration tests will need to flick toggles on and off to exercise the feature both exposed and hidden within tests.
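
For example, a test can construct the toggle configuration directly with the feature forced fully on and fully off, and assert the behaviour in both states (again using the hypothetical FeatureToggles class sketched above):

```php
<?php
use PHPUnit\Framework\TestCase;

// Exercise both toggle states so the feature is checked exposed and hidden.
class NewCheckoutToggleTest extends TestCase
{
    public function testFeatureIsExposedOnlyWhenToggledOn()
    {
        $on  = new FeatureToggles(['new_checkout' => ['percentage' => 100]]);
        $off = new FeatureToggles(['new_checkout' => ['percentage' => 0]]);

        $this->assertTrue($on->isEnabled('new_checkout', 'any-user'));
        $this->assertFalse($off->isEnabled('new_checkout', 'any-user'));
    }
}
```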

Further reading: