If my work were a person, I would want it to be proud, stubborn and lazy.
Mr. Work should be proud to be reliable, fast and robust. He should be stubborn and think he is always right, at least until something or someone proves otherwise. Mr. Work should then act as fast as possible so that he is once again always right.
Most importantly, he should be lazy like no one else. If Mr. Work solves a task, I want him to do so exactly once.
I do not want Mr. Work to reinvent the wheel. If he needs a wheel and one already exists, he should be lazy and use it. If no one has invented the “wheel” yet, Mr. Work should take on that responsibility with pride.
If two tasks involve similar subtasks, Mr. Work should automate the subtasks and let the automation take care of the repetition.
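To make that concrete, here is a minimal, hypothetical sketch (all names are invented for illustration): two small tasks share a “load and clean the rows” subtask, so that subtask is written exactly once and both tasks reuse it.

```python
# A minimal, hypothetical sketch (all names invented): two tasks share
# the same subtask, so that subtask is automated exactly once.

def load_clean_rows(path):
    # The shared subtask, written once and reused by both tasks.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def task_count_rows(path):
    return len(load_clean_rows(path))

def task_print_rows(path):
    for row in load_clean_rows(path):
        print(row)
```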
Some might say that I’m a bit like Mr. Work, and I know the people I’ve managed in different projects will say I try to conduct my work accordingly. I know this because I set up rules and guidelines to ensure those qualities.
One of the things I try to enforce to ensure reliability and robustness is making sure that whatever code I write, or ask someone to write, fails fast.
To me, failing fast includes the following (a small sketch follows the list):
- Failing close to the source of the problem
- Failing in the same development cycle/iteration as it was created
- Failing in the first possible project phase
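As a small illustration of the first point (the function and values are hypothetical, not from any real project), validating input where it enters the system makes the code fail close to the source instead of letting a bad value surface much later:

```python
# Hypothetical sketch: validate input at the boundary so the failure
# happens close to the source of the problem.

def set_discount(percent):
    # Fail fast: reject a bad value immediately, where it enters.
    if not 0 <= percent <= 100:
        raise ValueError(f"discount must be between 0 and 100, got {percent!r}")
    return percent / 100

# Without the check, set_discount(250) would quietly return 2.5 and the
# error would only surface much later, far from where it was created.
```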
When I think of phases in a development iteration, I most often think of the V-model. The V-model not only describes the possible phases, it also gives you a way to estimate the cost of not failing fast.
Each step you take in the model after creating an error, without realizing it, makes the error roughly 10 times as expensive to fix. That is, if you create an error in the specifications that would have taken five minutes to correct, but you do not spot it until you are done coding, it will take you about 3.5 days on average to correct.
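Spelled out (assuming the error passes through three V-model steps between specification and finished code, and counting calendar hours), the arithmetic looks like this:

```python
# One way to arrive at the ~3.5 days: three steps at a factor of 10 each.
minutes = 5 * 10 ** 3   # 5 min * 10 * 10 * 10 = 5000 minutes
hours = minutes / 60    # ~83 hours
days = hours / 24       # ~3.5 (calendar) days
print(f"{minutes} min = {hours:.0f} h = {days:.1f} days")
```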
That is a serious amount of work time wasted. So one of the things I like to do is to test documents. The usual specification is best described as a list of functions the user guesses the system should be able to perform, with very little on which tasks the users wish to solve and how they would like to solve them.
I personally do not accept this kind of specification, for several reasons:
- They are usually not very specific, and hence not specifications at all. I’ve had such a document that said “The system should include a variety of functions, emails and such”. I never learned whether they wanted to be able to write and send emails, wanted a webmail client, a web server, or an email-based service architecture.
- They are close to impossible to test. For some reason I’ve had several customers who thought that was a good idea. As they said, it made for a more “Agile” project. What?! It just makes it easier for the customer to change their minds, and harder to complete the project.
- I don’t think the user should guess at how best to make a system solve certain tasks. The user knows a lot about the tasks they need solved. They should describe those tasks and have faith that IT professionals know a lot about making systems that solve specified tasks.
Use cases and user acceptance tests to the rescue. With UCs and UATs you can’t just lean back; you still have to work, but at least now we have a proper specification of which tasks the users want to solve and how they want to solve them. We can use these documents as the basis of our work, and we have a way of testing the feasibility of those cases before writing a single line of code.
Try to complete each UAT based on one or more UCs.
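To show what I mean by basing a UAT on a UC, here is a purely hypothetical sketch (the UC number, the steps and the tiny fake system are all invented for illustration): the test walks through the steps of the use case, so the two documents can be checked against each other before any real code exists.

```python
# Hypothetical sketch: a UAT written as an automated test whose steps
# mirror a specific UC (an imagined "UC-7: place order").

class FakeOrderSystem:
    """Stand-in for the system under test, just enough to run the UAT."""
    def __init__(self):
        self.user, self.basket = None, []
    def log_in(self, name):
        self.user = name
    def add_to_basket(self, item, qty):
        self.basket.append((item, qty))
    def check_out(self):
        return sum(qty for _, qty in self.basket)

def test_uc7_place_order():
    system = FakeOrderSystem()
    system.log_in("alice")             # UC-7, step 1
    system.add_to_basket("widget", 2)  # UC-7, step 2
    total = system.check_out()         # UC-7, step 3: expected outcome
    assert total == 2

test_uc7_place_order()
```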
I’ve seen systems with UATs that had absolutely no corresponding UCs, and I mean no UCs that came even close. Often it turns out that the first version of the UCs is poorly written.
I’ve been working on a system with a UC describing how to export data from the system. Everything up until “press the export button” was described in detail.
What should happen when the button was pressed was left out. Why? Because it was obvious to all parties what should happen. As it turned out later, unfortunately, the parties did not agree on what was so obvious.
In the project there were no UATs to begin with, so when the first version was delivered, the data was exported to an Excel spreadsheet. However, that was not what the users wanted. They actually just wanted to be able to print the data, “exporting” it to paper.
A serious amount of code had been written to make exporting to spreadsheets (and other office formats) possible, and none of it was part of what the users wanted.
The rate of this kind of error has fallen dramatically in that project since we introduced UATs, giving ourselves the possibility of testing the specifications against how they will eventually be tested.
In the next part I’ll blog about failing close to the source.