Archive for the ‘Testing’ Category

Following a discussion on testing and architecture I thought I’d write a post. The statement was: when architecture is not informed by tests, a mess ensues. That’s of course nonsense. Versailles was never tested but is still recognized for its architecture.

The statement got me started. Rhetorically it’s clever. It uses an old trick most salesmen have under their skin: associate the item being sold (in this case testing) with something of objectively positive value (in this case information). The statement is of course also logically incorrect, but marketing never worries about mathematical correctness as long as the statement is either mathematically incomplete or ambiguous. However, the statement was not taken from a marketing campaign but from a discussion of engineering practice between Jim “Cope” Coplien and Uncle Bob (the latter wrote it), and in that context it’s incorrect. The key is not the test but the information. How the information came to be is irrelevant to the value of the information. So a more correct version of the statement would be “without information a mess would ensue”, and that is of course correct but also tautological.

The real value of information is in its use. If you don’t use the information but disregard it altogether, it has no value. Since testing is retroactive, you get the information after you are done with your work, and you can’t improve anything retroactively. Unless of course you are the secret inventor of the time machine, in which case “retroactively” becomes entirely fuzzy.

If you do not intend to create another version, the information you gained from testing has no value with respect to the architecture. So using tests as a means to produce value in the context of architecture requires an upfront commitment to produce at least one more version and to take the cost of a potential re-implementation. If you are not making this commitment, the value you gain from the information produced by your tests might be zero.

In short you have a cost to acquire some information, the value of which is potentially zero.

It’s time to revisit the statement that got it all started and to try to formulate it as something more helpful than a tautology.

“If you do not invest in the acquisition of information your architecture will become messy” 

You can try to assess the cost of acquiring information using different approaches, and then choose the one that yields the most valuable information at the lowest cost.

There are a lot of tools you can use to acquire information. One such tool is prototyping (or even pretotyping). Prototyping is the act of building something you know doesn’t work, and then building another version that does. In other words, prototyping is when you commit to implementing a version, learn from it by (user) testing, and then build a new version. Might that be the best approach? At some stage, for some projects, sure. Always? Of course not. Prototyping and pretotyping are good for figuring out what you want to build. So if you do not know what you (or your business) want, then use the appropriate tool. In innovative shops pretotyping might be the way to go. When you have figured out what to build, you then need to figure out how to build it. The act of figuring out how to solve a concrete task is called analysis. Analysis is good at producing information about how to do something in the best possible way.

Bottom line: there’s no silver bullet. You will always have to think and choose the right tool for the job.

Fine print: you can’t improve anything with testing. You can improve the next version with what you learn by testing, but only if there’s a next version.


…and I’ll have you writing unit tests until the next paradigm shift.

Today I was reading a somewhat oldish article on why not to use singletons. It made me laugh. I totally agree with a lot of his points, and even if you don’t agree it’s great fun.

It made me laugh even more because I’ve been on a project where I’m sure they at one point had a design meeting that went like this:

Manager: “We’re going to model the world”

Dev. Lead: “Cool and there’s only one world so we need a singleton for that”

Manager: “We need to be able to model Denmark, UK and USA”

Dev. Lead: “Ok there’s only one Denmark so we’ll need a Denmark singleton”

Manager: “We need to model Kastrup airport in Denmark, LAX in the states and Gatwick in UK”

Dev. Lead: “Well there’s only one of each of them so we’ll create a singleton for each”

Basically every single class in the design ended up being a singleton. The Denmark singleton had a method called getKastrup() which in turn returned the Kastrup Airport singleton, and the US singleton had a similar method called getLax().

My first line of questions went something like this:

Q:  “Why don’t you have a city base class with a getAirport method?”

A: “Well making a getKastrup() method on the US singleton returning the LAX singleton really doesn’t make any sense”

(Me thinking: “I absolutely agree, but the US singleton itself doesn’t make any sense either, and you still have that one and others”)

Q: “Well, what if all the countries were actually of the same type, with a property holding the name?”

A: “No, no, we can’t do that. Our development manual states we should use singletons whenever we only need one instance of an object, and we only need one of each of the UK, US and DK objects”

Q: “Well isn’t that 3 instances of the same type?”

A: “No, not at all. The US has a method called getLax(), the UK has getGatwick() and the DK has getKastrup(), so if they were one type it would need 3 methods, and for each instance only one of them would be valid. That would be bad design”

Q: “What about just having a method called getAirport() that could return the correct airport based on arguments passed on construction of the object?”

A: “Ah, see, that would be hard-to-read code. There’s really no city called Airport, and the object we’re returning is named after the city, so you would expect to get an airport, but whether it’s LAX, Kastrup or Gatwick you can’t see from the getAirport method name”

This went on and on. The really funny thing was that for every decision they had an argument for having chosen as they had, and almost every argument sounded right but made no sense whatsoever.
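For what it’s worth, the design my questions were driving at could be sketched roughly like this (a Java sketch; the class and all names are mine, not the project’s):

```java
// One Country class instead of a singleton per country; the airport is
// configured at construction time instead of hard-coded per class.
public class Country {
    private final String name;
    private final String airport;

    public Country(String name, String airport) {
        this.name = name;
        this.airport = airport;
    }

    public String getName() { return name; }

    // One getAirport() replaces getKastrup(), getLax() and getGatwick().
    public String getAirport() { return airport; }

    public static void main(String[] args) {
        Country dk = new Country("Denmark", "Kastrup");
        Country us = new Country("USA", "LAX");
        Country uk = new Country("UK", "Gatwick");
        System.out.println(dk.getName() + ": " + dk.getAirport()); // Denmark: Kastrup
    }
}
```

Three instances of one type, no singletons, and the method name no longer has to encode which airport you get.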

I ended up having them write unit tests for all their singleton-based code. They had some test code already, but that was mainly sunny-day tests, so I had them write code to actually find errors. It didn’t take them long to realize that singleton-based code is not meant for unit testing. No need for arguments against singletons afterwards. Every single person that had had to write tests for singleton-based code loathed them afterwards, and all of them came with suggestions or requests for having the code changed to something else.

As an aside, I use strategy a lot, so reading the takes on strategy in the above-mentioned article was just as fun. I totally agree it’s functional programming, and I like it 🙂 It’s not very OOP, but it’s a nice way of mixing the two.

I will be paying more attention to when I’m using that pattern in the future.

Writing robust code

Posted: November 14, 2008 in Testing, Thoughts on development

A little more than a week ago I gave a talk on writing robust code to the devs of the team I’m currently working for. I think I learned more from that talk than the listeners did.

I’m used to highly object-oriented people and hadn’t realized that most of the team had never developed OO style, so we had a lot of misunderstandings and strange looks, but slowly we got closer and closer to the point I was trying to make, and in the end we had a plan for the next session.

Having a second session gave me the option of integrating the knowledge I had gained from the first session and rephrasing the goal in non-OO terms. We still debated, but ended up with a very easy-to-remember conclusion: “Only implement the needed functionality”

That might seem very simple, but take a look at your own code and see if you can in any way provoke the code to go down an unexpected path. If you have a switch modelling different states in your application, with very high certainty the answer is yes.

As one of the listeners wrongly thought, I’m not advocating never to use switches, but I am arguing that they make lousy state machines.

Take the switch:

switch (state) {
    case StepOne:
        break;
    case StepTwo:
        break;
}
There’s absolutely nothing that enforces that StepOne is handled before StepTwo is even valid. If that’s the intention, fine, no worries.

In our particular case we had a 4-case switch called 3 consecutive times, giving 4³ = 64 possible paths through the code, but only 2 of them were actually valid.

Changing the implementation from a switch to a simple state machine reduced the possible paths through the system to 2. The state machine was implemented with a simple class:

State
    State Next;
    IHandler Handler;
    void Enter();
    void Leave();

IHandler
    void Execute();

That way it was very easy to link the states to give us the only two ways it was actually valid to traverse our potentially 64-path execution graph.
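A runnable version of that idea might look like this (a Java sketch; the original was C#-style pseudocode, and names like StateMachineSketch are mine). Each state knows its single legal successor, so the only executable path is the one you wire up:

```java
// Minimal linked-state machine: a state runs its handler and hands
// control to the one state that may legally follow it.
interface Handler {
    void execute();
}

class State {
    final String name;
    final Handler handler;
    State next; // set when the chain is wired up; null ends the chain

    State(String name, Handler handler) {
        this.name = name;
        this.handler = handler;
    }

    // Run this state's work and return the next state, if any.
    State enter() {
        handler.execute();
        return next;
    }
}

public class StateMachineSketch {
    // Walks the chain from start to the end and records the path taken.
    public static String run(State start) {
        StringBuilder path = new StringBuilder();
        for (State s = start; s != null; s = s.enter()) {
            path.append(s.name).append(' ');
        }
        return path.toString().trim();
    }

    public static void main(String[] args) {
        State two = new State("StepTwo", () -> {});
        State one = new State("StepOne", () -> {});
        one.next = two; // StepOne can only ever be followed by StepTwo
        System.out.println(run(one)); // StepOne StepTwo
    }
}
```

There is no way to reach StepTwo without going through StepOne first, which is exactly what the switch could not enforce.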

The neat thing about the solution comes when you start testing it.
If you have an undetermined number of paths, you run the risk that one of the paths you hadn’t realized existed fails.

You can write code to handle those situations, but if you forget to do so, or just didn’t cover everything of what you didn’t know existed, it’s very unlikely you will spot it.

Whereas if you only implement what you need, you will not have to worry about all those cases you don’t even know about; instead, if you mess up and forget something, you will find it in your tests every time. You simply can’t test all the sunny-day scenarios without realizing that the functionality for one of those scenarios isn’t implemented.

I’ve recently had a period where I didn’t do much else than write unit tests. Instead of just writing them like a robot, I tried breaking as much of the code as possible while getting our coverage percentage up.

There was one construct that kept coming back: implicit comparisons. By implicit comparison I basically mean a comparison using a different comparison operator than ‘==’, or where at least one side of the comparison is an expression for which not all possible values are meaningful.
x < 8 // implicit

"ok" == IsUserLoggedIn() // implicit
// Not all possible values of IsUserLoggedIn are meaningful.
// "I'm Santa Claus", for one, is probably not meaningful.
// This also serves as an example of why not to represent state as strings.

Sometimes the implicitness is hard to spot, and sometimes the result of not spotting it might make the system rather vulnerable.
Let’s say that we have a system with a permissions check using an external method call, GetPermissions().
Let’s assume that the possible values for PermissionFlags are None, Read, Write and Full (integer values 1, 2, 4, 8), and that GetPermissions returns a PermissionFlags value.
The implicit comparison could then be similar to:
var neededPermission = PermissionFlags.Full;
if (neededPermission == (GetPermissions(currentUser) & neededPermission)) {
    // Do something that requires PermissionFlags.Full permissions
}

The above code is pretty hard to test even though it’s only a couple of lines, mostly because the “ugly” cases might not be easily spotted.
If GetPermissions behaves nicely it should only return the even values from 2 to 14, or 1, but it’s external, so we have no way of ensuring that it is well-behaved.

For odd values the comparison might still work, as long as it’s OK to ignore that the None bit is set.
A value of PermissionFlags.None | PermissionFlags.Full is rather ambiguous, but might be meaningful according to the specifications.
What happens, then, if GetPermissions, when passed an unknown user, returns -1 as an error code, expecting the caller to handle the undefined value?
The above comparison would then work fine for all known users, but might return true for all unknown users: in two’s complement representation -1 has all bits set, so the mask can never filter it out.
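You can see the hole by running the masked comparison against the error code. A small Java sketch (my reconstruction, using the integer flag values from above):

```java
// Demonstrates how the implicit comparison accepts the -1 error code:
// -1 is all bits set, so (-1 & FULL) == FULL for any flag value.
public class PermissionCheck {
    static final int NONE = 1, READ = 2, WRITE = 4, FULL = 8;

    // Mimics the implicit comparison from the snippet above.
    static boolean hasFull(int returnedPermissions) {
        return FULL == (returnedPermissions & FULL);
    }

    public static void main(String[] args) {
        System.out.println(hasFull(FULL));         // true, as intended
        System.out.println(hasFull(READ | WRITE)); // false, as intended
        System.out.println(hasFull(-1));           // true! the error code grants full access
    }
}
```

An explicit check that -1 (or anything outside 1..15) is rejected before the mask is applied closes the hole.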

My point with this example is twofold: always use explicit comparisons (especially in security code), and always return a well-defined set of values; or, if the method is external, always validate the returned values before relying on them being within certain boundaries.

Pitfalls in testing

Posted: June 13, 2008 in c#, Testing

Yesterday I wrote a post on default(T) not always being valid; that realization made us change the signature of the mentioned method.

Working on that rather simple method made me once again think about testing. We have asserts like:

Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse("Valid"));
Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse((int)Enumeration.Valid));
Assert.AreEqual(default(Enumeration), Enum<Enumeration>.Parse("not valid"));

This gives 100% statement coverage, and it might look as if it gives 100% branch coverage as well, which unfortunately is not true. You don’t necessarily need new code to have a new branch:

object obj = "Valid";
Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse(obj));

A more common example of a hidden branch is an if with no else. Even though you have not explicitly written an else clause, you should test that branch nonetheless.

The code being tested might look like this:

class Guard {
    public static void ArgumentNull(object argument, string name) {
        if (argument == null)
            throw new ArgumentNullException(name);
    }
}

We might then have an assert like:

Assert.Throws<ArgumentNullException>(() => Guard.ArgumentNull(null, "arg"));

We have 100% statement coverage, but the quality is not very high. At some point we want to log the call stack whenever we get a call with a null argument. However, the new implementation has an error which is not caught, due to the lack of testing of the “invisible” branch:

class Guard {
    public static void ArgumentNull(object argument, string name) {
        if (argument == null)
            Logger.Log(GetCallStack().ToString());
        // Bug: the braces are missing, so the throw below is no longer
        // guarded by the if; every call now throws.
        throw new ArgumentNullException(name);
    }
}

We still have 100% statement coverage and our test still succeeds, but unfortunately any call to Guard.ArgumentNull now throws an ArgumentNullException, no matter whether the argument is null or not.
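The missing test is the one for the invisible branch: a call with a non-null argument must do nothing at all. A minimal sketch in Java (the Guard above is C#; this translation and its names are mine):

```java
// Testing the "invisible" else branch of a guard clause: a non-null
// argument must pass through silently. This is the test that catches
// the missing-braces bug shown above.
public class GuardTest {
    // Java translation of the correct Guard.ArgumentNull.
    static void argumentNull(Object argument, String name) {
        if (argument == null)
            throw new IllegalArgumentException(name);
    }

    // Exercises the branch with no explicit code in it.
    static boolean nonNullPassesSilently() {
        try {
            argumentNull("not null", "arg");
            return true;  // the invisible branch: nothing should happen
        } catch (IllegalArgumentException e) {
            return false; // the buggy implementation would land here
        }
    }

    public static void main(String[] args) {
        System.out.println(nonNullPassesSilently()); // true
    }
}
```

With both tests in place, reintroducing the logging bug makes the non-null test fail immediately.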

When in doubt whether more test cases are needed, make a cyclomatic complexity analysis of the code being tested. The number of tests needed is in most cases proportional to the cyclomatic complexity of the code being tested.

For more information on how to apply CCA as a quality-measuring mechanism for unit tests, take a look at this blog. I do not agree with their actual algorithm for figuring out the number of tests needed, but the point of creating an algorithm based on CCA is well taken.

A rule of thumb says that you need 4 incorrect values for each correct value to test each decision point in your code.

Since CCA in essence is a measurement of the number of decision points in your code, I go for a higher number than 0.5 × CCA. What that constant should be depends on the project. In my current project the constant is between 1 and 2, depending on factors such as the source of the code (generated or written), the complexity (it’s not a linear relationship for us but exponential) and the severity of an error in the tested code (an error in the security code is a lot more severe than one in the part that does debug tracing).

If my work were a person, I would want it to be proud, stubborn and lazy.

Mr. Work should be proud to be reliable, fast and robust. He should be stubborn and think he is always right, at least until something or someone proves him otherwise. Mr. Work should then act as fast as possible to once again always be right.

Most importantly, he should be lazy like no one else. If Mr. Work solves a task, I want him to do that exactly once.
I do not want Mr. Work to reinvent the wheel. If he needs a wheel and it’s already there, he should be lazy and just use what’s already there. If no one has invented the “wheel” yet, Mr. Work should take on that responsibility with pride.

If there are two tasks that involve similar subtasks, Mr. Work should automate the subtasks and let the automation take care of the repetition.

Some might say that I’m a bit like Mr. Work, and I know the people I’ve managed on different projects will say I try to conduct my work accordingly. I know this because I set up rules and guidelines to ensure those qualities.

One of the things I try to enforce to ensure reliability and robustness is making sure that whatever code I write, or ask someone to write, should fail fast.

To me failing fast includes:

  • Failing close to the source of the problem
  • Failing in the same development cycle/iteration as it was created
  • Failing in the first possible project phase

When I think of phases in a development iteration, I most often think of the V-model. The V-model not only describes the possible phases, it also gives you a way to calculate the cost of not failing fast.

Each step you take in the model after creating an error, without realizing so, will make the error 10 times as expensive to fix. That is, if you create an error in the specifications that would have taken 5 minutes to correct, but you do not spot it until you are done coding, it will take you on average 3.5 days to correct it.
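Spelled out (a Java sketch; the assumption that three V-model steps separate specification from coding is mine, not stated above), the arithmetic looks like this:

```java
// Cost grows by a factor of 10 for every V-model step an error survives.
public class CostOfLateErrors {
    // Returns the cost in 24-hour days, given the original fix time in minutes.
    static double costInDays(double fixMinutes, int stepsMissed) {
        double totalMinutes = fixMinutes * Math.pow(10, stepsMissed);
        return totalMinutes / 60.0 / 24.0;
    }

    public static void main(String[] args) {
        // A 5-minute specification fix caught 3 steps later, at coding time:
        // 5 * 10^3 = 5000 minutes.
        System.out.printf("%.1f days%n", costInDays(5, 3)); // 3.5 days
    }
}
```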

That is a serious amount of work time wasted. So one of the things I like to do is to test documents. The usual specification is best described as a list of functions the user is guessing the system should be able to perform, with very little on which tasks the users wish to solve and how they would like to solve them.
I personally do not accept this kind of specification, for several reasons.

  • They are usually not very specific, and hence not specifications at all
    I’ve had such a document that said “The system should include a variety of functions, emails and such”. I never learned whether they wanted to be able to write and send emails, a webmail client, a web server or an email-based service architecture.
  • They are close to impossible to test
    For some reason I’ve had several customers that thought that a good idea. As they said, it made for a more “Agile” project. What?! It just makes it easier for the customer to change their minds, but harder to complete the project.
  • I don’t think the user should guess at how best to make a system solve certain tasks
    The user knows a lot about the tasks they need solved. They should describe them, and have faith that the IT professionals know a lot about making systems that solve specified tasks.

Use cases and user acceptance tests to the rescue. Having UCs and UATs you can’t just lean back, you still have to work, but at least now we have a proper specification of which tasks the user wants to solve and how they want to solve them. We can use these documents as the basis of our work, and we have a way of testing the feasibility of those cases before writing a single line of code.

Try to complete each UAT based on one or more UCs.
I’ve seen systems with UATs that had absolutely no corresponding UCs, and I mean no UCs that came even close. Often it turns out that the first version of the UCs is poorly written.
I’ve been working on a system with a UC describing how to export data from the system. Everything up until “press export button” was described in detail.

What should happen when the button was pressed was left out. Why? Because it was obvious to all parties what should happen. As it turned out later, the parties unfortunately did not agree on what was so obvious.

In the project there were no UATs to begin with, so when the first version was delivered, the data was exported to an Excel spreadsheet. However, that was not what the users wanted. They actually just wanted to be able to print the data, “exporting” it to paper.
A serious amount of code had been written to make exporting to spreadsheets (and other office formats) possible, and none of it was part of what the users wanted.

The rate of this kind of error has fallen dramatically in that project since we introduced UATs, giving ourselves the possibility of testing the specifications against how they will eventually be tested.

In the next part I’ll blog about failing close to the source.