…and I’ll have you writing unit tests until the next paradigm shift.

Today I was reading a somewhat oldish article on why not to use singletons. It made me laugh. I totally agree with a lot of the author’s points, and even if you don’t agree it’s great fun.

It made me laugh even more because I’ve been on a project where I’m sure they at one point had a design meeting that went like this:

Manager: “We’re going to model the world”

Dev. Lead: “Cool and there’s only one world so we need a singleton for that”

Manager: “We need to be able to model Denmark, UK and USA”

Dev. Lead: “Ok there’s only one Denmark so we’ll need a Denmark singleton”

Manager: “We need to model Kastrup airport in Denmark, LAX in the states and Gatwick in UK”

Dev. Lead: “Well there’s only one of each of them so we’ll create a singleton for each”

Basically every single class in the design ended up being a singleton. The Denmark singleton had a method called getKastrup(), which in turn returned the Kastrup Airport singleton, and the US singleton had a similar method called getLax().

My first line of questions went something like this:

Q:  “Why don’t you have a city base class with a getAirport method?”

A: “Well making a getKastrup() method on the US singleton returning the LAX singleton really doesn’t make any sense”

(Me thinking: “I absolutely agree, but the US singleton itself doesn’t make any sense either, and you still have that one and others”)

Q: “Well, what if all the countries were actually of the same type, with a property holding the name?”

A: “No, no, we can’t do that. Our development manual states we should use singletons whenever we only need one instance of an object, and we only need one of each of the UK, US and DK objects”

Q: “Well isn’t that 3 instances of the same type?”

A: “No, not at all. The US has a method called getLax(), the UK has a getGatwick() and the DK has a getKastrup(), so if they were to be one type it would need 3 methods, and for each instance only one of them would be valid. That would be bad design”

Q: “What about just having a method called getAirport() that could return the correct airport, based on arguments passed on construction of the object?”

A: “Ah, see, that would be hard-to-read code, since there’s really no city called Airport, and the object we’re returning is named after the city, so you would expect to get an airport but get either Lax, Kastrup or Gatwick, and you can’t see that from the getAirport method name”
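To make the suggestion concrete, here’s a minimal sketch of what I was fishing for (all names are mine, not from the actual project):

public class Airport
{
    public string Name { get; private set; }
    public Airport(string name) { Name = name; }
}

public class Country
{
    private readonly Airport airport;
    public string Name { get; private set; }

    public Country(string name, Airport airport)
    {
        Name = name;
        this.airport = airport;
    }

    // One method instead of getKastrup()/getLax()/getGatwick();
    // which airport you get is decided when the country is built.
    public Airport GetAirport() { return airport; }
}

// Three instances of one type instead of three singletons:
// var denmark = new Country("Denmark", new Airport("Kastrup"));
// var uk = new Country("UK", new Airport("Gatwick"));
// var us = new Country("US", new Airport("LAX"));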

This went on and on. The really funny thing was that for every decision they had an argument for why they had chosen as they had, and almost every argument sounded right but made no sense whatsoever.

I ended up having them write unit tests for all their singleton-based code. They had some test code already, but that was mainly sunny-day tests, so I had them write code to actually find errors. It didn’t take them long to realize that singleton-based code is not meant for unit testing. No need for arguments against singletons afterwards. Every single person that had had to write tests for singleton-based code loathed them afterwards, and all of them came with suggestions or requests for having the code changed to something else.
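For anyone who hasn’t felt that pain, here’s a contrived sketch (my example, not their code) of how a singleton leaks state between tests:

public class Settings
{
    public static readonly Settings Instance = new Settings();
    private Settings() { Locale = "en"; }
    public string Locale { get; set; }
}

// Test A mutates global state...
// [Test] public void FormatsDanishDates() { Settings.Instance.Locale = "da"; /* ... */ }
//
// ...and Test B silently inherits it if it runs second, so it passes
// or fails depending on test order. And there is no seam that lets
// you replace Settings.Instance with a fake.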

As an aside, I use the strategy pattern a lot, so reading the takes on strategy in the above-mentioned article was just as fun. I totally agree it’s functional programming, and I like it 🙂 It’s not really OOP, but a nice way of mixing the two.

I will be paying more attention to when I’m using that pattern in the future.

Being part of an embedded project team, I was originally looking forward to
a lot of complex technical challenges, but as time passed I realized, as
is often the case, that the success of a project is a lot more about
people, communication and motivation than about hardcore tech skills and
plain competencies.

To set the scene
The project team I’m part of consists of 18 people, of whom only a few had
done agile development before it was introduced halfway through this
project.

The main feedback the development group gets, and historically has gotten,
comes from the system test group.

That feedback could in the past be boiled down to “We’ve discovered X new
faults and the following Y faults block our progress” repeated a few times
a week whenever the situation changed.

Over, say, 14 time boxes where the predominant feedback has been “What
you’ve done is not good enough for us to continue with our system work”,
even the most hardcore developers I know would have suffered a blow or two
to their motivation. After all, we all want to hear “Good job” once in a
while.

The above description is a bit exaggerated, but looking at how people
behave and talk, I’d say it’s reasonably close to the essence of how
people feel.

So what I originally thought would be a technically challenging architect
task has turned into a motivational task more than one of creating UML
diagrams and code structures.
And when it comes to being an architect, the way every new idea is
presented needs to leave a feeling of appreciation and motivation in the
involved developers.

The first of the two main focuses we’ve looked at so far is communication
within the project.
The system group now emphasizes how many observations (a positive word for
bugs) the developers have resolved and the testers have verified, and not
so much how many unresolved bugs the system has.

The first day they did that, even the head of the department was pleased
with the “progress”. I’ve put progress in quotes since there wasn’t any
progress from Friday to Monday, but the nonexistent progress was valued
because it was perceived in a different manner.
Knowing that you’ve solved 250 bugs and there are only 40 known issues
just sounds a lot better than “There are still 40 new/unresolved
observations”.

The second one is changing how people think about testing and finding
bugs. At present people fear bugs; they are considered feedback on poor
performance, and hence people test to prove that the code works (which is
a mathematical impossibility within finite time).

This time box we’re going to have a war game of testing. We’ve got 3
development teams, and they are all delivering code 4 days before the end
of the iteration. After that deadline, every group is permitted to
write unit tests and unit integration tests against the code of the other
groups. The group that ends up finding the most bugs wins the grand prize.
The war game has a few objectives seen from a project perspective.
1. We want to change how people write tests. They should be written to
find bugs, not to “prove” that the code works.
By actively rewarding the act of finding bugs, we’ll change the focus from
“proving that it works” to “finding what doesn’t”.
2. We want the coverage up, and since there are two ways of winning
(either write a lot of tests finding “all” the bugs yourself, or write a
lot of unit tests finding a lot of bugs in everyone else’s code), we’re
sure to get higher coverage.

As a secondary objective, it’s worth mentioning that focusing intensely
upon writing unit tests, while at the same time being educated in writing
them, hopefully makes the developers realize which code structures are
more robust than others and discover some of the qualities of testable
code.

For me it’s back to holding my thumbs, crossing my limbs and hoping it all
works. Luckily I’ve played these games before, so I know they do, and I
guess all that’s left to say is: “Let the games begin”.

Writing robust code

Posted: November 14, 2008 in Testing, Thoughts on development

A little more than a week ago I gave a speech on writing robust code to the devs of the team I’m currently working for. I think I learned more from that speech than the listeners did.

I’m used to highly object-oriented people and hadn’t realized that most of the team had never developed OO-style, so we had a lot of misunderstandings and strange looks, but slowly we got closer and closer to the point I was trying to make, and in the end we had a plan for the next session.

Having a second session gave me the option of integrating the knowledge I had gotten from the first session and rephrasing the goal in non-OO terms. We still debated, but ended up with a very easy-to-remember conclusion: “Only implement the needed functionality.”

That might seem very simple, but take a look at your own code and see if you can in any way provoke the code to go down an unexpected path. If you have a switch modelling different states in your application, with very high certainty the answer is yes.

Contrary to what one of the listeners wrongly thought, I’m not advocating never using switches, but I am advocating that they make lousy state machines.

Take this switch:

switch (state)
{
    case State.StepOne:
        // handle step one
        break;

    case State.StepTwo:
        // handle step two
        break;
}

There’s absolutely nothing that enforces that StepOne is handled before StepTwo is even valid. If that’s the intention, fine, no worries.

In our particular case we had a 4-case switch called 3 consecutive times, giving a possible 64 paths through the code, but only 2 of them were actually valid.

Changing the implementation from a switch to a simple state machine reduced the possible paths through the system to 2. The state machine was implemented with a simple class:

public class State
{
    public State Next { get; set; }
    public IHandler Handler { get; set; }

    public void Enter() { if (Handler != null) Handler.Execute(); }
    public void Leave() { /* clean up before moving on to Next */ }
}

public interface IHandler
{
    void Execute();
}

That way it was very easy to link the states to give us the only two ways it was actually valid to traverse our potentially 64-path execution graph.
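To make the linking concrete, here is a sketch of wiring the states together (my reconstruction; the handler names are made up for illustration):

// StepTwo can never run before StepOne, because the machine can
// only move along the Next links we explicitly created.
var stepTwo = new State { Handler = new StepTwoHandler() };
var stepOne = new State { Handler = new StepOneHandler(), Next = stepTwo };

var current = stepOne;
while (current != null)
{
    current.Enter();      // executes the handler for this state
    current.Leave();
    current = current.Next;
}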

The neat thing about the solution shows itself when you start testing it.
If you have an undetermined number of paths, you run the risk that one of the paths you hadn’t realized existed fails.

You can write code to handle those situations, but if you forget to do so, or just didn’t cover all of what you didn’t know existed, it’s very unlikely you will spot it.

Whereas if you only implement what you need, you won’t have to worry about all those cases you don’t even know about; instead, if you mess up and forget something, you will find it in your tests every time. You simply can’t test all the sunny day scenarios without realizing that the functionality for one of those scenarios isn’t implemented.

Single Responsibility

Posted: September 17, 2008 in Uncategorized

A few days ago I was debating some architectural changes with one of the developers on the team I’m currently working in. I had to make it apparent to him what value it would give our project if we adhered to the Single Responsibility Principle.

After giving the theory behind the principle a go, we hadn’t really progressed. However, he gave me an idea that made it possible to convey the importance of the principle.

Imagine we have a class that holds the outside temperature, and that same class can communicate via radio with the nearest weather station.

This class is used in an observer pattern, so somewhere in our application other parts will be updated (i.e. the screen reflects the latest changes in temperature).

Our class has three states:

valueAccepted
communicating
idle

valueAccepted is set whenever we get a new temperature reading; the state is reverted to idle and the temperature reading invalidated by the first read thereafter.

communicating is the state we’re in while communicating with the weather station.

After testing the first pilot of the code, it’s realized that everything works 100% as expected and everyone is happy.

So far so good, but the class has at least two reasons to change: the communication protocol with the weather station changes, or the temperature functionality changes.

To see why this might be a problem, let’s change the communication protocol slightly. The only change is that every time we send a command, we need to accept a value. This causes one of the developers on our team to send the object into the valueAccepted state.

He thereafter tests that the radio communication works again, and since the only change had to do with the radio functionality, no one thinks of testing the temperature reading before the new version is released.

Shortly after, it’s realized that the temperature readings are invalid during radio communication.

Had the Single Responsibility Principle not been violated, the radio communication would never have been able to alter the temperature flow.
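To show what following the principle could look like, here’s a minimal sketch of the split (class names are mine, not from the project):

// The temperature reading and the radio link now change for
// different reasons, so a protocol change can no longer touch
// the temperature state by accident.
public class TemperatureReading
{
    private double? value;                  // null means idle/invalidated

    public void Accept(double celsius)      // the old valueAccepted state
    {
        value = celsius;
    }

    public double? ReadAndInvalidate()      // first read reverts to idle
    {
        var result = value;
        value = null;
        return result;
    }
}

public class WeatherStationLink
{
    private readonly TemperatureReading reading;

    public WeatherStationLink(TemperatureReading reading)
    {
        this.reading = reading;
    }

    public void OnTemperatureReceived(double celsius)
    {
        // The radio code can hand over a value, but it has no way
        // of forcing the reading into an arbitrary state.
        reading.Accept(celsius);
    }
}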

I’ve recently had a period where I didn’t do much else than write unit tests. Instead of just writing them like a robot, I tried breaking as much of the code as possible while getting our coverage percentage up.

There was one construct that kept coming back: implicit comparisons. When I say implicit comparison, what I basically mean is a comparison using a different comparison operator than ‘==’, or where at least one side of the comparison is an expression where not all possible values are meaningful.
x < 8                      // implicit
"ok" == IsUserLoggedIn()   // implicit: not all possible values of
                           // IsUserLoggedIn are meaningful.
                           // "I'm Santa Claus" for one is probably not.
                           // This also serves as an example of why not
                           // to represent state as strings.

Sometimes the implicitness is hard to spot, and sometimes the result of not spotting it might make the system rather vulnerable.
Let’s say that in a system we have a permissions check using an external method call, GetPermissions().
Let’s assume that the possible values for PermissionFlags are None, Read, Write and Full (integer values 1, 2, 4, 8).
GetPermissions returns a PermissionFlags value.
The implicit comparison could then be similar to:

var neededPermission = PermissionFlags.Full;
if (neededPermission == (GetPermissions(currentUser) & neededPermission))
{
    // Do something that requires PermissionFlags.Full permissions
}

(Note the parentheses around the & expression; ‘==’ binds tighter than ‘&’ in C#, so without them the condition wouldn’t even compile.)

The above code is pretty hard to test even though it’s only 2 lines, mostly because the “ugly” cases might not be easily spotted.
If GetPermissions behaves nicely, it should only return even values from 2 to 14, or 1, but it’s external, so we have no way of ensuring that it is well-behaved.

For odd values the comparison might still work, as long as it’s OK to ignore that the None bit is set high.
A value of PermissionFlags.None | PermissionFlags.Full is rather ambiguous, but might be meaningful based on the specifications.
What happens, then, if GetPermissions, when passed an unknown user, returns -1 as an error code, expecting the caller to handle the undefined value?
The above comparison would then work fine for all known users, but would return true for all unknown users on any platform representing integers in two’s complement, since -1 has all bits set and therefore -1 & PermissionFlags.Full equals PermissionFlags.Full.

My point with this example is twofold: always use explicit comparisons (especially in security code), and always return a well-defined set of values; or, if the method is external, always validate the returned values before relying on them being within certain boundaries.
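As a sketch of what that validation could look like for the example above (my code, assuming a [Flags] enum with the values from the post):

[Flags]
public enum PermissionFlags { None = 1, Read = 2, Write = 4, Full = 8 }

public static class PermissionGuard
{
    public static bool HasFullPermission(PermissionFlags returned)
    {
        // Validate before trusting: reject anything outside the defined
        // bits (this catches surprises like -1 used as an error code).
        const PermissionFlags allDefined =
            PermissionFlags.None | PermissionFlags.Read |
            PermissionFlags.Write | PermissionFlags.Full;

        if ((returned & ~allDefined) != 0)
            return false; // undefined bits set: treat as no permission

        // Explicit comparison, no reliance on implicit behavior.
        return (returned & PermissionFlags.Full) == PermissionFlags.Full;
    }
}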

I was listening to one of my favorite bloggers today; the podcast from Hanselman is about lean development and starts out by defining success. The first statement is that defining success for a software project as being “on time” and “on budget” is rather misleading. It then goes on to define success as whether the project is a business success, that is, a business case with a positive outcome.

I really like the podcast, and it gave me an idea for another way to describe what I blogged about a few weeks ago in my post on UATs and my series on requirement specifications.

I tried explaining the usual shortcomings of requirement specifications based on change management theory; funnily enough, applying the metrics of success that the above podcast is about would yield the same conclusion.

Usually requirement specifications are very low level, describing the small steps needed to complete some function, but generally that “some function” is not described at all. “Some function” in this context is a task that the end user needs to perform to complete everyday work.

If you wrongly define success as just being “on time” and “on budget”, you might define a project as a success even though it was such a lousy business case that the company went bankrupt; you know that happens every day. (Just talk to your manager, also known as the “if the project is not a success we’ll go bankrupt” management style.)
The same goes for requirement specifications that have forgotten that the most important part is actually the tasks the users need to complete, not every single step along the way.
I’ve seen numerous projects that met every single requirement, but the users found them at best difficult to use. Defining them as successes even though they met all requirements goes against the logic of the users, I’m sure.

I find a good way to measure success is to use “user metrics”. If the system is built to help users perform their tasks faster, then let’s measure how fast the users are when we start the project and how fast they have become when we’re done.
If the system is built to make our products easier to use than the competitors’, let’s ask the users what they think in regard to usability, and so on for the high-level goals each project has.

If we start out every project by documenting these high-level “user goals”, we make it easier for ourselves to review the requirements.

When you’ve read the requirements, you should be able to do one of three things with each requirement:

  • Relate them to a high level goal
  • Discard them
  • Create a new high level goal

When all requirements are related to a high-level goal, look at every high-level goal and ask yourself and the project: “Do we have all the information we need to meet that high-level goal?” The answer is probably “no :-)”. At least I found myself time and time again needing more information. To be honest, I often found myself in that situation when I did not do as I “preach” in this post. If the answer is “no”, I for one will in the future try to remember to go out and get that information.

Pit falls in testing

Posted: June 13, 2008 in c#, Testing

Yesterday I wrote a post on default(T) not always being valid; that realization made us change the signature of the mentioned method.

Working on that rather simple method made me once again think about testing. We have asserts like:

Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse("Valid"));
Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse((int)Enumeration.Valid));
Assert.AreEqual(default(Enumeration), Enum<Enumeration>.Parse("not valid"));

This gives 100% statement coverage, and it might look as if it gives 100% branch coverage as well, which unfortunately is not true. You don’t necessarily have to have code to have a new branch.

object obj = "Valid";
Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse(obj));

A more common example of a hidden branch is an if with no else. Even though you have not explicitly written an else clause, you should test that branch nonetheless.

The code being tested might look like this:

class Guard
{
    public static void ArgumentNull(object argument, string name)
    {
        if (argument == null)
            throw new ArgumentNullException(name);
    }
}

We might then have an assert like:

Assert.Throws(typeof(ArgumentNullException), () => Guard.ArgumentNull(null, "argument"));

We have 100% statement coverage, but the quality is not very high. At some point we want to log the call stack when we have a call with a null argument. However, the implementation has an error which is not caught, due to the lack of testing of the “invisible” branch.

class Guard
{
    public static void ArgumentNull(object argument, string name)
    {
        if (argument == null)
            Logger.Log(GetCallStack().ToString());
            throw new ArgumentNullException(name);
    }
}

We still have 100% statement coverage and our test still succeeds, but unfortunately any call to Guard.ArgumentNull now throws an ArgumentNullException no matter whether the argument is null or not: without braces, the if only guards the Logger.Log call, so the throw always executes.
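The test that would have caught this is the one for the invisible branch; a one-line sketch (assuming an assert along the lines of NUnit’s Assert.DoesNotThrow):

// A non-null argument must not throw. This fails against the version
// above, because the throw is no longer guarded by the if.
Assert.DoesNotThrow(() => Guard.ArgumentNull(new object(), "argument"));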

When in doubt whether more test cases are needed, make a cyclomatic complexity analysis of the code being tested. The number of tests needed is in most cases proportional to the cyclomatic complexity of the code being tested.

For more information on how to apply CCA as a quality-measuring mechanism for unit tests, take a look at this blog. I do not agree with their actual “algorithm” for figuring out the number of tests needed, but the point of creating an algorithm based on CCA is well taken.

A rule of thumb says that you need 4 incorrect values for each correct value to test each decision point in your code.

Since CCA in essence is a measurement of the number of decision points in your code, I go for a higher number than 0.5 * CCA. What that constant should be depends on the project. In my current project the constant is between 1 and 2, depending on factors such as the source of the code (generated or written), the complexity (it’s not a linear relationship for us but exponential) and the severity of an error in the tested code. (An error in the security code is a lot more severe than one in the part that does debug tracing.)

I was coding a simple method today; it looked like this:

public static T Parse(object value)
{
    if (value != null && Enum.IsDefined(typeof(T), value))
    {
        return (T)value;
    }
    return default(T);
}

(We have a method that handles string values, which is why the cast is safe if the value is defined for T.)

We didn’t like it, mostly because neither of us liked hiding invalid/malformed input by returning a default value. But it made us wonder about this statement:

If T is an enumeration, would the statement Enum.IsDefined(typeof(T), default(T)) always be true?
The answer is actually no.

To see why, let’s define an enum:

public enum MyEnum
{
    firstValue = 4,
    secondValue = 3
}

Then the statement would be false, whereas default(MyEnum) == 0 is true.
So the lesson is: don’t count on default(T) being a valid value.
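A quick way to convince yourself (a two-line check):

Console.WriteLine(Enum.IsDefined(typeof(MyEnum), default(MyEnum))); // False: no member has the value 0
Console.WriteLine(default(MyEnum) == 0);                            // True: enums always default to 0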

I would have liked the default value to be either 3 (because it’s the lowest) or 4 (because it’s the first value declared).
If you don’t declare the values explicitly, the first member declared in the enum gets the value 0 and is thus the default, so I prefer the latter to the former.

Update: Part of the reason why we had the talk on enums in the first place was a refactoring process very much like the one described in this nice post.

Update: For other coding surprises you might want to have a look at ‘things that make me go hmm’.

In the first part I wrote about common pitfalls when creating requirements documents. In part 2 I tried to give an example and describe the outcome should one fall into one or more of the pitfalls.

This final part is based on a speech I had to give a few months ago. The goal of the speech was to get the listeners to value the points I’ve written about in my series on Mr. Work, especially the points from The Story of Mr. Work and the Love for UATs.

Basically I wanted the audience to experience some of the challenges that developers and architects face every day.

Instead of just giving a speech, I decided to take a chance (this was when I was preparing the speech, not some last-minute decision, which would not have worked).
I asked one of the listeners (there were six) to come to my whiteboard and describe the bathroom of his dreams (you know you’ve had dreams about bathrooms, so of course he’d had them too). He was to describe it in the form of a bullet list.

He wrote stuff like: 2 sinks, a lot of light, and spacious. Actually most of the bullets were rather detailed, so I think everyone had a picture in their mind of a bathroom that fulfilled that list just nicely.

I had given him a minute, and when the time was up I asked the next guy to draw a blueprint of that bathroom. I hadn’t told anybody at that point, but as you might have guessed, that list was going to be my spec analogy later on, and the blueprint the architectural documents analogy.

The third one at the whiteboard was the only woman present, and her task was choosing all the elements based on the two previous “documents”. She had to decide which specific sink should be used (brand/model), which tiles, basically all the hardware, or you could call them the modules, for her job was the module design analogy. The last person at the board had to do all the plumbing and wiring.

With those 4 “documents” on the whiteboards, I asked each of the first three, starting with the woman, if the result of the previous stage was as they expected, and if not, what was different. In the end I asked the first guy if it was the bathroom he’d dreamt of. Of course the answer was no.

In the plan in my head, having him say no was the first goal, so we were still on track.
Keeping to my prepared plan, I started from the bottom again, asking what we could do to prevent the errors they had pointed out at each transition.
All my questions had been prepared beforehand and were all questions I ask myself when I do peer reviews or write unit tests, so I was pretty certain that the answers I got would fit nicely into an entire hierarchy of arguments for using the W-model (which was my overall goal).

When we were done, all the listeners had a clear idea of why so many errors are actually rooted in a lack of requirements.
We all agreed that what they had first thought to be requirement specifications for a bathroom were requirements at best, but nowhere near specifications, and that the lack of specifications was the source of the problem; or, in the terms of the first part, that by making those dreaded decisions they would actually increase the PPV, not decrease it as they had first thought.

I’ve tried getting that point across so many times and failed horribly in most tries. By taking something everyone can relate to, a bathroom, and letting them make the errors, forcing them to take decisions based on a lack of information, the point suddenly became much clearer.

They realized that no matter how hard you try to do your work as an architect, if you only have information like “spacious”, “lots of light” and “2 sinks”, you are bound to fail. You can be the best bathroom architect, but you will never design the bathroom the guy who wrote the bullet list envisioned.

What I learned was to use analogies everyone can relate to, and, instead of telling people the conclusions of some arbitrary scientific study, to guide them with prepared questions so they reach those same conclusions themselves.

So in the future, when I have to tell people about writing requirements documents, I’m not going to talk about building applications or abstract things such as the V-model. I’m going to tell them about building bathrooms.

In part 1 I wrote about some of the reasons I think requirement specifications often end up with less than optimal quality. In this post I’ll try to elaborate.

Let’s look at a possible scenario:
The user wants to be able to create, edit and format texts online; no Flash or Silverlight is allowed, only DHTML. The editor is going to be part of the company’s CMS.

It should be possible to:

  • Create new text documents
  • Edit old text documents
  • Choose the font type, size and color
  • See the contents in all browsers
  • Include pictures and sounds
  • Insert links
  • Use all Unicode characters

At first this might look simple enough. However, just digging a little, we’ll find several problems. Most of them are rooted in an all-the-cool-stuff-should-reflect-back-on-us-and-all-the-bad-things-on-somebody-else way of thinking. So let’s dig in and look at a few of those problems.

“See the contents in all browsers”
The coolness of this requirement is of course that the text will look as intended in all browsers, giving all end users the same experience.
The cost of the requirement would seem to be an incredible amount of testing, but hey, that’s somebody else’s problem.
A better requirement would be a fixed set of browsers. That would probably result in a more user-friendly editor, increasing the coolness, but that’s not so obvious to the person specifying the system, so the fear of lowering the value wins.

Fonts
How about choosing the font type? This requirement leaves a lot of decisions to the developers.
Should the font changes apply to the entire text or just a part of it? Should the editor validate against design guidelines? Believing that the developers will make the exact decision the users would want each and every time is a bit too naive.
Again the specification is leaving more unanswered questions than it’s answering.

Actually every single requirement in the list above has the same basic problem. I like to think of it as the “Demand a lot and make no decisions” problem.

The above requirements could be implemented as an online version of Notepad, where the user would have to write the HTML by hand.
Even though it would meet the requirements, it would be absolutely useless to the customer.
Then the developers and the customer could argue over who was to blame.
One could argue that every requirement is met, but for the customer to accept that argument would mean they had to acknowledge a loss of Personal Value, which they will be very unlikely to do.
Getting out of a situation like that with everybody smiling is a skill of its own. My point is that the project ended up there, and stays there, for the same reason: PPV.

The trick is to act early and make the users realize that there’s a higher PPV in taking those domain-specific decisions, leaving only the IT decisions to the IT professionals. That’s where building bathrooms comes in, which will be the subject of part 3.