Over the last several years I’ve written quite a few posts on DCI in various fora, especially in response to someone trying to learn DCI. I’ve come upon the same problem over and over again, and last night it happened once more. Essentially, it comes down to this line of Q&A:

Q: “I’ve started learning DCI and want to understand how it’s superior to OO. Could you please write an example so that I can understand?”
A: “There are quite a few examples on http://fulloo.info/Examples. Have you studied those?”
Q: “Yes, but those didn’t compare OO and DCI, so I still don’t understand. Could you please write an example that will help me understand?”

The questioner is genuine in his wish to understand DCI, but the line of questioning is, in my view, odd. Trying to tell anyone that will usually end in an unfruitful discussion about the mentor’s way of teaching.

This post is me trying to shed some light on why this line of questioning doesn’t lead to the desired outcome for the questioner, and to give an alternate approach.

Let’s try something other than a paradigm shift, because paradigm shifts are inherently hard. Let’s try comparing encryption schemes.

Q: “I’ve started learning encryption and I’ve seen AES and DES. I don’t understand why AES is better than DES; I need an example that shows why.”

A: “I could give you an example of a text encrypted with both, but you want to be able to see why the result of one is better than the result of the other. They are different in how they accomplish the result. If you understood some of the theory, such as entropy, we could probably have an informed discussion.”

Q: “I don’t want to have a theory discussion, I just need an example”

A: “There are plenty of examples out there, have you studied those?”

Q: “Yes, but they didn’t tell me why AES is better than DES”

A: “No, and that’s because the difference is in the process. If you studied some of the theory and experimented a bit, we could have an informed discussion.”

Q: “I don’t want to discuss the theory; I want to understand it from looking at the result.”


I don’t think anyone believes you could learn encryption from looking at the result. Even extremely simple stuff, stuff we can teach to small kids, you can’t learn as an adult from looking at the result. Below I’ve represented a concept you know in an unfamiliar notation. It’s extremely simple, and you even know the concept and the theory, but I’m confident you will have a pretty hard time answering the question.

3 3 44

44 11 6

44 2 9

3 44 88

3 2 5

88 44 ?



It’s simple math, stuff my daughter was able to do in her head before starting school. But from looking at the examples you didn’t learn much. However, if you know just a little number theory and know the notation I’ve used, then it would be very easy for me to explain addition to you. Let’s repeat: this is simple addition, none of the results are larger than 9, and you couldn’t understand it from five examples. However, if we had spent 25 seconds for you to understand the notation, and maybe a minute to understand numbers (if you didn’t already), then I could explain addition to you in 10 seconds, and you could use the examples to test your understanding. That is what examples are for: you test your understanding against them. Maybe your understanding matches and maybe it doesn’t. If it doesn’t, you are likely to have a ton of informed questions that can help you further, but those are all questions you wouldn’t have been able to ask if you didn’t understand the basic theory.

Now imagine that we are not talking about the addition of one-digit positive integers but an entire belief system. First of all, to discuss which is more suitable we need to agree on what’s suitable. If we can agree on that, we can then discuss which belief system is more suitable. So let’s just say for now that we agree that comprehensibility makes a programming paradigm more suitable. Then we need to agree on what makes code comprehensible. There’s quite some research and theory on what makes something hard or difficult to reason about; theories can be found in neuropsychology, didactics, philosophy, and neurology, but also in math. The first four can tell us how we learn, remember, and understand information, whereas the latter can tell us about abstraction, compression, and other more abstract perspectives on information theory. If I can’t make you understand single-digit math in five complete examples, then how am I going to show differences in comprehensibility, based on theories in at least five different branches of science spanning both human and natural sciences, in one example that has to be small and therefore can have no chance of being complete?


I’m all for challenges, but this one is not one I think I’m going to solve presently. This is made even harder because most of the people coming to learn DCI have a preconceived idea that they know what OO is; however, they usually only know technical merits that have nothing to do with the mainly psychological background and basis for OO. So they’ll first have to admit that what they thought they knew about OO is at best a half-truth and sometimes even a lie.


If you have come this far you must be tenacious, and I promise I’m almost done. I will wrap up with a step-by-step guide to understanding DCI

  1. Accept that what you know about OO probably has very little to do with the foundation of OO. After all, the inventor of OO claims there’s only one real OO system in the world (the internet)
  2. Either familiarize yourself with research into how we learn, modeling, and information theory, or take the easy route and accept the postulates of authorities at face value
  3. Read articles about how to apply DCI; there are a few on this site with links to even more
  4. Experiment and don’t forget to reflect upon the result
  5. Ask questions based on your experiments and reflections
  6. Publish the resulting examples, at best as part of the Trygve repository
  7. Repeat steps 2-5 until you’ve gained the level of understanding you desire

I recently acquired a Mac and wanted to try out developing a pet project of mine on my new machinery. It’s a WebSharper project(*). This post, however, is not about WebSharper but about installing xsp.

xsp is a lightweight web server that can be used to serve ASP.NET web sites. Installing xsp is a little more work than just fetching a package. You will need to clone it from GitHub

git clone https://github.com/mono/xsp.git

and then change to the xsp directory and run the below command

./autogen.sh
If you get this error

aclocal is not installed, and is required to configure

It’s because you don’t have the required tools installed. If you didn’t get that error, then you can probably skip all the way to configuring xsp.

Installing the required tools

It’s pretty straightforward to install the required tools. You simply have to run a series of commands, one for each tool being installed. (These are tools often used for building open source projects; they used to be installed together with Xcode but are no longer part of that installation.)
After trying to install the tools manually, where I kept getting missing-dependency errors, I went the homebrew way. If you haven’t already installed homebrew, don’t fret; installing it is a one-command exercise:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

After installing homebrew we can get to installing the required tools

brew install autoconf
brew link --overwrite autoconf

If the link command fails with an error stating that a given path is not writable, then do as follows before repeating the linking

chown -R yourusername unwritableDirectoryPath
chmod -R u+w /usr/local

When you have successfully linked autoconf, then proceed with the installation of the required tools/libraries

brew install automake
brew install pkg-config
brew install mono

That should have installed the prerequisites, and you should now be able to follow the installation instructions on the xsp homepage. Start by running

./autogen.sh

again. This time there should be no errors.

To complete the installation, run the below commands

./configure --prefix=/usr
make install

And with all that under our belt, you should now be able to navigate to the root of your website and execute the command

xsp4 --port=####

This will start a web server listening on the specified port (if you just want the default port 8080, you can omit the --port option).
*) WebSharper itself is an awesome tool for building web sites. The only real problem I’ve had is that it’s not as widely used as other tools and therefore lacks on the documentation/sample-code side. However, it has reduced the amount of work I’ve had to do regardless of the lack of Stack Overflow Q&As.

I recently had to work with data visualization in JavaScript. Of course the obvious choice was to go with d3 for the visualization, but I needed something for the data manipulation, preferably something declarative, and having substantial experience with Linq, using something similar came to mind. This post is meant as a getting-started guide for using query-js, the npm module that resulted from my efforts. query-js is a series of methods that lets you perform sequence operations on arrays and sequences.
Instead of giving a theoretical description of the module I’m going to work through a few simple examples on how to use some of the more important methods.

There’s a lot of public data available through REST APIs, and often the data is in an easy-to-use JSON format. One of these sources is EU statistics on Gross District Product. GDP is the district version of the Gross National Product, or in other words an indication of the prosperity of a district.

To get the data we need to make an HTTP request to the endpoint of the service, and we are also going to use query-js (surprise!), so start by installing both request and query-js and then requiring both as well


npm install request
npm install query-js

var request = require("request"),
    Query = require("query-js"),
    url = "https://inforegio.azure-westeurope-prod.socrata.com/resource/j8wb-jxec.json?$offset=0";

The URL that I sneaked in is the URL for the service endpoint. It returns an array of objects, each in the following format

{
  "ipps28_2011" : "221.7",
  "nuts_id" : "BE10",
  "nuts_name" : "Région de Bruxelles-Capitale / Brussels Hoofdstedelijk Gewest"
}

The first property (ipps28_2011) is the actual GDP figure. The second one (nuts_id) is an identification of the district, where the first two letters are the country identification. With that information, let’s see what it would require to get all the districts and find all countries that have at least one poor region.

//get the data
request.get(url, function(error, response, body){
    var data = JSON.parse(body),
        query = new Query(data),
        //group by the first two letters in the district code aka the country
        byCountry = query.groupBy(function(d){ return d.nuts_id.substr(0,2); });

    //the queries in the rest of this post all live inside this callback
});

In the code we first request the data and parse it, and then we can start on the querying part. The first query we perform is to group by country (or actually by the first two letters of the district identification). The result of that is an object that can either be treated as a regular object or as a sequence of key-value pairs. In other words, all the sequence operations of query-js are available. So we could find the values for the Nordic countries like this

    var nordics = byCountry.where(function(country){
        var countryId = country.key;
        return countryId === "SE" || 
               countryId === "FI" ||
               countryId === "DK";
    });

That filters the values down to those for Sweden (SE), Finland (FI), and Denmark (DK). Norway is part of the Nordics but is not part of the EU, so no data for them.
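For comparison, here is a sketch of what the grouping step does in plain JavaScript without query-js, on a tiny invented sample (only the shape of the objects matches the real data):

```javascript
// Plain-JavaScript sketch of what groupBy does (no query-js): bucket
// elements by a key, here the first two letters of the district id.
// The figures are invented for illustration.
var sample = [
  { ipps28_2011: "221.7", nuts_id: "BE10" },
  { ipps28_2011: "45.3",  nuts_id: "BG31" },
  { ipps28_2011: "52.8",  nuts_id: "BG32" }
];

var grouped = sample.reduce(function (groups, d) {
  var key = d.nuts_id.substr(0, 2);
  (groups[key] = groups[key] || []).push(d);
  return groups;
}, {});

console.log(Object.keys(grouped)); // → [ 'BE', 'BG' ]
console.log(grouped.BG.length);    // → 2
```

The query-js version additionally lets you treat the result as a sequence of key-value pairs, which the plain object does not.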

We could also look for all regions in the lowest category (ipps28_2011 < 50)

    var lowIncomeDistricts = query.where(function(gdp){return gdp.ipps28_2011 < 50;});

or what if we wanted to get all countries with at least one poor region?

     var countryWithLowIncomeDistricts = byCountry.where(function(country){
          return country.any(function(gdp){ return gdp.ipps28_2011 < 50; });
     });

That uses another of the sequence operations that query-js provides: the any(predicate) method. It will return true if at least one element in the sequence satisfies the predicate. So in this case it will return true if at least one district in a given country has an ipps of less than 50.
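If you want a plain-JavaScript mental model of any(predicate), the closest built-in is Array.prototype.some (the figures below are invented):

```javascript
// Plain-JavaScript sketch of any(predicate): Array.prototype.some
// returns true as soon as one element satisfies the predicate.
var districts = [
  { ipps28_2011: "221.7" },
  { ipps28_2011: "45.3" }
];

var hasPoorDistrict = districts.some(function (gdp) {
  return parseFloat(gdp.ipps28_2011) < 50;
});

console.log(hasPoorDistrict); // → true
```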

Now, with a bit of query-js dirt under our fingernails, let’s take on a slightly more complex task. How about we find the average ipps28 for all Nordic countries in the EU? We’ve already seen how to get the values for all Nordic countries, so that should be easy. However, this time we want to have all entries in the same collection instead of grouped by country. Then we’d want to extract the ipps28 from each item, and lastly we’d want to compute the average of them all. In list form:

  1. Filter on country
  2. extract ipps28
  3. compute average

We are going to use where for the filtering. Extracting data from an object is a projection, so for step two we are using select, and there’s a method for computing the average.

   var avg = query.where(function(gdp){
      var countryId = gdp.nuts_id.substring(0,2);
      return countryId === "SE" || 
             countryId === "FI" ||
             countryId === "DK";
   }).select(function(gdp){
      return parseFloat(gdp.ipps28_2011);
   }).average();

You should be able to see the three steps from above well represented in the code. However, we can shorten it a bit if we’d like. The execution will actually be exactly the same in both scenarios (aka the performance will be the same)

   var avg = query.where(function(gdp){
      var countryId = gdp.nuts_id.substring(0,2);
      return countryId === "SE" || 
             countryId === "FI" ||
             countryId === "DK";
   }).average(function(gdp){
      return parseFloat(gdp.ipps28_2011);
   });

As you can see, the only difference is that the projection is now passed to the average method instead of having a specific projection step.
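The same three steps can also be sketched with plain arrays, where filter plays the role of where and map the role of select (the rows are invented for illustration):

```javascript
// The three steps (filter, project, average) with plain arrays:
// filter ≈ where, map ≈ select, then a hand-rolled average.
var rows = [
  { nuts_id: "SE11", ipps28_2011: "123.9" },
  { nuts_id: "DK01", ipps28_2011: "160.1" },
  { nuts_id: "DE21", ipps28_2011: "140.0" } // not Nordic, filtered out
];

var nordicValues = rows
  .filter(function (gdp) {
    var countryId = gdp.nuts_id.substring(0, 2);
    return countryId === "SE" || countryId === "FI" || countryId === "DK";
  })
  .map(function (gdp) { return parseFloat(gdp.ipps28_2011); });

var nordicAverage =
  nordicValues.reduce(function (a, b) { return a + b; }, 0) / nordicValues.length;

console.log(nordicAverage); // ≈ 142
```

One difference worth noting: query-js describes the whole pipeline before executing it, whereas each plain-array step materializes an intermediate array.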

There are many ways to skin a cat. We’ve looked at two different approaches for finding the average of the GDP in the Nordic countries of the EU. There’s a third one that will let me introduce another important method, namely concat.

    nordics.select(function(country){ return country.value; })
           .concat()
           .select(function(gdp){ return gdp.ipps28_2011; })

concat comes in three flavours; we will look at two of them. One takes no arguments and concatenates the elements of the sequence (the elements themselves have to be sequences), and the other one, used below, is a shorthand for first projecting and then concatenating. The performance of the two is the same, since the second version is implemented based on a select and a concatenation.

    nordics.concat(function(country){ return country.value; })
           .select(function(gdp){ return gdp.ipps28_2011; })

There are still more ways to skin this cat. Often you would want to project the elements of the elements of a sequence of sequences and concatenate the result. The method selectMany does just that: instead of iterating over the elements of the sequence, it iterates over the elements of the elements of the sequence and produces a new sequence. So the above could also be written as

    nordics.select(function(country){ return country.value; })
           .selectMany(function(gdp){ return gdp.ipps28_2011; })

First we have to project the sequence we already have, because nordics holds a sequence of key-value pairs where the value is another sequence. By selecting the value of each, we end up with a sequence of sequences, on which we then call selectMany to get a flat sequence of the projected values.

We can actually shorten this slightly. selectMany can accept two projections, the first one being for the elements of the outer sequence and the second one for the elements of the inner sequences. That is, the above could also be written as follows

    nordics.selectMany(function(country){ return country.value; }, function(gdp){ return gdp.ipps28_2011; })
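A plain-JavaScript sketch of that two-projection selectMany, with an invented stand-in for the grouped key-value pairs:

```javascript
// Plain-JavaScript sketch of selectMany with two projections: project the
// outer element to an inner sequence, project each inner element, flatten.
function selectMany(seq, outer, inner) {
  return seq.reduce(function (acc, e) {
    return acc.concat(outer(e).map(inner));
  }, []);
}

// Invented data in the same shape as the grouped key-value pairs.
var nordicGroups = [
  { key: "SE", value: [{ ipps28_2011: "123.9" }] },
  { key: "DK", value: [{ ipps28_2011: "160.1" }, { ipps28_2011: "98.2" }] }
];

var values = selectMany(
  nordicGroups,
  function (country) { return country.value; },
  function (gdp) { return parseFloat(gdp.ipps28_2011); }
);

console.log(values); // → [ 123.9, 160.1, 98.2 ]
```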

And since select has a shorter form, we can rewrite this slightly. If the argument provided to select is not a function but a string, select will treat it as a simple projection returning the value of the property with the name given by the string. That is,

sequence.select(function(e){ return e.value;});

is semantically equivalent to

sequence.select("value");

and since selectMany internally uses select, we can write our example as follows

    var averageForDistricsInTheNordics = nordics.selectMany("value", "ipps28_2011")

(*) This is likely going to change, so that you will have to explicitly add them.

Working on a project where we needed more control of our development environment, and especially needed a way to make sure that configurations were consistent across all developers’ machines and our test environment, I looked into setting up a vagrant environment.

I started out by installing vagrant on my Windows machine, and after installing I wanted to set up our environment to use Ubuntu boxes. In the vagrant cloud I found a box I could use named hashicorp/precise64. To add this box to my local Vagrant environment I ran

$ vagrant box add hashicorp/precise64

However, the download was blocked by a proxy, so instead I downloaded it and installed from file

$ vagrant box add --name="hashicorp/precise64" file:///fully/qualified/path/to/file

After adding the box I updated the Vagrantfile to look like this

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
end

To check that everything was working I saved the file and ran

$ vagrant up

This worked like a charm and I followed up with

$ vagrant ssh

but alas, this requires an ssh client. Vagrant will suggest several that can be used, and I chose to install cygwin.

After installing cygwin I ran the ssh command again and got an ssh session to my default box. Vagrant is smart enough to let you ssh to the default box if there’s just one. When you get to having a multi-machine setup, you will need to provide the name of the box you wish to ssh to, like so

$ vagrant ssh webserver

to ssh to a box named webserver

By now I’d installed

  • Vagrant
  • cygwin

The first thing I wanted to do was update the box so I ran

$ sudo apt-get update

but again this was blocked by the corporate proxy.

Searching around the net, I tried to figure out how to configure vagrant to use a given proxy. It turns out there’s a plugin called vagrant-proxyconf, and to install plugins in vagrant you would typically run

$ vagrant plugin install vagrant-proxyconf

However, that still requires vagrant to be able to authenticate towards the proxy, so I ended up downloading the gem. The latest version was 1.4, which I found on rubygems.org. When you want to install a plugin from source, vagrant will let you do so

$ vagrant plugin install vagrant-proxyconf --plugin-source file://fully/qualified/path/vagrant-proxyconf-1.4.0.gem

This will install from the newly downloaded gem, getting around the proxy blocking the usual way to install plugins. So now it was time to set up the proxy for use by the box. This is a simple change to the Vagrantfile

if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = ""
    config.proxy.https    = ""
    config.proxy.no_proxy = "localhost,, 192.168.56.*"
end

The check to see if the plugin is present is not required, and I found while debugging that it’s actually a good thing not to check, because then you will know whether the installation of the plugin failed or your error is in the configuration itself. However, for portability it’s a good thing to check for the plugin: the Vagrantfile is supposed to be something you can share across various environments, and some of those might not need the plugin.

So now we should be able to ssh into the box again and update. However, for the changes to take effect we need to reload the box

$ vagrant reload

and when the reload is done we can ssh to the box again and run apt-get update. However, when the proxy requires NTLM authentication this will fail, because even though the proxy is now configured correctly for the box, the box can’t authenticate and will get an HTTP 407 back. This particular step took me quite some time to resolve, but there’s a solution to the problem called CNTLM. I thought about installing it on the boxes and then repackaging them, making them self-contained, or installing CNTLM on the host. I decided on the latter because I would then have a setup that would allow other applications to use this same infrastructure. The downside, of course, is that everyone on the project will need to install CNTLM on their host as well. The installation was pretty straightforward. In the ini file I had to change a few things

  • user name
  • password
  • domain
  • address CNTLM should listen to. If you follow the rest of the examples here it should listen to
  • corporate proxy

After getting it to work, I highly recommend following the CNTLM recommendation of hashing your credentials.

After installing CNTLM I once again opened an ssh session to the box, and once again I was blocked. This time it was not so much the proxy as the network. For the box to be able to connect to the CNTLM proxy, I needed to configure a host-only network, which is a network the box can use to communicate with the host and vice versa. It’s rather simple to set up and simply requires a change to the Vagrantfile. Add the below line somewhere in the configuration block

config.vm.network "private_network", ip: ""

This will give the box a static IP, and since the host by default will also have a static IP on that network, they are on the same subnet and should be able to communicate. But we’re not there yet: you will need to set up a firewall rule on the host. There’s a good guide on how to set this up, which helped me, to be found on serverfault.com.

Now with that in place you should finally be able to access the outside world from your vagrant boxes.

To sum up

  1. Download and install vagrant
  2. Download the box you require from the cloud
  3. Add the box with
    $ vagrant box add --name="name of box" file:///fully/qualified/path/to/file
  4. Install cygwin (including the ssh package)
  5. Install cntlm and configure it
  6. Download vagrant-proxyconf from rubygems.org
  7. Install the plugin with
    $ vagrant plugin install vagrant-proxyconf --plugin-source file://fully/qualified/path/vagrant-proxyconf-1.4.0.gem
  8. Configure the proxy in the Vagrantfile to point to your CNTLM proxy (see example above)
  9. Add an internal network between guest and host by adding this line to the Vagrantfile
    config.vm.network "private_network", ip: ""
  10. Open the firewall on the host for the guest to be able to connect to CNTLM (guide)

Being able to respond

Posted: July 25, 2014 in People

Being a consultant and a father, I often get to meet new people, either at my various projects or among the kids and parents I meet in my capacity as a father. Among all those people, it’s interesting to see how some can use one word without fear while others will practically never use it.

When trying to build a project culture, or when raising kids, I value people’s ability to respond to a given situation. E.g. if there’s a problem with a production system, are my project peers capable of handling the situation in a constructive manner? Does it matter to the individual who’s to blame, or do they respond to the situation at hand?

Numerous studies have shown that the ability to respond is one of the cornerstones of having a successful career as well as an enjoyable life. People that are afraid of taking action, for whatever reason, will usually make excuses for not doing so, and often those excuses will be in the form of explanations of why they are not required to, or are incapable of, taking action; or they will simply blame someone else for the problem, and the blame might even be valid. That, however, doesn’t change the result of making excuses.
People who don’t make excuses but take action also take control and thereby empower themselves. By acting and being in control, they make it possible to feel a sense of accomplishment, and with that comes higher self-esteem, whereas with excuses for being incapable of acting comes lower self-esteem.

Not surprisingly, people with high self-esteem turn to the empowering actions, but the odd factor is that quite often people with low self-esteem, out of fear of being at fault, decide on inaction.
Having low self-esteem, they certainly do not wish to be the one to blame. Better, then, to explain and take the sting out of the blame, or simply blame someone else. Both are essentially simple defense mechanisms. It’s a defense against being blamed, and as such it does not solve the real issue of feeling inferior/having low self-esteem. On the contrary, it adds to the issue. The gain from excuses is short-term: excuses relieve the pain right here, right now, but ignore the long-term consequence of asserting one’s inferiority.

We try to raise our kids to take action instead of blaming. E.g. if they’ve had a fight with each other, we are more likely to ask each of them “What could you have done differently to avoid this situation?” or “How can you fix it?” than “Who’s at fault?”.

Not only are we trying to help the kids avoid similar situations, we’re also showing them that they are in control of what happens to them and how they feel. In the words of Eleanor Roosevelt

No one can make you feel inferior without your consent.

Giving excuses or explanations is also giving consent. You are unconsciously saying “My inferiority makes me unable to respond constructively to the situation at hand”, and when someone else responds to the situation, that assertion is verified.

Meeting all sorts of different cultures, I often find that it all boils down to a cultural skewing of the meaning of “responsibility”. Those that make excuses often interpret “I’m responsible” as “I’m to blame”.
The persons who time and time again demonstrate their ability to respond seldom equate responsibility with being at fault, and why would they? The word literally means response-ability.

Working on a project, we’re trying out rest.li, and we’re using Windows as our OS, which makes the installation a little more cumbersome. I’d like to be able to redo the installation at a future point, so in any case I needed to document the process, and hey, why not do it so that someone other than me could benefit from it? I’ll also install HTTPie, since testing the generated services is a lot easier with that tool.

So let’s get rolling

The first subtask is to install HTTPie, and this post from Scott Hanselman helped me, but things seem to have changed slightly since then, so I’ll include a revised version.

Firstly, you’ll need to download Python; I chose the 64-bit MSI installer. When it’s installed, add the installation directory to your path. To do so, follow this process

  1. Right click “Computer” in file explorer
  2. Choose “properties”
  3. Choose “Advanced system settings”
  4. Press the “Environment variables” button

Depending on whether you installed for all users or just you, you should then change the corresponding path variable (either user or system) and add the installation directory and “installation directory\Scripts” to the path.

The former makes python available on the command prompt and the latter makes pip (a package management tool we’ll be using later) available.

To test that you’ve succeeded, open a command prompt (Windows-R and type cmd). First type python and hit enter; there should be no error telling you that python is an unknown command. Then repeat with pip.

The next step is to install curl. Make sure that the executable is in a directory that’s also in your path variable, and test in the same manner as with python and pip.

The next step is to execute distribute_setup.py. You can do this using the curl you just installed

curl http://python-distribute.org/distribute_setup.py | python
  • or, if you have issues connecting using curl, download the file from the link above and simply double-click it

The last step in installing HTTPie is to use pip

pip install -U https://github.com/jkbr/httpie/tarball/master

That will download HTTPie and install it on your machine. Pip will place the httpie script in your python script folder, so the tool is ready to be used from the command prompt when it’s done downloading, and you can test this by typing

http

at the command prompt and then hitting enter.

We are going to need to install

  • A JDK (1.6+ as of this writing)
  • Gradle
  • Conscript
  • Giter8
  • rest.li

First, make sure that you have the JDK installed. You can find the installer on Oracle’s site. Gradle, on the other hand, can be found at the Gradle download page. With both the JDK and Gradle installed, we’ll install Conscript, which you can find on its GitHub page. It’s a jar file, and it should be sufficient to download and double-click it.
Once you have conscript installed, you should add it to your path as well and test it by executing the command

cs

on the command line. With conscript installed, it’s time to install giter8. To do so, execute the following command

cs n8han/giter8

If you get an error saying jansi or some other lib could not be found, you might be blocked by a proxy, due to the fact that your traffic can’t be inspected since sbt-launch uses a repository URL starting with https. It might just work for you if you change it to http. You can do this (and will need to do it for both cs and g8) in the launchconfig file that can be found under your conscript installation folder, in .conscript\n8han\conscript\cs for conscript and .conscript\n8han\giter8\g8 for giter8. If that does not work, then you can try a slightly more complicated approach, which you can read about here.

When g8 is installed and everything is working as it should, you can create your first project by simply executing

g8 linkedin/rest.li-skeleton

in your command shell.


We haven’t used gradle yet, but you will have to when you wish to compile your rest.li project. The skeleton created in the previous step includes three gradle build files that will handle the code-generation tasks for you. For how to actually work with rest.li and gradle, see the rest.li site.

Following a discussion on testing and architecture, I thought I’d write a post. The statement was: when architecture is not informed by tests, a mess ensues. That’s of course nonsense; Versailles was never tested but is still recognized for its architecture.

The statement got me started. Rhetorically, it’s a clever statement. It uses an old trick most salesmen have under their skin: associate the item being sold (in this case, testing) with something of objective positive value, which here is information. The statement is of course also logically incorrect, but marketing never worries about mathematical correctness as long as the statement is either mathematically incomplete or ambiguous. However, the statement was not taken from a marketing campaign but from a discussion of engineering practice between J “Cope” Coplien and Uncle Bob (the latter wrote it), and in that context it’s incorrect. The key is not the test but the information; how the information came to be is irrelevant to the value of the information. So a more correct version of the statement would be “without information, mess would ensue”, and that is of course correct but also tautological.

The real value of information is in its use. If you don’t use the information but disregard it altogether, it has no value. Since testing is retroactive, you get the information after you are done with your work, and you can’t improve anything retroactively. Unless, of course, you are the secret inventor of the time machine, in which case “retroactively” becomes entirely fuzzy.

If you do not intend to create another version, the information you gained from testing has no value with respect to the architecture. So using tests as a means to produce value in the context of architecture requires an upfront commitment to produce at least one more version and to take the cost of a potential re-implementation. If you are not making this commitment, the value you gain from the information produced by your tests might be zero.

In short, you have a cost to acquire some information, the value of which is potentially zero.

It’s time to revisit the statement that got it all started and to try to formulate it in a way more helpful than a tautology.

“If you do not invest in the acquisition of information your architecture will become messy” 

You can try to assess the cost of acquiring information using different approaches and then choose the one that yields the most valuable information at the lowest cost.

There are a lot of tools to use to acquire information. One such tool is prototyping (or even pretotyping). Prototyping is the act of building something you know doesn’t work and then building another version that does. In other words, prototyping is when you commit to implementing a version, learning from it by (user) testing and then building a new version. Might that be the best approach? At some stage, for some projects, sure. Always? Of course not. Prototyping and pretotyping are good for figuring out what you want to build. So if you do not know what you (or your business) want, then use the appropriate tool. In innovative shops pretotyping might be the way to go. When you have figured out what to build, then you need to figure out how to build it. The act of figuring out how to solve a concrete task is called analysis. Analysis is good at producing information about how to do something in the best possible way.

Bottom line: there’s no silver bullet; you will always have to think and choose the right tool for the job.

Fine print: you can’t improve anything with testing. You can improve the next version with what you learn by testing, but only if there’s a next version.

Pure DCI in C#

Posted: August 16, 2013 in c#, DCI

I’ve seen several people try to do DCI in C#, but with the usual drawbacks of languages that are inherently class oriented. However, Microsoft has a project called Roslyn, which is currently a CTP, and I decided to try it out to see if I could use it to do tricks similar to what I’ve done with maroon.

It turned out to be very easy to work with, and within a few hours I was able to translate my first DCI program written fully in C#. The trick, as with maroon (and essentially Marvin as well), is that I rewrite the code before it gets compiled.

A context class is declared as a regular class but with the Context attribute.

A role is declared as an inner class with a Role attribute and can be used as a variable.

The MoneyTransfer context might then look like this:

    [Context]
    public class MoneyTransfer<TSource, TDestination>
        where TSource : ICollection<LedgerEntry>
        where TDestination : ICollection<LedgerEntry>
    {
        public MoneyTransfer(Account<TSource> source, Account<TDestination> destination, decimal amount)
        {
            Source = source;
            Destination = destination;
            Amount = amount;
        }

        [Role]
        private class Source : Account<TSource>
        {
            void Withdraw(decimal amount)
            {
                // debit the underlying account (details elided here)
            }

            void Transfer(decimal amount)
            {
                Console.WriteLine("Source balance is: " + this.Balance);
                Console.WriteLine("Destination balance is: " + Destination.Balance);

                Withdraw(amount);
                Destination.Deposit(amount);

                Console.WriteLine("Source balance is now: " + this.Balance);
                Console.WriteLine("Destination balance is now: " + Destination.Balance);
            }
        }

        [Role]
        private class Destination : Account<TDestination>
        {
            void Deposit(decimal amount)
            {
                // credit the underlying account (details elided here)
            }
        }

        [Role]
        public class Amount { }

        public void Trans()
        {
            Source.Transfer(Amount);
        }
    }
If a base class is declared for the inner classes, then it will be used as the type of the role field; if no base class is provided, the field will be declared dynamic. The source for Interact, as the tool is called, can be found at github.

Roslyn made this very easy, and I plan to see if I can make Interact feature complete compared to Marvin. The syntax will not be as fluid, because I can’t change the grammar, but the upside will be a more stable solution with the same or less effort.

New version of maroon

Posted: May 22, 2013 in DCI

Just a few seconds ago I pressed enter after writing

gem push maroon-0.7.1.gem

and I’m pretty satisfied with this version of maroon. The major changes are that you now define methods with regular method-definition syntax, i.e.

def methodname(arg1,...,argn)

The other change I made is in how the abstract syntax tree is treated. Maroon has three stages:

  1. Reading the source
  2. Creating the AST for each method. The AST is represented as a context with the productions playing the roles; productions are in the form of S-expressions
  3. Traversing the AST and performing the source transformation. In this stage there’s a transformation context where the individual ASTs for each method in turn play a role
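As a generic illustration of stages 2 and 3 (this is not maroon’s actual internals; the node names and the rewrite rule are invented for the sketch), an S-expression AST can be modeled as nested Ruby arrays and transformed with a recursive traversal:

```ruby
# Hypothetical S-expression AST for one role method, as nested arrays
ast = [:def, :transfer,
        [:call, :source, :withdraw, [:lvar, :amount]],
        [:call, :destination, :deposit, [:lvar, :amount]]]

# Depth-first traversal (stage 3): rewrite calls on one role into calls
# on self, the flavor of source transformation maroon performs
def rewrite(node, role)
  return node unless node.is_a?(Array)
  if node[0] == :call && node[1] == role
    [:call, :self, node[2], *node[3..-1].map { |n| rewrite(n, role) }]
  else
    node.map { |n| rewrite(n, role) }
  end
end

p rewrite(ast, :source)
# => [:def, :transfer, [:call, :self, :withdraw, [:lvar, :amount]],
#     [:call, :destination, :deposit, [:lvar, :amount]]]
```

Because every production is just data, a later stage can walk the same arrays to emit source text, or to collect the role-to-role connections mentioned below.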

One of the reasons why I changed the stages was to have the AST built as a separate step, so that other work could be built on top of it, e.g. one of the requirements I wrote about in a previous post: creating a visual representation of the network of interconnected objects that make up a system. That is, visually showing how the roles in a context connect to each other in various interactions and role methods.

I also feel that maroon itself is now a pretty good example of how to use maroon, and of how to do DCI. As always the source can be found at github.

Bootstrapping maroon

Posted: April 9, 2013 in DCI, Thoughts on development

At #wrocloverb in the beginning of March I started to bootstrap maroon and have now released a fully bootstrapped version. It took longer than I had anticipated based on the time it took me to write maroon in the first place. The main reason for this was that part of the process led me to find bugs needing fixing, or features required to implement maroon that maroon didn’t support(1). Both tasks are somewhat mind-boggling. How do you fix a bug in the tool you use to fix said bug? A lot of the time I ended up having two mental models of the same concept, and to be honest that’s one more than my mind can handle.
One model was of what the current version was capable of, and another was of what it was supposed to do. All this was part of the motivation for bootstrapping in the first place, and it’s often the reason why a lot of compilers are bootstrapped too. The current state of the code is a mess. I knew it would be. I’m often asked questions like “Is the code then in fact a lot cleaner when using DCI?” and my answer is that, for me, it depends on how I design the system. If I design as I presented at wroc_love.rb 2013 (top-down) then yes, the code is a lot cleaner; however, if I design as I often do when not using DCI (bottom-up) then I often find that the code becomes messier when I try to rewrite it using DCI. And maroon was designed bottom-up.
I’ve done some cleaning up but postponed a large part of it till the bootstrapping was actually complete, so that I had a foundation to test my refactorings against. So the now live version of maroon is fully bootstrapped, making maroon the most complete example of how to use maroon. The goal of the version thereafter is to clean the code up and make maroon a good DCI example as well, an example I hope to have both James and Trygve review if their schedules permit.

What bugs did I find?

  • Indexers on self didn’t work in role methods, so if an array played a role you couldn’t do self[1] to get the second element
  • You couldn’t name a class method the same as an instance method. This wasn’t really a big deal, because you couldn’t define class methods in the first place, but since that’s one of the features added it did become an issue

What features were added?

  • You can define initialize for context classes
  • You can define role methods and interactions with the same name as methods defined for objects of the Context class. E.g. you can define a role method called ‘define’
  • You can define class methods (The syntax for that is still messy but you can do it)
  • You can use splat arguments in role methods and interactions. In general, arguments to role methods are discouraged, a guideline I heavily violate in the current version of maroon
  • You can use block arguments (&b). The syntax for this is even worse than the syntax for class methods
  • You can choose whether to build the classes in memory or to output a classname.rb file for each generated class. That’s the way the bootstrapping works: I use the current stable version to generate the files of the next version, then use those generated files to generate an additional set of files, and that last set, generated using the upcoming version, is packaged as a gem and pushed
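The last point can be sketched generically (this is not maroon’s actual code; `emit`, the class name and the source string are made up for illustration). A generator either evaluates the generated source in the running process or writes it to a classname.rb file for the next bootstrap stage to load:

```ruby
# Hypothetical sketch of the two output modes: define the generated
# class in memory, or write it out as classname.rb for the next stage
def emit(class_name, source, in_memory: true)
  if in_memory
    Object.class_eval(source)                       # class is live immediately
  else
    File.write("#{class_name.downcase}.rb", source) # file feeds the next bootstrap stage
  end
end

emit("Greeter", "class Greeter; def hi; 'hi'; end; end")
puts Greeter.new.hi
```

The file mode is what lets a stable version generate the source of the upcoming version before that version ever runs.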

What features would I still like to add?

  • Default values for arguments to role and interaction methods
  • Make it possible to use all valid method names for role and interaction methods, e.g. field_name=, which is currently not possible
  • Cleaner syntax for both block arguments and for defining class methods. I have an idea for this but it won’t be until after I’m done with the cleaning up. It’s going to be an interesting task since it’s probably not going to be backwards compatible. The changes will be small and make the syntax intuitive to ruby programmers (which the current is not for the features in question) but since I’m bootstrapping I’m breaking my own build before I’m done.
  • A strict flag that would enforce strict adherence to DCI guidelines, and a warn flag that would warn if any of those guidelines were violated
  • Add a third option, that instead of creating classes in memory or class files will visualize the network of interconnected roles for each interaction.

(1) Yes, it’s a bit strange to talk about bootstrapping. You think and say stuff like “features required to implement maroon that maroon didn’t support”, which sounds kind of nonsensical. The most mind-boggling part is the debugging…