Working on a project where we needed more control over our development environment, and especially a way to make sure that configurations were consistent across all developers' machines and our test environment, I looked into setting up a Vagrant environment.

I started out by installing Vagrant on my Windows machine. Once it was installed I wanted to set up our environment to use Ubuntu boxes, and on Vagrant Cloud I found a box I could use named hashicorp/precise64. To add this box to my local Vagrant environment I ran

$ vagrant box add hashicorp/precise64

However, the download was blocked by a proxy, so instead I downloaded the box manually and installed it from a file

$ vagrant box add --name="hashicorp/precise64" file:///fully/qualified/path/to/file

After adding the box I updated the Vagrantfile to look like this

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
end

To check that everything was working I saved the file and ran

$ vagrant up

This worked like a charm and I followed up with

$ vagrant ssh

but alas, this requires an SSH client. Vagrant will suggest several that can be used, and I chose to install Cygwin.

After installing Cygwin I ran the ssh command again and got an SSH session to my default box. Vagrant is smart enough to let you ssh to the default box when there's just one. Once you move to a multi-machine setup you will need to provide the name of the machine you wish to ssh to, like so

$ vagrant ssh webserver

to ssh to a box named webserver.
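For reference, a minimal multi-machine Vagrantfile could look like the sketch below (the machine names are just examples):

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"

  # Each define block creates a named machine; with more than one
  # machine, vagrant ssh/up/reload take the machine name as argument
  config.vm.define "webserver"
  config.vm.define "database"
end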

By now I'd installed

  • Vagrant
  • Cygwin

The first thing I wanted to do was update the box, so I ran

$ sudo apt-get update

but again this was blocked by the corporate proxy.

Searching around the net to figure out how to configure Vagrant to use a given proxy, I found that there's a plugin called vagrant-proxyconf. To install plugins in Vagrant you would typically run

$ vagrant plugin install vagrant-proxyconf

However, that still requires Vagrant to be able to authenticate towards the proxy, so I ended up downloading the gem instead. The latest version was 1.4.0, which I found on rubygems.org. When you want to install a plugin from a local source, Vagrant will let you do so

$ vagrant plugin install vagrant-proxyconf --plugin-source file:///fully/qualified/path/vagrant-proxyconf-1.4.0.gem

This will install from the newly downloaded gem, getting around the proxy blocking the usual way of installing plugins. Now it was time to set up the proxy for use by the box. This is a simple change to the Vagrantfile

if Vagrant.has_plugin?("vagrant-proxyconf")
  config.proxy.http     = "http://192.168.56.1:3128/"
  config.proxy.https    = "http://192.168.56.1:3128/"
  config.proxy.no_proxy = "localhost,127.0.0.1,192.168.56.*"
end

The check for the plugin is not required, and I found that while debugging it's actually a good thing to leave it out, because then you will know whether the failure is due to the plugin installation failing or to an error in the configuration itself. For portability, however, it's a good thing to check for the plugin: the Vagrantfile is supposed to be something you can share across various environments, and some of those might not need the plugin.

So now we should be able to ssh into the box again and update. However, for the changes to take effect we need to reload the box

$ vagrant reload

and when the reload is done we can ssh to the box again and run apt-get update. However, when the proxy requires NTLM authentication this will still fail: even though the proxy is now configured correctly for the box, the box cannot authenticate and will get an HTTP 407 back. This particular step took me quite some time to resolve, but there's a solution to the problem called CNTLM. I thought about installing it on the boxes and then repackaging them to make them self-contained, or installing CNTLM on the host. I decided on the latter because that gave me a setup that allows other applications to use the same infrastructure. The downside, of course, is that everyone on the project will need to install CNTLM on their host as well. The installation was pretty straightforward. In the ini file I had to change a few things

  • user name
  • password
  • domain
  • address CNTLM should listen on. If you follow the rest of the examples here it should listen on 192.168.56.1:3128
  • corporate proxy

After getting it to work I highly recommend following the CNTLM recommendation of hashing your credentials.
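For reference, the relevant part of the ini file might look like the sketch below; every value is a placeholder you must replace with your own:

# cntlm.ini sketch; all values below are placeholders
# Replace Password with the hashes printed by cntlm -H if possible
Username    myuser
Domain      MYDOMAIN
Password    secret
# The upstream corporate proxy
Proxy       proxy.corp.example.com:8080
# Listen on the host-only address used in this post
Listen      192.168.56.1:3128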

After installing CNTLM I once again opened an SSH session to the box, and once again I was blocked. This time it was not so much the proxy as the network: for the box to be able to connect to the CNTLM proxy I needed to configure a host-only network, which is a network the box can use to communicate with the host and vice versa. It's rather simple to set up and only requires a change to the Vagrantfile. Add the line below somewhere in the configuration block

config.vm.network "private_network", ip: "192.168.56.2"

This will give the box a static IP of 192.168.56.2, and since the host by default will have a static IP of 192.168.56.1 they are on the same subnet and should be able to communicate. But we're not there yet: you will need to set up a firewall rule on the host. There's a good guide on serverfault.com that helped me set this up.

Now with that in place you should finally be able to access the outside world from your vagrant boxes.
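Putting together all the Vagrantfile pieces from this post, the end result looks like this:

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"

  # Host-only network so the guest can reach CNTLM on the host
  config.vm.network "private_network", ip: "192.168.56.2"

  # Route the guest's traffic through CNTLM listening on the host
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://192.168.56.1:3128/"
    config.proxy.https    = "http://192.168.56.1:3128/"
    config.proxy.no_proxy = "localhost,127.0.0.1,192.168.56.*"
  end
end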

To sum up

  1. Download and install Vagrant
  2. Download the box you require from Vagrant Cloud
  3. Add the box with
    $ vagrant box add --name="name of box" file:///fully/qualified/path/to/file
  4. Install Cygwin (including the ssh package)
  5. Install CNTLM and configure it
  6. Download vagrant-proxyconf from rubygems.org
  7. Install the plugin with
    $ vagrant plugin install vagrant-proxyconf --plugin-source file:///fully/qualified/path/vagrant-proxyconf-1.4.0.gem
  8. Configure the proxy in the Vagrantfile to point to your CNTLM proxy (see the example above)
  9. Add a host-only network between guest and host by adding this line to the Vagrantfile
    config.vm.network "private_network", ip: "192.168.56.2"
  10. Open the firewall on the host so the guest can connect to CNTLM (guide)

Being able to respond

Posted: July 25, 2014 in People

Being a consultant and a father I often get to meet new people, either on my various projects or among the kids and parents I meet in my capacity as a father. Among all those people it's interesting to see how some can use one particular word without fear while others will practically never use it.

When trying to build a project culture, or when raising kids, I value people's ability to respond to a given situation. For example, if there's a problem with a production system, are my project peers capable of handling the situation in a constructive manner? Does it matter to the individual who's to blame, or do they respond to the situation at hand?

Numerous studies have shown that the ability to respond is one of the cornerstones of having a successful career as well as an enjoyable life. People who are afraid of taking action, for whatever reason, will usually make excuses, and often those excuses take the form of explanations of why they are not required to take action or are incapable of doing so, or they will simply blame someone else for the problem. The blame might even be valid; that, however, doesn't change the result of making excuses.
People who don't make excuses but take action also take control and thereby empower themselves. By acting and being in control they make it possible to feel a sense of accomplishment, and with that comes higher self-esteem, whereas excuses for being incapable of acting bring lower self-esteem.

Not surprisingly, people with high self-esteem turn to the empowering actions, but the odd thing is that quite often people with low self-esteem, out of fear of being at fault, decide on inaction.
Having low self-esteem, they certainly do not wish to be the one to blame; better, then, to explain and take the sting out of the blame, or simply to blame someone else. Both are essentially simple defense mechanisms: a defense against being blamed. As such they do not solve the real issue of feeling inferior and having low self-esteem; on the contrary, they add to it. The gain from excuses is short term. Excuses relieve the pain right here, right now, but ignore the long-term consequences of asserting one's inferiority.

We try to raise our kids to take action instead of assigning blame. For example, if they've had a fight with each other we are more likely to ask each of them “What could you have done differently to avoid this situation?” or “How can you fix it?” than “Who's at fault?”

Not only are we trying to help the kids avoid similar situations, we’re also showing them that they are in control of what happens to them and how they feel. In the words of Eleanor Roosevelt

No one can make you feel inferior without your consent.

Giving excuses or explanations is also giving consent. You are unconsciously saying “My inferiority makes me unable to respond constructively to the situation at hand” and when someone else responds to the situation that assertion is verified.

Meeting all sorts of different cultures, I often find that it all boils down to a cultural skewing of the meaning of “responsibility”. Those who make excuses often interpret “I'm responsible” as “I'm to blame”.
The people who time and time again demonstrate their ability to respond seldom equate responsibility with being at fault, and why would they? The word literally means response-ability.

Working on a project we're trying out rest.li, and since we're using Windows as our OS the installation is a little more cumbersome. I'd also like to be able to redo the installation at a future point, so in any case I needed to document the process, and hey, why not do that so someone other than me could benefit from it? I'll also install HTTPie, since testing the generated services is a lot easier with that tool.

So let’s get rolling

The first subtask will be to install HTTPie. This post from Scott Hanselman helped me, but things seem to have changed slightly since then, so I'll include a revised version.

First you'll need to download Python; I chose the 64-bit MSI installer. When it's installed, add the installation directory to your path. To do so follow this process

  1. Right click “Computer” in file explorer
  2. Choose “properties”
  3. Choose “Advanced system settings”
  4. Press the “Environment variables” button

Depending on whether you installed for all users or just for yourself, you should then change the corresponding path variable (either user or system) and add the installation directory and “installation directory\Scripts” to the path

The former makes python available on the command prompt and the latter makes pip (a package management tool we’ll be using later) available.

To test that you've succeeded, open a command prompt (Windows-R and type cmd), type python and hit enter; there should be no error telling you that python is an unknown command. Then repeat with pip

Next, install curl. Make sure that the executable is in a directory that's also in your path variable, and test in the same manner as with python and pip.

Then execute distribute_setup.py; you can do this using the curl you just installed

curl http://python-distribute.org/distribute_setup.py | python

or, if you have issues connecting using curl, download the file from the link above and simply double-click it.

The last step in installing HTTPie is to use pip

pip install -U https://github.com/jkbr/httpie/tarball/master

That will download HTTPie and install it on your machine. Pip will place the httpie script in your Python Scripts folder, so the tool is ready to be used from the command prompt as soon as the download is done. You can test this by typing

http

at the command prompt and then hit enter.

We are going to need to install

  • A JDK (1.6+ as of this writing)
  • Gradle
  • Conscript
  • Giter8
  • rest.li

First make sure that you have the JDK installed; you can find the installer on Oracle's site. Gradle, on the other hand, can be found at the Gradle download page. With both the JDK and Gradle installed we'll install Conscript, which you can find on its GitHub page. It's a jar file, and it should be sufficient to download it and double-click it.
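Before moving on you can verify the JDK and Gradle installations from a command prompt; both commands simply print version information:

java -version
gradle -v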
Once you have Conscript installed you should add it to your path as well, and test it by executing the command

cs

on the command line. With Conscript installed it's time to install Giter8; to do so, execute the following command

cs n8han/giter8

If you get an error saying jansi or some other lib could not be found, you might be blocked by a proxy because your traffic can't be inspected, since sbt-launch uses a repository URL starting with https. It might just work for you if you change it to http. You can do this (and will need to do it for both cs and g8) in the launchconfig file found under your Conscript installation folder, in .conscript\n8han\conscript\cs for Conscript and .conscript\n8han\giter8\g8 for Giter8. If that does not work, you can try a slightly more complicated approach, which you can read about here

When g8 is installed and everything is working as it should, you can create your first project by simply executing

g8 linkedin/rest.li-skeleton

in your command shell.

We haven't used Gradle yet, but you will have to when you wish to compile your rest.li project. The skeleton created in the previous step includes three Gradle build files that will handle the code generation tasks for you. For how to actually work with rest.li and Gradle, see the rest.li site.
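To give an idea of the workflow: from the skeleton directory you compile with Gradle, and once you have a service running you can poke it with HTTPie. The resource URL below is a made-up placeholder, not something the skeleton provides:

gradle build
http GET http://localhost:8080/yourResource/1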

Following a discussion on testing and architecture I thought I'd write a post. The statement was: when architecture is not informed by tests, a mess ensues. That's of course nonsense. Versailles was never tested but is still recognized for its architecture.

The statement got me started. Rhetorically it's a clever statement. It uses an old trick most salesmen have under their skin: it associates the item being sold (in this case testing) with something of objectively positive value, in this case information. The statement is of course also logically incorrect, but marketing never worries about mathematical correctness as long as the statement is either mathematically incomplete or ambiguous. However, the statement was not taken from a marketing campaign but from a discussion of engineering practice between James “Cope” Coplien and Uncle Bob (the latter wrote it), and in that context it's incorrect. The key is not the test but the information; how the information came to be is irrelevant to the value of the information. A more correct version of the statement would be “Without information, mess ensues”, which is of course correct but also tautological.

The real value of information is in its use. If you don't use the information but disregard it altogether, it has no value. Since testing is retroactive, you get the information after you are done with your work, and you can't improve anything retroactively. Unless of course you are the secret inventor of the time machine, in which case “retroactively” becomes entirely fuzzy.

If you do not intend to create another version, the information you gained from testing has no value with respect to the architecture. So using tests as a means to produce value in the context of architecture requires an upfront commitment to produce at least one more version and to take the cost of a potential re-implementation. If you are not making this commitment, the value you gain from the information produced by your tests might be zero.

In short you have a cost to acquire some information, the value of which is potentially zero.

It's time to revisit the statement that got it all started and to try to formulate it as something more helpful than a tautology.

“If you do not invest in the acquisition of information your architecture will become messy” 

You can try to assess the cost of acquiring information using different approaches and then choose the one that yields the most valuable information at the lowest cost.

There are a lot of tools you can use to acquire information. One such tool is prototyping (or even pretotyping). Prototyping is the act of building something you know doesn't work and then building another version that does. In other words, prototyping is when you commit to implementing a version, learning from it by (user) testing, and then building a new version. Might that be the best approach? At some stage, for some projects, sure. Always? Of course not. Prototyping and pretotyping are good for figuring out what you want to build, so if you do not know what you (or your business) want, use the appropriate tool; in innovative shops pretotyping might be the way to go. When you have figured out what to build, you then need to figure out how to build it. The act of figuring out how to solve a concrete task is called analysis. Analysis is good at producing information about how to do something in the best possible way.

Bottom line: there's no silver bullet. You will always have to think and choose the right tool for the job.

Fine print: you can't improve anything with testing. You can improve the next version with what you learn by testing, but only if there's a next version.

Pure DCI in C#

Posted: August 16, 2013 in c#, DCI

I've seen several attempts at doing DCI in C#, but with the usual drawbacks of languages that are inherently class-oriented. However, Microsoft has a project called Roslyn, which is currently a CTP, and I decided to try it out to see if I could use it to do tricks similar to what I've done with maroon.

It turned out to be very easy to work with, and within a few hours I was able to translate my first DCI program written fully in C#. The trick, as with maroon (and essentially Marvin as well), is that I rewrite the code before it gets compiled.

A context class is declared as a regular class but with the Context attribute.

A role is declared as an inner class with a role attribute and can be used as a variable.

The MoneyTransfer context might then look like this

    [Context]
    public class MoneyTransfer<TSource, TDestination>
        where TSource : ICollection<LedgerEntry>
        where TDestination : ICollection<LedgerEntry>
    {
        public MoneyTransfer(Account<TSource> source, Account<TDestination> destination, decimal amount)
        {
            Source = source;
            Destination = destination;
            Amount = amount;
        }

        [Role]
        private class Source : Account<TSource>
        {
            void Withdraw(decimal amount)
            {
                this.DecreaseBalance(amount);
            }
            void Transfer(decimal amount)
            {
                Console.WriteLine("Source balance is: " + this.Balance);
                Console.WriteLine("Destination balance is: " + Destination.Balance);

                Destination.Deposit(amount);
                this.Withdraw(amount);

                Console.WriteLine("Source balance is now: " + this.Balance);
                Console.WriteLine("Destination balance is now: " + Destination.Balance);
            }
        }

        [Role]
        private class Destination : Account<TDestination>
        {
            void Deposit(decimal amount)
            {
                this.IncreaseBalance(amount);
            }
        }

        [Role]
        public class Amount { }

        public void Trans()
        {
            Source.Transfer(Amount);
        }
    }

If a base class is declared for the inner classes, it will be used as the type of the role field; if no base class is provided, the field will be declared dynamic. The source for Interact, as the tool is called, can be found at GitHub.

Roslyn made this very easy, and I plan to see if I can make Interact feature-complete compared to Marvin. The syntax will not be as fluid, because I can't change the grammar, but the upside will be a more stable solution with the same or less effort.

New version of maroon

Posted: May 22, 2013 in DCI

Just a few seconds ago I pressed enter after writing

gem push maroon-0.7.1.gem

and I'm pretty satisfied with this version of maroon. The major change is that you now define methods with regular method definition syntax, i.e.

def methodname(arg1,...,argn)
end

The other change I made is in how the abstract syntax tree is treated. Maroon has three stages

  1. Reading the source
  2. Creating the AST for each method. The AST is represented as a context with the productions playing the roles, and productions are in the form of S-expressions
  3. Traversal of the AST, performing the source transformation. In this stage there's a transformation context where the individual ASTs for each method in turn play a role (a sketch of such a traversal is shown below)
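To illustrate the idea (this is a generic sketch, not maroon's actual representation), an AST of S-expressions can be modelled in Ruby as nested arrays and traversed recursively:

# The method `def withdraw(amount); self.balance -= amount; end`
# could be represented by productions like these
ast = [:def, :withdraw, [:args, :amount],
        [:op_asgn, [:call, :self, :balance], :-, [:lvar, :amount]]]

# A tiny stage-3-style traversal that collects every production type
def node_types(sexp, acc = [])
  return acc unless sexp.is_a?(Array)
  acc << sexp.first
  sexp.drop(1).each { |child| node_types(child, acc) }
  acc
end

p node_types(ast) # => [:def, :args, :op_asgn, :call, :lvar]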

One of the reasons I changed the stages was to have the AST built as a separate step, so that other work could be built on top of it, e.g. one of the requirements I wrote of in a previous post: creating a visual representation of the network of interconnected objects that make up a system, that is, visually showing how the roles in a context connect to each other in various interactions and role methods.

I also feel that maroon itself is now a pretty good example of how to use maroon, and of how to do DCI. As always, the source can be found at GitHub.

Bootstrapping maroon

Posted: April 9, 2013 in DCI, Thoughts on development

At #wrocloverb at the beginning of March I started to bootstrap maroon, and I have now released a fully bootstrapped version. It took longer than I had anticipated based on the time it took me to write maroon in the first place. The main reason for this was that part of the process led me to find bugs that needed fixing, or features required to implement maroon that maroon didn't support(1). Both tasks are somewhat mind-boggling: how do you fix a bug in the tool you use to fix said bug? A lot of the time I ended up having two mental models of the same concept, and to be honest that's one more than my mind can handle.
One model was of what the current version was capable of, and the other of what it was supposed to do. All this was part of the motivation for bootstrapping in the first place, and it is often the reason why a lot of compilers are bootstrapped too. The current state of the code is a mess; I knew it would be. I'm often asked questions like “Is the code in fact a lot cleaner when using DCI?” and my answer is that, for me, it depends on how I design the system. If I design as I presented at wroc_love.rb 2013 (top-down) then yes, the code is a lot cleaner. However, if I design as I often do when not using DCI (bottom-up), then I often find that the code becomes messier when I try to rewrite it using DCI, and maroon was designed bottom-up.
I've done some cleaning up, but postponed a large part of it until the bootstrapping was actually complete, so that I had a foundation to test my refactorings against. The now-live version of maroon is thus fully bootstrapped, making maroon the most complete example of how to use maroon. The goal of the version thereafter is to clean the code up and make maroon a good DCI example as well, an example I hope to have both James and Trygve review if their schedules permit. What bugs did I find?

  • Indexers on self didn't work in role methods, so if an array played a role you couldn't do self[1] to get the second element
  • You couldn't name a class method the same as an instance method. This wasn't really a big deal, because you couldn't define class methods in the first place, but since that's one of the features added, it did become an issue

What features were added?

  • You can define initialize for context classes
  • You can define role methods and interactions with the same name as methods defined for objects of the Context class. E.g. you can define a role method called ‘define’
  • You can define class methods (The syntax for that is still messy but you can do it)
  • You can use splat arguments in role methods or interactions. In general, arguments for role methods are discouraged, which I heavily violate in the current version of maroon
  • You can use block arguments (&b). The syntax for this is even worse than the syntax for class methods
  • You can choose whether to build the classes in memory or to output a classname.rb file for each generated class. That's the way the bootstrapping works: I use the current stable version to generate the files of the next version, then use those generated files to generate an additional set, and the last set of files, generated using the upcoming version, is then packaged as a gem and pushed.

What features would I still like to add?

  • Default values for arguments to role and interaction methods
  • Make it possible to use all valid method names for role and interaction methods, e.g. field_name=, which is currently not possible
  • Cleaner syntax for both block arguments and for defining class methods. I have an idea for this but it won’t be until after I’m done with the cleaning up. It’s going to be an interesting task since it’s probably not going to be backwards compatible. The changes will be small and make the syntax intuitive to ruby programmers (which the current is not for the features in question) but since I’m bootstrapping I’m breaking my own build before I’m done.
  • A strict flag that would enforce strict adherence to DCI guidelines, and a warn flag that would warn if any of those guidelines were violated
  • Add a third option that, instead of creating classes in memory or class files, will visualize the network of interconnected roles for each interaction.

(1) Yes, it's a bit strange to talk about bootstrapping. You think/say stuff like “features required to implement maroon that maroon didn't support”, which sounds kind of nonsensical a lot of the time. The most mind-boggling part is when debugging…