Archive for the ‘c#’ Category

Pure DCI in C#

Posted: August 16, 2013 in c#, DCI

I’ve seen several attempts at doing DCI in C#, but with the usual drawbacks of languages that are inherently class oriented. However, Microsoft has a project called Roslyn, which is currently a CTP, and I decided to try it out to see whether I could use it to do tricks similar to what I’ve done with maroon.

It turned out to be very easy to work with, and within a few hours I was able to translate my first DCI program written fully in C#. The trick, as with maroon (and essentially Marvin as well), is that I rewrite the code before it gets compiled.

A context class is declared as a regular class, but with the Context attribute.

A role is declared as an inner class with a role attribute and can be used as a variable.

The MoneyTransfer context might then look like this:

    [Context]
    public class MoneyTransfer<TSource, TDestination>
        where TSource : ICollection<LedgerEntry>
        where TDestination : ICollection<LedgerEntry>
    {
        public MoneyTransfer(Account<TSource> source, Account<TDestination> destination, decimal amount)
        {
            Source = source;
            Destination = destination;
            Amount = amount;
        }

        [Role]
        private class Source : Account<TSource>
        {
            void Withdraw(decimal amount)
            {
                this.DecreaseBalance(amount);
            }
            void Transfer(decimal amount)
            {
                Console.WriteLine("Source balance is: " + this.Balance);
                Console.WriteLine("Destination balance is: " + Destination.Balance);

                Destination.Deposit(amount);
                this.Withdraw(amount);

                Console.WriteLine("Source balance is now: " + this.Balance);
                Console.WriteLine("Destination balance is now: " + Destination.Balance);
            }
        }

        [Role]
        private class Destination : Account<TDestination>
        {
            void Deposit(decimal amount)
            {
                this.IncreaseBalance(amount);
            }
        }

        [Role]
        public class Amount { }

        public void Trans()
        {
            Source.Transfer(Amount);
        }
    }

If base classes are declared for the inner classes, they will be used as the types of the role fields; if no base class is provided, the field will be declared dynamic. The source for Interact, as the tool is called, can be found at GitHub.
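To make that concrete, here is my own sketch of the kind of code such a rewrite could produce. This is not Interact’s actual output: the reduced Account stand-in, the flattened method names and the overall rewrite shape are all my assumptions.

```csharp
using System;

// Stand-in for the Account<T> used in the post, reduced so the sketch compiles.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) { Balance = balance; }
    public void DecreaseBalance(decimal amount) { Balance -= amount; }
    public void IncreaseBalance(decimal amount) { Balance += amount; }
}

// Sketch of a possible rewrite: role inner classes collapse into typed
// fields, and each role method becomes a private method whose "this"
// is replaced by the role field.
public class MoneyTransfer
{
    private readonly Account Source;
    private readonly Account Destination;
    private readonly decimal Amount;

    public MoneyTransfer(Account source, Account destination, decimal amount)
    {
        Source = source;
        Destination = destination;
        Amount = amount;
    }

    public void Trans()
    {
        Source_Transfer(Amount);
    }

    // The role method "Transfer" on Source, flattened onto the context.
    private void Source_Transfer(decimal amount)
    {
        Destination_Deposit(amount);
        Source_Withdraw(amount);
    }

    private void Source_Withdraw(decimal amount) { Source.DecreaseBalance(amount); }
    private void Destination_Deposit(decimal amount) { Destination.IncreaseBalance(amount); }
}
```

The point of the sketch is only that the role fields carry the base-class type, so the flattened methods stay fully compile-time checked.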

Roslyn made this very easy, and I plan to see whether I can make Interact feature complete compared to Marvin. The syntax will not be as fluid, because I can’t change the grammar, but the upside will be a more stable solution for the same or less effort.

Continuation passing

Posted: June 3, 2009 in c#, ProS

While working on version 0.3 of ProS I had the following requirement:

When given a list of statements that all return a value of IProcessor (an interface that defines a method called Process), call each processor’s Process method in turn, and pass the result of the execution to the next processor in the list if and only if the result is null. At the same time, the sequence of calls needs to be wrapped in a new IProcessor object, so that the entire sequence can be part of another sequence.

After trying a few different approaches I ended up with using continuation passing. But what is that?

Given two methods:

public int GetRandomInt()
{
    return new Random().Next();
}

public void DoSomethingWithAnInt(int i)
{
    Console.WriteLine(i.ToString());
}

a Main could look like this:

public static void Main()
{
    var i = GetRandomInt();
    DoSomethingWithAnInt(i);
}

If we wanted to turn that into continuation passing, we would have to change the first method into something like:

public void GetRandomInt(Action<int> doSomethingWithAnInt)
{
    var i = new Random().Next();
    doSomethingWithAnInt(i);
}

and Main would then simply be:

public static void Main()
{
    GetRandomInt(DoSomethingWithAnInt);
}

This is actually more common than you might think at first. Think of any async call in .NET: you’ll probably remember that one of the arguments is an AsyncCallback. That callback is the continuation of the async call. So for implementing asynchronous behaviour, the continuation-passing style is very straightforward: simply pass what you want executed when the first call completes.
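For instance, with the classic Asynchronous Programming Model a read looks roughly like this (a minimal sketch using a MemoryStream so it runs anywhere):

```csharp
using System;
using System.IO;

class ApmSketch
{
    static void Main()
    {
        var stream = new MemoryStream(new byte[] { 1, 2, 3 });
        var buffer = new byte[3];

        // The AsyncCallback lambda is the continuation: it runs when the read completes.
        stream.BeginRead(buffer, 0, buffer.Length, asyncResult =>
        {
            int bytesRead = stream.EndRead(asyncResult);
            Console.WriteLine("Read " + bytesRead + " bytes");
        }, null);
    }
}
```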

I need to represent a tree where each node is a decision point. E.g. I might need to represent this code structure:

if (a())
{
    DoA();
}
else if (b())
{
    DoB();
}
else
{
    throw new InvalidOperationException("No sensible default action");
}

However, a() and b() are not known at compile time, and the number of ifs isn’t known either.

I ended up with a class called ContinuationProcessor that looks similar to this:

public class ContinuationProcessor<T> : IProcessor<T>
    where T : class
{
    private readonly ContinuationProcessor<T> continuation;
    private readonly IProcessor<T> processor;
    private readonly Func<object[], bool> condition;

    public ContinuationProcessor(ContinuationProcessor<T> continuation,
                                 IProcessor<T> processor,
                                 Func<object[], bool> condition)
    {
        //initialize the private fields
        this.continuation = continuation;
        this.processor = processor;
        this.condition = condition;
    }

    //This method is defined in IProcessor
    public T Process(T obj, params object[] arguments)
    {
        T processedObj = null;

        //the condition decides, based on the incoming value and the
        //arguments, whether this processor should run at all
        if (condition(new object[] { obj }.Concat(arguments).ToArray()))
        {
            processedObj = processor.Process(obj, arguments);
        }
        if (continuation != null)
        {
            processedObj = continuation.Process(processedObj, arguments);
        }
        return processedObj;
    }
}

So it ends up not being true continuation passing, since I end up returning a value, but it does make it possible to represent an arbitrary number of decision points together with the code that should be executed in case the associated decision is taken.
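To make that concrete, here is my sketch of how such a chain could represent the if/else-if/else structure above. The IProcessor interface and the DelegateProcessor helper are assumptions based on the description, not ProS’s actual types, and the ContinuationProcessor is a condensed variant of the one above (with the incoming value passed to the condition) so the sketch compiles on its own:

```csharp
using System;
using System.Linq;

// Assumed shape of the interface described in the post.
public interface IProcessor<T> where T : class
{
    T Process(T obj, params object[] arguments);
}

// Small helper (my addition) so the decision actions can be written inline.
public class DelegateProcessor<T> : IProcessor<T> where T : class
{
    private readonly Func<T, T> action;
    public DelegateProcessor(Func<T, T> action) { this.action = action; }
    public T Process(T obj, params object[] arguments) { return action(obj); }
}

// Condensed variant of the ContinuationProcessor above.
public class ContinuationProcessor<T> : IProcessor<T> where T : class
{
    private readonly ContinuationProcessor<T> continuation;
    private readonly IProcessor<T> processor;
    private readonly Func<object[], bool> condition;

    public ContinuationProcessor(ContinuationProcessor<T> continuation,
                                 IProcessor<T> processor,
                                 Func<object[], bool> condition)
    {
        this.continuation = continuation;
        this.processor = processor;
        this.condition = condition;
    }

    public T Process(T obj, params object[] arguments)
    {
        T processedObj = null;
        if (condition(new object[] { obj }.Concat(arguments).ToArray()))
            processedObj = processor.Process(obj, arguments);
        if (continuation != null)
            processedObj = continuation.Process(processedObj, arguments);
        return processedObj;
    }
}

class Demo
{
    static void Main()
    {
        // "else if: DoB()" runs only if nothing has produced a result yet.
        var elseIfB = new ContinuationProcessor<string>(
            null,
            new DelegateProcessor<string>(_ => "result of DoB"),
            args => args[0] == null);

        // "if (a()) DoA()" where a() happens to evaluate to false here.
        var ifA = new ContinuationProcessor<string>(
            elseIfB,
            new DelegateProcessor<string>(_ => "result of DoA"),
            args => false);

        Console.WriteLine(ifA.Process("input")); // prints "result of DoB"
    }
}
```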

It’s a lot of fun writing code that represents code, but it gives me a headache trying to debug it…

Just finished v0.2. Actually, there’s not that much new in v0.2: it simply adds property production to the language as well. The syntax is similar to the one used in C#.

Along the way I had quite a few strange experiences. The one that took me the longest to find a solution for was the fact that new LanguageCompiler(new Grammar()) (LanguageCompiler is an Irony class) meant two different things depending on how I used my compiler. If I used it with the Grammar Explorer from the Irony project or in the VS SDK hive, everything worked fine, but when using it stand-alone to compile to a .dll, new Grammar() suddenly turned itself into instantiating the Irony Grammar class instead of the ProS Grammar.

I never figured out why, but I can only say it’s a good thing to have the source code of the libraries you’re using when you have to hunt down the reason why they throw a null reference exception.

I guess the next thing I’ll include is a property setter, or more advanced caching than the simple property.

I’ve decided to write a post for, if not all the incremental versions of ProS, then at least quite a few of them. My goal is to make additions to ProS and then rebuild ProS using those additions. There are mainly two reasons for this approach: I get a good idea of what’s working and what’s not, and I might just get new ideas along the way.

Reading a book written by the team who originally created SharpDevelop (an open source IDE for C#), I learned the concept of “eating your own shit”, as they called it. The search-and-replace functionality had been lacking, and at some point they realized that the developer doing that part of the IDE was himself using UltraEdit (or some other tool) when he needed to search and replace. With that in mind they decided that going forward they could only use SharpDevelop for their development.

It’s the same thing I’m trying to do here, though my goal is not to make a Turing-complete language. Actually I’m not sure it’s even Turing incomplete 😉 So writing all of ProS in ProS will not be an option.

The first version is little more than a configurable object factory. Except for things such as usings, types and values, there’s really only one construct in the language: the constructor.

The constructor gives the option of creating what will end up being a method returning an instance of an object. The syntax is:

constructor returnType Name(arguments) : objectType
{
    statements
}

constructor is a keyword saying: “next is the definition of a constructor”. Arguments have the C-like syntax type name, and multiple arguments are separated by commas.

Statements can in this version only be used for setting ctor arguments (ctor is the term I’ll use for class constructors, so as not to confuse them with ProS constructors; by the way, the term class is intentional). A statement could look like:

number = 10;

That would tell the ProS compiler that each time this constructor is called, the value passed for the argument number to the ctor must be 10. The argument names must match ctor argument names as well, and there must be a ctor with a list of arguments that matches the total of arguments and statements.
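My mental model of what such a constructor compiles down to is a plain static factory method, roughly like this (a sketch; Target, MakeTarget and the (int, string) ctor are hypothetical names of my own, not ProS output):

```csharp
using System;

// Hypothetical class whose ctor takes (int number, string text).
public class Target
{
    public int Number { get; private set; }
    public string Text { get; private set; }
    public Target(int number, string text) { Number = number; Text = text; }
}

public static class Generated
{
    // For a ProS constructor containing the statement "number = 10;",
    // the pinned argument disappears from the parameter list and the
    // remaining ctor argument becomes a method parameter.
    public static Target MakeTarget(string text)
    {
        return new Target(10, text);
    }
}
```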

A complete example could look like the below:


using Irony.Compiler;
using Motologi.ProSLanguageService;
using Motologi.ProSLanguageService.Compilers;
alias tokenCompiler = ITokenCompiler;

class Compilers
{
    constructor tokenCompiler Alias(AstNode node) : AliasCompiler {
    }
    constructor tokenCompiler Using(AstNode node) : UsingCompiler {
    }
    constructor tokenCompiler Argument(AstNode node, TypeEmitter typeEmitter) : ArgumentCompiler {
    }
    constructor tokenCompiler ArgumentValue(int position, bool isForStaticMethod) : ArgumentValueCompiler {
    }
    constructor tokenCompiler Class(AstNode node, TypeEmitter typeEmitter) : ClassCompiler {
    }
    constructor tokenCompiler Configuration(AstNode node, TypeEmitter typeEmitter) : ConfigurationCompiler {
    }
    constructor tokenCompiler Constructor(AstNode node, TypeEmitter typeEmitter) : ConstructorCompiler {
    }
    constructor tokenCompiler Identifier(AstNode node, TypeEmitter typeEmitter) : IdentifierCompiler {
    }
    constructor tokenCompiler Int(AstNode node, TypeEmitter typeEmitter) : IntCompiler {
    }
    constructor tokenCompiler Statement(AstNode node, TypeEmitter typeEmitter) : StatementCompiler {
    }
    constructor tokenCompiler String(AstNode node, TypeEmitter typeEmitter) : StringCompiler {
    }
    constructor tokenCompiler Type(AstNode node, TypeEmitter typeEmitter) : TypeCompiler {
    }
}

That class declaration is the inner workings of the ProS compiler version 0.2. The result is two classes: one is called Injector and the other is called Compilers. The class Compilers is what is directly defined in the above code. For each of the constructors there’ll be one static method on the class Compilers. The method will take the arguments listed in the argument list of the constructor and will return an instance of the type specified after the ‘:’, i.e. TypeCompiler for the last constructor.

The Injector class is nothing more than a container. For all the constructors defined in all classes, there’ll be an overload of a method called Get. In essence it’s a large factory that, given an ID and a set of arguments, will return an object.

In contrast with, say, Windsor’s ServiceContainer or similar reflection-based frameworks, the list of arguments is type checked at compile time.

The approach of compiling the dependency configuration does impose some other (sometimes strange) issues, so the gain in type safety and performance is not free. But some of those issues can be resolved with delayed compilation, so that the types mentioned above won’t be created until runtime. I have a plan for addressing just that in a later version of ProS. For now I’ll dig into ProS v0.2 using the above configuration.

Well, as I mentioned in Creating dynamic types in .NET, I’m currently working on a language compiler. The language is called ProS. Come to think of it, I might blog on the language name at some other point. The short version is that my thesis on compilers at uni was called ‘ProS’, and when I worked on the first version of this language I realized it had a lot of similarities to that project.

If you wonder what happened to the first version: it died. My laptop died two days after I infected my backup with a virus while making a backup of my mom’s HDD. Note to self: use separate backup media for my own data in the future.

The language is YANL (yet another .NET language); you can’t write a whole program using the language, only certain classes. I’m using dependency injection in all my projects and I’ve been using a few different containers, and as noted in MAB ContainerModel / Funq: a transparent container, most are painstakingly slow when it comes to creating objects. To me that’s a bit weird, since creating objects is exactly what they are all used for!

The problem with most of them is that they are built on reflection. To me that’s two problems in one. The main reason they are slow is that reflection is slow. The second problem is that reflection is not type safe.

(That is not entirely true: see Typed Reflection. But for the way reflection is used by ObjectBuilder, Windsor and similar, the statement holds.)
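A quick illustration of both points, using a hypothetical Service class of my own: the reflection route defers all argument checking to runtime, while the direct call is verified by the compiler (and is also much cheaper per call):

```csharp
using System;

public class Service
{
    public int Port { get; private set; }
    public Service(int port) { Port = port; }
}

class ReflectionVsDirect
{
    static void Main()
    {
        // Reflection-based creation: the argument list is only checked at
        // runtime. Activator.CreateInstance(typeof(Service), "80") would
        // compile happily and then fail with a MissingMethodException.
        var viaReflection = (Service)Activator.CreateInstance(typeof(Service), 80);

        // Direct creation: the compiler verifies the argument list.
        var direct = new Service(80);

        Console.WriteLine(viaReflection.Port == direct.Port); // True
    }
}
```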

I like the way Funq, mentioned above, solves the problem with lambda expressions, but I decided to take another approach by making a domain-specific language to handle the injection. Basically what you can do is something like:

class Injector
{
    constructor returnType GetMyObject([arguments])
    {
        //Here go your dependency arguments.
        //Setting an argument named arg1 to 10:
        //arg1 = 10;

        //Then come property setters.
        //Same syntax as above.
    }
}

This will compile into a class named Injector with a set of methods called GetMyObject. The number of methods created depends on the number of constructors on the class returned by GetMyObject and on the arguments given to GetMyObject. The arguments given to GetMyObject will be passed on as arguments to the constructor.
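So, as I understand it, if MyObject had two ctors, (int) and (int, string), the generated Injector would look roughly like this (my sketch; MyObject and the generated class shape are hypothetical, not ProS output):

```csharp
using System;

public class MyObject
{
    public int A { get; private set; }
    public string B { get; private set; }
    public MyObject(int a) { A = a; }
    public MyObject(int a, string b) { A = a; B = b; }
}

// Hypothetical generated code: one GetMyObject overload per usable ctor,
// each just forwarding its arguments to the constructor.
public static class Injector
{
    public static MyObject GetMyObject(int a) { return new MyObject(a); }
    public static MyObject GetMyObject(int a, string b) { return new MyObject(a, b); }
}
```

Since each overload is a trivial forwarding call, this is essentially as cheap as calling the constructor directly.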

I expect this approach to be basically as fast as simply calling the constructor: the method call is trivial, and in the simple form it adds just one extra jump instruction.

The benefits of the ProS approach over the reflection approach are, not surprisingly, speed and type safety; after all, those were my two main design goals. I’m very much looking forward to running Daniel Cazzulino’s benchmark test against Funq and ProS.

Because Irony and the Visual Studio SDK make it trivial to get highlighting, brace matching and the like, ProS comes with that as well.

The next step in development is to take version 0.1 and turn it on itself, using ProS 0.1 as the dependency injector for ProS 0.2.

Along the way I’ll build a few other features and add them to ProS v0.2.
A few of them are standard in most IoC containers, such as making caching possible, but a few are ProS-only. One of them I had a lot of fun with when building version one: making the compiler pluggable.

I’ll write a post on making the compiler pluggable. For now I’ll just stick with saying it’s fun, but I’m afraid it also opens the door to some potentially really weird coding.

Pitfalls in testing

Posted: June 13, 2008 in c#, Testing

Yesterday I wrote a post on default(T) not always being valid; that realization made us change the signature of the mentioned method.

Working on that rather simple method made me once again think about testing. We have asserts like:

Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse("Valid"));
Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse((int)Enumeration.Valid));
Assert.AreEqual(default(Enumeration), Enum<Enumeration>.Parse("not valid"));

This gives 100% statement coverage, and it might look as if it gives 100% branch coverage as well, which unfortunately is not true. You don’t necessarily need code to have a new branch:

object obj = "Valid";
Assert.AreEqual(Enumeration.Valid, Enum<Enumeration>.Parse(obj));

A more common example of a hidden branch is an if with no else. Even though you have not explicitly written an else clause, you should test that branch nonetheless.

The code being tested might look like this:

class Guard
{
    public static void ArgumentNull(object argument, string name)
    {
        if (argument == null)
            throw new ArgumentNullException(name);
    }
}

We might then have an assert like:

Assert.Throws(typeof(ArgumentNullException), () => Guard.ArgumentNull(null, "arg"));

We have 100% statement coverage, but the quality is not very high. At some point we want to log the call stack when we get a call with a null argument. However, the implementation has an error which is not caught, due to the lack of testing of the “invisible” branch.

class Guard
{
    public static void ArgumentNull(object argument, string name)
    {
        if (argument == null)
            Logger.Log(GetCallStack().ToString());
        throw new ArgumentNullException(name);
    }
}

We still have 100% statement coverage and our test still succeeds, but unfortunately any call to Guard.ArgumentNull now throws an ArgumentNullException, no matter whether the argument is null or not.
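A test for the invisible branch would have caught the regression: assert that a non-null argument does not throw. A minimal sketch, using plain assertions instead of a test framework, against the correct version of Guard:

```csharp
using System;

static class Guard
{
    // Correct version, with the throw guarded by the if.
    public static void ArgumentNull(object argument, string name)
    {
        if (argument == null)
            throw new ArgumentNullException(name);
    }
}

class Tests
{
    static void Main()
    {
        // The explicit branch: null must throw.
        bool threw = false;
        try { Guard.ArgumentNull(null, "arg"); }
        catch (ArgumentNullException) { threw = true; }
        if (!threw) throw new Exception("null case failed");

        // The "invisible" else branch: non-null must NOT throw.
        Guard.ArgumentNull("not null", "arg");
        Console.WriteLine("all tests passed");
    }
}
```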

When in doubt whether more test cases are needed, do a cyclomatic complexity analysis of the code being tested. The number of tests needed is in most cases proportional to the cyclomatic complexity of the code being tested.

For more information on how to apply CCA as a quality-measuring mechanism for unit tests, take a look at this blog. I do not agree with their actual “algorithm” for figuring out the number of tests needed, but the idea of creating an algorithm based on CCA is well thought out.

A rule of thumb says that you need 4 incorrect values for each correct value to test each decision point in your code.

Since CCA in essence is a measurement of the number of decision points in your code, I go for a higher number than 0.5 * CCA. What that constant should be depends on the project. In my current project the constant is between 1 and 2, depending on factors such as the source of the code (generated or written), the complexity (it’s not a linear relationship for us but an exponential one) and the severity of an error in the tested code (an error in the security code is a lot more severe than one in the part that does debug tracing).

I was coding a simple method today; it looked like this:

public static T Parse(object value)
{
    if (value != null && Enum.IsDefined(typeof(T), value))
    {
        return (T)value;
    }
    return default(T);
}

(We have a method that handles string values, which is why the cast is safe if the value is defined for T.)

We didn’t like it, mostly because neither of us liked hiding invalid/malformed input by returning a default value. But it made us wonder about this statement:

Enum.IsDefined(typeof(T), default(T)): if T is an enumeration, would that statement always be true?
The answer is actually no.

To see why, let’s define an enum:

public enum MyEnum
{
    firstValue = 4,
    secondValue = 3
}

Then the statement would be false, whereas default(MyEnum) == 0 is true.
So the lesson is: don’t count on default(T) being a valid value.
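You can verify this directly; default(MyEnum) is 0, and 0 is not one of the declared values:

```csharp
using System;

public enum MyEnum
{
    firstValue = 4,
    secondValue = 3
}

class DefaultCheck
{
    static void Main()
    {
        MyEnum d = default(MyEnum);
        Console.WriteLine((int)d);                            // 0
        Console.WriteLine(Enum.IsDefined(typeof(MyEnum), d)); // False
    }
}
```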

I would have liked the default value to be either 3 (because it’s the lowest) or 4 (because it’s the first value declared).
If you don’t declare the values explicitly, the value declared first in the enum is the default value, so I prefer the latter to the former.

Update: Part of the reason why we had the talk on enums in the first place was a refactoring process very much like the one described in this nice post.

Update: For other coding surprises you might want to have a look at ‘things that make me go hmm’