Sunday, August 22, 2010

Testing through interfaces

At Vagif Abilov's BBQ, while we were talking about testing, Greg Young said something like "I wonder what tests would look like if we wrote tests against interfaces instead of their implementations". And really, what would they look like?

The scenario was that you have an interface with multiple implementations and you want to write tests against the interface, testing all the implementations. This would seriously reduce the number of tests you have to write. So let's give it a try.

The first thing we need to keep in mind is that we're writing tests against something abstract. This means that we don't really know what to expect. When passing x into the various implementations it's not certain that all of them will answer y. Actually, hopefully only one of them will answer y, where x is the act and y is the assert. If they all did, there would be multiple implementations doing the exact same thing, and that would kind of defeat the purpose. Off the top of my head that leaves us with the following scenarios:

X stays the same while Y varies
This would be something like an ICalculator. The ICalculator would have implementations like DecimalCalculator and OctalCalculator. When running tests here we would end up with results like this:
  • DecimalCalculator: 7*7 = 49
  • OctalCalculator: 7*7 = 61
Which means that when writing these types of tests we need to be able to assert on specific values per implementation.
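
To make this scenario concrete, the calculator interface and its two implementations could look something like the sketch below. The type names come from the example above, but the bodies are just my own illustration (the actual classes aren't shown in this post), with the octal calculator handing back the product written out with octal digits:

        using System;

        public interface ICalculator
        {
            int Multiply(int a, int b);
        }

        public class DecimalCalculator : ICalculator
        {
            // Plain multiplication: 7 * 7 = 49.
            public int Multiply(int a, int b)
            {
                return a * b;
            }
        }

        public class OctalCalculator : ICalculator
        {
            // The product written out with octal digits,
            // so 7 * 7 = 49 (decimal) comes back as 61.
            public int Multiply(int a, int b)
            {
                return Convert.ToInt32(Convert.ToString(a * b, 8));
            }
        }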

X varies while Y stays the same
Let's imagine that we have some type of parser that takes XML in and returns a list of objects of a certain type. This would typically mean one implementation per XML schema while the output could be the same. So when writing these types of tests we'll have varying code for passing parameters while the assert stays the same.
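
To make this scenario concrete as well, here's a simplified sketch of my own that matches the test further down in this post (it returns a single number instead of a list of objects). The type names are the ones used in that test; the method bodies are just assumptions for illustration:

        using System.Xml.Linq;

        public interface INumberParser
        {
            int Parse(string input);
        }

        public class XmlParser : INumberParser
        {
            // Handles input like "<number>14</number>".
            public int Parse(string input)
            {
                return int.Parse(XElement.Parse(input).Value);
            }
        }

        public class StringParser : INumberParser
        {
            // Handles plain input like "14".
            public int Parse(string input)
            {
                return int.Parse(input);
            }
        }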

Ok, that wasn't too bad. We could probably make this look clean. Now over to some other aspects that we'll have to deal with.

Dependencies
With the right (wrong) implementation, faking dependencies could become a hellish thing with this approach. I guess that's a good thing, as it forces us not to make a mess of it. But we still need some way of setting up dependencies for the implementations.

Resolving implementations
We need a way to retrieve all implementations of an interface. Of course, this is something we do all the time with DI containers, so any DI container would give us what we need here. We could probably also do something smart to inject the faked dependencies each implementation needs.
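
For illustration, a minimal reflection-based version of that resolution could look like this. It's a sketch assuming the implementations live in the same assembly as the interface, and the helper name is just made up for the example; a DI container scan would do the same job and would also be the natural place to hook in faked dependencies:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class ImplementationResolver
        {
            // Finds every concrete class in the interface's own assembly
            // that implements TInterface.
            public static IEnumerable<Type> GetImplementationsOf<TInterface>()
            {
                return typeof(TInterface).Assembly
                    .GetTypes()
                    .Where(type => typeof(TInterface).IsAssignableFrom(type)
                                   && type.IsClass
                                   && !type.IsAbstract);
            }
        }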

With this in mind, let's set up a test for the calculator scenario. The first thing I did was to create a class that handles the plumbing. Right now this class takes care of resolving all implementations of the chosen interface, running the test on each implementation and performing the specified assertions. My test ended up looking like this:

        [Test]
        public void Should_multiply()
        {
            var tester = new InterfaceTester<ICalculator>();
            tester.Test(c => c.Multiply(7, 7))
                .AssertThat<DecimalCalculator>().Returned(49)
                .AssertThat<OctalCalculator>().Returned(61);
        } 

I'm quite happy with that. This test is both extensible and readable. Now let's do the same for the scenario with the string parser. I'll just extend the plumbing class used in the previous example to handle varying input parameters. The test ended up looking like this:


        [Test]
        public void Should_parse_number()
        {
            var tester = new InterfaceTester<INumberParser>();
            tester
                .Test<XmlParser>(x => x.Parse("<number>14</number>"))
                .Test<StringParser>(x => x.Parse("14"))
                .Returned(14);
        }


I can't say I'm as happy with this one, as the complete delegate is copied for both implementations and not just the part that differs. Still, it's a huge simplification compared to writing a full test suite per implementation.

I guess I'll leave it at that for now. What this doesn't cover is setting up dependencies, which will likely complicate the implementation a bit. After doing this implementation I can really see the value of writing my tests like this. It would save me time and energy and leave me with a cleaner, simpler test suite. The implementation ended up being fairly simple. Initial conclusion: writing tests against interfaces is a good idea!
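
If you're curious about the plumbing itself, here's a rough sketch of how a class like this could be put together. This is my reconstruction, not the actual code: it only covers the first scenario (same input, varying output), resolves implementations by scanning the interface's assembly, and assumes parameterless constructors, so it sidesteps the dependency problem mentioned above:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        public class InterfaceTester<TInterface>
        {
            // The result of running the act delegate against each implementation.
            private readonly Dictionary<Type, object> _results = new Dictionary<Type, object>();

            public InterfaceTester<TInterface> Test(Func<TInterface, object> act)
            {
                var implementations = typeof(TInterface).Assembly
                    .GetTypes()
                    .Where(type => typeof(TInterface).IsAssignableFrom(type)
                                   && type.IsClass
                                   && !type.IsAbstract);
                foreach (var type in implementations)
                    _results[type] = act((TInterface) Activator.CreateInstance(type));
                return this;
            }

            public ImplementationAssertion AssertThat<TImplementation>() where TImplementation : TInterface
            {
                return new ImplementationAssertion(this, _results[typeof(TImplementation)]);
            }

            public class ImplementationAssertion
            {
                private readonly InterfaceTester<TInterface> _tester;
                private readonly object _actual;

                public ImplementationAssertion(InterfaceTester<TInterface> tester, object actual)
                {
                    _tester = tester;
                    _actual = actual;
                }

                // Returns the tester so assertions can be chained per implementation.
                public InterfaceTester<TInterface> Returned(object expected)
                {
                    Assert.AreEqual(expected, _actual);
                    return _tester;
                }
            }
        }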

I'd love to hear your thoughts on this! And if you're interested in the full source code let me know and I'll upload it to github or something.


Saturday, August 21, 2010

New challenges, starting my own business

Times are changing, and in a bit more than a week I'll be starting up a company with four others. I was asked to join them, and after some thinking I said yes. Now I'm throwing myself off the cliff with a firm belief that wings will grow before I hit the ground :)
So what is the company about? Our goal is to provide skilled, experienced people who are specialists in their field. All five of us have our own specialties, ranging from developer to analyst, and they go well together. From a business point of view we have deep knowledge of enterprise software, especially oil/energy trading and business applications. The company has been given the name Contango Consulting AS.

I am very excited about realizing a dream of being responsible for my own future and trying to do my own thing. It's going to be tough, and I'll probably learn more in a year than I have up to now. I'm also certain that this blog will be affected by it. Hopefully that will result in my learnings ending up here for others to enjoy. And of course, if any of you are in need of a skilled .NET developer / architect / trainer, don't hesitate to let me know :) You can view more detailed information about me here.

-Svein Arne

Tuesday, August 10, 2010

Nu on Linux (Debian-based systems)

There's a lot of activity these days on the Nu project. In short, the Nu project is for .Net what gems are for Ruby. In fact, it uses gems. In my last post I talked about tooling and where I wish tooling would go in the future. Package management is definitely one of the tools that will help us get there. If you want to read up on Nu, Rob Reynolds has some good posts explaining what it is and how to use it. What I'm going to go through here is just what's needed to get it working on Linux. There are only a few small tweaks that need to be done for it to work.

If you don't have Ruby on your system already you'll need it.
sudo apt-get install ruby-full build-essential
Next you'll need to get gems.
sudo apt-get install rubygems
Ok, then we have all the dependencies required to get going. Now let's get Nu. Nu is installed through gems like this (NB! make sure you install Nu with root privileges or it won't work).
sudo gem install nu
Good, now we have all we need to start using Nu. Just one thing: when running Nu I got an error saying "no such file to load -- FileUtils". The reason for this is that Linux has a case sensitive file system and the file is called fileutils.rb, not FileUtils.rb. If you end up having this issue too, go to the folder containing fileutils.rb (something like /usr/lib/ruby/1.8) and create a symlink by running the following command.
sudo ln -s fileutils.rb FileUtils.rb
Now to some real Nu action. Let's say we have a project and we want to start using NHibernate. All you have to do is go to the root folder of your project and type this command.
nu install nhibernate
You'll get a couple of warnings, but it's going to do what it's supposed to. When it's done you can go into the newly created lib directory and see NHibernate and its dependencies in there. Neat, huh!?

Monday, August 9, 2010

Tooling visions

Lately I have felt more and more uncomfortable about the tooling I'm currently working with. I feel a lot of the tools are not helping me reach my goal. Frankly, they're in my way. The whole thing started when I began using ReSharper and saw what an IDE is about. ReSharper truly helps you accomplish things. It keeps you focused on your real goal: producing quality code. The only bad thing about ReSharper is that it's tied to Visual Studio. Visual Studio has become a horrid beast of an application. It's packed with features that, to me, have nothing to do with the application I use to write code. It doesn't help me so much as get in my way.
My second wakeup call was when I started using Git. Especially after watching Linus Torvalds talk about his ideas behind it and why he made it the way he did. One of the reasons, he says, was to create a source control system that does its job well and doesn't get in your way. And he succeeded! When working with Git you have to deal with it when you pull, commit or push. And those are the times you're supposed to deal with source control. You shouldn't have to deal with source control because you want to edit a file, or have the source control system insert padlocks left, right and center. You shouldn't have to think about whether your computer is online or offline when you work with your code. Going back to working with TFS made me realize how much time I waste using a tool. Time that I could have spent solving real problems.

Ok, that was the venting part :) Now to something a little more constructive. I have been thinking about what my ideal set of tools would look like and what would be important. There are some points I really want to focus on. The first point applies to code as well: SEPARATION OF CONCERNS! Each tool should help you solve a single problem and it should do it well, extremely well. Second, as mentioned, it should stay out of your way. It should know what it's trying to help you solve and act like that twin who completes your sentences. Third, it should not compromise. When you work with a mess of a project where everything depends on everything, it should be painful. The tool should keep its focus on helping you produce quality code and not make compromises to help you deal with a festering pile, as Uncle Bob puts it.

First off, let's deal with what we call the IDE. To me an IDE is notepad+ReSharper+navigation, and that's what I think it should be. It should be there to help us produce quality code as efficiently as possible, providing intellisense, auto completion, refactoring and everything else that has to do with writing code. And that, to me, has nothing to do with building binaries, running tests, debugging and deploying. Though I understand why IDEs have ended up where they are, it's time to move on. We're no longer hacking and hoping. We don't set breakpoints and step through half the application as part of our work pattern. We write code and watch tests fail and pass. To me the IDE is about efficiently writing code.

Of course we need to compile and run tests, and that should be its own tool. We already have continuous testing tools like JUnitMax, Ruby's autotest and AutoTest.NET, which I'm currently working on (add cheesy commercial part here). This tool should basically stay out of your way. The only time we would want to interact with it is when we have broken something. It should build and run only what it needs to, and only grab our attention when something has gone wrong. This is the tool that would bind the editor and the debugger together. When something has gone wrong we should get the right information, and enough of it. When builds or tests fail we should be able to easily jump to the right file and position in the editor to fix whatever is wrong.

Now to the debugger. The way I see it, debuggers as they work today are optimized for full system debugging, not the simple "now what the heck did I just do to fail this test". And that's what I'm looking for 95% of the time. For these types of tasks I don't think debugging through the IDE helps. I don't think displaying the code file by file, class by class, function by function, the way it's written, is the best way. And certainly not stepping through it. Something I do think would be more efficient is analyzing a series of snapshots showing where the execution had its turning point, what threw an exception and things like that. I have tons of ideas that I'm hoping to realize through the ClrSequencer project. I think I'm going to dive a little deeper into this in another blog post.

I guess that's enough rambling for tonight. It's probably not the last thing you'll hear from me on the subject. Tooling is very important, and tooling should help you, not fight you. Lately I have been feeling a lot of the latter.

Friday, August 6, 2010

Continuous testing with AutoTest.Net

It's about time for me to do some writing about AutoTest.Net. This is a project I have been working on for the last 2-3 months. It's a continuous testing tool for .Net, originally based on Ruby's autotest. After playing with Ruby for a couple of evenings I really enjoyed the way you could work with it. My usual work cycle of "write code, build, wait, run tests, wait" was replaced with "write code, save". Luckily the code I write tends to work more often than it breaks, so waiting for builds and tests to run is a waste of time. Especially when I build and run tests about every 30 seconds or so.
So after the joy of working like that in Ruby, I was determined to find a tool like that for .Net. After a bit of searching I found AutoTest.Net. The project was hosted at code.google.com and was initiated by James Avery, but since he didn't have enough time on his hands the project was put on hold. I really wanted a tool like that, so I went and got his permission to continue the project. It's now hosted at github.com/acken/AutoTest.Net. Today it supports both .Net and Mono and it's cross platform. NUnit, MSTest and XUnit are the testing frameworks supported today, and MbUnit will be added soon. It supports running tests from multiple testing frameworks in the same assembly.

Now, how does it all work, you say? The whole thing consists of a console application and a winforms application. These days I use the winforms app about 98% of the time, so that's what I'm going to show here.
The first thing you do, of course, is go to this link and download the latest binaries, then unzip them to the folder of your choice. Locate the file named AutoTest.config and open it in your favorite xml editor. Now let's edit a few settings (a rough sketch of how they could end up looking follows the list):

  1. DirectoryToWatch: Set this property to the folder containing the source code you want to work with. AutoTest.Net will detect changes to this folder and its subfolders.
  2. BuildExecutable: This is the path to msbuild, or in mono's case xbuild. You have the possibility to specify a version of msbuild per framework or Visual Studio version. For now let's just specify the default <BuildExecutable> property. Something like C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe.
  3. Now let's specify a testing framework. You can pick anywhere from zero to all of them, though zero wouldn't be any fun. I'll go with NUnit in this example.
  4. The last thing we want to do is specify a code editor (<CodeEditor>). Let's pick Visual Studio. We can pass Visual Studio the file to open and the line to go to. Sadly there's a bug in Visual Studio preventing it from going to the right line :( So for now we'll rely on Ctrl+G. Anyway, the config has Visual Studio set up correctly by default. Just make sure the path to devenv.exe is the same as on your machine.
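
To sum those settings up, the relevant parts of the config could end up looking roughly like the sketch below. Treat it purely as a hypothetical illustration: the element names are the ones mentioned above, the paths are examples, and the exact structure and remaining elements should be taken from the AutoTest.config that ships with the download.

        <!-- Hypothetical sketch, not a complete AutoTest.config -->
        <DirectoryToWatch>C:\Projects\MySolution</DirectoryToWatch>
        <BuildExecutable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</BuildExecutable>
        <!-- Enable the test runner setting for NUnit (or whichever framework you use) -->
        <!-- The <CodeEditor> section is set up for Visual Studio by default;
             just verify the devenv.exe path matches your machine -->
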
Now we're ready to start the AutoTest.WinForms.exe application and do some real work. The first thing you'll see is a form looking like this.


The only interesting thing right after startup is the button in the top right corner. As you can see (I'm going to do something about the colors), the button is yellow. Behind this button you'll find the status of the AutoTest.Net application. It's yellow now because the configuration has generated a warning. If the button is red, an error has occurred within AutoTest.Net. Right now the window will look like this.


So let's go ahead and write some gibberish in one of the files inside the folder we're watching and save the file. AT.Net should start working right after you save the file.


And of course gibberish means errors, which will result in this.


When selecting one of the lines in the list you'll get the build error/test details underneath, and you can click the links to open the file in Visual Studio. Now let's fix the error we just made, save the file and see what happens.


And as expected it goes green with 5 succeeded builds and 221 passed tests. That's basically it. From here on it's lather, rinse, repeat.

Right now it's in alpha, so it will of course have some bugs here and there. I hope this post will tempt you to try it out. Even at an early stage like this, it's a really effective way of working!