tag:blogger.com,1999:blog-60137348442944713642024-03-12T16:22:19.199-07:00Nonstop technobabbelSvein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.comBlogger25125tag:blogger.com,1999:blog-6013734844294471364.post-74685255527313364132014-08-18T15:11:00.002-07:002014-08-18T15:11:45.519-07:00Moving my blog activity to www.acken.noThis post is sort of a manual HTTP 301, so head on over to www.acken.no :)Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-6304813546825858612012-11-03T08:42:00.001-07:002012-11-03T08:52:51.286-07:00The ContinuousTests workflowThe biggest issue I have seen people having with ContinuousTests is understanding the workflow in which it provides the highest value, especially among users who have previously used NCrunch. The two products take different approaches to developer workflow. Let me run you through the thinking behind the ContinuousTests workflow.<br />
<br />
<b>No more running tests manually</b><br />
ContinuousTests is all about building and running your tests in the background. You write your code and hit save when you need feedback. In the background we build and run only the tests affected by the change you made, so that you can keep working and focusing on the task at hand. With fast enough tests you will have the result of your previous save available before you hit save again. In that way you end up in a highly effective workflow of writing code, saving, writing code, saving. Working like this eliminates time previously spent building and testing. You will be surprised how eliminating that short wait improves your focus.<br />
<br />
<b>Tests are the driving force behind writing code through TDD</b><br />
This is the mantra running through ContinuousTests. It effectively means that the state of the system is defined by which tests are failing. A passing test is the norm, so knowing that a test passed is of little interest. What we want to know is which tests are failing right now. The heartbeat of ContinuousTests is the "Run Output Window" shown below, which at all times shows which tests are failing in the current solution.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-6hlcjVRJTUk/UJUuQI22OOI/AAAAAAAAD-w/ofF9q7j2Os4/s1600/runoutput.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="http://3.bp.blogspot.com/-6hlcjVRJTUk/UJUuQI22OOI/AAAAAAAAD-w/ofF9q7j2Os4/s640/runoutput.jpg" width="640" /></a></div>
<br />
<b>The tests are your backlog</b><br />
When I open a solution the first thing I do is to build and run all tests. When that operation has finished I know what tests are broken in the system. This will likely tell me where I left off yesterday. As it's your backlog you will keep picking failing tests from this window until it's empty. Whenever a problem needs solving make it into a failing test and it will appear in this window.<br />
There are two dimensions to this window: the text represents the state of your last test run, while the list represents the state of the system. This is why you can end up with a last run that is green and still have failing tests in the list. Working with tests in this type of automatically updated backlog is something we have found very effective.<br />
<br />
<b>Risk margins</b><br />
As mentioned, the "Run Output Window" is the heartbeat of ContinuousTests. It gives you the primary information needed to work efficiently with tests. However, writing code is a lot more than just flipping between red and green. Writing new code is fairly safe and highly effective through TDD, but changing existing code is a whole different chapter. Say you change some existing code. The tests ran and it all came out green. Did you break anything? Well, green looks good. However, the green is only as good as the quality of the tests being run, and whether or not they assert on the affected code in the way you hoped they would. Boom!<br />
<br />
The conclusion is that knowing whether the tests covering the affected areas pass is not enough. When changing code we need to know more about the environment in which that code exists. How close are the tests to the affected code? What are the dependencies between this code and its surroundings? These are the questions we are trying to answer through the risk margins and graphs. The color of the margin glyph tells you the calculated risk: green=low, yellow=medium, red=high, and then of course the dragon. The number inside the colored ring tells you how many tests cover the method. In addition there are graphs showing design-time and run-time coupling of the system. All this information is there to provide grounds for deciding how to go about changing existing code safely.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Hbu7d9XhvDk/UJU4rgNGOZI/AAAAAAAAD_A/fsFVc4jykA0/s1600/margins.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="210" src="http://2.bp.blogspot.com/-Hbu7d9XhvDk/UJU4rgNGOZI/AAAAAAAAD_A/fsFVc4jykA0/s640/margins.jpg" width="640" /></a></div>
<br />
<b>Conclusion</b><br />
ContinuousTests aims to provide a workflow that maximizes the effort you put into writing code while working through your backlog of "tasks that need to be implemented". In addition it provides features that help you make the right decisions when changing existing code.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-32240049841855466832012-01-11T13:53:00.000-08:002012-01-11T13:53:08.438-08:002 cups of messaging, 1 tablespoon of async and a whole lot of contextJust got some very interesting input from someone who had the patience to watch through my <a href="http://www.infoq.com/presentations/Asynchronous-Programming-Messaging">Monospace </a>talk. First off, I was gutted after looking through it myself. I did not manage to communicate the content anywhere close to the way I had intended to. No matter how painful it was to watch, I now know what to work on. I'll practice harder and do a better job next time :) Let me try to make a bit more sense here.<br />
<br />
Øyvind Teig left this comment on one of my <a href="http://ackenpacken.blogspot.com/2010/09/parallel-takes-you-half-way.html">previous blog posts</a>. After reading through some of his material I decided to reply through a new post. And for the record his papers are well worth reading!<br />
<blockquote class="tr_bq"><span style="background-color: #fff9ee; color: #222222; font-family: Georgia, Utopia, 'Palatino Linotype', Palatino, serif; font-size: 14px; line-height: 19px;">I just saw the Monospace talk, and read some of your blog notes. Just as I have been stating that "synchronous" programming is what we should do, and you say about "asynchronous" the same, we both perhaps, might be easily misunderstood. The problems you told about in the lecture (shared state and ordering etc.) and the fact that messaging is naturally distributable, and that “the world is asynchronous”, could in fact advocate any way. Synchronous and asynchronous are tools in the toolbox. I work in embedded safety-critical systems, where sending pointers internally is bad (if they are still in scope after sending), message buffer overflow is bad (if it restarts the system and not handled at application level), and WYSIWYG-semantics is good. Not trusting 50% of the code, as an argument for doing things asynchronous is ok. But it has to do with layering, not paradigm. If the spec says you must wait, you wait. If not, you don’t. Also, do observe that asynchronous messages (and not rendezvous as in Ada or channels as in Go) makes the messages / sequence diagrams tilting – good for some, not wanted in other situations. What happens after you “send and forget”? Is this what I want to do, always? Of course not. And “waiting” does not mean “don’t do anything”. It has to do with design and also “parallel slackness”. 
I have blogged some about this, see http://oyvteig.blogspot.com/, and published some, see http://www.teigfam.net/oyvind/pub/pub.html.</span> </blockquote><blockquote class="tr_bq"><span style="background-color: #fff9ee; color: #222222; font-family: Georgia, Utopia, 'Palatino Linotype', Palatino, serif; font-size: 14px; line-height: 19px;">Øyvind Teig, Norway</span></blockquote> As Øyvind points out, interpreting a statement through your own knowledge can easily lead to different outcomes depending on the person. From his perspective (and please correct me if I'm misinterpreting you) async functionality should be boxed in and modeled through, for example, what he calls the <a href="http://www.teigfam.net/oyvind/pub/CPA2007/paper.pdf">"Office Mapping Factor"</a>. And it makes perfect sense!<br />
The basic idea behind this type of modelling is CSP (Communicating Sequential Processes). CSP defines separate processes/threads running in parallel, while communication between the processes/threads happens synchronously. The receiver of the message has to acknowledge that it can/wants to receive the message, rather than the fire and forget / queuing approach.<br />
<br />
In my talk and blog posts I tend to talk more about fire and forget messaging. Systems that run asynchronously without the various parts of the system being aware of the asynchronous behaviors happening around them. Each part or rather block of functionality in the system works synchronously while message consumption happens asynchronously. Quite the opposite of what CSP states.<br />
<br />
And again the differences are all about context. Personally I write code almost exclusively for servers and desktop computers that have as much memory, as many cores and as much disk as desired. Øyvind on the other hand has 128KB program memory and 32KB external memory at his disposal. Meaning all my assumptions about how stuff works go right out the window. As he explains: yes, you can use a queue, but what happens when the queue is full and you get a queue overflow (a scenario that had not even crossed my mind)? With 32KB of memory that is quite likely. The system would crash or halt, wouldn't it? How about spawning processes? Same thing. In this context the environment plays a huge role and the architecture needs to reflect it. How you spend your resources is critical.<br />
<br />
Establishing a context around the problem you are trying to solve is crucial to how you end up implementing it. I quite enjoyed reading about how Øyvind reflects on the Office Mapping Factor. So how come I tend to approach asynchronous programming so differently from Øyvind?<br />
<br />
Let's set the context for the kind of system that is usually on my mind when I'm talking about asynchronous systems. Everyday applications. That is the easiest way I can put it. Your everyday Order\CRM\Whatever application. Not an application that needs to process 10,000+ messages a second. Not an environment with a restricted amount of resources, nor one where the application consumes unnaturally huge amounts of resources.<br />
<br />
I guess my approach can be split into two parts: the first being messaging and the second being asynchrony. Usually messaging itself is the reason I use event/message based architectures; it does not come from a desire to build asynchronous systems. However, making a message based system asynchronous is trivial.<br />
<br />
<br />
<b>Why a message based system?</b><br />
<br />
<i>Abstraction, Coupling</i><br />
High coupling is like the perfect fertilizer for code rot. A change to any piece of code can break everything because everything is coupled to everything else. How do you prevent this from happening? You introduce abstraction. There are multiple mechanisms that can decouple systems, like injection of abstract types (interfaces, abstract classes...), function injection (delegates) and messaging. Messaging takes abstraction farther than the other mechanisms by adding a dispatcher that routes the message to the consumer. Because of this the sender depends only on the message dispatcher, not on the message handler. The message might be dispatched to multiple consumers without the producer of the message knowing about it. Instead of calling a method handed to you through a contract (interface, abstract class, delegate) you produce an output others can consume.<br />
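To make that concrete, here is a minimal sketch of such a dispatcher. The names (IConsumer, Dispatcher) are made up for illustration; this is not code from any particular library. Note how Publish is the producer's only dependency:

```csharp
// Minimal in-memory message dispatcher sketch. All names are hypothetical.
using System;
using System.Collections.Generic;

public interface IConsumer<T> { void Consume(T message); }

public class Dispatcher
{
    private readonly Dictionary<Type, List<Action<object>>> _routes =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<T>(IConsumer<T> consumer)
    {
        List<Action<object>> consumers;
        if (!_routes.TryGetValue(typeof(T), out consumers))
            _routes[typeof(T)] = consumers = new List<Action<object>>();
        consumers.Add(m => consumer.Consume((T)m));
    }

    // The producer depends only on Publish. Zero, one or many consumers
    // may receive the message without the producer knowing about them.
    public void Publish<T>(T message)
    {
        List<Action<object>> consumers;
        if (_routes.TryGetValue(typeof(T), out consumers))
            foreach (var consume in consumers)
                consume(message);
    }
}
```

The point of the sketch is the shape of the dependency: senders see only the dispatcher, so adding or removing consumers never touches the producing code.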
<br />
<i>Producing real output</i><br />
By making sure that a code feature produces output in the shape of a message, you have made that feature a stand-alone piece of code. It can now make sense beyond the contract that would otherwise have been passed to it. Being stand-alone, it becomes a natural place for integration: integration happens by consuming a message, or by producing a message that other participants can consume through the message dispatcher.<br />
Open/Closed principle: software entities should be open for extension but closed for modification. This is another principle whose violation easily leads to code rot. Since messages (real output) flow through the dispatcher, we can extend message-handling functionality without modifying existing code.<br />
<br />
<i>Testing</i><br />
Since our small software blocks now produce real output they are very testable. We can rely on testing input against output, focusing on: given the circumstances, what is the expected result (output)? Create a fake message dispatcher for your tests, publish a message for your feature to consume, and verify that the message produced by your feature is as expected.<br />
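A sketch of what such a test could look like. The message types, the handler and the NUnit-style assertions are all hypothetical; the "fake dispatcher" here is simply a callback that records the output:

```csharp
using System;

// Hypothetical handler under test: consumes OrderPlaced, produces InvoiceRequested.
public class OrderPlaced { public decimal Amount; }
public class InvoiceRequested { public decimal Amount; }

public class BillingHandler
{
    private readonly Action<InvoiceRequested> _publish;
    public BillingHandler(Action<InvoiceRequested> publish) { _publish = publish; }

    public void Consume(OrderPlaced message)
    {
        _publish(new InvoiceRequested { Amount = message.Amount });
    }
}

[Test]
public void Produces_invoice_request_for_placed_order()
{
    // The fake dispatcher: a callback capturing whatever the handler publishes.
    InvoiceRequested produced = null;
    var handler = new BillingHandler(m => produced = m);

    handler.Consume(new OrderPlaced { Amount = 42m });

    Assert.IsNotNull(produced);
    Assert.AreEqual(42m, produced.Amount);
}
```

No mocking framework is needed; because the handler's whole contract is "message in, message out", the test is pure input/output verification.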
<br />
<b>Now to the asynchronous part</b><br />
<br />
A message based system consists of participants, a message dispatcher and messages. The participants can send messages through the dispatcher and/or receive messages from the dispatcher. Whether your message based system is asynchronous or not can be as trivial as the dispatcher calling _consumer.Consume(message); vs ThreadPool.QueueUserWorkItem((m) => _consumer.Consume(m), message);. That little detail is the difference between multiple features running in parallel and features running sequentially.<br />
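Spelled out as a sketch (where _consumer stands for whatever handler the dispatcher routed the message to), the two dispatch strategies look like this:

```csharp
using System.Threading;

// Synchronous dispatch: the producer waits for the handler to finish.
void DispatchSync(object message)
{
    _consumer.Consume(message);
}

// Asynchronous dispatch: the handler runs on a thread pool thread
// and the producer moves on immediately.
void DispatchAsync(object message)
{
    ThreadPool.QueueUserWorkItem(m => _consumer.Consume(m), message);
}
```

Because the choice lives inside the dispatcher, none of the producers or consumers need to change when you switch between the two.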
<br />
Even though the code needed to make a system asynchronous is written in a couple of seconds, the impact of that change is enormous. Both in what you can do and in what you need to make sure you don't do. Deciding to go down the road of asynchronously executing features requires that you carefully model how you want to handle state.<br />
<br />
First let's discuss what writing a message based system really means. So far we have looked into how a single handler produces and consumes messages. However, message based systems are often about message chains. Just like mentioned under messaging and abstraction, messages can split the larger system features into smaller software entities that produce and consume the various messages in a chain. A chain can consist of, for instance, RequestDelivery->PickItemsFromStock->CheckoutItems->PickTerminal->DeliverToTerminal or DomainLogic->...->...->DbPersistence. Each of the steps in this chain will be able to produce and consume messages.<br />
<br />
When writing asynchronous message based systems I tend to divide my message handlers into a few categories based on how they relate to state. Keep in mind that each message handler might handle a message in parallel with other handlers.<br />
<br />
<i>Stateless Message Handlers</i><br />
This is my first choice of handler as it has no side effects. It only transforms the input into an output message without writing to any publicly exposed state. In practice (if implemented as a class) you create a new instance of the message handler, pass it the message and dispose of the handler when done.<br />
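A sketch of such a handler, with hypothetical message names (and a made-up tax rate), to show why it is safe under parallel execution:

```csharp
using System;

// Hypothetical stateless handler: a pure transformation from input
// message to output message, touching no shared state.
public class PriceCalculated { public decimal Net; }
public class TaxAdded { public decimal Gross; }

public class TaxHandler
{
    private readonly Action<TaxAdded> _publish;
    public TaxHandler(Action<TaxAdded> publish) { _publish = publish; }

    public void Consume(PriceCalculated message)
    {
        // No locking needed: every field read or written lives in the
        // message or in this short-lived instance, so the handler can
        // run in parallel with any other handler.
        _publish(new TaxAdded { Gross = message.Net * 1.25m });
    }
}
```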
<br />
<i>State Message Handlers</i><br />
These types of handlers deal either with accumulated internal state or with external state. Either way they need to take the right precautions (locking etc.). If only dealing with external state they can work in the same instantiate, handle and dispose manner as the stateless message handlers. If dealing with internal state they need to be a running "engine" that lives for as long as the scope of their contained state. Either way they deal with state that can be corrupted at any time, so I model these handlers CAREFULLY.<br />
<br />
<i>Queuing Handlers</i><br />
As the name implies, they do not dispatch the message onwards like the others but rather store it in some predetermined queue(s). I usually go for queuing handlers whenever I want a persisted step in the message chain, a critical running "engine" that needs to pick work based on its own task scheduler, or when I need to go cross-system with some sense of reliability.<br />
<br />
At this point it is pretty obvious that choosing how to write asynchronous code deeply depends on the environment where it is written and the context of the problem being solved. My context, as mentioned, tends to lean towards normal business applications with moderate hardware requirements. Had the point of the system been to process a large number of messages per second, that would have affected the system's architecture. Compromises are often made between resources and maintainability. When we can focus less on maximum performance, we can focus more on writing the system in a way that is maximally maintainable.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com8tag:blogger.com,1999:blog-6013734844294471364.post-72994904936843941802011-08-19T03:07:00.000-07:002011-08-22T02:11:40.444-07:00AutoTest.Net 1.2 is out containing a Visual Studio Addin!<div style="background-color: transparent;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We are proud to announce release 1.2 of AutoTest.Net today. In this version we have pulled back all the Visual Studio Addin functionality from the ContinuousTests (Mighty-Moose) source into AutoTest.Net. This means that you will get all test and build feedback right into a feedback window in Visual Studio. This speeds up the workflow drastically compared to using AutoTest.Net through the WinForms app or console. In addition you will have features like manually running tests from right-click in code, solution explorer and class view. You will also be able to debug failing tests from the feedback window or directly from right-clicking the test.</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Getting started</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">To get started go to <a href="https://github.com/continuoustests/AutoTest.Net/archives/master">https://github.com/continuoustests/AutoTest.Net/archives/master</a> and download AutoTest.Net-v1.2.0.exe, run the installer and then open a solution in Visual Studio. By default AutoTest.Net starts paused when used as an addin, so it will not detect file changes right away. You can still do manual test runs though. The feedback will arrive in the feedback window, which you can open through AutoTest.Net->Feedback Window. To start detecting changes click AutoTest.Net->Resume Engine. Now just modify a file/files and hit save, and things should start happening. If you want to have AutoTest.Net enabled by default go to Global Configuration and change StartPaused to false (when changing the global or solution config you need to run AutoTest.Net->Restart Engine for it to take effect).</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: bold; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Work flow</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">When working with ContinuousTests or AutoTest.Net you no longer have to wait for your build or test runs to complete. This changes your work flow radically. At first it will be hard to let go of old habits but after a little while not waiting for the build/test result becomes natural. Most of the time when you build and run tests the outcome is what you expect so there is no need to wait for it. Since the feedback loop will be shortened the likelihood of this increases. If you get an unexpected outcome you are only a couple of ctrl+z/commenting away from previous state anyway.</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">There are a few short-cut keys that you will use very frequently.</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br />
<ul><li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Ctrl+Shift+J => Open/Toggle <span class="Apple-tab-span" style="white-space: pre;"> </span>Feedback Window<span class="Apple-tab-span" style="white-space: pre;"> </span></span></li>
<ul><li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: circle; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Navigate Feedback Window with arrows or vim keys (j,k)</span></li>
<li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: circle; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Hit enter in the list to go to build error or failing test</span></li>
<li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: circle; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Hit the letter I to open the information window for extended information</span></li>
<li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: circle; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Hit the letter D when positioned on a test to debug it</span></li>
</ul><li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Ctrl+Shift+Y, A => Build solution and run all tests</span></li>
<li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Ctrl+Shift+S => Save all files in Visual Studio</span></li>
</ul><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The first thing you do when opening a solution is Ctrl+Shift+Y, A (Build solution and run all tests) to get current state of the system. From now on the feedback window will work as current system state. The status text in the window represents last action while the content of the list represents current state. A failing test will stay in the window until it’s fixed even though current test run succeeded (ran tests in another assembly for instance).</span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">From now on you work incrementally: </span>Write code, Ctrl+Shift+S, continue writing code, fail, Ctrl+Shift+J and navigate to the failure, jump to code / debug, write code, rinse and repeat. In that way you always work on emptying the feedback window, and when it’s empty you write failing tests to fill it back up.</div><div style="background-color: transparent;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br />
<span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">We hope you will have fun with this product and don’t hesitate to report issues on the github page </span><a href="https://github.com/continuoustests/AutoTest.Net"><span style="background-color: transparent; color: #000099; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">https://github.com/continuoustests/AutoTest.Net.</span></a><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> We love reported issues as it’s what makes the product better. If the issue is followed by a pull request we are talking total awesomeness.</span></div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-29592792454416562852011-08-12T14:19:00.000-07:002011-08-12T14:19:23.873-07:00The Moose Shows Some Community Love<a href="http://2.bp.blogspot.com/-V4AHdPKokKY/TkWWoAvmBmI/AAAAAAAAAJQ/nBem-XXYJKs/s1600/6.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="186" src="http://2.bp.blogspot.com/-V4AHdPKokKY/TkWWoAvmBmI/AAAAAAAAAJQ/nBem-XXYJKs/s200/6.jpg" width="200" /></a>As <a href="http://www.continuoustests.com/">ContinuousTests</a> moves along getting more features we feel it's important to give some love to the community too. For those new to the subject ContinuousTests is based on the source of open source <a href="http://github.com/continuoustests">AutoTest.Net</a>. 
AutoTest.Net is continuously improved and extended through the development of ContinuousTests but we can do better than that. Let's show some real community love :)<br />
<br />
We will over the weekend push the Visual Studio addin part of ContinuousTests back into the AutoTest.Net code along with some essential features.<br />
<br />
Feedback View<br />
The feedback view from ContinuousTests inside Visual Studio where you are able to easily navigate through build and test failures. Also goto source by clicking on stack lines, errors or the test itself.<br />
<br />
Various Features<br />
<br />
<ul><li>Right click inside code / solution explorer / Class View to run tests</li>
<li>Debug tests from the feedback window or by right clicking inside a test and choosing debug</li>
<li>Build all projects and run all tests</li>
<li>Pause and resume AutoTest.Net engine</li>
</ul><div><br />
</div><div>Right now the experience of using AutoTest.Net vs using ContinuousTests is light years apart. With these changes implemented AutoTest.Net will get the same efficient workflow ContinuousTests provides. And it will be a valid alternative for those of you who can live without running only affected tests and all the other goodies ContinuousTests contains.</div><div><br />
</div><div><br />
</div><div>Get your antlers on!</div><div><br />
</div><div>TheMoose team</div><div>Greg & Svein</div><div><br />
</div><div><br />
</div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-51202609351403157222011-07-24T08:51:00.000-07:002011-07-24T08:51:45.737-07:00Monospace talkI'm having a great time here in Boston. So many brilliant people. Keep up the good work!<br />
<br />
Anyways, here's the code from my presentation. Both the messaging stuff and the graphing stuff are in there. Please note that the graph thing is more of a proof of concept, so it won't handle everything :)<br />
<br />
<a href="http://www.acken.no/SimpleMessaging.zip">Simple messaging sample with chain graphing</a>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-71772523915725177892011-02-21T00:40:00.000-08:002011-02-21T00:40:47.535-08:00Are we producing value?I read <a href="http://news.ycombinator.com/item?id=2240595">this</a> discussion about using TDD/BDD in lean startups. They touch on some really interesting topics there, among others how TDD/BDD is great for client work but not so good for lean startups. Even though there is a lot of talk back and forth about client projects versus lean startups, I am confident that a lean startup is almost no different from any other software project. Except for one significant difference. Let me explain.<br />
<br />
When working as a developer with a client we focus on delivering quality software that meets the customer's demands. Our integrity as craftsmen plays a role, and we have our developer hat on. However, and here comes the interesting part: when we're dealing with a lean startup we are actually wearing both our stakeholder hat and our developer hat. Wearing the stakeholder hat forces us to think in terms of value, or ROI.<br />
This is the real difference, not whether it's a client project or a startup. Or am I wrong? Is the client's stakeholder less interested in us producing high value than we would be in our own startup? And why is being agile to make our lean startup succeed any different from achieving success through an agile process together with the client's stakeholder? Whoever the stakeholder is, the project starts with a vision. A vision we will mold and change to, in the end, reach a valuable result.<br />
<br />
If we state that TDD/BDD is a waste of time in our lean startup but not in our client project, it sounds to me like we are stating that it's ok to waste our client's money but not our own. TDD/BDD is just an example here. I'm trying to shed some light on our intents behind determining value. Are we losing focus? Shouldn't we keep focusing on constantly producing value at the highest rate possible, regardless of the setting?<br />
<br />
Am I way off here? What is your opinion?Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-54427679021427399322011-02-18T01:06:00.000-08:002011-02-18T02:44:22.412-08:00A layer of abstraction or a layer of complexity<div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;">To me abstraction rarely has to do with technical issues alone. When writing a layer of abstraction the domain plays a huge role. I find this specific issue is often handled badly with ORMs. A relational database especially is rarely modeled in a way that reflects domain behaviors. I therefore cringe when I see ORMs used to generate an object mirror of the database schema. Unless it's a pure CRUD app, what do we really gain by doing this, other than database intellisense?</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;"><br />
</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;">Greg Young threw me off a bit some time ago when I mentioned using Unit of Work and he replied “why do you need that?”. I frankly didn't even understand the question. Why would I not need a Unit of Work? I now know the answer, and it makes me sad. I never knew what an aggregate was about, and frankly I never really paid attention to the problem I was trying to solve. I never included the domain in the actual solution.</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;"><br />
</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;">When doing something like generating an object version of a relational database we state that “I have no idea what this thing is supposed to do, so I'll support everything”, instead of first getting an understanding of the domain behaviors and creating a persistence model supporting those scenarios.</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;"><br />
</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;">Back to what Greg said. Why don't we need a Unit of Work? Because we have modeled true behaviors. Unit of Work is solving transactional issues. Actually, Unit of Work is there to handle transactions between aggregates. The reason we tend to use a Unit of Work is that we have paid no attention to what an aggregate should contain. If we have an object model that is generated from a database schema, our aggregates are defined by our database relations. Because of that, our aggregates are completely separated from our domain behaviors. The result is that most domain behaviors touch more than one aggregate, and through that we are forced to use a Unit of Work. However, if we take the time to model our aggregates based on actual behavior, there is no need for cross-aggregate transactions. Actually, what we are doing when not paying attention to domain behaviors is creating an impedance mismatch between the problem we are trying to solve and our technical implementation.</div><div lang="en-US" style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;"><br />
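To make the idea concrete, here is a minimal sketch (all types and names here are invented for illustration, not taken from any particular codebase): an aggregate modeled around an actual domain behavior, where saving that single aggregate is the entire transaction boundary, so there is nothing for a Unit of Work to coordinate.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical aggregate modeled around a behavior ("add a line to an
// order") rather than around table relations. The behavior touches only
// this one aggregate, so persisting it is one transaction by itself.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public Guid Id { get; private set; }

    public Order(Guid id) { Id = id; }

    public void AddLine(string product, int quantity)
    {
        // The invariant lives inside the aggregate boundary.
        if (quantity <= 0)
            throw new ArgumentException("Quantity must be positive");
        _lines.Add(new OrderLine(product, quantity));
    }
}

public class OrderLine
{
    public string Product { get; private set; }
    public int Quantity { get; private set; }

    public OrderLine(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }
}

// One aggregate per save, one save per transaction. No Unit of Work
// tracking changes across a graph of schema-generated objects.
public interface IOrderRepository
{
    void Save(Order order);
}
```

The point of the sketch is the boundary, not the code: because `AddLine` never needs to reach outside `Order`, there is no cross-aggregate write to coordinate.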
</div><div style="border-collapse: collapse; font-family: arial, sans-serif; font-size: 13px; margin-bottom: 0in;"><span lang="en-US">If you want to learn more about these types of concepts visit </span><a href="http://cqrsinfo.com/" style="color: #0000cc;" target="_blank">http://cqrsinfo.com/</a><span lang="en-US">. And if you have the time, Greg is doing an online session</span><span lang="en-US"> today (</span><a href="http://cqrsweekday.eventbrite.com/" style="color: #0000cc;" target="_blank">http://cqrsweekday.<wbr></wbr>eventbrite.com/</a><span lang="en-US">). There is still time! If you can attend you should! You will not regret it.</span></div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-43213765655183553362011-02-15T00:17:00.000-08:002011-02-15T00:35:11.075-08:00AutoTest.Net no longer in beta!Finally AutoTest.Net has reached its first official release. There have been tons of contributions since the last beta release, and most of them have come from the Mighty-Moose project (<a href="http://www.continuoustests.com/">www.continuoustests.com</a>). Amongst others, AutoTest.Net now has its own test runner that hosts NUnit and XUnit (more to come). This means there is no longer any need to manually configure these test runners. Also, the configuration as it is today is set to 64bit/Visual Studio 10, so if that's your setup no configuration is needed for MSBuild or MSTest either.<br />
<br />
New features<br />
<br />
<ul><li>New hosted test runner (AutoTest.TestRunner)</li>
<li>MSpec is now supported thanks to contributions from Alexander Groß himself. For now it has to be configured manually, but we plan to support it in our hosted runner.</li>
<li>Support for either watching a directory or a solution</li>
<li>When watching a solution you can choose to build the solution instead of individual projects</li>
<li>Setting for running failed tests first</li>
</ul><br />
<br />
For a direct link to the new version try this one: <a href="https://github.com/downloads/acken/AutoTest.Net/AutoTest.Net-v1.1.zip">https://github.com/downloads/acken/AutoTest.Net/AutoTest.Net-v1.1.zip</a>.<br />
<br />
From what I see there are actually quite a few users out there, yet there is not much feedback. There has been some talk about both the WinForms and the Console UI. If you have questions or want to discuss, join our group at <a href="http://groups.google.com/group/autotestnet">http://groups.google.com/group/autotestnet</a>. And I encourage anyone to report anything they want considered changed or added to the issues list on github (<a href="https://github.com/acken/AutoTest.Net">https://github.com/acken/AutoTest.Net</a>). Let's together make this an awesome piece of software!<br />
<br />
-Svein ArneSvein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-30376729879652467162010-09-17T14:20:00.000-07:002010-09-17T14:20:23.206-07:00Parallel takes you half the wayParallel is the future! You can't go anywhere without hearing those words these days. And it's the truth. We all have multi-core machines now. Languages prepare for this by implementing features that make it easier to write code running in parallel. Microsoft for instance is pushing its Task Parallel Library with .Net Framework 4, including parallel for, parallel LINQ, Parallel.Invoke and more. We even have debugging tools in Visual Studio helping out when debugging our parallel code. We have it all at our fingertips. Writing parallel code has never been easier! So... why does it feel so complex and awkward? It's all there, the tools, the language features. We have all that we could ask for, and still there is one crucial part that has been left out. It cannot be fixed by tools or frameworks, because it is: our minds!<br />
<br />
Bear with me for a second. We create software to solve problems. A problem is often described by something like "Given A, when something about A equals x, then our application should produce Z". We have a starting point A that when satisfying some condition should produce Z. We produce executable code that does what is necessary to get from A to Z.<br />
<div class="separator" style="clear: both; text-align: center;"></div><br />
<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_eTMyostu-7o/TIdTqsytklI/AAAAAAAAADk/SOrXKtZxLYA/s1600/normal_execution.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://1.bp.blogspot.com/_eTMyostu-7o/TIdTqsytklI/AAAAAAAAADk/SOrXKtZxLYA/s320/normal_execution.jpg" /></a></div>Running the code the execution path would look something like this. From a starting point it will execute the code, step by step, function by function going deeper into the call stack until the result Z is produced and from there back to where it started out.<br />
To speed things up we can decide to make it parallel. What we would do is to fork the line going deeper into the call stack into several parallel sequences. Before returning back to where it started out those parallel sequences are joined together to one sequence. We are following the exact same concept as before. We have just implemented a different concept within it. The concept of executing multiple sequences of steps in parallel. For some scenarios this is a perfectly viable solution. It's viable for the times we want synchronous behavior.<br />
<br />
Most of the time we will end up writing code like this. The problem, though, is that parallel or not, this implementation is synchronous. Parallel execution is NOT synchronous. We make it synchronous by making sure we join the parallel sequences back into a single sequence as they finish.<br />
<br />
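The contrast between the two shapes can be sketched in a few lines (this is an illustrative sample, not code from any of the projects mentioned; the TPL calls are period-correct for .Net Framework 4):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Fork/join: the branches run in parallel, but Parallel.Invoke
        // blocks until every branch has joined again. Parallel, yet
        // still synchronous from the caller's point of view.
        Parallel.Invoke(
            () => Console.WriteLine("branch A"),
            () => Console.WriteLine("branch B"));
        Console.WriteLine("both branches joined");

        // Asynchronous: hand the work off and keep going. The caller
        // never waits for a join to continue with its own work.
        var task = Task.Factory.StartNew(
            () => Console.WriteLine("background work"));
        Console.WriteLine("caller continues immediately");

        task.Wait(); // only here to keep the sample process alive
    }
}
```

Both snippets use multiple cores; only the second one changes the shape of the program from "do and wait" to "hand off and move on".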
<b>If the future is parallel the future is also asynchronous.</b> When entering into a parallel future we simply cannot keep writing our software in a synchronous manner. We need to reeducate our minds so that asynchronous is the norm and synchronous is the exception. Frankly, it shouldn't be all that difficult. Our whole life is based around asynchronous behavior. After sliding a pizza into the oven we don't sit and stare at it until it's done. We set a timer or check the time now and then, and go do other things. We do it all the time. It's natural. Forcing parallel behavior into a synchronous setting, on the other hand, is not, and it is going to be painful.<br />
<blockquote>Some people have gone this route already. <a href="http://codebetter.com/blogs/gregyoung/"><span class="Apple-style-span" style="color: black;">Greg Young</span></a> is one of them with his architectural pattern <a href="http://cqrsinfo.com/"><span class="Apple-style-span" style="color: black;">CQRS</span></a>. If you have not seen his talk on the subject I strongly recommend you look into it. He recorded one of his online sessions for everyone to enjoy. You can download it <a href="http://dl.dropbox.com/u/108121/CQRS.zip"><span class="Apple-style-span" style="color: black;">here</span></a>. It is a full day session but it's worth every minute.</blockquote>Like the title states, parallel takes us only half the way. The rest is up to us. We need to change the way we think so that we design our systems and write our code in a way that suits this parallel future. When solving a problem through software we automatically give the problem a synchronous setting. That is what we have been taught. This is the way it's been for a long time. We have become technically challenged. We have become so good at solving asynchronous problems in a synchronous fashion that it even feels natural. If given the same problem in a real life scenario we would probably have solved it asynchronously.<br />
Let's imagine a CEO wanting to keep track of her contacts. How would we solve this problem? Oh, wait. You just had the following thought didn't you: "A screen where you can type contact information then press save and it'll save the contact in a database and if something fails throw an error back to the user". Our technically challenged minds just made the scenario synchronous. It became type, save, confirm. If we ask the CEO what she usually does she'd say: "Well I usually tell people to leave their contact details with my secretary". And that, my friends, is a concept called forking! She just passed that task off to someone else so that she could go do other things. She made the task completely asynchronous. How can this be, what if the secretary throws an exception (forgets), what if there's already a contact with the same name, what if, what if..<br />
So why does this work in the real world? Because of trust and the fact that context is taken into consideration. The CEO trusts the secretary to store the contact details. She also knows that if something goes wrong she'll just tell the secretary to look up the information a second time. Even though it fails now and then the overall goal is met. And the overall goal being that the CEO spends less time dealing with keeping track of contact information. Through trust and context the matter was handled efficiently with sufficient error handling.<br />
Most scenarios aren't type, save, confirm. We just make them type, save, confirm because that is how we usually solve everything.<br />
<br />
When writing software in a "wait for confirmation" manner we effectively state that our software will fail to do what it is supposed to do more than 50% of the time. Because for waiting for the response to be viable, it would have to respond with something unexpected most of the time. Either that, or every other task we could possibly perform depends on this task's response. When we accept that the system failing is the exception, not the norm, we can start thinking about error handling in a different way. If the failure rate is very low but a single failure is extremely costly business-wise, we could even set up manual handling for failures and use fire and forget. If that is the right solution for the business then that is the right solution technically.<br />
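A fire-and-forget save with manual failure handling could look roughly like this (a hypothetical sketch: the class, method and queue are invented for illustration, and real storage and retry logic are omitted):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Instead of blocking the caller for a confirmation, the rare failures
// are routed to a queue that someone (or something) handles later.
class ContactStore
{
    private readonly ConcurrentQueue<string> _failed =
        new ConcurrentQueue<string>();

    public void SaveFireAndForget(string contact)
    {
        Task.Factory.StartNew(() =>
        {
            try
            {
                Persist(contact);
            }
            catch (Exception)
            {
                // The exception, not the norm: park it for manual handling.
                _failed.Enqueue(contact);
            }
        });
    }

    private void Persist(string contact)
    {
        // Write to storage here.
    }
}
```

The caller returns immediately after `SaveFireAndForget`; whether that trade-off is acceptable is a business decision, exactly as the paragraph above argues.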
<br />
When working on <a href="http://github.com/acken/AutoTest.Net">AutoTest.NET</a> I discovered that even immediate feedback isn't really immediate. AutoTest.NET is a continuous testing tool for .NET. The workflow when using it would be:<br />
<br />
<ol><li>Write code</li>
<li>Save all changes</li>
<li>AutoTest.NET builds and runs tests</li>
<li>AutoTest.NET outputs red or green</li>
</ol><br />
Its purpose is to provide immediate feedback regarding the state of the code. My initial workflow included waiting from the moment I pressed "save all" until I could see the output I had expected. But as stated earlier, for waiting to be viable I would have to produce flawed code on more than 50% of my saves. And I don't. The response is usually what I expect, so it's more efficient for me to keep on working while AutoTest.NET is doing its job. As I save approximately every 30 seconds, a 20 second delay would be considered immediate feedback. It's immediate because of context. I only need to know by my next save point, which is 30 seconds later. When it yields unexpected results the solution is only a couple of Ctrl+Z's away.<br />
Given the context even immediate feedback can be dealt with asynchronously.<br />
<br />
To go all the way we need to embrace asynchronous behavior with parallel programming. Make asynchronous the default behavior. That's what we do every day in real life. Terms like eventual consistency and asynchronous behavior aren't even something we consider when we go about our business, because in real life that is the way everything works.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com3tag:blogger.com,1999:blog-6013734844294471364.post-68893635978924621452010-08-22T16:34:00.000-07:002010-08-23T00:39:39.011-07:00Testing through interfacesWhile talking about testing at Vagif Abilov's BBQ, Greg Young said something like "I wonder what tests would look like if we wrote tests against interfaces instead of their implementation". And really, what would they look like?<br />
<br />
The scenario was that you have an interface that has multiple implementations and you would want to write tests against the interface testing all implementations. This would seriously reduce the number of tests you would have to write. So let's give it a try.<br />
<br />
The first thing we need to keep in mind is that we're writing the tests against something abstract. This means that we don't really know what to expect. When passing x into the various implementations it's not certain that all of them will answer y. Actually, hopefully only one of them would answer y, where x is act and y is assert. If not, there would be multiple implementations doing the exact same thing, and that would kind of defeat the purpose. Off the top of my head that leaves us with the following scenarios:<br />
<br />
<b><i>X stays the same while Y varies</i></b><br />
<i>This would be something like an ICalculator. The ICalculator would have implementations like DecimalCalculator and OctalCalculator. When running tests here we would end up with results like this:</i><br />
<ul><li><i>DecimalCalculator: 7*7 = 49</i></li>
<li><i>OctalCalculator: 7*7 = 61</i></li>
</ul><div><i>Which means that when writing these types of tests we need to be able to handle asserting on specific values per implementation.</i></div><div><br />
</div><div style="text-align: left;"><b><i>X varies while Y stays the same</i></b></div><div style="text-align: left;"><i>Let's imagine that we have some type of parser taking XML in and returning a list of objects of a certain type. This would typically mean one implementation per XML schema while the output could be the same. So when writing these types of tests we'll have varying code for passing parameters while the assert stays the same.</i></div><div><br />
</div><div>Ok, that wasn't too bad. We could probably make this look clean. Now over to some other aspects that we'll have to deal with.</div><div><br />
</div><div><b>Dependencies</b></div><div>With the right (wrong) implementation faking dependencies might be a hellish thing with this solution. I guess that's a good thing as it forces us to not make a mess of it. But still we need some way of handling setting up dependencies for the implementations.</div><div><br />
</div><div><b>Resolving implementations</b></div><div>We need a way to retrieve all implementations for an interface. Of course this is something we do all the time with DI containers so any DI container would provide us with what we need here. We could probably do something smart here to inject the faked dependencies we'll need for each implementation.<br />
<br />
With this in mind let's set up a test for the calculator scenario. The first thing I did was creating a class for handling the plumbing. Right now this class takes care of resolving all implementations of the chosen interface, running the test on each implementation and performing specified assertions. My test ended up looking like this:</div><div><br />
[<span class="Apple-style-span" style="color: #3d85c6;">Test</span>]<br />
<span class="Apple-style-span" style="color: blue;">public</span> <span class="Apple-style-span" style="color: blue;">void</span> Should_multiply()<br />
{<br />
<span class="Apple-style-span" style="color: blue;">var</span> tester = <span class="Apple-style-span" style="color: blue;">new</span> <span class="Apple-style-span" style="color: #3d85c6;">InterfaceTester</span><<span class="Apple-style-span" style="color: #3d85c6;">ICalculator</span>>();<br />
tester.Test(c => c.Multiply(7, 7))<br />
.AssertThat<<span class="Apple-style-span" style="color: #3d85c6;">DecimalCalculator</span>>().Returned(49)<br />
.AssertThat<<span class="Apple-style-span" style="color: #3d85c6;">OctalCalculator</span>>().Returned(61);<br />
} </div><div><br />
</div><div>I'm quite happy with that. This test is both extendable and readable. Now let's do the same with the string parser scenario. I'll just extend the plumbing class used in the previous example to handle varying input parameters. The implementation ended up looking like this:<br />
<br />
<br />
[<span class="Apple-style-span" style="color: #3d85c6;">Test</span>]<br />
<span class="Apple-style-span" style="color: blue;">public</span> <span class="Apple-style-span" style="color: blue;">void</span> Should_parse_number()<br />
{<br />
<span class="Apple-style-span" style="color: blue;">var</span> tester = <span class="Apple-style-span" style="color: blue;">new</span> <span class="Apple-style-span" style="color: #3d85c6;">InterfaceTester</span><<span class="Apple-style-span" style="color: #3d85c6;">INumberParser</span>>();<br />
tester<br />
.Test<<span class="Apple-style-span" style="color: #3d85c6;">XmlParser</span>>(x => x.Parse(<span class="Apple-style-span" style="color: #990000;">"<number>14</number>"</span>))<br />
.Test<<span class="Apple-style-span" style="color: #3d85c6;">StringParser</span>>(x => x.Parse(<span class="Apple-style-span" style="color: #990000;">"14"</span>))<br />
.Returned(14);<br />
}<br />
<br />
<br />
I can't say I'm as happy with this one, as the complete delegate is copied for both implementations and not just the part that differs. But still, it's a huge simplification compared to writing a full test suite per implementation.<br />
<br />
I guess I'll leave it at that for now. What this does not cover is setting up dependencies, which will likely complicate the implementation a bit. After doing this implementation I can really see the value of writing my tests like this. It would save me time and energy and would leave me with a cleaner, simpler test suite. The implementation ended up being fairly simple. Initial conclusion: writing tests against interfaces is a good idea!<br />
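Since the post doesn't show the plumbing class itself, here is one hypothetical way the InterfaceTester used in the first example might be put together (reflection over loaded assemblies, parameterless constructors assumed, error handling and dependency setup omitted — a sketch, not the actual implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class InterfaceTester<T>
{
    // Result of running the test delegate, keyed by implementation type.
    private readonly Dictionary<Type, object> _results =
        new Dictionary<Type, object>();

    public InterfaceTester<T> Test(Func<T, object> test)
    {
        // Find every concrete implementation of T and run the test on it.
        var implementations = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(T).IsAssignableFrom(t)
                        && !t.IsAbstract && !t.IsInterface);
        foreach (var type in implementations)
            _results[type] = test((T)Activator.CreateInstance(type));
        return this;
    }

    public Assertion<TImpl> AssertThat<TImpl>() where TImpl : T
    {
        return new Assertion<TImpl>(_results[typeof(TImpl)], this);
    }

    public class Assertion<TImpl>
    {
        private readonly object _actual;
        private readonly InterfaceTester<T> _parent;

        public Assertion(object actual, InterfaceTester<T> parent)
        {
            _actual = actual;
            _parent = parent;
        }

        // Returning the parent allows chained per-implementation asserts.
        public InterfaceTester<T> Returned(object expected)
        {
            if (!expected.Equals(_actual))
                throw new Exception(string.Format(
                    "Expected {0} but got {1}", expected, _actual));
            return _parent;
        }
    }
}
```

With this shape, `tester.Test(c => c.Multiply(7, 7)).AssertThat<DecimalCalculator>().Returned(49)` chains exactly as in the test above; the second scenario (varying input per implementation) would need an additional generic `Test<TImpl>` overload.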
<br />
I'd love to hear your thoughts on this! And if you're interested in the full source code let me know and I'll upload it to github or something.<br />
<br />
<br />
</div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-78862442108361861952010-08-21T17:59:00.000-07:002010-08-21T17:59:36.141-07:00New challenges, starting my own businessTimes are changing and in a bit more than a week I'll be starting up a company with four others. I was asked to join them and after some thinking I said yes. Now I'm throwing myself off the cliff having a firm belief that wings will grow before I hit the ground :)<br />
So what is the company about? Our goal is providing skilled, experienced people specialized in their field. All five of us have our own specialties that go well together, from developer to analyst. From a business point of view we have deep knowledge of enterprise software, especially oil/energy trading and business applications. The company has been given the name Contango Consulting AS.<br />
<br />
I am very excited about realizing a dream of being responsible for my own future. Trying to do my own thing. It's going to be tough and I'll probably learn more in a year than I have done up to now. I'm also certain that this blog will be affected by this. Hopefully it can result in my learnings ending up here for others to enjoy. And of course if any of you are in need of a skilled .NET <a href="http://no.linkedin.com/pub/svein-arne-ackenhausen/1/957/361">developer / architect / trainer</a> don't hesitate to let me know :) You can view more detailed information about me <a href="http://no.linkedin.com/pub/svein-arne-ackenhausen/1/957/361">here</a>.<br />
<br />
-Svein ArneSvein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-85518154586208507702010-08-10T13:51:00.000-07:002010-08-10T14:18:46.235-07:00Nu on Linux (debian based systems)<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">There's a lot of activity these days on the <a href="http://groups.google.com/group/nu-net">Nu project</a>. In short, the Nu project is for .Net what gems are for Ruby. In fact it uses gems. In my last post I talked about tooling and where I wish tooling would go in the future. Package management is definitely one of the tools that will help us in the future. If you want to read up on Nu, <a href="http://ferventcoder.com/">Rob Reynolds</a> has some good posts explaining what it is and how to use it. What I'm going to go through here is just what is needed to get it working on Linux. There are just some small tweaks that need to be done to get it working.</span></span><br />
<br />
<span class="Apple-style-span" style="font-family: Arial; font-size: 13px;">If you don't have Ruby on your system already you'll need it.</span><br />
<blockquote><i><span class="Apple-style-span" style="color: #0c343d;">sudo apt-get install ruby-full build-essential</span></i></blockquote><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Next you'll need to get gems.</span></span><br />
<blockquote><i><span class="Apple-style-span" style="color: #0c343d;">sudo apt-get install rubygems</span></i></blockquote><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Ok, then we have all dependencies required to get going. Now lets get Nu. Nu is installed through gems like this (NB! make sure you install nu with root priveleges or it won't work).</span></span><br />
<blockquote><i><span class="Apple-style-span" style="color: #0c343d;">sudo apt-get install rubygems</span></i></blockquote><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Ok, then we have all the dependencies required to get going. Now let's get Nu. Nu is installed through gems like this (NB! make sure you install nu with root privileges or it won't work).</span></span><br />
<blockquote><i><span class="Apple-style-span" style="color: #0c343d;">sudo ln fileutils.rb FileUtils.rb</span></i></blockquote><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">Now to some real Nu action. Let's say we have a project and we want to start using NHibernate. What you have to do is to go to the root folder of your project and type this command.</span></span><br />
<blockquote><i><span class="Apple-style-span" style="color: #0c343d;">nu install nhibernate</span></i></blockquote><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">You'll get a couple of warnings, but it's going to do what it's supposed to. When it's done you can go into the newly created lib directory and see NHibernate and its dependencies in there. Neat, huh!?</span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-20399058418776128682010-08-09T16:18:00.000-07:002010-08-09T16:18:16.393-07:00Tooling visionsLately I have felt more and more uncomfortable about the tooling I'm currently working with. I feel a lot of the tools are not helping me reach my goal. Frankly, they're in my way. The whole thing started when I started using ReSharper and saw what an IDE is about. ReSharper truly helps you accomplish things. It keeps you focused on your real goal: producing quality code. The only bad thing about ReSharper is that it's tied to Visual Studio. Visual Studio has become a horrid beast of an application. It's packed with features that to me have nothing to do with the application I use to write code. It doesn't help me so much as it gets in my way.<br />
My second wakeup call was when I started using Git. Especially after watching Linus Torvalds talk about his ideas behind it and why he made it the way he did. One of the reasons, he said, was to create a source control system that does its job well and doesn't get in your way. And he succeeded! When working with Git you have to deal with it when you pull, commit or push. And those are the times you're supposed to deal with source control. You shouldn't have to deal with source control because you want to edit a file, or have the source control system insert padlocks left, right and center. You shouldn't have to think about whether your computer is online or offline when you work with your code. Going back to working with TFS made me realize how much time I waste using a tool. Time that I could have spent solving real problems.<br />
<br />
Ok, that was the venting part :) Now to something a little more constructive. I have been thinking about how my ideal set of tools would look and what would be important. There are some points I really want to focus on. The first point applies to code as well: SEPARATION OF CONCERNS! Each tool should help you solve a single problem and it should do it well, extremely well. Second, as mentioned, it should stay out of your way. It should know what it's trying to help you solve and act like that twin that completes your sentences. Third, it should not compromise. When you work with a mess of a project where everything depends on everything, it should be painful. The tool should keep its focus on helping you produce quality code and not make compromises to help deal with a festering pile, as Uncle Bob puts it.<br />
<br />
First off let's deal with what we call the IDE. To me an IDE is notepad+ReSharper+navigation, and that's what I think it should be. It should be there to help us produce quality code as efficiently as possible providing intellisense, auto complete, refactoring and everything that has to do with writing code. And that to me has nothing to do with building binaries, running tests, debugging and deploying. Though I understand why IDE's have ended up where they are it's time to move on. We're no longer hacking and hoping. We don't set breakpoints and step through half the application as part of our work pattern. We write code and watch tests fail and pass. To me the IDE is about efficiently writing code.<br />
<br />
Of course we need to compile and run tests, and that should be its own tool. We already have continuous testing tools like JUnitMax, Ruby's autotest and <a href="http://github.com/acken/AutoTest.Net">AutoTest.NET</a> which I'm currently working on (add cheesy commercial part here). This tool should basically stay out of your way. The only time we would want to interact with this tool is when we have broken something. It should build and run only what it needs to, and only grab our attention when something has gone wrong. This is the tool that would bind the editor and the debugger together. When something has gone wrong we should get the right information, and enough of it. When builds or tests fail we should be able to easily move to the right file and position in the editor to fix whatever is wrong.<br />
<br />
Now to the debugger. The way I see it, debuggers as they work today are optimized for full system debugging, not the simple "now what the heck did I just do to fail this test?". And that's what I'm looking for 95% of the time. For these types of tasks I don't think debugging through the IDE helps. I don't think displaying the code file by file, class by class, function by function, the way it's written, is the best approach. And certainly not stepping through it. Something I do think would be more efficient is analyzing a series of snapshots showing where the execution had its turning point, what threw an exception and things like that. I have tons of ideas that I'm hoping to realize through the <a href="http://github.com/acken/ClrSequencer">ClrSequencer</a> project. I think I'm going to dive a little bit deeper into this in another blog post.<br />
<br />
I guess that's enough rambling for tonight. It's probably not the last thing you'll hear from me on the subject. Tooling is very important, and tooling should help you, not fight you, and I have been feeling a lot of the latter lately.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-87505287873411039282010-08-06T15:16:00.000-07:002010-09-26T09:06:16.389-07:00Continuous testing with AutoTest.NetIt's about time for me to do some writing about AutoTest.Net. This is a project I have been working on for the last 2-3 months. It's a <a href="http://blog.objectmentor.com/articles/2007/09/20/continuous-testing-explained">continuous testing</a> tool for .NET originally based on Ruby's autotest. After playing with Ruby for a couple of evenings I really enjoyed the way you could work with it. My usual work cycle of "write code, build, wait, run tests, wait" was replaced with "write code, save". Luckily the code I write tends to work more often than it breaks, so waiting for builds and tests to run is a waste of time. Especially when I build and run tests about every 30 seconds or so.<br />
<div>So after the joy of working like that in Ruby I was determined to find a similar tool for .NET. After a bit of searching I found AutoTest.Net. The project was hosted at code.google.com and was initiated by James Avery, but not having enough time on his hands he had to put the project on hold. I really wanted a tool like that, so I went and got his permission to continue the project. It's now hosted on <a href="http://github.com/acken/AutoTest.Net">github.com/acken/AutoTest.Net</a>. Today it supports both .NET and Mono and is cross-platform. NUnit, MSTest and XUnit are the testing frameworks supported today, and MbUnit will be added soon. It supports running tests from multiple testing frameworks in the same assembly.</div><div><br />
</div><div>Now, how does it all work, you say? The whole thing consists of a console application and a WinForms application. By now I use the WinForms app about 98% of the time, so that's what I'm going to show here.</div><div>The first thing you do, of course, is to go to <a href="http://github.com/acken/AutoTest.Net/downloads">this link</a>, download the latest binaries and unzip them to the folder of your choice. Locate the file named AutoTest.config and open it in your favorite xml editor. Now let's edit a few settings:</div><div><br />
</div><div><ol><li>DirectoryToWatch: Set this property to the folder containing the source code you want to work with. AutoTest.Net will detect changes to this folder and it's subfolders.</li>
<li>BuildExecutable: This is the path to MSBuild or, in Mono's case, xbuild. You have the possibility to specify a version of MSBuild per framework or Visual Studio version. For now let's just specify the default <BuildExecutable> property. Something like C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe.</li>
<li>Now let's specify a testing framework. You can pick anything from zero to all of them, though zero wouldn't be any fun. I'll go with NUnit in this example.</li>
<li>The last thing we want to do is to specify a code editor (<CodeEditor>). Let's pick Visual Studio. We can pass Visual Studio the file to open and the line to go to. Sadly there's a bug in Visual Studio preventing it from going to the right line :( So for now we'll rely on Ctrl+G. Anyway, the config has Visual Studio set up correctly by default. Just make sure the path to devenv.exe is the same as on your machine.</li>
</ol><div>Now we're ready to start the AutoTest.WinForms.exe application and do some real work. The first thing you'll see is a form looking like this.</div></div><div><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/_eTMyostu-7o/TFx7Ca-qofI/AAAAAAAAACU/W-ILG28l1OM/s1600/Startup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="315" src="http://2.bp.blogspot.com/_eTMyostu-7o/TFx7Ca-qofI/AAAAAAAAACU/W-ILG28l1OM/s640/Startup.png" width="640" /></a></div><div><br />
</div><div>The only interesting thing right after startup is the button in the top right corner. As you can see (I'm going to do something about the colors), the button is yellow. Behind this button you'll find the status of the AutoTest.Net application. It's yellow now because the configuration has generated a warning. If the button is red an error has occurred within AutoTest.Net. Right now the window will look like this.</div><div><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/_eTMyostu-7o/TFx9QhD2SbI/AAAAAAAAACc/H4VYmHRwmMM/s1600/status.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="170" src="http://3.bp.blogspot.com/_eTMyostu-7o/TFx9QhD2SbI/AAAAAAAAACc/H4VYmHRwmMM/s640/status.jpg" width="640" /></a></div><div><br />
</div><div>So let's go ahead and write some gibberish in one of the files inside the folder we're watching and save the file. AutoTest.Net should start working right after you save.</div><div><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/_eTMyostu-7o/TFx_xJW_9vI/AAAAAAAAACk/r3sfyDJqwdw/s1600/RunningTests.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="248" src="http://2.bp.blogspot.com/_eTMyostu-7o/TFx_xJW_9vI/AAAAAAAAACk/r3sfyDJqwdw/s640/RunningTests.jpg" width="640" /></a></div><div><br />
</div><div>And of course gibberish means errors, which will result in this.</div><div><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_eTMyostu-7o/TFyA1ww_i6I/AAAAAAAAACs/3RUDgTQ_CuU/s1600/errors.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="278" src="http://1.bp.blogspot.com/_eTMyostu-7o/TFyA1ww_i6I/AAAAAAAAACs/3RUDgTQ_CuU/s640/errors.jpg" width="640" /></a></div><div><br />
</div><div>When selecting one of the lines in the list you'll get the build error/test details underneath, and you can click the links to open the file in Visual Studio. Now let's fix the error we just made, save the file and see what happens.</div><div><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/_eTMyostu-7o/TFyBuX273NI/AAAAAAAAAC0/DF6vSBcwJgY/s1600/allgood.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="252" src="http://3.bp.blogspot.com/_eTMyostu-7o/TFyBuX273NI/AAAAAAAAAC0/DF6vSBcwJgY/s640/allgood.jpg" width="640" /></a></div><div><br />
</div><div>And as expected it goes green with 5 succeeded builds and 221 passed tests. That's basically it. From here on it's lather, rinse, repeat.</div><div><br />
</div><div>Right now it's in alpha and it will of course have some bugs here and there. I hope this post will tempt you to try it out. Even at an early stage like this it's a really effective way of working!</div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com6tag:blogger.com,1999:blog-6013734844294471364.post-67083407908453923482010-05-29T14:10:00.000-07:002010-05-29T14:14:54.595-07:00(Red-Green)N-RefactorRed/Green/Refactor is the TDD mantra. And fair enough, it's a good description of the concept of TDD, which I guess is what it's meant to describe. Heck, it even describes the order you should perform these steps in when writing a test. Make sure the test is able to fail, make sure it passes for the right reasons, and when done, clean up. This is all good, but I see people being misled by this mantra. To explain this a bit better let's have a look at Uncle Bob's three rules of TDD<br />
<blockquote><blockquote>1. You are not allowed to write any production code unless it is to make a failing unit test pass.</blockquote><blockquote>2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.</blockquote><blockquote>3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.</blockquote></blockquote>Following these rules (and you should!) you'll see that when writing code your mantra should in fact be red/green/red/green/red/green/............./refactor. Frankly though, that wouldn't make a very good mantra. At the same time red/green/refactor doesn't work as an implementation pattern. The same way agile methods have you delivering working software in short iterations, these three rules will deliver a suite of passing tests in iterations of about 30 seconds. And more importantly, every red/green cycle has you focusing on a single small task, which is an efficient way of working. Summing up the mantra through Uncle Bob's three rules we get red/green repeated until the test is completed. This gives us (Red-Green)<sup>N</sup>.<br />
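To make those 30-second iterations concrete, here is what the end of a few red/green cycles might look like. The example class and its members are mine, not from the post; the comments note which failing test forced each piece of code into existence:

```csharp
using System;
using System.Collections.Generic;

// The end state after a few red/green cycles on a tiny stack class.
// Each member below was first pinned down by one failing test (red),
// then implemented with just enough code to make it pass (green).
public class IntStack
{
    private readonly List<int> _items = new List<int>();

    // Cycle 1: "a new stack is empty" forced Count into existence.
    public int Count { get { return _items.Count; } }

    // Cycle 2: "pushing increases the count" forced Push.
    public void Push(int value) { _items.Add(value); }

    // Cycle 3: "popping returns the last pushed value" forced Pop.
    // Cycle 4: "popping an empty stack fails" forced the guard clause.
    public int Pop()
    {
        if (_items.Count == 0)
            throw new InvalidOperationException("The stack is empty.");
        int value = _items[_items.Count - 1];
        _items.RemoveAt(_items.Count - 1);
        return value;
    }
}
```

No cycle added more than the one failing test demanded, and the refactoring pass comes only once the class's tests are all green.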
So, why isn't refactoring part of the three rules of TDD? TDD or not, you should ALWAYS refactor. Make sure your code is readable and maintainable. TDD just happens to make your code a lot easier to refactor safely. Given that the concept of TDD tells you to refactor for every completed test (and you should!), the mantra you should follow when implementing is (Red-Green)<sup>N</sup>-Refactor.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-6621232834579329032010-05-19T07:29:00.000-07:002010-05-19T07:29:22.543-07:00Focus on functionality, use technology - Revisited<a href="http://www.kindblad.com/">Lars-Erik Kindblad</a> posted some great questions about the implementation in my previous post. I thought I'd post the reply here, as the questions pin down a lot of the thought behind the previous post. The code up for discussion is:<br />
<br />
<pre><span style="color: blue;">public</span><span style="color: black;"> </span><span style="color: blue;">void</span><span style="color: black;"> BlockCustomer(</span><span style="color: blue;">int</span><span style="color: black;"> customerID)
{
var customer = getCustomer(customerID);
customer.Block();
}</span>
</pre><br />
His questions were:<br />
<blockquote><span class="Apple-style-span" style="color: #333333; font-family: Georgia, serif; font-size: 13px; line-height: 20px;">1) How would you go about unit testing the second example? Since the customer is not injected into the constructor, the unit test would have a dependency on the customer.Block() method?<br />
2) Would customer.Block() (and customer.Save(), Remove(), Notify() etc.) break the Single Responsibility Principle?<br />
3) I agree customer.Block() creates more readable code, but it can create large and messy entity classes when the code base is growing. I've seen such classes with 1000+ lines of code. In that case I rather prefer using manager/service classes in order to partition the functionality across multiple classes.</span></blockquote>So, first thing. In my code I use repositories extensively. Most likely the getCustomer method would use some repository to fetch the customer. The class holding the BlockCustomer method would therefore have the ICustomerRepository injected. When dealing with dependency injection I separate between stateless classes and stateful classes. Usually I automatically inject stateless classes through a dependency injection container. The repository class is a stateless class and would be injected into the class holding the BlockCustomer method automatically. The repository would be responsible for constructing the stateful Customer class. To make sure that we are able to write maintainable tests for the BlockCustomer method we'll have the repository return an ICustomer interface.<br />
<br />
On to the second question: what about SRP? He is absolutely right, having the save and remove methods on the customer would break the Single Responsibility Principle. When using a domain model I'm not a big fan of that type of active record implementation. Add and Remove are something I would implement in the ICustomerRepository. Within the repository I would use some kind of data mapper framework (NHibernate, EF..). Preferably one that supports automatic entity change tracking so that I would not have to implement Save. But still he has a point. From the first example in the previous post we could see that customer.Block() would have to do more than modify its own state. However, I don't see anything wrong with it delegating the rest of its work to a dependency. The dependency would be injected into the Customer class by the customer repository before handing it off.<br />
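A sketch of how this answers question 1 about testability: only ICustomer and ICustomerRepository come from the discussion above, while CustomerService and the fakes are made-up names for illustration. A mocking framework would work just as well as the hand-rolled fakes:

```csharp
public interface ICustomer
{
    void Block();
}

public interface ICustomerRepository
{
    ICustomer GetCustomer(int customerID);
}

// The class holding BlockCustomer, with the repository injected.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void BlockCustomer(int customerID)
    {
        var customer = _repository.GetCustomer(customerID);
        customer.Block();
    }
}

// In a unit test we can swap in hand-rolled fakes:
public class FakeCustomer : ICustomer
{
    public bool WasBlocked;
    public void Block() { WasBlocked = true; }
}

public class FakeRepository : ICustomerRepository
{
    private readonly ICustomer _customer;
    public FakeRepository(ICustomer customer) { _customer = customer; }
    public ICustomer GetCustomer(int customerID) { return _customer; }
}
```

A test simply wires a CustomerService with the fakes, calls BlockCustomer and asserts that the fake was blocked, with no dependency on the real customer.Block() implementation.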
<br />
<pre><span style="color: blue;">class</span><span style="color: black;"> Customer
{
...
</span><span style="color: blue;">private</span><span style="color: black;"> CustomerState _state;
</span><span style="color: blue;">private</span><span style="color: black;"> IDependencyNeededToBlock _dependencyNeededToBlock;
</span><span style="color: blue;">public</span><span style="color: black;"> IDependencyNeededToBlock DependencySetter
{
</span><span style="color: blue;">set</span><span style="color: black;">
{
</span><span style="color: blue;">if</span><span style="color: black;"> (_dependencyNeededToBlock != </span><span style="color: blue;">null</span><span style="color: black;">)
</span><span style="color: blue;">throw</span><span style="color: black;"> </span><span style="color: blue;">new</span><span style="color: black;"> Exception(</span><span style="color: maroon;">"Dependency already set"</span><span style="color: black;">);
_dependencyNeededToBlock = value;
}
}
</span><span style="color: blue;">public</span><span style="color: black;"> </span><span style="color: blue;">void</span><span style="color: black;"> Block()
{
_state = CustomerState.Blocked;
_dependencyNeededToBlock.DoWhatIsNeeded();
}
}</span>
</pre><br />
However, what is important to note is that the implementation should show the intention of the functionality. Since the intent of this functionality was blocking a customer, it's natural that the functionality is initiated by the customer. If, on the other hand, the functionality was about something else and part of it was blocking the customer, then of course that other functionality would not be initiated by the customer.<br />
<br />
If the Customer class now is limited to modifying internal state and calling dependencies, we should have a nice and maintainable class. On the other hand, if it still ends up as a 1000+ line class, we probably need to redefine our perception of the word customer :)Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com3tag:blogger.com,1999:blog-6013734844294471364.post-1724010322360885972010-05-13T04:32:00.000-07:002010-05-13T04:46:05.110-07:00Focus on functionality, use technologyAs time goes I find myself more and more intrigued by Domain Driven Design. It is for me like a shift where we go from forcefully molding functionality to fit the technology to elegantly producing functionality with the help of technology. Instead of looking at a piece of code and seeing NHibernate sessions / DataSets and terms like update, delete and insert, we'll see the true intention of the functionality we're looking at. The following example is code the way I would have written it some time ago.<br />
<pre><span style="color: blue;"> public</span><span style="color: black;"> </span><span style="color: blue;">void</span><span style="color: black;"> UpdateCustomerState(</span><span style="color: blue;">int</span><span style="color: black;"> customerID, CustomerState newState)
{
DataSet ds = getCustomerDataset(customerID);
ds.Tables[</span><span style="color: maroon;">"Customer"</span><span style="color: black;">]
.Rows[</span><span style="color: red;">0</span><span style="color: black;">][</span><span style="color: maroon;">"CustomerState"</span><span style="color: black;">] = stateToInt(newState);
</span><span style="color: blue;">switch</span><span style="color: black;"> (newState)
{
</span><span style="color: blue;">case</span><span style="color: black;"> CustomerState.Blocked:
</span><span style="color: green;">// Do various stuff done to blocked customers</span><span style="color: black;">
</span><span style="color: blue;">break</span><span style="color: black;">;
<span style="color: green;">// Handle more state variants...</span>
}
updateCustomerDataset(ds);
}</span></pre>Now, looking at this code, it looks pretty much like any code we would expect to find in any source base, right? Some functionality has been split out into separate methods to clean up the code. It reads pretty well, so we can understand what it does: it sets the state of the customer. So what's the problem?<br />
The problem is that it's all done from the tooling perspective and not from the actual intent of the functionality. What happened here was that the developer was told to create the functionality for blocking a customer. The developer did as we usually do.. went right into techno mode. "Ok, we have this state field in the customer table. If I just set that field to state blocked and make the necessary changes to the linked rows in table x and y, that should do the trick." And when done, the code looked like the code above. This is very much like what happens in The Hitchhiker's Guide to the Galaxy when Deep Thought reveals the answer to the Ultimate Question of Life, the Universe, and Everything to be 42. As we know, the answer wasn't the problem. The problem was the question. We can say the same thing about this piece of code. What you see is the answer, but there is nothing mentioned about the intention behind it. When browsing through the classes you'll only find a method called UpdateCustomerState and nothing about blocking customers.<br />
<br />
So what do we do about it? We write the code as stated by the intent behind the functionality. The developer was told that a customer needed to be able to be blocked. From that we can determine that a customer is some entity and that it needs a behavior: Block. The implementation would look something like this:<br />
<pre><span style="color: blue;">public</span><span style="color: black;"> </span><span style="color: blue;">void</span><span style="color: black;"> BlockCustomer(</span><span style="color: blue;">int</span><span style="color: black;"> customerID)
{
var customer = getCustomer(customerID);
customer.Block();
}</span>
</pre>The first example is the typical one: classes with mostly properties, plus a set of manager/handler/service classes manipulating those properties to achieve the desired behavior. The second example keeps the customer's state hidden within the customer and only exposes its behaviors.<br />
<br />
Write functionality with the help of technology!Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-54648419188272432942010-04-29T04:47:00.000-07:002010-04-29T05:22:00.763-07:00Code readability<div><div style="text-align: left;">I<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"> had an interesting discussion with a co-worker the other day about code documentation and formatting. Lately I have moved away from my previous traditional views on the matter. Earlier my main focus would have been on consistency and similarity. Things like having a header on each class, property and function explaining what it is and what it does. In addition, things like using regions within files and how to order functions, properties, private functions, events and so on within a class.</span></span></div></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">All this is done in the name of readability, right? At least we think so. We have become so good at managing these things that we completely forget to ask ourselves why the class has grown so obese that we need a set of rules to navigate within it. Maybe we create these rules to enable us to write crap code and still feel good about it? I am not really convinced that setting up a huge rule set in tools like StyleCop makes your code more readable. Of course it would make it readable in the sense that if someone gave me a handwritten letter and then handed me the same letter written on a computer, both in some language I don't understand, I would probably be able to read the words letter by letter from the computer version more easily. But does it really matter? It's not like I would understand any of it anyway. Same thing when looking at code. Crap code doesn't get any better just because it's all formatted the same way.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">The same goes for code comments. Does the function GetCustomerByName need a comment? If it needs a comment, does that mean that the function does more than retrieving a customer by its name? Maybe the name really is GetCustomerByNameOrInSomeCasesCreateOrder. If so, this code doesn't need comments, it needs some hard refactoring. Difficulty choosing a good name for your function is usually a code smell.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;">My point here is that your code should do the talking. It should express its true intention. Let's look at two pieces of code, the first one being a brute force implementation and the other being a bit more refined.</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><i>Implementation 1</i></span></span></div><div class="separator" style="clear: both; text-align: left;"><a href="http://4.bp.blogspot.com/_eTMyostu-7o/S9lrmWIXK9I/AAAAAAAAABU/SNHd6AadHXU/s1600/notreadeable.jpg" imageanchor="1" style="clear: left;"><img border="0" height="257" src="http://4.bp.blogspot.com/_eTMyostu-7o/S9lrmWIXK9I/AAAAAAAAABU/SNHd6AadHXU/s640/notreadeable.jpg" width="640" /></a></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: 13px;"><i><br />
</i></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: 13px;"><i>Implementation 2</i></span></div><div class="separator" style="clear: both; text-align: left;"><a href="http://4.bp.blogspot.com/_eTMyostu-7o/S9lscWnbgGI/AAAAAAAAABc/2DThfILxHp4/s1600/readeable.jpg" imageanchor="1" style="clear: left;"><img border="0" height="180" src="http://4.bp.blogspot.com/_eTMyostu-7o/S9lscWnbgGI/AAAAAAAAABc/2DThfILxHp4/s400/readeable.jpg" width="500" /></a></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><br />
</span></span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: 13px;">So what's the difference between the two? Basically the second example has split the functionality up into well named classes, functions and properties. In my opinion this is not a problem unless you're writing performance-critical low-level code. Especially not when disk or network resources are involved.</span></div><div><span class="Apple-style-span" style="font-family: Arial; font-size: 13px;"><br />
</span><br />
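In case the screenshots are hard to read, here is the same idea in miniature. All names and the file format are made up for illustration; both methods do exactly the same thing:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

public class Customer
{
    public string Name { get; set; }
    public string Status { get; set; }
}

public static class CustomerReport
{
    // Implementation 1: brute force, everything inline.
    public static List<string> ActiveCustomerNamesInline(string path)
    {
        var names = new List<string>();
        foreach (var line in File.ReadAllLines(path))
        {
            var fields = line.Split(';');
            if (fields[1] == "Active")
                names.Add(fields[0]);
        }
        return names;
    }

    // Implementation 2: the same behavior split into well named pieces.
    public static List<string> ActiveCustomerNames(string path)
    {
        return ReadCustomers(path).Where(IsActive).Select(c => c.Name).ToList();
    }

    private static IEnumerable<Customer> ReadCustomers(string path)
    {
        return File.ReadAllLines(path).Select(ToCustomer);
    }

    private static Customer ToCustomer(string line)
    {
        var fields = line.Split(';');
        return new Customer { Name = fields[0], Status = fields[1] };
    }

    private static bool IsActive(Customer customer)
    {
        return customer.Status == "Active";
    }
}
```

The second version reads as a sentence, and each piece carries its own name instead of a comment.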
<span class="Apple-style-span" style="font-family: Arial; font-size: 13px;">What I am trying to say is focus on making the code readable before you focus on getting it well formatted. And most likely, when you get the code readable, the need for standard formatting will be way lower.</span></div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-15084560873347930752010-03-08T15:14:00.000-08:002010-03-08T15:14:43.650-08:00Persistence ignoranceThis post is very much related to my post about <a href="http://ackenpacken.blogspot.com/2010/01/object-oriented-design-and-relational.html">relational databases and OO design</a>. Especially the domain model part. A problem with the way data access was presented in the last post was that it was drawn as a layer between the domain model and the database. Of course, when writing applications we need a layer between the database and whatever uses it. The thing is that the domain model needs more than simple CRUD. Multiple CRUD operations have to be batched and handled in transactions. This is when the concept of persistence ignorance makes its grand entrance. We need a way to cleanly handle storing/persisting our entities within transactions so that we can ensure that processes either execute successfully or roll back all changes.<br />
<br />
I talk about entities here, and of course you can use datasets and other types of data carriers. Since I mainly work in C#, which is an object oriented language, my preference is to work with POCOs.<br />
<br />
First off we need a way to persist entities somewhere. The reason why I use the word persist instead of storing to the database is that the location or type of storage is not relevant to the domain model. The only thing the domain model needs to know is that its entities are persisted somewhere so that it can get to them later.<br />
<br />
How do we create this magical piece of code that will handle all persistence and transactions for us? Well, we don't! Unless you feel the need to reinvent the wheel. Most ORM frameworks implement some kind of persistence ignorance, some container that can keep track of changes made to your entities and commit them to storage or roll back. There are some great frameworks out there that you can use, my personal favorite being NHibernate.<br />
<br />
That being said, you can make a mess with ORMs too. Some people talk about creating an application with Entity Framework or NHibernate. This is usually a sign that the source code is full of ORM queries and connection/transaction handling. Again, these are issues the domain model shouldn't have to deal with. It should focus on cleanly implementing its specified functionality, not on these kinds of technical details.<br />
<br />
Let's take a minute to look at transactions. Transactions live within what we call a transaction scope. A transaction scope starts when you start the transaction and ends when you commit or roll back. So what would be included in a transaction scope? Let's say we're writing some code that updates some information on a customer and on the customer's address, which is stored in a separate table. Would we want both those updates within a transaction scope? Indeed! Then what about that other function that does various updates and then calls the customer and address update function? Shouldn't we have a transaction scope wrapping all of that too? Well, of course, so let's add some transaction handling to this function too and make sure we support nested transactions for the customer and address function. And with that the whole thing starts giving off an unpleasant smell. We have just started cluttering our code with transactions left, right and center. Now what?<br />
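To make the smell concrete, here is roughly what that looks like with .NET's System.Transactions. The domain types and update methods are invented, and the ScopesOpened counter is only there to make the nesting visible:

```csharp
using System;
using System.Transactions;

public class Address { public string City { get; set; } }
public class Customer { public Address Address { get; set; } = new Address(); }

public class CustomerUpdater
{
    public int ScopesOpened; // just to make the clutter visible

    public void UpdateCustomerAndAddress(Customer customer)
    {
        // Wrap both updates so they commit or roll back together.
        // TransactionScope joins an ambient transaction if one already exists.
        using (var scope = new TransactionScope())
        {
            ScopesOpened++;
            UpdateCustomer(customer);
            UpdateAddress(customer.Address);
            scope.Complete();
        }
    }

    public void DoVariousUpdates(Customer customer)
    {
        // ...and the caller wraps again, and its caller wraps again...
        using (var scope = new TransactionScope())
        {
            ScopesOpened++;
            UpdateCustomerAndAddress(customer);
            scope.Complete();
        }
    }

    private void UpdateCustomer(Customer customer) { /* db update goes here */ }
    private void UpdateAddress(Address address) { /* db update goes here */ }
}
```

Every method that might be called standalone ends up opening its own scope "just in case", which is exactly the clutter described above.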
<br />
Let's take a look at the model again. We can visualize the domain model as a bounded context. It has its core and outer boundaries. Through its boundaries it talks to other bounded contexts (UI, other services, database...). Take the UI. The UI would call some method on the domain model's facade and set off the domain model to do something clever. My point being that the domain model never ever goes off doing something all of a sudden. Something outside its boundaries always requests or triggers it to do something. These requests and triggers are perfect transaction scopes. They are units of work. These units of work know exactly what needs to exist within the transactional scope.<br />
<br />
Unit of Work is an implementation pattern for persistence ignorance. We can use this pattern to handle persistence and transactions. Let's say that every process or event triggered on the domain model's boundary is a unit of work. This unit of work can be represented by an IUnitOfWork interface. To obtain an IUnitOfWork instance we use an IWorkFactory. By doing this we end up with a transaction handler which we have access to from the second our domain code is invoked until the call completes. How would a class like this look? Well, we need some way to notify it about entities we want it to handle. Let's call the method Attach and give it an entity parameter. Now we can pass every entity object we want to persist to the Attach method of the IUnitOfWork. We also need a way to remove entities from storage. We'll create a Delete method for that. If the current unit of work succeeds we need a way to let it know that all is good and then go ahead and complete the transaction. Let's call this method Commit. This gives us a simple interface for handling persistence.<br />
<br />
IUnitOfWork<br />
T Attach<T>(T entity)<br />
void Delete<T>(T entity)<br />
void Commit() <br />
<br />
<div></div><div>The code using it would look something like this.<br />
<br />
</div><div></div><div> using (IUnitOfWork work = _workFactory.Start())</div><div> {</div><div> MyEntity entity = new MyEntity();</div><div> work.Attach<MyEntity>(entity);</div><div> work.Commit();</div><div> }<br />
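To make the contract concrete, here is a toy in-memory implementation of that interface. In real code the implementation would delegate to an ORM session and the store would be a database; everything beyond the interface itself is illustrative:

```csharp
using System.Collections.Generic;

public interface IUnitOfWork
{
    T Attach<T>(T entity);
    void Delete<T>(T entity);
    void Commit();
}

// Toy implementation: tracks attached and deleted entities and only
// touches the "store" when Commit is called. Discarding the unit of
// work without committing simply discards the pending changes.
public class InMemoryUnitOfWork : IUnitOfWork
{
    private readonly List<object> _attached = new List<object>();
    private readonly List<object> _deleted = new List<object>();
    public readonly List<object> Store = new List<object>();

    public T Attach<T>(T entity)
    {
        _attached.Add(entity);
        return entity;
    }

    public void Delete<T>(T entity)
    {
        _deleted.Add(entity);
    }

    public void Commit()
    {
        foreach (var entity in _attached) Store.Add(entity);
        foreach (var entity in _deleted) Store.Remove(entity);
        _attached.Clear();
        _deleted.Clear();
    }
}
```

The important property is that nothing reaches the store until Commit, which is what gives the domain code its all-or-nothing behavior.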
<br />
Since we are using something like NHibernate in the background, we would have to retrieve entities from storage through NHibernate and then attach them to the IUnitOfWork. The IUnitOfWork of course uses NHibernate in the background for all persistence. Because of the nature of ORMs like NHibernate, it would make more sense to include entity retrieval through the IUnitOfWork too, since every entity retrieved is automatically change tracked by the NHibernate session. That would also let us abstract NHibernate better from our domain model. Let's add a few functions to the IUnitOfWork interface to accomplish this. We would need a GetList function to return a list of entities and a GetSingle function to return a single entity. GetSingle would have to be able to retrieve by identity, to take advantage of caching within the ORM framework, and also be able to take queries, where if using NHibernate we could use IDetachedCriteria. If you want complete abstraction you can make your own query builder which converts to NHibernate queries internally. Now the IUnitOfWork interface would look something like this:<br />
<br />
public interface IUnitOfWork<br />
{<br />
    T GetSingle<T>(object id);<br />
    T GetSingle<T>(DetachedCriteria criteria);<br />
    IList<T> GetList<T>(DetachedCriteria criteria);<br />
    T Attach<T>(T entity);<br />
    void Delete(object entity);<br />
    void Commit();<br />
}<br />
<br />
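Before wiring NHibernate in behind this interface, the persistence half of the contract can be exercised with a simple in-memory implementation. This is only a sketch: InMemoryUnitOfWork and its backing store are hypothetical names, and the query methods are left out to keep it self-contained.

```csharp
using System;
using System.Collections.Generic;

// The persistence half of the interface above (queries left out).
public interface IUnitOfWork
{
    T Attach<T>(T entity);
    void Delete(object entity);
    void Commit();
}

// Hypothetical in-memory implementation: nothing touches the backing
// store until Commit, mirroring transactional behavior.
public class InMemoryUnitOfWork : IUnitOfWork
{
    private readonly List<object> _attached = new List<object>();
    private readonly List<object> _deleted = new List<object>();
    private readonly ICollection<object> _store;

    public InMemoryUnitOfWork(ICollection<object> store)
    {
        _store = store;
    }

    public T Attach<T>(T entity)
    {
        _attached.Add(entity);
        return entity;
    }

    public void Delete(object entity)
    {
        _deleted.Add(entity);
    }

    public void Commit()
    {
        foreach (var entity in _attached)
            if (!_store.Contains(entity)) _store.Add(entity);
        foreach (var entity in _deleted)
            _store.Remove(entity);
        _attached.Clear();
        _deleted.Clear();
    }
}
```

Swapping InMemoryUnitOfWork for an NHibernate-backed implementation later does not change any calling code, which is the whole point of the abstraction.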
To obtain the active instance of IUnitOfWork from anywhere in the code we can create a WorkRepository class. We'll just have the IWorkFactory register the unit of work with the WorkRepository using the thread id as key. Doing that would enable us to issue the following command in whatever class we want to use the unit of work:</div><br />
<div>    public void SomeFunction()</div><div>    {</div><div>        ...</div><div>        var unitOfWork = WorkRepository.GetCurrent();</div><div>        var customer = unitOfWork.GetSingle<Customer>(customerID);</div><div>        ...</div><div>    }<br />
<br />
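The WorkRepository described above could be sketched like this, assuming the IWorkFactory calls Register when a unit of work starts and Unregister when it completes. All implementation details here are hypothetical, and IUnitOfWork is trimmed to its Commit method to keep the sketch self-contained.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Trimmed contract for the sketch.
public interface IUnitOfWork
{
    void Commit();
}

// Trivial implementation used in the example below.
public class NullUnitOfWork : IUnitOfWork
{
    public void Commit() { }
}

// Hypothetical registry: the IWorkFactory registers the unit of work it
// creates under the current thread id, and any code running on that
// thread can look it up later.
public static class WorkRepository
{
    private static readonly Dictionary<int, IUnitOfWork> _active =
        new Dictionary<int, IUnitOfWork>();
    private static readonly object _sync = new object();

    public static void Register(IUnitOfWork work)
    {
        lock (_sync)
            _active[Thread.CurrentThread.ManagedThreadId] = work;
    }

    public static void Unregister()
    {
        lock (_sync)
            _active.Remove(Thread.CurrentThread.ManagedThreadId);
    }

    public static IUnitOfWork GetCurrent()
    {
        lock (_sync)
            return _active[Thread.CurrentThread.ManagedThreadId];
    }
}
```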
How is that for a database layer? This is most likely all you need. Smack the repository pattern on top of that and your code will be pure, cleanly written domain functionality. Now go ahead and solve real-world problems, ignoring all the complexity that comes with persistence, transactions and retrieving entities.</div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-51672141074202335562010-02-27T04:21:00.000-08:002010-02-27T04:21:19.490-08:00DRY and reuse pitfallsDon't get me wrong here. I'm all about keeping my code reusable and DRY (Don't repeat yourself). What I want to pinpoint in this post are common pitfalls when reusing code. More the thought behind the decisions than the principle itself.<br />
<br />
First let's talk about our overall mindset when writing code. When developing applications we spend time researching and planning for the functionality before we start implementing. The solution is constantly evolving in our heads and being discussed among the project's team members. This thought process continues throughout planning and implementation. Because of human nature we'll always look ahead to upcoming needs like "Maybe we have to support X in the future? I'd better prepare for it now." or "The function I just wrote could support scenario Y if I just make these changes. We'll probably need it in the future so I'd better do it now". We're making compromises on our existing code based on assumptions. In my mind this is not reuse, it's code pollution. Reuse is something that happens when you have two implementations doing the exact same thing. DRY is not planning for the future. DRY is reusing functionality in your existing codebase.<br />
For this scenario a good solution would be writing your code using the SOLID principles. That way you'd know your code would be able to evolve with the uncertainties of the future.<br />
<br />
Another thing I come across quite often is SRP (Single Responsibility Principle) violations as a result of code reuse. Let's take the example where our application has a LogWriter handling writing to the error log. The class looks like this:<br />
<br />
<pre><span style="color: blue;">class</span><span style="color: black;"> LogWriter
{
</span><span style="color: blue;">private</span><span style="color: black;"> </span><span style="color: blue;">const</span><span style="color: black;"> </span><span style="color: blue;">string</span><span style="color: black;"> LOG_ENTRY_START = </span><span style="color: maroon;">"*************************"</span><span style="color: black;">;
</span><span style="color: blue;">private</span><span style="color: black;"> </span><span style="color: blue;">string</span><span style="color: black;"> _filename;
</span><span style="color: blue;">public</span><span style="color: black;"> LogWriter(</span><span style="color: blue;">string</span><span style="color: black;"> filename)
{
_filename = filename;
}
</span><span style="color: blue;">public</span><span style="color: black;"> </span><span style="color: blue;">void</span><span style="color: black;"> WriteLogEntry(</span><span style="color: blue;">string</span><span style="color: black;"> message, </span><span style="color: blue;">string</span><span style="color: black;"> stackTrace)
{
</span><span style="color: blue;">using</span><span style="color: black;"> (var writer = </span><span style="color: blue;">new</span><span style="color: black;"> StreamWriter(_filename, </span><span style="color: blue;">true</span><span style="color: black;">))
{
writer.WriteLine(LOG_ENTRY_START);
writer.WriteLine(message);
writer.WriteLine(stackTrace);
}
}
}</span>
</pre><pre><span style="color: black;">
</span></pre>Time goes by and for some reason a need arises to also support writing log entries to a database. Someone gets the clever idea to create an overload of the WriteLogEntry method that takes an extra boolean writeToDatabase parameter. Cramming two separate behaviors into a single class or function is not reusing code. It might feel like code reuse since you can use the same class for writing to both logs. The painful reality is that this is code rot, not code reuse.<br />
Again this is something that is better off solved by following the SOLID principles. If everything depended on an abstraction of the LogWriter, such as an ILogWriter interface, we could easily extend our solution with a new DatabaseLogWriter implementing ILogWriter.<br />
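A sketch of what that could look like. DatabaseLogWriter is hypothetical and stubbed with an in-memory list standing in for a database table, so the example stays self-contained; FileLogWriter is the original class, unchanged apart from the interface.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public interface ILogWriter
{
    void WriteLogEntry(string message, string stackTrace);
}

// The original file-based writer, now behind the abstraction.
public class FileLogWriter : ILogWriter
{
    private const string LOG_ENTRY_START = "*************************";
    private readonly string _filename;

    public FileLogWriter(string filename)
    {
        _filename = filename;
    }

    public void WriteLogEntry(string message, string stackTrace)
    {
        using (var writer = new StreamWriter(_filename, true))
        {
            writer.WriteLine(LOG_ENTRY_START);
            writer.WriteLine(message);
            writer.WriteLine(stackTrace);
        }
    }
}

// New behavior lives in a new class instead of a boolean flag.
// Stubbed with an in-memory list standing in for a database table.
public class DatabaseLogWriter : ILogWriter
{
    public List<string> Entries { get; } = new List<string>();

    public void WriteLogEntry(string message, string stackTrace)
    {
        Entries.Add(message + "\n" + stackTrace);
    }
}
```

Code that takes an ILogWriter dependency never needs to change when a new log target is added.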
<br />
The last subject I want to mention here is cross boundary reuse. This is a topic I touched on in an earlier post about <a href="http://ackenpacken.blogspot.com/2010/01/object-oriented-design-and-relational.html">OO design and relational databases</a>. The .NET community is jumping straight into using ORM these days. Which I think is fantastic! Whether it's NHibernate, LLBLGen or Entity Framework, we're using entities now, not datasets. I will use entities as an example of cross boundary reuse pitfalls. This leads me back to my previous post where I argue that the Domain Model/Business Logic and the UI serve two very different needs. Let's say we decide to create a Customer entity in the Domain Model that we also pass off the Domain Model's boundaries up to the UI. We probably end up having to clutter our entity with loads of information needed exclusively by the UI. In the UI there are needs like showing addresses, customer activities and various readable information. This is a lot like the LogWriter example, only on a higher level. This time we violate SRP to be able to reuse an entity cross boundary. Again this does not lead to greater code reuse but to greater code rot.<br />
In this case I would strongly recommend using DTOs for transferring information across boundaries. These DTOs can be created so that they perfectly fit the needs of their consumers.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-54123223722930328602010-01-22T15:03:00.000-08:002010-01-22T15:05:36.836-08:00Solution structuring and TDD in Visual StudioI'm currently working on solution structuring for a large system using TDD. To be able to work efficiently within the project we have defined a set of solution types with different purposes. This is the setup we ended up with:<br />
<br />
<b>Workbench</b><br />
TDD requires that the solutions you spend most of your time in build as fast as possible. To achieve this you have to keep the number of projects within the solution to a minimum. If you cannot get around having multiple dependencies for every project, binary references would be the way to go. We have chosen a more decoupled approach where every project depends upon abstractions, and interfaces are wired to implementations through a DI container. Contracts and interfaces are separated out into contract projects. This keeps project references down to just referencing interface/contract projects. The build output from this solution would not be able to run since the contract implementations are not referenced here. The solution contains all environment independent unit and integration tests for this workbench's projects. Because we practice TDD it's not important that the solution is able to run, only that the tests are able to pass. A workbench exists for every bounded context and standalone code library.<br />
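To illustrate the wiring without committing to a particular DI container, here is a hand-rolled registry. All names are hypothetical and a real project would use its container of choice; the point is only that the workbench projects reference the contract, never the implementation.

```csharp
using System;
using System.Collections.Generic;

// Contract project: the workbench projects reference only this.
public interface IMessageSender
{
    string Send(string message);
}

// Implementation project: not referenced by the workbench at all.
public class ConsoleMessageSender : IMessageSender
{
    public string Send(string message)
    {
        return "sent: " + message;
    }
}

// Minimal stand-in for a DI container: maps a contract type to a
// factory producing its implementation.
public static class Container
{
    private static readonly Dictionary<Type, Func<object>> _map =
        new Dictionary<Type, Func<object>>();

    public static void Register<TContract>(Func<TContract> factory)
        where TContract : class
    {
        _map[typeof(TContract)] = () => factory();
    }

    public static TContract Resolve<TContract>() where TContract : class
    {
        return (TContract)_map[typeof(TContract)]();
    }
}
```

Only the composition root (the test rig or CI solution) sees both sides and performs `Container.Register<IMessageSender>(() => new ConsoleMessageSender());`.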
<br />
<b>Test rig</b><br />
Even though most of our time is spent writing tests and code in the workbench solutions, we sometimes have the need to debug a running system. These solutions contain all code needed to run parts of the system, like hosting services, or even complete running systems. The test rig solutions are usually quite large and take time to build, but they're there for us to occasionally test the system locally.<br />
<br />
<b>Continuous Integration solution</b><br />
This solution contains all projects and all environment independent unit and integration tests for the complete system. This solution is part of the continuous integration build performed on check-in. Naturally CI runs all tests on every build.<br />
<br />
<b>System test solution</b><br />
We need a solution containing environment dependent tests requiring things like database access or a running system. The tests within this solution should run on every deployment to the test environment to make sure the system wiring is intact.<br />
<br />
As for deployment, both the test rig and continuous integration solutions contain enough of the system to be able to perform deployment. <br />
<div><br />
</div><div>It would really be interesting to see how other people are working on similar projects.<br />
</div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-23311660521959902732010-01-16T08:38:00.000-08:002010-01-16T16:35:32.132-08:00Object Oriented design and Relational DatabasesFor as long as I have worked with object oriented languages it has always been a bit awkward to work with relational databases. I have grown up in the Microsoft world with visual foxpro/vb/.net/access/ms sql server and then used ado, ado.net and now ORM. Seeing Udi Dahan's talk on "Command Query Responsibility Segregation" and reading Eric Evans' book "Domain Driven Design" made me connect some dots, leading me to write this post.<br /><br />So why do we use relational databases? If the only purpose of the database were to persist the domain model's entities we would have used an object oriented database, right? No transformation between tables and objects would be needed. Ok, given this scenario we're sitting here with our clean entity objects formed in a way that perfectly satisfies the needs for rule validation and process execution performed by the domain model. Brilliant, just the way we like it! Enter the UI. Now this is where it gets ugly. The user requires information to be presented in a way that is humanly readable. The domain model is perfectly happy knowing that the customer entity with id 1432 links to the address entity with id 65423. To the person using the application that would be a useless piece of information. The structure of the information needed by the user is often very different from the entities needed by the domain model. Especially when the user needs some kind of grouping of information or statistics. These types of complex queries want to gather information spanning multiple entities, joining them in ways unnatural to the domain model. This is where the relational database comes in and performs its magic. 
With a relational database we can easily perform complex queries joining multiple tables and tweaking information to fit our needs.<div><br /><div><img src="http://2.bp.blogspot.com/_eTMyostu-7o/S1JYvRMw-1I/AAAAAAAAABM/tJ9GhlMVDhE/s320/TraditionalView.jpg" style="cursor:pointer; cursor:hand;width: 320px; height: 290px;" border="0" alt="" id="BLOGGER_PHOTO_ID_5427498070028909394" /></div><div><br /></div><div>Above is the traditional way of looking at layered architecture. I find this way of viewing layered architecture a bit deceiving. So what about the issue described above with the UI getting in the mix? How do we often solve this problem? Well, sadly the domain model often gets to pay the price. Our clean entities are stretched and pulled and lumps of information are attached to them so that we can pass them on to the UI. These sins are committed in the name of Layered Architecture, though Layered Architecture is not to blame. It's just easy to interpret the picture above that way. Whether datasets or object entities, relations and bulks of information are added, complicating both the UI and the domain. We end up having to make compromises constantly because the entity no longer fits either the domain model or the UI in a good way.<div><div>There must be a better way! Well there is. We have already pinpointed two separate needs here. The domain model needs a database to persist its entities to, and the user needs to use the persisted information in a way that makes sense to him/her. Let's create two rules.</div><div><ol><li>Neither the domain model nor the UI should ever be aware of the complex structure of the database. </li><li>The domain model should never be aware of the complexity of its clients (UI in this example). </li></ol>Ok, that solves two problems. The domain model's entities will no longer be compromised by their consumers since their complexity can no longer affect them. 
They will also be unaware of any database complexity arising from the database being fitted to cope with multiple needs. Our data abstraction layer, using ORM or any other data access provider, will make sure of that.</div><div>Great, now we have a clean, readable and maintainable domain model again. So where does the UI retrieve its information from then? From the database of course. And to hide the database complexity we can use a view or a stored procedure that returns the information the UI needs, formatted exactly as it needs it to be. How cool is that? We just took advantage of the power of the relational database, which now hides its complexity from its users.</div><div><br />The UI now bypasses the domain model completely when retrieving its information. This means that the way the UI makes changes to the database has changed. Earlier the UI was provided with the domain model's entities, which it modified and sent back to the domain model for persisting. That is no longer possible since the domain model doesn't share its entities. What we want to do now is build an abstraction between the domain model and the UI. Call it an abstraction layer or service layer; naming is not important right now. The UI now needs to be able to persist information and execute processes through this abstraction. We need some defined messages that the UI can send to the abstraction. For example the SaveAddress operation in the abstraction needs to be able to take an AddressMessage containing address information. The abstraction then needs to use the message to persist its information using the domain model and its entities. 
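As a sketch, the SaveAddress operation could look like the following. The message and service names are hypothetical and persistence is stubbed with a dictionary so the example stays self-contained; real code would go through the domain model and the data abstraction layer.

```csharp
using System;
using System.Collections.Generic;

// The message the UI sends across the boundary; no domain entities leak out.
public class AddressMessage
{
    public int CustomerId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
}

// The abstraction the UI talks to, shaped by the UI's needs.
public interface ICustomerService
{
    void SaveAddress(AddressMessage message);
}

// Domain-side implementation: translates the message into work on the
// domain model's entities. Persistence is stubbed with a dictionary.
public class CustomerService : ICustomerService
{
    private readonly Dictionary<int, string> _addresses =
        new Dictionary<int, string>();

    public void SaveAddress(AddressMessage message)
    {
        // Real code would load the Customer entity, run domain rules
        // and persist through the data abstraction layer.
        _addresses[message.CustomerId] = message.Street + ", " + message.City;
    }

    public string GetAddressFor(int customerId)
    {
        return _addresses[customerId];
    }
}
```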
We then end up with a design where information flows like on the sketch below.</div><div><br /></div><div><img src="http://4.bp.blogspot.com/_eTMyostu-7o/S1H5SZRv92I/AAAAAAAAABE/mYrhYmSJFO4/s320/overview.jpg" style="cursor:pointer; cursor:hand;width: 306px; height: 320px;" border="0" alt="" id="BLOGGER_PHOTO_ID_5427393120376452962" /></div><div><br /></div><div>When creating services and abstractions it's important to think about responsibility. For instance, the consumer of the service should be responsible for its interfaces and messages while the service host should be responsible for the implementation of these interfaces. Let's look at the service layer between the UI and the model. The UI would define how the service methods and messages should look and the domain model would implement the service interface. Again, for the database access framework the UI and the domain model would be responsible for defining the interface while the data abstraction component/layer would implement this interface.</div><br />To conclude this post, the main takeaway is that a design like this should be viewed as three separate parts: UI, Domain Model and Data store. The implementations should respect this and make sure that each part focuses on the problem it's trying to solve.<div><ul><li>Domain Model - Handle the logic, rules and processes the application is supposed to handle through its specification. 
This is the heart of the application.</li><li>UI - Make sure that the user is able to work with the application's functionality in a way suited for the human mind.</li><li>Relational Database - Handle persisting the Domain Model's entities and provide the UI with human readable information.</li></ul></div></div></div></div>Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com2tag:blogger.com,1999:blog-6013734844294471364.post-38443616412846091172010-01-15T10:14:00.000-08:002010-01-16T08:27:44.572-08:00Please stop the madnessLooking at Microsoft’s approach to frameworks and libraries lately gives me the creeps. In frameworks like Entity Framework, Workflow Foundation and such, Microsoft relies heavily on graphical tools and generated code. My three main objections to this way of developing are: 1. It complicates the way of working 2. It complicates maintenance 3. It gives off the wrong signals to developers.<br /><br />The whys:<br /><br /><span style="font-weight:bold;">1. It complicates the way of working</span><br />As developers what is our main skill? Writing code, right? And of course with experience we have learned how to read code, and through reading we learn how to write cleaner, more readable code. Now suddenly we have to relate to the code we write, the designer UI and the code generated by the designer. On top of that, the code generated by the designer is often a messy blob of complex code. By using these tools we have complicated what should have been clean, readable code.<br />Another thing is writing tests for code using generated code. This usually ends up being a nightmare.<br /><br /><span style="font-weight:bold;">2. It complicates maintenance</span><br />What happens when requirements change? Well, you have to have the designer regenerate the code, don't you? Something that you could have done through refactoring tools you now have to do through the provided UI. 
Also you risk ending up in a scenario where a new version of the framework introduces upgrade issues that corrupt the generated code. Ok, that one was a bit unfair, but I'll still consider it an issue.<br /><br /><span style="font-weight:bold;">3. It gives off the wrong signals to developers</span><br />This is probably my biggest issue with the concept. The way I see it, the problems these designers try to solve are hiding a complex framework or making a framework usable for non-developers. First off, hiding a complex framework is treating the symptoms of bad design. I would much rather see them putting their effort into writing a high-quality, usable API for it. If the reason is that writing code for it is too much to ask of the developer, that's just sad. As stated earlier, one of the greatest skills a developer has is writing code. As for 'non-developers', we're talking about development frameworks, not applications like the office suite, which rightfully contains UI designers and code generator tools.<br /><br />I just needed to get this out of my system :p I guess my plea to Microsoft is: please stop the madness and get back to writing good, clean framework APIs that developers can write high-quality applications with. The .NET core proves that you know how to.Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0tag:blogger.com,1999:blog-6013734844294471364.post-72059276168462869832009-11-11T03:58:00.000-08:002009-11-18T06:06:26.255-08:00Tech-ed Europe 2009 has startedI thought I'd make a couple of notes here from Tech-Ed Berlin. It's now Tuesday and we have quite a bit left. I'm sitting in my hotel room with a fever.. *grin*. Hopefully German medicine will have me up and running by tomorrow.<br />For now I have three people that are must-sees if you have the possibility:<br /><br /><span style="font-weight:bold;">Udi Dahan</span><br />This guy is pure brilliance. 
He is an innovator and comes up with flexible, scalable solutions in a ridiculously simple manner. In addition he is a fantastic speaker.<br /><br /><span style="font-weight:bold;">Ralf Westphal</span><br />His session was about architecture. To say he is an expert in the area is an understatement. He has very firm opinions on how you should handle abstractions and single responsibility, and I love it. He gives it to you straight, and he has realized the fact that writing good software is hard work.<br /><br /><span style="font-weight:bold;">Magnus Mårtensson</span><br />Joined this guy for two sessions on flexible design and MEF. He has tons of experience in software design. He is extremely enthusiastic about his profession and through the sessions he shared some very cool solutions!Svein Arne Ackenhausenhttp://www.blogger.com/profile/12518116016333593695noreply@blogger.com0