
Saturday, November 3, 2012

The ContinuousTests workflow

The biggest issue I have seen people have with ContinuousTests is understanding the workflow in which it provides the highest value, especially among users who have previously used NCrunch. The two products take different approaches to the developer workflow. Let me run you through the thinking behind the ContinuousTests workflow.

No more running tests manually
ContinuousTests is all about building and running your tests in the background. You write your code and hit save when you need feedback. We build and run, in the background, only the tests affected by the change you made, so that you can keep working and focusing on the task at hand. With fast enough tests you will have the result of your previous save available before you hit save again. That way you end up in a highly effective workflow of writing code, saving, writing code, saving. Working like this eliminates the time previously spent building and testing. You will be surprised at how eliminating that short wait improves your focus.

Tests are the driving force behind writing code through TDD
This is the mantra running through ContinuousTests. It effectively means that the state of the system is defined by which tests are failing. A passing test is the norm, so knowing that a test passed is of no interest. What we want to know is which tests are not working right now. The heartbeat of ContinuousTests is the "Run Output Window" shown below, which at all times shows which tests are failing in the current solution.


The tests are your backlog
When I open a solution, the first thing I do is build and run all tests. When that has finished I know which tests are broken in the system, which will likely tell me where I left off yesterday. Since this is your backlog, you keep picking failing tests from this window until it is empty. Whenever a problem needs solving, turn it into a failing test and it will appear in this window.
There are two dimensions to this window: the text representing the state of your last test run, and the list representing the state of the system. This is why you can end up with a green last run and still have failing tests in the list. Working with tests in this kind of automatically updated backlog is something we have found very effective.

Risk margins
As mentioned, the "Run Output Window" is the heartbeat of ContinuousTests. It gives you the primary information needed to work efficiently with tests. However, writing code is a lot more than just flipping between red and green. Writing new code is fairly safe and highly effective through TDD, but changing existing code is a whole different chapter. Say you go ahead and change some existing code. Tests ran and it all came out green. Did you break anything? Well, green looks good. But the green is only as good as the quality of the tests being run, and whether or not they assert on the affected code the way you hoped they would. Boom!

The conclusion is that knowing whether the tests covering the affected areas pass is not enough. When changing code we need to know more about the environment in which that code exists. How close are the tests to the affected code? What are the dependencies between this code and its surroundings? These are the questions we are trying to answer through the risk margins and graphs. The color of the margin glyph tells you the calculated risk: green = low, yellow = medium, red = high, and then of course the dragon. The number inside the colored ring tells you how many tests cover the method. In addition there are graphs showing design-time and run-time coupling in the system. All this information is there to provide grounds for deciding how to go about changing existing code in a safe manner.


Conclusion
ContinuousTests aims to provide a workflow that channels your effort into writing code while working through your backlog of "tasks that need to be implemented". In addition it provides features that help you make the right decisions when changing existing code.

Wednesday, January 11, 2012

2 cups of messaging, 1 tablespoon of async and a whole lot of context

Just got some very interesting input from someone who had the patience to watch through my Monospace talk. First off, I was gutted after watching it myself: I did not manage to communicate the content anywhere close to the way I had intended. As painful as it was to watch, I now know what to work on. I'll practice harder and do a better job next time :) Let me try to make a bit more sense here.

Øyvind Teig left this comment on one of my previous blog posts. After reading through some of his material I decided to reply in a new post. And for the record, his papers are well worth reading!
I just saw the Monospace talk, and read some of your blog notes. Just as I have been stating that "synchronous" programming is what we should do, and you say about "asynchronous" the same, we both perhaps, might be easily misunderstood. The problems you told about in the lecture (shared state and ordering etc.) and the fact that messaging is naturally distributable, and that “the world is asynchronous”, could in fact advocate any way. Synchronous and asynchronous are tools in the toolbox. I work in embedded safety-critical systems, where sending pointers internally is bad (if they are still in scope after sending), message buffer overflow is bad (if it restarts the system and not handled at application level), and WYSIWYG-semantics is good. Not trusting 50% of the code, as an argument for doing things asynchronous is ok. But it has to do with layering, not paradigm. If the spec says you must wait, you wait. If not, you don’t. Also, do observe that asynchronous messages (and not rendezvous as in Ada or channels as in Go) makes the messages / sequence diagrams tilting – good for some, not wanted in other situations. What happens after you “send and forget”? Is this what I want to do, always? Of course not. And “waiting” does not mean “don’t do anything”. It has to do with design and also “parallel slackness”. I have blogged some about this, see http://oyvteig.blogspot.com/, and published some, see http://www.teigfam.net/oyvind/pub/pub.html. 
Øyvind Teig, Norway
As Øyvind points out, interpreting a statement through your own knowledge can easily lead to different outcomes depending on the person. From his perspective (and please correct me if I'm not interpreting you right) async functionality should be boxed in and modeled through, for example, what he calls the "Office Mapping Factor". And it makes perfect sense!
The basic idea behind this type of modelling is CSP (Communicating Sequential Processes). CSP defines separate processes/threads running in parallel, while communication between them happens synchronously. The receiver of the message has to acknowledge that it can/wants to receive the message, rather than using the fire-and-forget / queuing approach.

In my talk and blog posts I tend to talk more about fire-and-forget messaging: systems that run asynchronously without the various parts of the system being aware of the asynchronous behavior happening around them. Each part, or rather block of functionality, works synchronously while message consumption happens asynchronously. Quite the opposite of what CSP prescribes.

And again, the differences are all about context. Personally I write code almost exclusively for servers and desktop computers that have as much memory, as many cores and as much disk as desired. Øyvind, on the other hand, has 128KB of program memory and 32KB of external memory at his disposal, meaning all my assumptions about how stuff works go right out the window. Like he explains: yes, you can use a queue, but what happens when the queue is full and you get queue overflow (a scenario that had not even crossed my mind)? With 32KB of memory that is quite likely. The system would crash or halt, wouldn't it? How about spawning processes? Same thing. In this context the environment plays a huge role and the architecture needs to reflect it. How you spend your resources is critical.

Establishing a context around the problem you are trying to solve is crucial to how you end up implementing it. I quite enjoyed reading about how Øyvind reflects on the Office Mapping Factor. So how come I tend to approach asynchronous programming so differently from Øyvind?

Let's set the context for the kind of system that is usually on my mind when I talk about asynchronous systems: everyday applications. That is the easiest way I can put it. Your everyday Order/CRM/whatever application. Not an application that needs to process 10,000+ messages a second. Not an environment with restricted resources, nor one where the application consumes unnaturally huge amounts of resources.

I guess my approach can be split into two parts: the first being messaging and the second being asynchrony. Messaging is usually the reason why I use event/message based architectures; it does not come from a desire to build asynchronous systems. However, making a message based system asynchronous is trivial.


Why a message based system?

Abstraction, Coupling
High coupling is the perfect fertilizer for code rot. A change to any piece of code can break everything, because everything is coupled to everything else. How do you prevent this from happening? You introduce abstraction. There are multiple mechanisms that can decouple systems, like injection of abstract types (interfaces, abstract classes...), function injection (delegates) and messaging. Messaging takes abstraction further than the other mechanisms by adding a dispatcher that routes the message to the consumer. Because of this the sender depends only on the message dispatcher, not on the message handler. The message might be dispatched to multiple consumers without the producer knowing about it. Instead of calling a method handed to you through a contract (interface, abstract class, delegate), you produce an output others can consume.
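To make that concrete, here is a rough sketch of what such a dispatcher could look like. This is not the ContinuousTests or any particular framework's code, just an illustration with invented names (IConsumer, Dispatcher):

using System;
using System.Collections.Generic;

// Consumers implement one interface per message type they care about.
public interface IConsumer<T>
{
    void Consume(T message);
}

// The dispatcher is the only thing a publisher needs to know about.
public class Dispatcher
{
    private readonly Dictionary<Type, List<object>> _consumers = new Dictionary<Type, List<object>>();

    public void Subscribe<T>(IConsumer<T> consumer)
    {
        if (!_consumers.ContainsKey(typeof(T)))
            _consumers[typeof(T)] = new List<object>();
        _consumers[typeof(T)].Add(consumer);
    }

    public void Publish<T>(T message)
    {
        // Zero, one or many consumers may handle the message; the
        // publisher never knows and never needs to know.
        if (!_consumers.ContainsKey(typeof(T)))
            return;
        foreach (IConsumer<T> consumer in _consumers[typeof(T)])
            consumer.Consume(message);
    }
}

Adding a second consumer of a message is then just another Subscribe call, not a change to the producer.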

Producing real output
By making sure a code feature produces its output in the shape of a message, you have made that feature a stand-alone piece of code. It now makes sense beyond the contract that would otherwise have been passed to it. Being stand-alone, it becomes a natural place for integration: integration happens by consuming a message, or by producing a message that other participants can consume through the message dispatcher.
Open/Closed Principle: software entities should be open for extension but closed for modification. This is another principle whose violation easily leads to code rot. Since messages (real output) flow through the dispatcher, we can now extend message handling functionality without modifying existing code.

Testing
Since our small software blocks now produce real output, they are very testable. We can rely on testing input and output, focusing on: given the circumstances, what is the expected result (output)? Create a fake message dispatcher for your tests, publish a message for your feature to consume, and verify that the message produced by your feature is as expected.
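As a sketch of what such a test could look like (again with invented names: OrderPlaced, InvoiceCreated, CreateInvoiceHandler, FakeDispatcher; the test framework attribute is left out to stay framework-neutral):

using System;
using System.Collections.Generic;

public class OrderPlaced { public decimal Total; }
public class InvoiceCreated { public decimal Amount; }

// The handler only depends on this small dispatcher abstraction.
public interface IDispatcher { void Publish<T>(T message); }

public class CreateInvoiceHandler
{
    private readonly IDispatcher _dispatcher;
    public CreateInvoiceHandler(IDispatcher dispatcher) { _dispatcher = dispatcher; }

    public void Consume(OrderPlaced message)
    {
        _dispatcher.Publish(new InvoiceCreated { Amount = message.Total });
    }
}

// Fake dispatcher for tests: it just records whatever was published.
public class FakeDispatcher : IDispatcher
{
    public readonly List<object> Published = new List<object>();
    public void Publish<T>(T message) { Published.Add(message); }
}

// Given an OrderPlaced, an InvoiceCreated with the same amount comes out.
public class CreateInvoiceHandlerTests
{
    public void Produces_invoice_for_placed_order()
    {
        var dispatcher = new FakeDispatcher();
        var handler = new CreateInvoiceHandler(dispatcher);

        handler.Consume(new OrderPlaced { Total = 42m });

        var invoice = (InvoiceCreated)dispatcher.Published[0];
        if (invoice.Amount != 42m)
            throw new Exception("Expected the invoice amount to equal the order total");
    }
}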




Now to the asynchronous part


A message based system consists of participants, a message dispatcher and messages. The participants can send messages through the dispatcher and/or receive messages from it. Whether your message based system is asynchronous or not can be as trivial as the dispatcher calling _consumer.Consume(message); vs ThreadPool.QueueUserWorkItem((m) => _consumer.Consume(m), message);. That little detail makes the difference between multiple features running in parallel and features running sequentially.
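Here is that difference spelled out a little further, as an illustrative sketch rather than any actual dispatcher implementation (SimpleDispatcher is an invented name):

using System;
using System.Collections.Generic;
using System.Threading;

public class SimpleDispatcher<T>
{
    private readonly List<Action<T>> _consumers = new List<Action<T>>();

    public void Subscribe(Action<T> consumer) { _consumers.Add(consumer); }

    // Synchronous dispatch: features run sequentially and the
    // publisher waits for every consumer to finish.
    public void PublishSync(T message)
    {
        foreach (var consumer in _consumers)
            consumer(message);
    }

    // Asynchronous dispatch: each consumer is queued on the thread
    // pool and the publisher returns immediately (fire and forget).
    public void PublishAsync(T message)
    {
        foreach (var consumer in _consumers)
            ThreadPool.QueueUserWorkItem(m => consumer((T)m), message);
    }
}

The only difference between the two Publish methods is that one line, yet the second one is what turns every handler into potentially parallel code.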

Even though the code needed to make a system asynchronous is written in a couple of seconds, the impact of that change is enormous, both in what you can do and in what you need to make sure you don't do. Deciding to go down the road of asynchronously executing features requires that you carefully model how you want to handle state.

First, let's discuss what writing a message based system really means. So far we have looked at how a single handler produces and consumes messages. However, message based systems are often about message chains. As mentioned under messaging and abstraction, messages can split larger system features into smaller software entities that produce and consume the various messages in a chain. A chain could for instance be RequestDelivery->PickItemsFromStock->CheckoutItems->PickTerminal->DeliverToTerminal or DomainLogic->...->...->DbPersistence. Each step in the chain can produce and consume messages.
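A compressed sketch of the delivery chain above could look like this (message and handler names are invented for the example, and IDispatcher is the small one-method abstraction from the testing sketch earlier):

public class DeliveryRequested { public string OrderId; }
public class ItemsPicked { public string OrderId; }
public class ItemsCheckedOut { public string OrderId; }

// Each handler consumes one link in the chain and publishes the next.
public class PickItemsFromStock
{
    private readonly IDispatcher _dispatcher;
    public PickItemsFromStock(IDispatcher dispatcher) { _dispatcher = dispatcher; }

    public void Consume(DeliveryRequested message)
    {
        // Stock picking logic would go here.
        _dispatcher.Publish(new ItemsPicked { OrderId = message.OrderId });
    }
}

public class CheckoutItems
{
    private readonly IDispatcher _dispatcher;
    public CheckoutItems(IDispatcher dispatcher) { _dispatcher = dispatcher; }

    public void Consume(ItemsPicked message)
    {
        _dispatcher.Publish(new ItemsCheckedOut { OrderId = message.OrderId });
    }
}

None of the steps know about each other; the chain exists only because each step's output message happens to be the next step's input.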

When writing asynchronous message based systems I tend to divide my message handlers into a few categories based on how they relate to state. Keep in mind that each message handler might handle a message in parallel with other handlers.

Stateless Message Handlers
This is my first choice of handler, as it has no side effects. It only transforms the input into an output message without writing to any publicly exposed state. In practice (if it's a class) that means: create a new instance of the message handler, pass it the message and, when done, dispose of the handler.

State Message Handlers
These handlers deal with either accumulated internal state or external state. Either way they need to take the right precautions (locking etc.). If a handler only deals with external state, it can work in the same instantiate, handle and dispose manner as the stateless message handlers. If it deals with internal state, it needs to be a running "engine" that lives for as long as the scope of its contained state. Either way it deals with state that can be corrupted at any time, so I model these handlers CAREFULLY.
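As a sketch of the precautions involved (names invented for the example): a handler that accumulates internal state and may receive messages on several thread pool threads at once, so it guards that state with a lock.

using System.Collections.Generic;

public class PaymentReceived { public string CustomerId; public decimal Amount; }

public class CustomerTotals
{
    private readonly object _padlock = new object();
    private readonly Dictionary<string, decimal> _totals = new Dictionary<string, decimal>();

    public void Consume(PaymentReceived message)
    {
        // All reads and writes of the accumulated state happen inside
        // the lock, since consumers may run in parallel.
        lock (_padlock)
        {
            decimal current;
            _totals.TryGetValue(message.CustomerId, out current);
            _totals[message.CustomerId] = current + message.Amount;
        }
    }
}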

Queuing Handlers
As the name implies, these do not dispatch the message onwards like the others, but rather store it in some predetermined queue(s). I usually go for queuing handlers whenever I want a persisted step in the message chain, when a critical running "engine" needs to pick work based on its own task scheduler, or when I need to go cross-system with some sense of reliability.

At this point it is pretty obvious that how you choose to write asynchronous code deeply depends on the environment it is written for and the context of the problem being solved. My context, as mentioned, tends to lean towards normal business applications with moderate hardware requirements. Had the point of the system been to process a large number of messages per second, that would have affected the system's architecture. Compromises are often made between resources and maintainability. When we can focus less on maximum performance, we can focus more on writing the system in a way that is maximally maintainable.

Friday, August 19, 2011

AutoTest.Net 1.2 is out containing a Visual Studio Addin!

We are proud to announce release 1.2 of AutoTest.Net today. In this version we have pulled all the Visual Studio addin functionality from the ContinuousTests (Mighty-Moose) source back into AutoTest.Net. This means that you get all test and build feedback right in a feedback window in Visual Studio, which speeds up the workflow drastically compared to using AutoTest.Net through the WinForms app or console. In addition you get features like manually running tests from right-click in code, solution explorer and class view. You can also debug failing tests from the feedback window or directly by right-clicking the test.

Getting started
To get started, go to https://github.com/continuoustests/AutoTest.Net/archives/master and download AutoTest.Net-v1.2.0.exe, run the installer and then open a solution in Visual Studio. By default AutoTest.Net starts paused when used as an addin, so it will not detect file changes right away; you can still do manual test runs though. Feedback arrives in the feedback window, which you can open through AutoTest.Net->Feedback Window. To start detecting changes, click AutoTest.Net->Resume Engine. Now just modify a file or two, hit save, and things should start happening. If you want AutoTest.Net enabled by default, go to Global Configuration and change StartPaused to false (when changing the configs, global or solution, you need to run AutoTest.Net->Restart Engine for them to take effect).

Work flow
When working with ContinuousTests or AutoTest.Net you no longer have to wait for your builds or test runs to complete. This changes your workflow radically. At first it will be hard to let go of old habits, but after a little while not waiting for the build/test result becomes natural. Most of the time when you build and run tests the outcome is what you expect, so there is no need to wait for it, and since the feedback loop is shortened the likelihood of this increases. If you do get an unexpected outcome you are only a couple of ctrl+z's or comment-outs away from the previous state anyway.
There are a few shortcut keys that you will use very frequently:

  • Ctrl+Shift+J => Open/Toggle Feedback Window
    • Navigate Feedback Window with arrows or vim keys (j,k)
    • Hit enter in the list to go to build error or failing test
    • Hit the letter I to open the information window for extended information
    • Hit the letter D when positioned on a test to debug it
  • Ctrl+Shift+Y, A => Build solution and run all tests
  • Ctrl+Shift+S => Save all files in Visual Studio


The first thing you do when opening a solution is Ctrl+Shift+Y, A (build solution and run all tests) to get the current state of the system. From then on the feedback window acts as the current system state. The status text in the window represents the last action, while the content of the list represents the current state. A failing test will stay in the window until it's fixed, even if the current test run succeeded (it may have run tests in another assembly, for instance).
From here you work incrementally: write code, Ctrl+Shift+S, continue writing code, fail, Ctrl+Shift+J and navigate to the failure, jump to code / debug, write code, rinse, repeat. That way you always work on emptying the feedback window, and when it's empty you write failing tests to fill it back up.

We hope you have fun with this product. Don't hesitate to report issues on the GitHub page https://github.com/continuoustests/AutoTest.Net; we love reported issues, as they are what makes the product better. If the issue is accompanied by a pull request, we're talking total awesomeness.

Friday, August 12, 2011

The Moose Shows Some Community Love

As ContinuousTests moves along and gains more features, we feel it's important to give some love to the community too. For those new to the subject, ContinuousTests is based on the source of the open source AutoTest.Net. AutoTest.Net is continuously improved and extended through the development of ContinuousTests, but we can do better than that. Let's show some real community love :)

Over the weekend we will push the Visual Studio addin part of ContinuousTests back into the AutoTest.Net code, along with some essential features.

Feedback View
The feedback view from ContinuousTests inside Visual Studio, where you can easily navigate through build and test failures. You can also go to the source by clicking on stack lines, errors or the test itself.

Various Features

  • Right click inside code / solution explorer / Class View to run tests
  • Debug tests from the feedback window or by right clicking inside a test and choosing debug
  • Build all projects and run all tests
  • Pause and resume AutoTest.Net engine

Right now, using AutoTest.Net and using ContinuousTests are light years apart. With these changes implemented, AutoTest.Net gets the same efficient workflow ContinuousTests provides, and it becomes a valid alternative for those of you who can live without running only the affected tests and all the other goodies ContinuousTests contains.


Get your antlers on!

TheMoose team
Greg & Svein


Sunday, July 24, 2011

Monospace talk

I'm having a great time here in Boston. So many brilliant people. Keep up the good work!

Anyways, here's the code from my presentation. Both the messaging stuff and the graphing stuff are in there. Please note that the graph part is more of a proof of concept, so it won't handle everything :)

Simple messaging sample with chain graphing

Monday, February 21, 2011

Are we producing value?

I read this discussion about using TDD/BDD in lean startups. They touch on some really interesting topics there, amongst others how TDD/BDD is great for client work but not so good for lean startups. Even though there is a lot of back and forth between client projects and lean startups, I am confident that a lean startup is almost no different from any other software project, except for one significant difference. Let me explain.

When working as a developer with a client, we focus on delivering quality software that meets the customer's demands. The integrity of our craft plays a role, and we have our developer hat on. However, and here comes the interesting part: when we're dealing with a lean startup we are actually wearing both our stakeholder hat and our developer hat. Because we are wearing our stakeholder hat, we are forced to think in terms of value or ROI.
This is the real difference, not whether it's a client project or a startup. Or am I wrong? Is the client's stakeholder less interested in us producing high value than we would be in our own startup? And why is being agile so that our lean startup succeeds any different from achieving success together with the client's stakeholder through an agile process? Whoever the stakeholder is, the project starts with a vision, a vision we will mold and change to in the end reach a valuable result.

If we state that TDD/BDD is a waste of time in our lean startup but not in our client project, it sounds to me like we are stating that it's OK to waste our client's money but not our own. TDD/BDD is just an example here; I'm trying to shed some light on our intents behind determining value. Are we losing focus? Shouldn't we keep focusing on constantly producing value at the highest rate possible, regardless of the setting?

Am I way off here? What is your opinion?