Friday, 19 August 2011

AutoTest.Net 1.2 is out, containing a Visual Studio add-in!

We are proud to announce release 1.2 of AutoTest.Net today. In this version we have pulled all the Visual Studio add-in functionality from the ContinuousTests (Mighty-Moose) source back into AutoTest.Net. This means that you get all test and build feedback right in a feedback window inside Visual Studio, which speeds up the workflow drastically compared to using AutoTest.Net through the WinForms app or the console. In addition you get features like manually running tests by right-clicking in code, the Solution Explorer or the Class View. You will also be able to debug failing tests from the feedback window or directly by right-clicking the test.

Getting started
To get started, go to https://github.com/continuoustests/AutoTest.Net/archives/master and download AutoTest.Net-v1.2.0.exe, run the installer and then open a solution in Visual Studio. By default AutoTest.Net starts paused when used as an add-in, so it will not detect file changes right away. You can still do manual test runs, though. The feedback arrives in the feedback window, which you can open through AutoTest.Net->Feedback Window. To start detecting changes, click AutoTest.Net->Resume Engine. Now just modify one or more files, hit save, and things should start happening. If you want AutoTest.Net enabled by default, go to Global Configuration and change StartPaused to false (when changing the global or solution configuration you need to run AutoTest.Net->Restart Engine for the changes to take effect).

Workflow
When working with ContinuousTests or AutoTest.Net you no longer have to wait for your build or test runs to complete. This changes your workflow radically. At first it will be hard to let go of old habits, but after a little while not waiting for the build/test result becomes natural. Most of the time when you build and run tests the outcome is what you expect, so there is no need to wait for it, and since the feedback loop is shortened the likelihood of that only increases. If you get an unexpected outcome you are only a couple of Ctrl+Z presses or commented lines away from the previous state anyway.
There are a few shortcut keys that you will use very frequently.

  • Ctrl+Shift+J => Open/Toggle Feedback Window
    • Navigate Feedback Window with arrows or vim keys (j,k)
    • Hit enter in the list to go to build error or failing test
    • Hit the letter I to open the information window for extended information
    • Hit the letter D when positioned on a test to debug it
  • Ctrl+Shift+Y, A => Build solution and run all tests
  • Ctrl+Shift+S => Save all files in Visual Studio


The first thing you do when opening a solution is Ctrl+Shift+Y, A (build solution and run all tests) to get the current state of the system. From then on the feedback window works as the current system state. The status text in the window shows the last action, while the contents of the list show the current state. A failing test will stay in the window until it is fixed, even if the latest test run succeeded (because it ran tests in another assembly, for instance).
From now on you work incrementally: write code, Ctrl+Shift+S, continue writing code, fail, Ctrl+Shift+J and navigate to the failure, jump to code/debug, write code, rinse and repeat, rinse and repeat. That way you always work on emptying the feedback window, and when it is empty you write failing tests to fill it back up.
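
To make the loop concrete, here is a minimal sketch of the kind of failing test that fills the window. All names are hypothetical (PriceCalculator is made up for illustration, not part of AutoTest.Net), and it assumes NUnit, which the AutoTest.Net test runner picks up.

    // Illustrative only: hypothetical class and test, assuming NUnit.
    using NUnit.Framework;

    public class PriceCalculator
    {
        // Deliberately unfinished: the discount rule is not implemented yet,
        // so the test below fails and shows up in the feedback window on save.
        public decimal Total(decimal amount)
        {
            return amount;
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void Applies_ten_percent_discount_for_orders_over_100()
        {
            var calculator = new PriceCalculator();
            Assert.AreEqual(180m, calculator.Total(200m));
        }
    }

Save with Ctrl+Shift+S, watch the failure appear, implement the discount, save again, and the entry disappears from the window.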

We hope you will have fun with this product, and don't hesitate to report issues on the GitHub page https://github.com/continuoustests/AutoTest.Net. We love reported issues, as they are what makes the product better. If the issue is followed by a pull request we are talking total awesomeness.

Friday, 12 August 2011

The Moose Shows Some Community Love

As ContinuousTests moves along getting more features, we feel it's important to give some love to the community too. For those new to the subject: ContinuousTests is based on the source of the open source AutoTest.Net. AutoTest.Net is continuously improved and extended through the development of ContinuousTests, but we can do better than that. Let's show some real community love :)

Over the weekend we will push the Visual Studio add-in part of ContinuousTests back into the AutoTest.Net code, along with some essential features.

Feedback View
The feedback view from ContinuousTests inside Visual Studio, where you can easily navigate through build and test failures. You can also go to the source by clicking on stack trace lines, errors or the test itself.

Various Features

  • Right click inside code / solution explorer / Class View to run tests
  • Debug tests from the feedback window or by right clicking inside a test and choosing debug
  • Build all projects and run all tests
  • Pause and resume AutoTest.Net engine

Right now, using AutoTest.Net and using ContinuousTests are light years apart. With these changes implemented, AutoTest.Net will get the same efficient workflow ContinuousTests provides, and it will be a valid alternative for those of you who can live without running only the affected tests and all the other goodies ContinuousTests contains.


Get your antlers on!

The Moose team
Greg & Svein


Sunday, 24 July 2011

Monospace talk

I'm having a great time here in Boston. So many brilliant people. Keep up the good work!

Anyway, here's the code from my presentation. Both the messaging stuff and the graphing stuff are in there. Please note that the graph thing is more of a proof of concept, so it won't handle everything :)

Simple messaging sample with chain graphing

Monday, 21 February 2011

Are we producing value?

I read this discussion about using TDD/BDD in lean startups. They touch on some really interesting topics there, amongst others how TDD/BDD is great for client work but not so good for lean startups. Even though there is a lot of talk back and forth comparing client projects and lean startups, I am confident that a lean startup is almost no different from any other software project, except for one significant difference. Let me explain.

When working as developers with a client, we focus on delivering quality software that meets the customer's demands. The integrity of our craft plays a role, and we have our developer hat on. However, and here comes the interesting part: when we're dealing with a lean startup we are actually wearing both our stakeholder hat and our developer hat. Because we are wearing our stakeholder hat, we are forced to think in terms of value or ROI.
This is the real difference, not whether it's a client project or a startup. Or am I wrong? Is the client's stakeholder less interested in us producing high value than we would be in our own startup? And how is being agile so that our lean startup succeeds any different from achieving success together with the client's stakeholder through an agile process? Whoever the stakeholder is, the project starts with a vision, a vision we will mold and change in order to reach a valuable result in the end.

If we state that TDD/BDD is a waste of time in our lean startup but not in our client project, it sounds to me like we are stating that it's OK to waste our client's money but not our own. TDD/BDD is just an example here; I'm trying to shed some light on the intent behind how we determine value. Are we losing focus? Shouldn't we keep focusing on constantly producing value at the highest rate possible, regardless of the setting?

Am I way off here? What is your opinion?

Friday, 18 February 2011

A layer of abstraction or a layer of complexity

To me, abstraction rarely has to do with technical issues alone. When writing a layer of abstraction, the domain plays a huge role. I find this specific issue is often handled badly with ORMs. A relational database, especially, is rarely modeled in a way that reflects domain behaviors. I therefore cringe when I see ORMs used to generate an object mirror of the database schema. Unless it's a pure CRUD app, what do we really gain by doing this, other than database IntelliSense?

Greg Young threw me off a bit some time ago when I mentioned using a Unit of Work and he replied “why do you need that?”. I frankly didn't even understand the question. Why would I not need a Unit of Work? I now know the answer, and it makes me sad. I never knew what an aggregate was about, and frankly I never really paid attention to the problem I was trying to solve. I never included the domain in the actual solution.

When doing something like generating an object version of a relational database, we are saying “I have no idea what this thing is supposed to do, so I'll support everything”, instead of first getting an understanding of the domain behaviors and creating a persistence model that supports those scenarios.

Back to what Greg said. Why don't we need a Unit of Work? Because we have modeled true behaviors. A Unit of Work solves transactional issues; more precisely, it is there to handle transactions between aggregates. The reason we tend to use a Unit of Work is that we have paid no attention to what an aggregate should contain. If we have an object model that is generated from a database schema, our aggregates are defined by our database relations. Because of that, our aggregates are completely separated from our domain behaviors, resulting in most domain behaviors touching more than one aggregate, and through that we are forced to use a Unit of Work. However, if we take the time to model our aggregates based on actual behavior, there is no need for cross-aggregate transactions. What we are really doing when not paying attention to domain behaviors is creating an impedance mismatch between the problem we are trying to solve and our technical implementation.
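
To illustrate the difference, here is a minimal sketch in C# (all names are hypothetical, invented for this example) of an aggregate modeled around a behavior rather than around a database schema. Because the behavior only ever touches one aggregate, a single save is one transaction and no Unit of Work is needed to coordinate changes across objects.

    // Illustrative sketch only: hypothetical domain, not code from any project mentioned here.
    using System;
    using System.Collections.Generic;

    // The aggregate owns everything its invariant needs, so one behavior
    // changes one aggregate and nothing else.
    public class Order
    {
        private readonly List<OrderLine> _lines = new List<OrderLine>();

        public Order(Guid id) { Id = id; }

        public Guid Id { get; private set; }
        public IEnumerable<OrderLine> Lines { get { return _lines; } }

        // A domain behavior, not a bag of setters mirroring table columns.
        public void AddLine(string product, int quantity)
        {
            if (quantity <= 0)
                throw new ArgumentException("Quantity must be positive", "quantity");
            _lines.Add(new OrderLine(product, quantity));
        }
    }

    public class OrderLine
    {
        public OrderLine(string product, int quantity)
        {
            Product = product;
            Quantity = quantity;
        }

        public string Product { get; private set; }
        public int Quantity { get; private set; }
    }

    // Persistence per aggregate: saving the Order is a single transaction,
    // so there is no Unit of Work spanning several aggregates.
    public interface IOrderRepository
    {
        Order Get(Guid id);
        void Save(Order order);
    }

Contrast that with a schema-mirroring model, where adding a line would typically mean updating several mapped entities and relying on a Unit of Work to commit them together.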

If you want to learn more about these types of concepts, visit http://cqrsinfo.com/. And if you can find the time, Greg is doing an online session today (http://cqrsweekday.eventbrite.com/). There is still time! If you can attend you should; you will not regret it.

Tuesday, 15 February 2011

AutoTest.Net no longer in beta!

Finally AutoTest.Net has reached its first official release. There have been tons of contributions since the last beta release, and most of them have come from the Mighty-Moose project (www.continuoustests.com). Amongst other things, AutoTest.Net now has its own test runner that hosts NUnit and XUnit (more to come). This means there is no longer any need to manually configure these test runners. Also, the configuration as it stands today is set up for 64-bit/Visual Studio 2010, so if that's your setup, no configuration is needed for MSBuild or MSTest either.

New features

  • New hosted test runner (AutoTest.TestRunner)
  • MSpec is now supported thanks to contributions from Alexander Groß himself. For now it has to be configured manually, but support in our hosted runner is planned.
  • Support for either watching a directory or a solution
  • When watching a solution you can choose to build the solution instead of individual projects
  • Setting for running failed tests first


For a direct link to the new version, try this one: https://github.com/downloads/acken/AutoTest.Net/AutoTest.Net-v1.1.zip.

From what I can see there are actually quite a few users out there, yet there is still not much feedback. There has been some talk about both the WinForms and the console UI. If you have questions or want to discuss, join our group at http://groups.google.com/group/autotestnet. And I encourage anyone to report anything they want changed or added to the issues list on GitHub (https://github.com/acken/AutoTest.Net). Let's make this an awesome piece of software together!

-Svein Arne