I read this discussion about using TDD/BDD in lean startups. They touch on some really interesting topics, among others how TDD/BDD is great for client work but not so good for lean startups. Even though there is a lot of talk back and forth between client projects and lean startups, I am confident that a lean startup is almost no different from any other software project, except for one significant difference. Let me explain.
When working as a developer with a client we focus on delivering quality software that meets the customer's demands. Our integrity as craftsmen plays a role, and we have our developer hat on. However, and here comes the interesting part: when we're dealing with a lean startup we are actually wearing both our stakeholder hat and our developer hat. Because we wear our stakeholder hat, we are forced to think in terms of value, or ROI.
This is the real difference, not whether it's a client project or a startup. Or am I wrong? Is the client's stakeholder less interested in us producing high value than we would be in our own startup? And why is being agile to succeed in our lean startup any different from achieving success through an agile process together with the client's stakeholder? Whoever the stakeholder is, the project starts with a vision. A vision we will mold and change to, in the end, reach a valuable result.
If we state that TDD/BDD is a waste of time in our lean startup but not in our client project, it sounds to me like we are saying it's OK to waste our client's money but not our own. TDD/BDD is just an example here. I'm trying to shed some light on our intent when determining value. Are we losing focus? Shouldn't we keep our focus on constantly producing value at the highest rate possible, regardless of the setting?
Am I way off here? What is your opinion?
Monday, February 21, 2011
Friday, February 18, 2011
A layer of abstraction or a layer of complexity
To me abstraction rarely has to do with technical issues alone. When writing a layer of abstraction the domain plays a huge role. I find that this specific issue is often handled badly with ORMs. A relational database, especially, is rarely modeled in a way that reflects domain behaviors. I therefore cringe when I see ORMs used to generate an object mirror of the database schema. Unless it's a pure CRUD app, what do we really gain by doing this, other than database intellisense?
Greg Young threw me a bit off a while ago when I mentioned using Unit of Work and he replied “why do you need that?”. I frankly didn't even understand the question. Why would I not need a Unit of Work? I now know the answer, and it makes me sad. I never knew what an aggregate was about, and frankly I never really paid attention to the problem I was trying to solve. I never included the domain in the actual solution.
When generating an object version of a relational database we are stating: “I have no idea what this thing is supposed to do, so I'll support everything”. Instead we should first get an understanding of the domain behaviors and create a persistence model supporting those scenarios.
Back to what Greg said. Why don't we need a Unit of Work? Because we have modeled true behaviors. Unit of Work solves transactional issues; more precisely, it is there to handle transactions spanning multiple aggregates. The reason we tend to use a Unit of Work is that we have paid no attention to what an aggregate should contain. If our object model is generated from a database schema, our aggregates are defined by our database relations, and are therefore completely separated from our domain behaviors. The result is that most domain behaviors touch more than one aggregate, and through that we are forced to use a Unit of Work. If we instead take the time to model our aggregates based on actual behavior, there is no need for cross-aggregate transactions. What we are really doing when not paying attention to domain behaviors is creating an impedance mismatch between the problem we are trying to solve and our technical implementation.
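To illustrate, here is a minimal sketch with invented types (not from any particular codebase): when the aggregate boundary follows a behavior like "add a line to an order", the whole invariant lives inside one aggregate, and persisting it is a single atomic save with no Unit of Work coordinating multiple objects.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical aggregate modeled around behavior, not table relations.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    public Guid Id { get; private set; }

    public Order(Guid id) { Id = id; }

    // The behavior and its invariants are contained in one aggregate,
    // so saving it is one transactional call: repository.Save(order);
    public void AddLine(string product, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentException("Quantity must be positive");
        _lines.Add(new OrderLine(product, quantity));
    }
}

public class OrderLine
{
    public string Product { get; private set; }
    public int Quantity { get; private set; }

    public OrderLine(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }
}
```

Had the order lines lived in a separate aggregate defined by a foreign key, the same behavior would have required a transaction across two aggregates, and a Unit of Work to coordinate it.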
If you want to learn more about these types of concepts, visit http://cqrsinfo.com/. And if you have the time, Greg is doing an online session today (http://cqrsweekday.eventbrite.com/). There is still time! If you can attend you should! You will not regret it.
Tuesday, February 15, 2011
AutoTest.Net no longer in beta!
Finally AutoTest.Net has reached its first official release. There have been tons of contributions since the last beta release, most of them coming from the Mighty-Moose project (www.continuoustests.com). Among other things, AutoTest.Net now has its own test runner that hosts NUnit and XUnit (more to come). This means there is no longer any need to manually configure these test runners. Also, the configuration as it is today is set to 64-bit/Visual Studio 2010, so if that's your setup no configuration is needed for MSBuild or MSTest either.
New features
- New hosted test runner (AutoTest.TestRunner)
- MSpec is now supported thanks to contributions from Alexander Groß himself. For now it has to be manually configured but it's planned to get it supported in our hosted runner.
- Support for either watching a directory or a solution
- When watching a solution you can choose to build the solution instead of individual projects
- Setting for running failed tests first
For a direct link to the new version, try this one: https://github.com/downloads/acken/AutoTest.Net/AutoTest.Net-v1.1.zip.
From what I can see there are actually quite a few users out there, yet there is not much feedback. There has been some talk about both the WinForms and the console UI. If you have questions or want to discuss, join our group: http://groups.google.com/group/autotestnet. And I encourage anyone to report anything they want changed or added to the issues list on GitHub (https://github.com/acken/AutoTest.Net). Let's make this an awesome piece of software together!
-Svein Arne
Friday, September 17, 2010
Parallel takes you half the way
Parallel is the future! You can't go anywhere without hearing those words these days. And it's the truth: we all have multi-core machines now. Languages prepare for this by implementing features that make it easier to write code running in parallel. Microsoft, for instance, is pushing its Task Parallel Library with .NET Framework 4, including parallel for, parallel LINQ, Parallel.Invoke and more. We even have debugging tools in Visual Studio helping out when debugging our parallel code. We have it all at our fingertips. Writing parallel code has never been easier! So… why does it feel so complex and awkward? It's all there, the tools, the language features. We have all we could ask for, and still one crucial part has been left out. It cannot be fixed by tools or frameworks, because it is: our minds!
Bear with me for a second. We create software to solve problems. A problem is often described by something like "Given A, when something about A equals x, then our application should produce Z". We have a starting point A that, when satisfying some condition, should produce Z. We produce executable code that does what is necessary to get from A to Z.
Running the code, the execution path would look something like this: from a starting point it executes the code, step by step, function by function, going deeper into the call stack until the result Z is produced, and from there back to where it started.
To speed things up we can decide to make it parallel. What we do is fork the line going deeper into the call stack into several parallel sequences. Before returning to where it started, those parallel sequences are joined back into one sequence. We are following the exact same concept as before; we have just implemented a different concept within it: executing multiple sequences of steps in parallel. For some scenarios this is a perfectly viable solution. It's viable for the times we want synchronous behavior.
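The fork/join shape described above can be sketched like this, using the Task Parallel Library mentioned earlier (a minimal illustration, not production code):

```csharp
using System;
using System.Threading.Tasks;

class ForkJoinExample
{
    static void Main()
    {
        // Fork: the single execution path splits into parallel sequences.
        // Join: Parallel.Invoke blocks until all of them have finished,
        // which is exactly what makes the overall flow synchronous again.
        Parallel.Invoke(
            () => Console.WriteLine("Working on part 1"),
            () => Console.WriteLine("Working on part 2"),
            () => Console.WriteLine("Working on part 3"));

        // We only get here after every branch has completed.
        Console.WriteLine("All parts joined - back to one sequence");
    }
}
```

Note how the caller still waits at the join point; the work is parallel, but the overall behavior remains synchronous.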
Most of the time we end up writing code like this. The problem, though, is that parallel or not, this implementation is synchronous. Parallel execution is NOT synchronous; we make it synchronous by joining the parallel sequences back into a single sequence as they finish.
If the future is parallel, the future is also asynchronous. When entering a parallel future we simply cannot keep writing our software in a synchronous manner. We need to reeducate our minds so that asynchronous is the norm and synchronous is the exception. Frankly, it shouldn't be all that difficult. Our whole life is based around asynchronous behavior. After sliding a pizza into the oven we don't sit and stare at it until it's done. We set a timer, or check the time now and then, and go do other things. We do it all the time. It's natural. Forcing parallel behavior into a synchronous setting, on the other hand, is not, and it is going to be painful.
Some people have gone down this route already. Greg Young is one of them with his architectural pattern CQRS. If you have not seen his talk on the subject I strongly recommend you look into it. He recorded one of his online sessions for everyone to enjoy; you can download it here. It is a full-day session, but it's worth every minute.

Like the title states, parallel takes us only half the way. The rest is up to us. We need to change the way we think, so that we design our systems and write our code in a way that suits this parallel future. When solving a problem through software we automatically give the problem a synchronous setting. That is what we have been taught; this is the way it's been for a long time. We have become technically challenged. We have become so good at solving asynchronous problems in a synchronous fashion that it even feels natural. Given the same problem in a real-life scenario, we would probably have solved it asynchronously.
Let's imagine a CEO wanting to keep track of her contacts. How would we solve this problem? Oh, wait. You just had the following thought, didn't you: "A screen where you can type contact information, then press save, and it'll save the contact in a database, and if something fails, throw an error back to the user". Our technically challenged minds just made the scenario synchronous. It became type, save, confirm. If we ask the CEO what she usually does she'd say: "Well, I usually tell people to leave their contact details with my secretary". And that, my friends, is a concept called forking! She just passed the task off to someone else so that she could go do other things. She made the task completely asynchronous. How can this be? What if the secretary throws an exception (forgets), what if there's already a contact with the same name, what if, what if…
So why does this work in the real world? Because of trust, and the fact that context is taken into consideration. The CEO trusts the secretary to store the contact details. She also knows that if something goes wrong she'll just ask the secretary to look up the information a second time. Even though it fails now and then, the overall goal is met: the CEO spends less time keeping track of contact information. Through trust and context the matter was handled efficiently, with sufficient error handling.
Most scenarios aren't type, save, confirm. We just make them type, save, confirm because that is how we usually solve everything.
When writing software in a "wait for confirmation" manner we effectively state that our software will fail to do what it is supposed to do more than 50% of the time. For waiting to be worthwhile, the response would have to be something unexpected most of the time. Either that, or every other task we could possibly perform would have to depend on this task's response. When we accept that failure is the exception, not the norm, we can start thinking about error handling in a different way. If the failure rate is very low but a single failure is extremely costly business-wise, we could even set up manual handling for failures and use fire and forget. If that is the right solution for the business, then that is the right solution technically.
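A fire-and-forget version of the contact scenario could be sketched like this (the names are invented for illustration; rare failures are recorded for manual follow-up instead of blocking the caller):

```csharp
using System;
using System.Threading.Tasks;

class FireAndForgetExample
{
    static void Main()
    {
        // Hand the task off, like the CEO hands it to the secretary,
        // and immediately go do other things.
        Task.Factory.StartNew(() => SaveContact("Ada Lovelace"));
        Console.WriteLine("Back to work - not waiting for confirmation");
    }

    static void SaveContact(string name)
    {
        try
        {
            // Hypothetical persistence call goes here.
        }
        catch (Exception ex)
        {
            // Failure is the exception, not the norm, so instead of
            // pushing an error back to the user we record it for
            // manual handling.
            Console.WriteLine("Manual follow-up needed: " + ex.Message);
        }
    }
}
```

In a real system the failure would go to a log or a queue rather than the console, but the shape is the same: the caller never waits for confirmation.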
When working on AutoTest.NET I discovered that even immediate feedback isn't really immediate. AutoTest.NET is a continuous testing tool for .NET. The workflow when using it would be:
- Write code
- Save all changes
- AutoTest.NET builds and runs tests
- AutoTest.NET outputs red or green
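The save-build-test loop above can be sketched with a file watcher. This is a simplified illustration, not AutoTest.NET's actual implementation, and the build/test step is a placeholder:

```csharp
using System;
using System.IO;

class ContinuousTestSketch
{
    static void Main()
    {
        // Watch the source directory for saved changes.
        var watcher = new FileSystemWatcher(@"C:\MyProject", "*.cs");
        watcher.IncludeSubdirectories = true;
        watcher.Changed += (sender, e) =>
        {
            // On save: build, run the tests, report red or green.
            Console.WriteLine("Change detected: " + e.Name);
            BuildAndTest();
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching... press Enter to quit");
        Console.ReadLine();
    }

    static void BuildAndTest()
    {
        // Placeholder: shell out to MSBuild and a test runner such as
        // NUnit here, printing "green" when everything passes and
        // "red" otherwise.
    }
}
```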
Its purpose is to provide immediate feedback on the state of the code. My initial workflow included waiting from the moment I had pressed "save all" until I could see the output I expected. But as stated earlier, for that waiting to be viable I would have to produce flawed code on more than 50% of my saves. And I don't. The response is usually what I expect, so it's more efficient for me to keep on working while AutoTest.NET does its job. As I save approximately every 30 seconds, a 20-second delay can still be considered immediate feedback. It's immediate because of context: I only need to know by my next save point, which is 30 seconds later. When it yields unexpected results, the solution is only a couple of Ctrl+Z's away.
Given the context even immediate feedback can be dealt with asynchronously.
To go all the way we need to embrace asynchronous behavior together with parallel programming. Make asynchronous the default behavior. That's what we do every day in real life. Terms like eventual consistency and asynchronous behavior aren't even something we consider when we go about our business, because in real life that is the way everything works.
Sunday, August 22, 2010
Testing through interfaces
At Vagif Abilov's BBQ, while talking about testing, Greg Young said something like "I wonder what tests would look like if we wrote tests against interfaces instead of their implementation". And really, what would they look like?
The scenario was that you have an interface with multiple implementations, and you want to write tests against the interface, testing all implementations. This would seriously reduce the number of tests you have to write. So let's give it a try.
The first thing we need to keep in mind is that we're writing the tests against something abstract. This means we don't really know what to expect. When passing x into the various implementations, it's not certain that all of them will answer y. Hopefully only one of them would answer y, where x is the act and y is the assert; if not, there would be multiple implementations doing the exact same thing, which would kind of defeat the purpose. Off the top of my head that leaves us with the following scenarios:
X stays the same while Y varies
This would be something like an ICalculator. The ICalculator would have implementations like DecimalCalculator and OctalCalculator. When running tests here we would end up with results like this:
- DecimalCalculator: 7*7 = 49
- OctalCalculator: 7*7 = 61
Which means that when writing these types of tests we need to be able to assert on specific values per implementation.
X varies while Y stays the same
Let's imagine we have some type of parser taking XML in and returning a list of objects of a certain type. This would typically mean one implementation per XML schema, while the output could be the same. So writing these types of tests, we'll have varying code for passing parameters while the assert stays the same.
OK, that wasn't too bad. We could probably make this look clean. Now on to some other aspects we'll have to deal with.
Dependencies
With the right (wrong) implementation, faking dependencies might be a hellish thing with this solution. I guess that's a good thing, as it forces us not to make a mess of it. But we still need some way of setting up dependencies for the implementations.
Resolving implementations
We need a way to retrieve all implementations of an interface. This is something we do all the time with DI containers, so any DI container would provide what we need here. We could probably also do something smart to inject the faked dependencies we'll need for each implementation.
With this in mind let's set up a test for the calculator scenario. The first thing I did was creating a class for handling the plumbing. Right now this class takes care of resolving all implementations of the chosen interface, running the test on each implementation and performing specified assertions. My test ended up looking like this:
[Test]
public void Should_multiply()
{
    var tester = new InterfaceTester<ICalculator>();
    tester.Test(c => c.Multiply(7, 7))
        .AssertThat<DecimalCalculator>().Returned(49)
        .AssertThat<OctalCalculator>().Returned(61);
}
I'm quite happy with that. This test is both extendable and readable. Now let's do the same for the string parser scenario. I'll just extend the plumbing class used in the previous example to handle varying input parameters. The implementation ended up looking like this:
[Test]
public void Should_parse_number()
{
    var tester = new InterfaceTester<INumberParser>();
    tester
        .Test<XmlParser>(x => x.Parse("<number>14</number>"))
        .Test<StringParser>(x => x.Parse("14"))
        .Returned(14);
}
I can't say I'm as happy with this one, as the complete delegate is duplicated for both implementations, not just the part that differs. Still, it's a huge simplification compared to writing a full test suite per implementation.
I guess I'll leave it at that for now. What this doesn't cover is setting up dependencies, which will likely complicate the implementation a bit. After doing this implementation I can really see the value of writing my tests like this. It would save me time and energy and leave me with a cleaner, simpler test suite. The implementation ended up being fairly simple. Initial conclusion: writing tests against interfaces is a good idea!
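For the curious, here is a minimal sketch of what such a plumbing class might look like. This is an assumed reconstruction, not the actual source: it resolves implementations via reflection rather than a DI container, and covers only the fixed-input case from the first example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical reconstruction of the InterfaceTester plumbing.
public class InterfaceTester<TInterface>
{
    private readonly Dictionary<Type, object> _results = new Dictionary<Type, object>();

    // Run the same call against every implementation found in the assembly.
    public InterfaceTester<TInterface> Test(Func<TInterface, object> call)
    {
        var implementations = typeof(TInterface).Assembly.GetTypes()
            .Where(t => typeof(TInterface).IsAssignableFrom(t)
                        && !t.IsInterface && !t.IsAbstract);
        foreach (var type in implementations)
            _results[type] = call((TInterface)Activator.CreateInstance(type));
        return this;
    }

    public Assertion AssertThat<TImplementation>() where TImplementation : TInterface
    {
        return new Assertion(this, _results[typeof(TImplementation)]);
    }

    public class Assertion
    {
        private readonly InterfaceTester<TInterface> _tester;
        private readonly object _actual;

        public Assertion(InterfaceTester<TInterface> tester, object actual)
        {
            _tester = tester;
            _actual = actual;
        }

        // Returning the tester allows chaining further AssertThat calls.
        public InterfaceTester<TInterface> Returned(object expected)
        {
            if (!expected.Equals(_actual))
                throw new Exception("Expected " + expected + " but got " + _actual);
            return _tester;
        }
    }
}
```

Supporting per-implementation inputs (the `Test<XmlParser>(…)` overload) and faked dependencies would mean storing a call per implementation type and resolving constructor arguments, which is where a DI container would earn its keep.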
I'd love to hear your thoughts on this! And if you're interested in the full source code let me know and I'll upload it to github or something.
Saturday, August 21, 2010
New challenges, starting my own business
Times are changing, and in a bit more than a week I'll be starting up a company with four others. I was asked to join them, and after some thinking I said yes. Now I'm throwing myself off the cliff with a firm belief that wings will grow before I hit the ground :)
So what is the company about? Our goal is to provide skilled, experienced people specialized in their fields. All five of us have our own specialties that go well together, from developer to analyst. From a business point of view, we have deep knowledge of enterprise software, especially oil/energy trading and business applications. The company is named Contango Consulting AS.
I am very excited about realizing my dream of being responsible for my own future, trying to do my own thing. It's going to be tough, and I'll probably learn more in a year than I have up to now. I'm also certain this blog will be affected, hopefully with my learnings ending up here for others to enjoy. And of course, if any of you are in need of a skilled .NET developer / architect / trainer, don't hesitate to let me know :) You can view more detailed information about me here.
-Svein Arne
Tuesday, August 10, 2010
Nu on Linux (Debian-based systems)
There's a lot of activity these days on the Nu project. In short, the Nu project is for .NET what gems are for Ruby; in fact, it uses gems. In my last post I talked about tooling and where I wish tooling would go in the future. Package management is definitely one of the tools that will help us get there. If you want to read up on Nu, Rob Reynolds has some good posts explaining what it is and how to use it. What I'm going to go through here is just what is needed to get it working on Linux. There are only a few small tweaks needed.
If you don't have Ruby on your system already you'll need it.
sudo apt-get install ruby-full build-essential

Next you'll need to get gems.
sudo apt-get install rubygems

OK, now we have all the dependencies required to get going. Now let's get Nu. Nu is installed through gems like this (NB! make sure you install Nu with root privileges or it won't work).
sudo gem install nu

Good, now we have all we need to start using Nu. Just one thing: when running Nu I got an error saying "no such file to load -- FileUtils". The reason is that Linux has a case-sensitive file system, and the file is called fileutils.rb, not FileUtils.rb. If you also run into this issue, go to the folder containing fileutils.rb (something like /usr/lib/ruby/1.8) and create a symlink by running the following command.
sudo ln -s fileutils.rb FileUtils.rb

Now on to some real Nu action. Let's say we have a project and we want to start using NHibernate. All you have to do is go to the root folder of your project and type this command.
nu install nhibernate

You'll get a couple of warnings, but it's going to do what it's supposed to. When it's done you can go into the newly created lib directory and see NHibernate and its dependencies in there. Neat, huh!?
