Bear with me for a second. We create software to solve problems. A problem is often described by something like "Given A, when something about A equals x, then our application should produce Z". We have a starting point A that, when it satisfies some condition, should produce Z. We write executable code that does what is necessary to get from A to Z.
When we run the code, the execution path looks something like this: starting from an entry point, it executes step by step, function by function, going deeper into the call stack until the result Z is produced, and from there it returns back to where it started.
To speed things up we can decide to make it parallel. We fork the line going deeper into the call stack into several parallel sequences, and before returning to where it started out, those parallel sequences are joined back into one. We are following the exact same concept as before; we have just implemented a different concept within it: executing multiple sequences of steps in parallel. For some scenarios this is a perfectly viable solution. It's viable for the times we want synchronous behavior.
Most of the time we end up writing code following this fork/join pattern. The problem, though, is that parallel or not, this implementation is synchronous. Parallel execution is NOT inherently synchronous; we make it synchronous by joining the parallel sequences back into a single sequence as they finish.
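The fork/join shape described above can be sketched like this. This is a minimal illustration, not code from the post: `work`, `solve`, and the four-way split are assumptions made for the example.

```python
# Fork/join sketch: split the work into parallel sequences, run them,
# and join the partial results back into one before returning.
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Stand-in for one parallel sequence of steps.
    return sum(chunk)

def solve(a):
    chunks = [a[i::4] for i in range(4)]       # fork into 4 sequences
    with ThreadPoolExecutor(max_workers=4) as ex:
        partials = list(ex.map(work, chunks))  # run them in parallel
    return sum(partials)                       # join back into one result

print(solve(range(10)))  # prints 45 -- same answer as the sequential version
```

Note that the caller still blocks until the join completes: the inside is parallel, but from the outside this is exactly as synchronous as the sequential version.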
If the future is parallel, the future is also asynchronous. When entering into a parallel future we simply cannot keep writing our software in a synchronous manner. We need to reeducate our minds so that asynchronous is the norm and synchronous is the exception. Frankly, it shouldn't be all that difficult. Our whole life is based around asynchronous behavior. After sliding a pizza into the oven we don't sit and stare at it until it's done. We put on a timer, or check the time now and then, and go do other things. We do it all the time. It's natural. Forcing parallel behavior into a synchronous setting, on the other hand, is not, and it is going to be painful.
Some people have gone this route already. Greg Young is one of them with his architectural pattern CQRS. If you have not seen his talk on the subject I strongly recommend you look into it. He recorded one of his online sessions for everyone to enjoy. You can download it here. It is a full day session but it's worth every minute.

Like the title states, parallel takes us only half the way. The rest is up to us. We need to change the way we think so that we design our systems and write our code in a way that suits this parallel future. When solving a problem through software we automatically give the problem a synchronous setting. That is what we have been taught. This is the way it's been for a long time. We have become technically challenged. We have become so good at solving asynchronous problems in a synchronous fashion that it even feels natural. Given the same problem in a real-life scenario, we would probably have solved it asynchronously.
Let's imagine a CEO wanting to keep track of her contacts. How would we solve this problem? Oh, wait. You just had the following thought, didn't you: "A screen where you can type contact information, then press save, and it'll save the contact to a database, and if something fails it throws an error back to the user". Our technically challenged minds just made the scenario synchronous. It became type, save, confirm. If we ask the CEO what she usually does, she'd say: "Well, I usually tell people to leave their contact details with my secretary". And that, my friends, is a concept called forking! She just passed the task off to someone else so that she could go do other things. She made the task completely asynchronous. How can this be? What if the secretary throws an exception (forgets), what if there's already a contact with the same name, what if, what if...
So why does this work in the real world? Because of trust, and the fact that context is taken into consideration. The CEO trusts the secretary to store the contact details. She also knows that if something goes wrong she'll just tell the secretary to look up the information a second time. Even though it fails now and then, the overall goal is met: the CEO spends less time keeping track of contact information. Through trust and context the matter was handled efficiently, with sufficient error handling.
Most scenarios aren't type, save, confirm. We just make them type, save, confirm because that is how we usually solve everything.
When writing software in a "wait for confirmation" manner we effectively state that our software will fail to do what it is supposed to do more than 50% of the time. For waiting on the response to be worthwhile, the response would have to be something unexpected most of the time. Either that, or every other task we could possibly perform would depend on this task's response. When we accept that failure is the exception, not the norm, we can start thinking about error handling in a different way. If the failure rate is very low, but a single failure is extremely costly in business value, we could even set up manual handling for failures and use fire and forget. If that is the right solution for the business, then that is the right solution technically.
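A fire-and-forget setup with manual failure handling, as suggested above, could look something like this. Everything here is an illustrative assumption: `save_contact`, the in-memory `manual_follow_up` list (a stand-in for whatever queue or log a person actually reviews), and the validation rule.

```python
# Fire and forget: the caller hands the task off and moves on.
# Failures are routed to a human follow-up queue, not back to the caller.
from concurrent.futures import ThreadPoolExecutor

manual_follow_up = []  # stand-in for a queue that a person reviews
executor = ThreadPoolExecutor(max_workers=2)

def save_contact(contact):
    # Illustrative task; assume it persists the contact somewhere.
    if "name" not in contact:
        raise ValueError("contact has no name")

def _record_failure(future):
    error = future.exception()
    if error is not None:
        # The rare, costly failure gets manual handling later.
        manual_follow_up.append(str(error))

def submit_contact(contact):
    future = executor.submit(save_contact, contact)
    future.add_done_callback(_record_failure)
    # No return value and no waiting: the caller is already doing other things.
```

The secretary analogy maps directly: `submit_contact` is the CEO handing over the details, and `manual_follow_up` is the "ask again if something went wrong" conversation.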
When working on AutoTest.NET I discovered that even immediate feedback isn't really immediate. AutoTest.NET is a continuous testing tool for .NET. The workflow when using it would be:
- Write code
- Save all changes
- AutoTest.NET builds and runs tests
- AutoTest.NET outputs red or green
Its purpose is to provide immediate feedback on the state of the code. My initial workflow involved waiting from the moment I pressed "save all" until I could see the output I expected. But as stated earlier, for waiting to be viable I would have to produce flawed code on more than 50% of my saves. And I don't. The response is usually what I expect, so it's more efficient for me to keep working while AutoTest.NET does its job. Since I save approximately every 30 seconds, a 20-second delay counts as immediate feedback. It's immediate because of context: I only need to know by my next save point, which is 30 seconds later. When it yields unexpected results, the solution is only a couple of Ctrl+Z's away.
Given the context even immediate feedback can be dealt with asynchronously.
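That feedback loop can be sketched as a non-blocking check at the next save point. This is not AutoTest.NET's actual API; `build_and_run_tests` is an illustrative stand-in, and the sleeps compress the ~20-second run and ~30-second work interval from the text.

```python
# Kick off the build-and-test run in the background, keep working,
# and only look at the result at the next save point.
from concurrent.futures import ThreadPoolExecutor
import time

executor = ThreadPoolExecutor(max_workers=1)

def build_and_run_tests():
    time.sleep(0.05)           # stand-in for a ~20 second build-and-test run
    return "green"

pending = executor.submit(build_and_run_tests)  # "save all" kicks off a run

time.sleep(0.5)                # stand-in for ~30 seconds of writing code

# Next save point: the earlier run has long since finished,
# so checking the result does not block.
if pending.done():
    print(pending.result())    # "green": keep going; "red": a couple of Ctrl+Z's
pending = executor.submit(build_and_run_tests)  # and the next run starts
executor.shutdown(wait=True)
```

The key design choice is that the editor never waits on `pending`; it only polls `done()` when it happens to be at a save point anyway.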
To go all the way we need to embrace asynchronous behavior along with parallel programming. Make asynchronous the default behavior. That's what we do every day in real life. Terms like eventual consistency and asynchronous behavior aren't even something we consider when we go about our business, because in real life that is the way everything works.