The biggest issue I have seen people have with ContinuousTests is understanding the workflow in which it provides the highest value, especially among users who have previously used NCrunch. The two products take different approaches to the developer workflow. Let me run you through the thinking behind the ContinuousTests workflow.
No more running tests manually
ContinuousTests is all about building and running your tests in the background. You write your code and hit save when you need feedback. In the background we build and run only the tests affected by the change you made, so that you can keep working and focusing on the task at hand. With fast enough tests, the result of your previous save will be available before you hit save again. That way you end up in a highly effective workflow of writing code, saving, writing code, saving. Working like this eliminates the time previously spent building and testing, and you will be surprised how eliminating that short wait improves your focus.
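To make the "only tests affected by the change" idea concrete, here is a minimal, illustrative sketch. It is not ContinuousTests' actual implementation; it assumes we already know, per test fixture, which production types it exercises (real tools derive this from static analysis and/or profiling), and all names are made up:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy version of "run only the tests affected by a change".
static class AffectedTests
{
    // Hypothetical map from test fixture to the production types it touches.
    static readonly Dictionary<string, string[]> TestsToTypes = new()
    {
        ["OrderTests"]    = new[] { "Order", "Discount" },
        ["InvoiceTests"]  = new[] { "Invoice" },
        ["DiscountTests"] = new[] { "Discount" },
    };

    // Select every fixture that exercises the type you just saved.
    public static IEnumerable<string> For(string changedType) =>
        TestsToTypes.Where(kv => kv.Value.Contains(changedType))
                    .Select(kv => kv.Key);

    static void Main()
    {
        // Saving a change to Discount re-runs only the fixtures that touch it.
        foreach (var test in For("Discount"))
            Console.WriteLine(test);
    }
}
```

Saving a change to `Discount` would queue `OrderTests` and `DiscountTests` while leaving `InvoiceTests` alone, which is why the feedback loop stays fast as the suite grows.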
Tests are the driving force behind writing code through TDD
This is the mantra running through ContinuousTests. It effectively means that the state of the system is defined by which tests are failing. A passing test is the norm, so knowing that a test passed is of no interest. What we want to know is which tests are not working right now. The heartbeat of ContinuousTests is the "Run Output Window" below, which at all times shows which tests are failing in the current solution.
The tests are your backlog
When I open a solution, the first thing I do is build and run all tests. When that operation has finished, I know which tests are broken in the system, which will likely tell me where I left off yesterday. Since this list is your backlog, you keep picking failing tests from this window until it's empty. Whenever a problem needs solving, turn it into a failing test and it will appear in this window.
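As a sketch of what "turn it into a failing test" can look like (assuming NUnit; the `Order` class and the bug are entirely hypothetical), a newly reported problem becomes a red test that lands on the backlog until the fix makes it pass:

```csharp
using NUnit.Framework;

// Hypothetical bug report: the discount is being applied twice.
public class Order
{
    public decimal Total { get; private set; }
    public Order(decimal price) { Total = price; }

    public void ApplyDiscount(decimal rate)
    {
        // The bug we want to capture: the discount is subtracted twice.
        Total -= Total * rate;
        Total -= Total * rate;
    }
}

[TestFixture]
public class DiscountTests
{
    // This test fails today (Total ends up 81, not 90), so it shows up
    // in the Run Output Window and stays on the backlog until fixed.
    [Test]
    public void Discount_is_applied_only_once()
    {
        var order = new Order(price: 100m);
        order.ApplyDiscount(0.10m);
        Assert.That(order.Total, Is.EqualTo(90m));
    }
}
```

The test documents the intended behavior; deleting one of the two subtraction lines would turn it green and clear it from the list.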
There are two dimensions to this window: the text, representing the state of your last test run, and the list, representing the state of the system. This is why your last run can be green while the list still contains failing tests. Working with tests in this kind of automatically updated backlog is something we have found very effective.
As mentioned, the "Run Output Window" is the heartbeat of ContinuousTests. It gives you the primary information needed to work efficiently with tests. However, writing code is a lot more than just flipping between red and green. Writing new code through TDD is fairly safe and highly effective; changing existing code is a whole different chapter. So, say you go ahead and change some existing code. The tests ran and everything came out green. Did you break anything? Well, green looks good, but the green is only as good as the quality of the tests being run, and whether or not they assert on the affected code the way you hoped they would. Boom!
The conclusion is that knowing whether the tests covering affected areas pass is not enough. When changing code, we need to know more about the environment in which that code exists. How close are the tests to the affected code? What are the dependencies between this code and its surroundings? These are the questions we try to answer through the risk margins and graphs. The color of the margin glyph tells you the calculated risk: green = low, yellow = medium, red = high, and then of course the dragon. The number inside the colored ring tells you how many tests cover the method. In addition, there are graphs showing design-time and run-time coupling in the system. All this information is there to provide grounds for deciding how to change existing code in a safe manner.
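To make the idea behind the risk margins concrete, here is a minimal sketch with made-up thresholds. The actual ContinuousTests calculation is internal and certainly more involved; this only illustrates the principle that risk rises as covering tests get fewer and coupling grows:

```csharp
using System;

// Illustrative only: not ContinuousTests' real risk algorithm.
enum Risk { Low, Medium, High }

static class RiskMargin
{
    // Hypothetical inputs: how many tests cover the method, and how many
    // other pieces of code it is coupled to.
    public static Risk Classify(int coveringTests, int couplings)
    {
        if (coveringTests == 0) return Risk.High;              // no safety net at all
        if (coveringTests < 3 || couplings > 10) return Risk.Medium;
        return Risk.Low;                                       // well covered, loosely coupled
    }

    static void Main()
    {
        Console.WriteLine(Classify(coveringTests: 0, couplings: 2));  // High
        Console.WriteLine(Classify(coveringTests: 5, couplings: 3));  // Low
    }
}
```

The point of surfacing a number like this in the margin is that it turns "is this change safe?" from a gut feeling into something you can glance at before you touch the code.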
ContinuousTests aims to provide a workflow that maximizes the effort you put into writing code while working through your backlog of "tasks to be implemented". In addition, it provides features that help you make the right decisions when changing existing code.