
I am having a hard time introducing automated testing to a particular application. Most of the material I've found focuses on unit testing, which doesn't help with this application for several reasons:

  • The application is multi-tiered
  • Not written with testing in mind (lots of singletons holding application-level settings, objects that aren't mockable, "state" required in many places)
  • Application is very data-centric
  • Most "functional processes" are end to end (client -> server -> database)

Consider this scenario:

var item1 = server.ReadItem(100);
var item2 = server.ReadItem(200);
client.SomethingInteresting(item1, item2);
server.SomethingInteresting(item1, item2);

Definition of server call:

Server::SomethingInteresting(Item item1, Item item2)
{
  // Semi-interesting things on the server before going to the database.

  // Possibly a stored procedure call doing some calculation
  database.SomethingInteresting(item1, item2);
}

How would one set up some automated tests for that, where there is client interaction, server interaction then database interaction?

I understand that this doesn't constitute a "unit". This is possibly an example of functional testing.

Here are my questions:

  • Is testing an application like this a lost cause?
  • Can this be accomplished using something like nUnit or MS Test?
  • Would the "setup" of this test simply force a full application start up and login?
  • Would I simply re-read item1 and item2 from the database to validate they have been written correctly?
  • Do you guys have any tips on how to start this daunting process?

Any assistance would be greatly appreciated.

Kevin
  • To clarify, I understand the concept of unit testing the various "SomethingInteresting()" calls. What I'm looking for is a way of testing the process from end to end. – Kevin Oct 20 '11 at 14:19

4 Answers


If this is a very important application in your company that is expected to last several years, then I would suggest starting small. Start looking for little pieces that you can test as-is, or that you can test with small modifications to the code to make them testable.

It's not ideal, but in a year or so you'll probably be much better off than you are today.

Hector Correa
  • I understand how this works with respect to unit testing, but how about integration test? – Kevin Oct 20 '11 at 13:47
  • Although there are tools to automate end-to-end testing of applications (web or windows apps) I haven't had much luck with them. The setup is so complicated for an average app that you end up needing to hire a team of people just to automate it and learn the tool. Depending on your environment this might be a good setup but it hasn't been that great in most instances that I've seen. They can be great but you need to invest a lot of time/money/people to get it working. – Hector Correa Oct 20 '11 at 13:58
  • So for the coding example I have above, I would have to write separate tests for each step (client.SomethingInteresting(), server.SomethingInteresting())? – Kevin Oct 20 '11 at 14:42
  • Yes, very likely. This might sound easier than it is since you'll probably need to modify those classes to be testable first, but essentially yes. I would start by finding classes that can be updated easily or without jeopardizing the entire system first. This way you'll get a (real) feel for what's needed, and then, little by little work your way through the rest of the system. – Hector Correa Oct 20 '11 at 14:55

You should look into integration test runners like Cucumber or Fitnesse. Depending on your scenario, an integration test runner will either act as a client to your server and make the appropriate calls into the server, or it will share some domain code with your existing client and run that code. You write your scenarios as a sequence of calls that should be made and verify that the correct results are produced.
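
To make that concrete, here is a rough sketch of what the step definitions behind such a scenario could look like with SpecFlow (a .NET Cucumber alternative mentioned in the comments below); the Server and Item types are just the placeholders from the question, not a real API:

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class SomethingInterestingSteps
{
    // Placeholder client-side proxy for the real server; adapt to your actual API.
    private readonly Server server = new Server();
    private Item item1, item2;

    [Given(@"items (\d+) and (\d+) exist on the server")]
    public void GivenItemsExistOnTheServer(int id1, int id2)
    {
        item1 = server.ReadItem(id1);
        item2 = server.ReadItem(id2);
    }

    [When(@"something interesting is done with them")]
    public void WhenSomethingInterestingIsDoneWithThem()
    {
        server.SomethingInteresting(item1, item2);
    }

    [Then(@"item (\d+) should have been recalculated")]
    public void ThenItemShouldHaveBeenRecalculated(int id)
    {
        // Re-read from the server (or query the database directly) and
        // assert on whatever "interesting" means for your domain.
        var updated = server.ReadItem(id);
        Assert.IsNotNull(updated);
    }
}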

I think you'll be able to make at least some parts of your application testable with a little work. Good luck!

Ben Fulton
  • Are such tools useful for .Net applications? Cucumber seems to be Ruby related (and also BDD focused), and Fitnesse seems to be Java related. I'm interested in looking at them if they'll help with automation of .Net applications... – Merlyn Morgan-Graham Oct 20 '11 at 03:03
  • This question seems useful: SpecFlow in particular as a .Net focused alternative to Cucumber - http://stackoverflow.com/questions/976078/cucumber-alternative-for-net – Merlyn Morgan-Graham Oct 20 '11 at 05:52
  • There is a Fitnesse version that runs with .Net. If you are using an independent client it doesn't matter what language it's written in though. – Ben Fulton Oct 20 '11 at 13:24
  • One day these will be integrated into our build process, so as close as possible to a .NET solution would be preferable. – Kevin Oct 20 '11 at 14:14

Look at Pex and Moles. Pex can analyse your code and generate unit tests that will test all the borderline conditions, etc. You can also start introducing a mocking framework to stub out some of the complex bits.
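
For example, if you can extract an interface over the database dependency, a mocking framework such as Moq (just one option; the IDatabase interface and Server constructor below are hypothetical refactorings, not the existing code) lets you test Server.SomethingInteresting without touching the real database:

using Moq;
using NUnit.Framework;

[TestFixture]
public class ServerTests
{
    [Test]
    public void SomethingInteresting_PassesBothItemsToTheDatabase()
    {
        // IDatabase is a hypothetical interface extracted from the concrete database class,
        // and Server is assumed to take it as a constructor dependency after refactoring.
        var database = new Mock<IDatabase>();
        var server = new Server(database.Object);

        var item1 = new Item();
        var item2 = new Item();

        server.SomethingInteresting(item1, item2);

        // Verify the server delegated the calculation to the database layer.
        database.Verify(d => d.SomethingInteresting(item1, item2), Times.Once());
    }
}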

To take it further, you have to start hitting the database with integration tests. To do this you have to control the data in the database and clean up after each test is done. There are a few ways of doing this. You can run SQL scripts as part of the set up/tear down for each test. You can use a compile-time AOP framework such as Postsharp to run the SQL scripts just by specifying them in attributes. Or, if your project doesn't want to use Postsharp, you can use a built-in .NET feature called "Context Bound Objects" to inject the code that runs the SQL scripts in an AOP fashion. More details here - http://www.chaitanyaonline.net/2011/09/25/improving-integration-tests-in-net-by-using-attributes-to-execute-sql-scripts/

Basically, the steps I would suggest,

1) Install Pex and generate some unit tests. This will be the quickest, least-effort way to generate a good set of regression tests.

2) Analyse the generated unit tests and see if there are other tests, or permutations of them, that you could add. If so, write them by hand. Introduce a mocking framework if you need one.

3) Start doing end-to-end integration testing. Write SQL scripts to insert and delete test data in the database. Write a unit test that runs the insert SQL scripts, then calls some very high-level method in your application front end, such as "AddNewUser", and makes sure the data makes it all the way to the back end. That front-end, high-level method may call any number of intermediate methods in various layers.

4) Once you find yourself writing a lot of integration tests in the pattern "run SQL script to insert test data" -> call high-level functional method -> "run SQL script to clean up test data", you can clean up the code and encapsulate the pattern using AOP techniques such as Postsharp or Context Bound Objects (a sketch of the plain set up/tear down version is shown below).
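
A minimal sketch of that pattern with plain NUnit set up/tear down (the connection string, script paths, the FrontEnd class and the Users table are placeholders for whatever your application actually exposes):

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class AddNewUserIntegrationTests
{
    // Placeholder connection string; point this at a dedicated test database.
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=AppTestDb;Integrated Security=True";

    [SetUp]
    public void InsertTestData()
    {
        RunScript("Scripts\\InsertTestUsers.sql");
    }

    [TearDown]
    public void CleanUpTestData()
    {
        RunScript("Scripts\\DeleteTestUsers.sql");
    }

    [Test]
    public void AddNewUser_WritesUserAllTheWayToTheDatabase()
    {
        // Hypothetical high-level front-end call; it should go through every layer down to the database.
        var frontEnd = new FrontEnd();
        frontEnd.AddNewUser("jsmith", "John Smith");

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM Users WHERE UserName = 'jsmith'", connection))
        {
            connection.Open();
            Assert.AreEqual(1, (int)command.ExecuteScalar());
        }
    }

    private static void RunScript(string path)
    {
        // Runs a single-batch SQL script against the test database.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(File.ReadAllText(path), connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}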

Chaitanya
  • +1; I have used tools similar to Moles (not publicly available, sorry) and gotten some great testability gains. My experience is that it is difficult to understand the code you're replacing, but the final product is worth it. And Pex can be quite helpful for whipping up basic test cases very quickly. – Merlyn Morgan-Graham Oct 20 '11 at 03:39
  • Pex would be awesome for unit tests, but currently I'm limited to Visual Studio 2008 :-( – Kevin Oct 20 '11 at 14:23

Ideally you would refactor your code over time so you can make it more testable. Any incremental improvement in this area will allow you to write more unit tests, which should drastically increase your confidence in your existing code and in your ability to write new code without breaking existing, tested code.

I find such refactoring often gives other benefits too, such as a more flexible and maintainable design. These benefits are good to keep in mind when trying to justify the work.

Is testing an application like this a lost cause?

I've been doing automated integration testing for years, and I have found it to be very rewarding, both for myself and for the companies I've worked for :) I only started to understand how to make an application fully unit testable within the last 3 years or so; before that I was doing full integration tests (with custom-implemented/hacked-in test hooks).

So, no, it is not a lost cause, even with the application architected the way it is. But if you know how to do unit testing, you can get a lot of benefits from refactoring the application: stability and maintainability of tests, ease of writing them, and isolation of failures to the specific area of code that caused the failure.

Can this be accomplished using something like nUnit or MS Test?

Yes. There are web page/UI testing frameworks that you can use from .Net unit testing libraries, including one built into Visual Studio.
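
For instance, if the client is (or has) a web UI, a browser automation framework such as Selenium WebDriver can be driven directly from an NUnit or MSTest test; the URL and element ids below are made up for illustration:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class LoginUiTests
{
    [Test]
    public void Login_WithValidCredentials_ShowsDashboard()
    {
        using (IWebDriver driver = new FirefoxDriver())
        {
            // Placeholder URL and element ids for the application under test.
            driver.Navigate().GoToUrl("http://localhost/app/login");
            driver.FindElement(By.Id("userName")).SendKeys("testuser");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("loginButton")).Click();

            // A coarse check that the login round trip succeeded.
            Assert.IsTrue(driver.PageSource.Contains("Dashboard"));
        }
    }
}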

You might also get a lot of benefit by calling the server directly rather than through the UI (similar benefits to those you get if you refactor the application). You can also try mocking the server, and testing the GUI and business logic of the client application in isolation.

Would the "setup" of this test simply force a full application start up and login?

On the client side, yes. Unless you want to test login, of course :)

On the server side, you might be able to leave the server running, or do some sort of DB restore between tests. A DB restore is painful (and if you write it wrong, will be flaky) and will slow down the tests, but it will help immensely with test isolation.
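
One crude way to do such a restore, assuming SQL Server, NUnit, and a backup file you maintain yourself (the database name, backup path and connection string are placeholders), is a base fixture that every integration test inherits from:

using System.Data.SqlClient;
using NUnit.Framework;

public abstract class DatabaseRestoreTestBase
{
    // Connect to master so the target database can be taken back to a known state.
    private const string MasterConnectionString =
        "Data Source=.;Initial Catalog=master;Integrated Security=True";

    [SetUp]
    public void RestoreKnownDatabaseState()
    {
        const string sql = @"
            ALTER DATABASE AppTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
            RESTORE DATABASE AppTestDb FROM DISK = 'C:\Backups\AppTestDb_known_state.bak' WITH REPLACE;
            ALTER DATABASE AppTestDb SET MULTI_USER;";

        using (var connection = new SqlConnection(MasterConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.CommandTimeout = 300;   // restores can be slow
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}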

Would I simply re-read item1 and item2 from the database to validate they have been written correctly?

Typical integration tests are based around the concept of a Finite State Machine. Such tests treat the code under test as if it is a set of state transitions, and make assertions on the final state of the system.

So you'd set the DB to a known state before hand, call a method on the server, then check the DB afterwards.

You should always do state-based assertions at a level below the level of code you are exercising. So if you exercise web service code, validate the DB. If you exercise the client and are running against a real version of the service, validate at the service level or at the DB level. You should never exercise a web service and perform assertions against that same web service (for example). Doing so masks bugs and gives you no real confidence that you're actually testing the full integration of all the components.
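
Concretely, a test that exercises the web service would assert directly against the database rather than making another call to the same service; the service proxy, table and column names below are invented for illustration:

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class SomethingInterestingServiceTests
{
    [Test]
    public void SomethingInteresting_WritesCalculationToDatabase()
    {
        // Exercise the layer under test: a hypothetical generated web service proxy.
        var service = new ServerServiceClient();
        service.SomethingInteresting(100, 200);

        // Assert one level below, straight against the database,
        // instead of asking the same service whether it worked.
        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=AppTestDb;Integrated Security=True"))
        using (var command = new SqlCommand(
            "SELECT CalculatedValue FROM Items WHERE Id = 100", connection))
        {
            connection.Open();
            Assert.IsNotNull(command.ExecuteScalar());
        }
    }
}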

Do you guys have any tips on how to start this daunting process?

Break up your work. Identify all the components of the system (every assembly, each running service, etc.) and try to group them into natural categories. Spend some time designing test cases for each category.

Design your test cases with priority in mind. For each test case, think of a hypothetical scenario that could cause it to fail, and try to anticipate whether that would be a show-stopping bug, a bug that could sit around a while, or one that could be punted to another release. Assign priorities to your test cases based on these guesses.

Identify a piece of your application (possibly a specific feature) that has as few dependencies as possible, and write only the highest priority test cases for it (ideally, they should be the simplest/most basic tests). Once you are done, move on to the next piece of the application.

Once you have all the logical pieces of your application (all assemblies, all features) covered by your highest priority test cases, do what you can to get those test cases run every day. Fix test failures in those cases before working on additional test implementation.

Then get code coverage running on your application under test. See what parts of the application you have missed.

Repeat with each successive priority level of test cases, until the whole application is tested at some level, and you ship :)

Merlyn Morgan-Graham