
It seems to be an absolute truth in the development community that unit testing is a must-have, and that you should add it whatever the cost (I know it is not 100% like that). Let me play devil's advocate here.

Management wants to introduce unit testing in the hope of minimizing regression mistakes in every development cycle. <- Here is where I think we may be applying the wrong remedy.

It's an MVC web application with a good level of decoupling, but with extensive .js code, stored procedures, etc. that are not readily testable. Many of the regression errors happen due to incorrect implementations or merge errors.

So I'm not asking how to add unit testing to an existing codebase; that is amply answered in the thread linked at the bottom. My initial plan was to build integration tests covering many scenarios, which together would exercise the "whole" app. That seems more valuable than 5000+ unit tests. Then we could add unit tests as we go and see the benefit prove itself, if it really does.

In addition, some of the claimed benefits of unit testing seem vague to me: it allows you to replace frameworks without breaking the app, and it allows you to refactor code without breaking the app.

Now, I ask:
Does it effectively minimize regression errors?

Can you write unit tests without rewriting the app substantially?

Can you promise that refactoring code won't generate expensive new bugs? (I know this is not a valid question.) How do you explain to the business that you broke the app while refactoring?

What about code history? Sometimes it is very important for auditing to know why some code was introduced, and refactoring loses that context; if you're lucky, you will only find it after a long trawl through source control.

I know this reads as if I'm one of those closed-minded people who won't change their opinion. I promise I'm not!

At the end of the day, what we need is stability, much more than avoiding a handful of re-opened defects. And I'd like to find the most effective path to start.

Last but not least, I did read this other thread, which is brilliant:

Can unit testing be successfully added into an existing production project? If so, how and is it worth it?

Please share your thoughts.

Thanks


1 Answer

I think integration-level or acceptance-level tests that cover large swaths of the application are a good way to start. They will probably be easier to write than unit tests.
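For example, one broad test that drives the app through its real HTTP endpoints already exercises routing, controllers, database access, and stored procedures at once. Here is a minimal sketch assuming a Node/Express-style app and the Jest and supertest libraries; the /orders endpoint and its payloads are made up for illustration, so adapt them to your stack:

    // integration/orders.test.ts
    import request from "supertest";
    import { app } from "./app"; // however your app object is exported

    describe("order workflow, end to end through the HTTP layer", () => {
      it("creates an order and can read it back", async () => {
        // Exercises routing, controllers, DB access, stored procedures.
        const created = await request(app)
          .post("/orders")
          .send({ customerId: 42, items: [{ sku: "ABC", qty: 2 }] })
          .expect(201);

        const fetched = await request(app)
          .get(`/orders/${created.body.id}`)
          .expect(200);

        expect(fetched.body.items).toHaveLength(1);
      });
    });

A few dozen scenarios like this give you a safety net before any refactoring starts.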

On to your questions:

Does [unit testing] effectively minimize regression errors?

Yes, if you add the right kinds of tests. Whenever I encounter a bug in the code, I write a unit test that exploits that bug to make the test fail. When the bug is fixed, the test passes. Once that test or group of tests is in your test suite, you should never re-introduce the bug, because doing so would cause tests to fail.
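As a sketch of that workflow, assuming Jest and a hypothetical pricing module where a 100% discount once produced a negative total:

    // pricing.regression.test.ts
    import { applyDiscount } from "./pricing"; // hypothetical module

    test("regression: a 100% discount yields a zero total, never negative", () => {
      // Before the fix, applyDiscount(50, 1.0) returned -50.
      // This test failed until the bug was fixed; it now pins the behavior.
      expect(applyDiscount(50, 1.0)).toBe(0);
    });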

However, this requires that 1) you write the tests that exploit the bug, 2) you run the tests on a regular basis to detect regressions, and 3) only software that passes all the tests gets released.

Can you write unit tests without rewriting the app substantially?

It is possible, but unlikely. An app that is already in production without any tests was probably not designed with testability in mind, so it is likely the code will have to be refactored, perhaps significantly, in order to properly test it.
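The most common such refactoring is pulling hard-wired dependencies (database access, stored procedure calls) behind an interface so a test can substitute a fake. A minimal sketch, with all names invented for illustration:

    // Before: the controller reached straight into the database, so any
    // test needed a live DB. After injecting an interface, it does not.
    interface OrderStore {
      findById(id: number): Promise<{ id: number; total: number } | null>;
    }

    class OrderController {
      constructor(private readonly store: OrderStore) {}

      async getTotal(id: number): Promise<number> {
        const order = await this.store.findById(id);
        if (order === null) throw new Error(`order ${id} not found`);
        return order.total;
      }
    }

    // The unit test passes an in-memory fake -- no database required.
    test("getTotal returns the stored total", async () => {
      const fake: OrderStore = { findById: async (id) => ({ id, total: 99 }) };
      expect(await new OrderController(fake).getTotal(7)).toBe(99);
    });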

Can you promise refactoring code won't generate expensive new bugs? (I know this is not a valid question)

No, you can't promise that, especially if you need to refactor the code substantially before you have good test coverage to act as a safety net. However, breakage should be a rare occurrence, especially if you also have your integration tests already written.

How do you explain to the business that you broke the app while refactoring?

Before a major software refactoring is undertaken, I would hope you worked out the refactoring plan with your stakeholders. They should have been made well aware of the risks, and of the potential short-term downtime in exchange for improved reliability and faster development in the long term, before the project was approved!

What about code history? Sometimes it is very important for auditing to know why some code was introduced, and refactoring loses that context; if you're lucky, you will only find it after a long trawl through source control.

Well, as you said, you have source control to show the history as a last resort. But if the issue is of such importance, you should probably also write tests for it, as if it were a bug, like I described before. That way you will get failing tests if your refactoring re-introduces the issue.
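For instance, a test name and comment can carry the audit rationale forward so it survives refactoring; the rounding rule and ticket number below are invented for illustration:

    import { roundTax } from "./tax"; // hypothetical module

    test("tax rounds DOWN per audit requirement (hypothetical ticket #4711)", () => {
      // If a refactoring switches to standard rounding, this test fails
      // and points the developer back to the original rationale.
      expect(roundTax(10.019)).toBe(10.01);
    });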

Refactoring a legacy application in order to write unit tests (and then write those unit tests) is a major undertaking and should not be done lightly.

I once completely refactored/redesigned a legacy, mission-critical 400k+ LOC application at my company by myself. It took many years of nearly 100% effort. By the time I was "finished" and had the most critical 20% of the code covered and the architecture redesigned to an understandable and maintainable level, the industry landscape had changed and the application was no longer important. Luckily, over the course of the project, I made about 100 releases, so the users got to take advantage of the improved stability and new features before the project was killed.

I guess my point is, make sure it is worth the effort before you start down this path.

dkatzel
  • Hi, thanks a lot for this answer. It's good to hear from someone who has been through it. And I definitely have some takeaways to go down the unit test road :) –  Mar 25 '15 at 15:36