My rule of thumb: unit test logic, integration test plumbing.
If a function does actual computation or has branching logic that could go wrong in subtle ways, unit test it. If it's mostly wiring things together (fetch data, transform, save), an integration test catches more real bugs.
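For example, something like this (a minimal pytest sketch; `prorate` and its rounding rules are made up for illustration):

```python
import pytest

# Hypothetical pure function with branching logic worth unit testing.
def prorate(amount_cents: int, days_used: int, days_in_period: int) -> int:
    if days_in_period <= 0:
        raise ValueError("days_in_period must be positive")
    days_used = max(0, min(days_used, days_in_period))  # clamp to the period
    return amount_cents * days_used // days_in_period

def test_prorate_half_period():
    assert prorate(1000, 15, 30) == 500

def test_prorate_clamps_overuse():
    assert prorate(1000, 45, 30) == 1000

def test_prorate_rejects_empty_period():
    with pytest.raises(ValueError):
        prorate(1000, 1, 0)
```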
The "unit test everything" era came from a time when spinning up test databases was painful. Now with Docker and in-memory DBs, integration tests are often faster to write AND more useful.
Where unit tests still win: algorithmic code, parsing, anything with lots of edge cases. Writing unit tests forces you to think about boundaries. Integration tests just tell you "it worked this time."
The worst outcome is having tests that make refactoring painful but don't catch real bugs. I've seen codebases where every internal method signature change breaks 50 unit tests that were essentially testing implementation details.
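A contrived sketch of the difference (the `UserService` class, its `service` fixture, and the `_normalize_email` helper are all hypothetical):

```python
import pytest
from unittest.mock import patch

class UserService:                      # stand-in system under test
    def __init__(self):
        self._emails = set()

    def _normalize_email(self, email: str) -> str:   # internal detail
        return email.strip().lower()

    def register(self, email: str) -> None:
        self._emails.add(self._normalize_email(email))

    def find(self, email: str) -> bool:
        return email in self._emails

@pytest.fixture
def service():
    return UserService()

# Brittle: pins an internal helper's name and call signature, so renaming
# _normalize_email during a refactor breaks the test even if behavior is unchanged.
def test_register_calls_normalizer(service):
    with patch.object(service, "_normalize_email", return_value="bob@example.com") as norm:
        service.register("Bob@Example.com")
        norm.assert_called_once_with("Bob@Example.com")

# Durable: asserts observable behavior at the public interface.
def test_register_stores_lowercased_email(service):
    service.register("Bob@Example.com")
    assert service.find("bob@example.com")
```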
Writing unit tests is a futile exercise without a specification.
The software under test is always modeling something -- business logic, a communications protocol, a control algorithm, a standard, etc. Behind each of those things is a specification. If a specification doesn't exist, then the software is called a prototype. For sustained, long-term incremental development, a specification must exist.
The purpose of unit tests is to assert specification-defined invariants at the module interface level.
Unit tests are durable iff the specification they uphold is explicit and accessible to developers and the scope of the test is small. It's futile to write good tests for a module which has ambiguous utility.
priors: I worked in embedded SW and am now a PhD student.
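A made-up example of a spec-level invariant asserted at the module interface: say the (imagined) spec for a wire format guarantees that decoding an encoded message returns the original message. The `encode`/`decode` names are invented.

```python
import json

# Hypothetical module interface for a wire format.
def encode(msg: dict) -> bytes:
    return json.dumps(msg, sort_keys=True).encode("utf-8")

def decode(data: bytes) -> dict:
    return json.loads(data.decode("utf-8"))

# Invariant straight out of the (imagined) spec: decode(encode(msg)) == msg
# for any valid message, asserted only through the public interface.
def test_roundtrip_invariant_holds_at_the_interface():
    msg = {"type": "ping", "seq": 7}
    assert decode(encode(msg)) == msg
```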
Tests are the specification.
That they also happen to be executable is there only to automatically ensure that the program actually conforms to the specification.
Unit tests are super useful when first writing a function or class to confirm it does what you think it does.
Then throw away the unit tests and write integration or E2E tests instead. Then you can refactor under the hood while ensuring overall system behavior is as expected.
There are some exceptions where you might want to hang on to a small subset of unit tests. They can be useful for demonstrating how to use an interface or class. They can help support particularly complex bits of logic. If a certain part of your codebase is fragile and regression prone, unit test coverage can help.
Otherwise, they just calcify the code.
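A sketch of what that end state can look like, assuming an HTTP service built with FastAPI and its TestClient (the /orders endpoint and its fields are invented). The test pins observable behavior, so the internals behind the endpoint can be reshuffled freely:

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.post("/orders")
def create_order(item: str, qty: int) -> dict:
    # In a real system this would hit the DB, a queue, etc.
    return {"item": item, "qty": qty, "status": "created"}

def test_order_flow_end_to_end():
    client = TestClient(app)
    resp = client.post("/orders", params={"item": "widget", "qty": 2})
    assert resp.status_code == 200
    assert resp.json() == {"item": "widget", "qty": 2, "status": "created"}
```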
My opinion: write unit tests where you can easily craft inputs and check outputs.
If that becomes hard or you find yourself mocking a lot, then stop, and instead write integration and e2e tests.
Remember unit tests get tightly coupled to your code base, so use them wisely.
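For instance, instead of mocking out a repository layer, point it at an in-memory SQLite database so the test exercises the real SQL (the table and `save_user` function are invented):

```python
import sqlite3

# Hypothetical data-access function wired to a real (in-memory) database
# instead of a stack of mocks.
def save_user(conn: sqlite3.Connection, name: str) -> int:
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def test_save_and_load_user_against_real_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    user_id = save_user(conn, "ada")
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    assert row == ("ada",)
```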
There's another aspect of unit testing. It makes the units testable. The greatest benefit of this is that the units tend to be more coherent. A large blob of code that isn't unit tested may not have clear boundaries or a functional raison d'etre. Tests also serve as documentation or demos of the units which is great for onboarding devs later on.
Maybe AI analysis/synthesis will change the math on this, but beyond early prototypes and PoCs, tests pay for themselves.
The moment the software is in production, making a lot of money, and stable, I add lots of tests (both unit and integration) to prevent a $1,000 issue from turning into a $100,000 problem later down the line.
That beats testing everything to perfection: chasing 100% coverage, never releasing, and having the company question why the project's deadlines were missed because of testing dogma.
> I've personally found that when the architecture of the system is not mature yet, unit tests can get in the way. Terribly so. Integration tests or system tests to assert behavior seem the starting point in this and other scenarios, including when there are no tests at all yet.
Totally agree.
Testing, which is better known as documentation, is the contract for users that defines what your program is for and how the user can depend on it forever into the future. The test runner is not the focus; the focus is the effort to validate that what the contract says is actually true.
Test what the user will use. That could mean an end-user interface, or it could mean a public API. Again, the goal is for the user to understand what you are trying to accomplish for them and to have guarantees about what you won't change in the future.
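A toy example of that stance: only the public, documented function appears in the test; whatever private helpers sit behind it never do. `parse_duration` and its format are invented for illustration.

```python
import re

# Hypothetical public API: users call parse_duration(); nothing else is promised.
def parse_duration(text: str) -> int:
    """Return seconds for strings like '2m30s' (illustrative contract)."""
    m = re.fullmatch(r"(?:(\d+)m)?(?:(\d+)s)?", text)
    if not text or not m:
        raise ValueError(f"unrecognized duration: {text!r}")
    minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return minutes * 60 + seconds

# The contract users rely on, expressed as the documented examples.
def test_documented_examples_keep_working():
    assert parse_duration("2m30s") == 150
    assert parse_duration("45s") == 45
```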
I don't write any unit tests. Instead, I only do integration/system tests.
At the end of the day, I need to know that the system works and does what it is supposed to do. Unit tests add too much complexity, in my opinion, and aren't worth it.
Kent Beck, credited with coining the term "unit test", defines unit tests as tests that run without affecting other tests. In other words, unit tests are tests that do not introduce side effects. I suggest that if you are writing anything other than unit tests, as originally defined, in this day and age, you are doing something horribly wrong. Side effects are how you end up with tests that randomly break, which is a nightmare for those running them. A good software steward would not deem that acceptable.
I've seen some other definitions out there, but they didn't make any sense. Obviously someone was trolling in those cases. Presumably you were thinking of one of them? Absolutely you wouldn't write tests that don't make any sense. I am not sure why anyone would.
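In pytest terms, that kind of isolation usually just means each test owns its own state instead of sharing any; `Cart` here is a stand-in:

```python
import pytest

class Cart:                     # hypothetical unit under test
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

@pytest.fixture
def cart():
    return Cart()               # fresh state per test, nothing shared

# Neither test can affect the other, regardless of run order.
def test_add_puts_item_in_cart(cart):
    cart.add("apple")
    assert cart.items == ["apple"]

def test_new_cart_starts_empty(cart):
    assert cart.items == []
```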
It is a matter of horses for courses. Be driven by what the tests cost and what they accomplish.
Costs may be:
* Developer time to make and maintain.
* CI time: more tests mean slower pipelines and higher compute costs there.
* Ossification of source code (especially with unit tests, less so with integration tests): refactoring gets harder because you also need to rewrite the tests.
Benefits:
* Finds bugs
* Can be a handy local dev loop and local debugging loop
* Documents code and proves that documentation is correct
* Helps with AI assistance
* Integration tests should make refactoring easier, or at least give you more confidence while doing it.
I would err on the side of good coverage (80%, excluding stupid stuff) unless I have a specific reason not to.
The most important tests are the ones that catch bugs. If your system crashes, you write a test that fails until you fix the bug. Don't punt on it because "it's hard to test" (that's a code smell). Tests that never fail are of minuscule value.
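A minimal sketch of such a regression test (the `average` function and the crash it once had are invented):

```python
import pytest

# Hypothetical bug: average([]) used to crash with ZeroDivisionError.
def average(xs: list[float]) -> float:
    if not xs:                  # the fix; before it, the division below blew up
        raise ValueError("average() of empty sequence")
    return sum(xs) / len(xs)

# Written to reproduce the reported crash; it kept failing until the fix landed
# and now guards against the regression.
def test_empty_input_raises_instead_of_crashing():
    with pytest.raises(ValueError):
        average([])
```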