About a month ago I had a little revelation regarding software testing. I read Alec Reiter's Exploring Code Architecture and multiple scattered ideas suddenly clicked into place. Things like interfaces, layers, boundaries, and dependency injection suddenly made sense, all at once. By the time I reached the end of that post, my mind was drowning in all the new ways I could produce software. I want to share that and everything I've learned since then. If your time is scarce, you can scroll down to the resources section with a few helpful links.
I think the reason things clicked into place was that, before reading that post, I had consciously focused on producing as much Python code as possible over a certain period. It started with a static site generator for this blog called wintersun, then I produced a netcat-like Android app using the Python Kivy framework, took a slight detour into Elixir to try my hand at stockfighter.io, created an asynchronous TFTP server, and did some work on my own lil' web framework. Working on these in quick succession made me face some recurring problems.
One pattern I noticed is that, out of these projects, the earlier ones are hard to test and hard to change, whereas the later ones are easier to test and change. Another pattern was that I kept mixing levels of abstraction by cramming different responsibilities into one container. This is most clearly visible in the py3tftp project (the earlier ones were too messy to even see the mess!), where network IO code sits dangerously close to file IO code and the parsing logic.
Looking back at it all, there's a lot of progress, but there's definitely a long way to go. There's also the feeling of shame at how bad all of my old code must be. But there's a feeling of freedom and confidence too. The freedom and confidence that I finally have the tools to truly build things. From the ground up. Anything at all. A web framework? Sure. A game? Sounds great! A routing and inventory system for a farm? Hell yeah!
As long as I can pin down the domain knowledge of whatever it is I'm to build, I can build it. Earlier, I built things one-way: there was a point of no return after which changes were impossible because things were so tightly coupled. Now I know how to keep things isolated from each other enough to make whatever I build more amenable to change. This is super important because it's usually impossible to fully define a project at the start. Creating a supple design that can change at the same pace as your understanding of the problem domain is priceless.
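To make the "isolated enough to change" idea concrete, here's a minimal sketch of dependency injection in Python. All the names (`Notifier`, `OrderService`, and so on) are invented for illustration; they don't come from any of the projects mentioned above.

```python
# A minimal dependency-injection sketch: the service receives its
# collaborator instead of constructing it, so the two stay decoupled.

class Notifier:
    """The 'interface': something that can deliver a message."""
    def send(self, message):
        raise NotImplementedError


class EmailNotifier(Notifier):
    """A real implementation would talk to SMTP; this just prints."""
    def send(self, message):
        print(f"emailing: {message}")


class OrderService:
    # The notifier is injected, so OrderService never needs to know
    # about SMTP, the network, or any other IO detail.
    def __init__(self, notifier):
        self.notifier = notifier

    def place_order(self, item):
        # ... domain logic would go here ...
        self.notifier.send(f"order placed: {item}")


class FakeNotifier(Notifier):
    """A test double that records messages instead of sending them."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)


# In a test, the fake replaces the real thing with zero code changes:
fake = FakeNotifier()
OrderService(fake).place_order("coffee")
assert fake.sent == ["order placed: coffee"]
```

The point is the seam: because `OrderService` depends on an abstraction rather than a concrete emailer, you can swap implementations as your understanding of the domain changes.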
I first heard of TDD when I was a RoR intern, but I could never make it work for me. I tried testing in multiple languages and frameworks. I listened to people talk about it on stage at meetups. I talked with other developers and it turned out they had the same problems as I did. How can you test something you don't know how to write yet? How can you justify the extra time needed for tests when you're already behind deadlines? How can you wait ages for your tests to complete? What value do tests even bring you?
I can finally answer these:
How can you test something you don't know how to write yet?
TDD and good software design are two sides of the same coin. You can't know if your design is good without TDD and you can't do TDD without good design.
Writing tests upfront pushes you to envision the design of a piece of software upfront - to take into consideration the end-to-end workings of the whole piece, some edge cases, and its place in the grand scheme of things. TDD is like setting down expectations and seeing if they even make sense.
Then, as you write the actual code (now much easier), the tests give you feedback on how you're progressing, how things actually fit together, how easy or hard it is to change things. Kent Beck also mentions that tests give him peace of mind and I wholly agree with him. I've worked on projects where a simple change took numerous hours because that change triggered a cascade of problems, sometimes even days later!
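Here's the workflow above in miniature, under the assumption of a tiny invented example (`slugify` isn't from any of the projects mentioned earlier). The tests are, notionally, written first: they pin down expectations and edge cases before the implementation exists.

```python
# TDD in miniature: the tests below were (notionally) written before
# slugify() existed, forcing early decisions about edge cases like
# punctuation and surrounding whitespace.

import re


def slugify(title):
    """Turn a post title into a URL-friendly slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics
    return slug.strip("-")


# The "red" phase: expectations first, implementation second.
def test_spaces_become_dashes():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_is_dropped():
    assert slugify("TDD: Why Bother?") == "tdd-why-bother"

def test_surrounding_whitespace_is_trimmed():
    assert slugify("  trimmed  ") == "trimmed"
```

Once the tests pass, they keep paying off: every later change to `slugify` gets instant feedback instead of a surprise days down the line.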
How can you justify the extra time needed for tests when you're already behind deadlines?
As a professional developer, can I sleep well at night not knowing whether the software I just delivered to a client actually works? All that'll happen is the wrong favicon will show up or the site will be offline for an hour or two. No biggie. What if this site starts leaking user details? We're not even talking about handling user's money or passwords - just things like their name, phone number, and email. How angry will they get if they're paying you money for your service? Probably pissed enough to sue. Now imagine your application also handles payments or other sensitive data. But can tests really prevent those nightmares?
A few researchers from Microsoft, IBM, and North Carolina State University did a study in 2008 titled "Realizing quality improvement through test driven development: results and experiences of four industrial teams". They found that:
[...] the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15 - 35% increase in initial development time after adopting TDD.
In the worst-case scenario, you can cut the number of bugs almost in half.
Imagine how much time this saves compared to having your client or user encounter a bug, send you a message, have you read that message, put it into a ticket system like Jira, possibly hand it off to another developer, spend time reproducing the bug, push a fix, and deploy it.
I sure hope that this evoked feelings of stress and dread that most software developers are familiar with. TDD makes you tackle a lot of bugs up front so that even if you somehow spend as much time on TDD as on fixing user bugs, at least you're not losing the user's trust! That in itself should be enough to warrant introducing TDD.
How can you wait ages for your tests to complete?
This is a popular question from people who, in my opinion, haven't gotten testing right yet. I used to have this problem.
If your application is chained to something like a database, no wonder your tests take anywhere from 1 to 30 minutes to complete. There are many ways to reduce this time: using an in-memory db, reusing the db schema, and a few other tricks, but the biggest one is to go all in on TDD, which naturally leads to Clean Architecture (also known as Hexagonal Architecture, among other names). If you move the meat of your logic - all the code built to handle the domain you're working in - away from the framework and into its own set of modules that are independent of any IO, then in most cases your tests will run in a second or two.
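A sketch of what "moving the meat away from the framework" can look like. Everything here (`apply_discount`, `checkout_view`, the discount rule itself) is invented for illustration; the point is only the shape: a pure domain core, with IO pushed to a thin outer layer.

```python
# Pure domain logic: no database, no network, no framework imports.
# Tests for this function need nothing but plain values, so they run
# in milliseconds.

def apply_discount(total_cents, loyalty_years):
    """Hypothetical rule: 5% off per loyalty year, capped at 25%."""
    discount = min(loyalty_years * 5, 25)
    return total_cents * (100 - discount) // 100


# The IO lives in a thin outer layer that calls into the core.
# In a real app this would be a view/controller in your framework,
# and `db` would be a real data source.
def checkout_view(db, user_id):
    user = db.fetch_user(user_id)   # IO at the edge
    cart = db.fetch_cart(user_id)   # IO at the edge
    return apply_discount(cart.total_cents, user.loyalty_years)


# Testing the core requires no setup or teardown at all:
assert apply_discount(10_000, 2) == 9_000    # 10% off
assert apply_discount(10_000, 10) == 7_500   # capped at 25%
```

The slow, IO-bound layer still deserves a few integration tests, but the bulk of your suite exercises functions like `apply_discount`, which is why the whole run can finish in a second or two.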
What value do tests even bring you?
In light of the previous answers, need I write anything here? ;)
TDD in the Real World
I feel like I should also mention that this isn't easy. It requires thinking through the design a lot and, especially when you're starting out, throwing a lot of code in the trash. You'll also likely meet fierce resistance at your organization if it's not already using TDD.
Finally, I want to say that there is no zealotry in my admiration of TDD. It would be foolish to resist the real world. There are cases where I would forgo testing altogether - pieces of code with little value that I would ignore in order to ship a product on time. There are also cases where I would strongly insist on testing - areas of code responsible for handling sensitive data, or ones that are business critical. No tests for the CommentService class? Probably OK. No tests for the cart checkout process? An absolute failure, and a critical target for testing.
I should also mention that everything I wrote above is based on the OOP paradigm. I don't know enough about functional programming to extend these observations to that area, although as with all things FP, I think testing should be simpler and more enjoyable since you don't have to bother keeping track of state. I'm really looking forward to diving into my copy of Programming Phoenix to see some FP testing in action.
Resources

- Architecture: The Lost Years by R. Martin
- Domain-Driven Design: Tackling Complexity in the Heart of Software
- Refactoring code that accesses external services
- Clean and Green
- Simple Made Easy
- Growing Object-Oriented Software, Guided by Tests
- Patterns of Enterprise Application Architecture
- Boundaries by Gary Bernhardt