This reminds me of something John Carmack tweeted once (can't find the tweet).
In the tweet, he said that when coding, he'd start with the smallest possible PoC, and code it entirely front to back. That'd give the general structure, and then he'd build upon that. (this is what I remember of it fwiw).
I do this all the time too, which I think is a vastly superior approach to TDD, which assumes how an API is going to be used, without actually writing the actual thing that's going to use it.
I was working with a newer developer once, and we started working on some app that was going to be something moderately complex. I literally started with a single class that looked like:
public class Foo
{
    public static void main( String[] args )
    {
        System.out.println( "Done" );
    }
}
And he was really taken aback at first, like "Why would you write Hello World when we're building a $WHATEVER?" My response was that the very first thing I always want to see, in any new project, is something compiling, packaging and executing. It's my way of ensuring that at a minimum the environment is set up, cosmic radiation hasn't fried my CPU or RAM, etc. And once I have that trivial program running, I just start building up from there.
I more or less follow that same pattern for everything I write to this day. At most, the slightly more complex version I start with is something like a "template" or "skeleton" project. For example, I keep a sample Spring Boot project around that has a pom file, the directory structure, a package named something like org.fogbeam.example, and a simple controller that just returns "Hello World". Once I can build and run that, and hit localhost:8080 and see my "Hello World" page, I start iterating from there.
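For what it's worth, the whole controller in that skeleton amounts to something along these lines (package and class names here are just placeholders, and it assumes the usual spring-boot-starter-web dependency in the pom):

package org.fogbeam.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal skeleton: one application class and one endpoint returning "Hello World"
@SpringBootApplication
@RestController
public class ExampleApplication
{
    @GetMapping( "/" )
    public String hello()
    {
        return "Hello World";
    }

    public static void main( String[] args )
    {
        SpringApplication.run( ExampleApplication.class, args );
    }
}

Once that builds, runs, and answers on localhost:8080, everything else gets layered on top of it.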
I can't tell you exactly how I developed this habit over the years, but it's worked well for me.
On the subject of TDD, the way I've done TDD has been very similar to the technique described in the article. Work in very small increments, try to get a very thin vertical slice of functionality working through the system as soon as possible and, whatever you build, try to get it working end-to-end as soon as you can. Of course, I use tests to drive the work, and I find it particularly helpful to use them to drive those thin vertical slices of functionality.
The book that really helped me to start working in this way is Growing Object Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce [0]. The book has been around for a few years at this point and tech has moved on since then, as have some of the techniques, but it's still a very interesting read. (Disclosure: I was lucky enough to briefly work with one of the authors a few years ago but I was a fan of the book long before then.)
I don't remember where I heard this, but the method was described to me using the story of how a suspension bridge is built: first an arrow with twine tied to it is shot from one side of the canyon to the other. Then that twine is used to pull thicker twine, then rope, then steel cables, and so on. From a string to a suspension bridge, with the gap across the canyon spanned the entire time. To achieve large-scale software projects, start with the thinnest logical twine that goes from start to finish for the project at hand, then build out, keeping a start-to-finish operating environment in place as soon as possible and throughout the duration of the project.
It's nice to hear that. This is also how I approach building software. My goal at the end of a V1 is that if anyone looked at the code, they would wonder where it all is: shouldn't there be more code?
I try to make the features, structure, everything as simple as possible. As you do this you'll see things that should probably be abstracted, things you'll want to do as soon as this next part is in.
Don't do it yet; wait for that actual feature to be written, then you refactor, make a system, etc. Don't do it prematurely. You want to wait because the longer you wait, the greater the chance the feature's parameters or use case will be different, or that it won't even exist. Half the requirements they think they need at the start will be side thoughts by the end.
You can (and perhaps should!) do this with TDD, you just start with functional tests instead. Only commit to unit and integration tests when you know you’re not going to be throwing lots of stuff away through refactorings.
That works if you are experienced in architecting these things. As with anything, if you're already pretty close in structure, you won't have as many problems adapting the PoC to the final form. If you're further away than you thought, you'll end up with a bunch of inelegant hacks to reach a working state.
The real problem is getting to the point where the initial solution you imagine is close to the initial PoC.
> The real problem is getting to the point where the initial solution you imagine is close to the initial PoC.
IMHO Carmack's point is closer to the standard advice given to writers (which is also true and good):
Just get something on paper that you know is somewhat workable, and then reshape it from there.
Especially in team situations (as opposed to solo coding), the effect is magical. Endless meetings and weeks of technical flailing can be skipped by just having something instead of nothing.
Although, lately I've been finding I like this approach for solo coding too. Last night I opened up a Terminal, along with a browser tab for my new coding buddy, ChatGPT. The code I got from ChatGPT was absolutely horrendous for my needs, but at least it was something — and, an hour later, I'd scrapped and rewritten everything except for a couple function names.
There's something just plain nice about keeping things moving along — about getting some more clay on the table, some more paint on the canvas, and not being shy about reworking it after that.
I think TDD tries to capture this (especially in its pair-programming ping-pong-style implementations) — but, in its haste to come up with a one-size-fits-all system, TDD glosses over the soul of what we actually do. You're getting at exactly why — it over-constrains the freedom to reshape things, and it slaps those constraints in too early.
This is how I do TDD - first I build out the smallest possible thing (usually with code, sometimes with comments) then iterate / play around a little until I have the structure right. Then start putting tests in.
Then later in the cycle you have high-level structure so it's easier to start with the test.
We teach TDD and a lot of software engineering practices largely to beginners, to make them productive for the maintenance of software - as that is about 95% of our work. So the flows that are stressed are those suitable to maturing/mature code, not to completely new systems.
> TDD, which assumes how an API is going to be used, without actually writing the actual thing that's going to use it.
That’s not TDD. It’s a common misconception that you’ve fallen for, and are now spreading further.
TDD is a series of small steps where you write a bit of test code—about five lines—and then a bit of production code—about five lines—then refactor, then repeat.
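To make that concrete, one turn of the loop might look something like this (a made-up example using JUnit 5, not anything specific to the book):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1: a few lines of test code describing the next small behaviour; it fails first.
class PriceCalculatorTest
{
    @Test
    void appliesTenPercentDiscountAtOneHundred()
    {
        assertEquals( 90.0, new PriceCalculator().total( 100.0 ), 0.001 );
    }
}

// Step 2: just enough production code to make it pass, then refactor, then repeat.
class PriceCalculator
{
    double total( double amount )
    {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}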
Here’s a free chapter in my book that describes how it works:
I take a similar but somewhat orthogonal approach.
Most of the time, the major features that require refactoring center around the data model and its representation in code (the existing control flow and overall flow of a request through the system is generally fine).
I will build out what I believe the new data model should be, and then just work front-to-back, updating any references and refactoring the shared state and responsibility into the new data model, clearly separating out concerns and encapsulating responsibility.
This method has proved itself time and again, and I recommend it to anyone who needs to make large changes to an existing code base. That is, start with how the kernels of data, state, and responsibility should look, and everything grows from there.
Similar here. I pick the one thing the system depends on, I call it 'hello world', and implement it with no UI at all and one command line test. The test exists to prove to myself and my team that it works, and to know when it breaks, not to demonstrate an API. You can call it TDD or not.
So when someone suggests "we can build X" I say yes, but first we need to build "hello world". The discussion about what constitutes Hello World is often valuable, but it often ends up with examples like those in the article, eg "can we write a message and the recipient gets it?" "can we do one trade?" etc. These sometimes seem like trivial goals, but implementing them can be surprising.
IMO this style isn't at odds with the spirit of TDD. TDD is mainly a technique to teach people about loose coupling and designing for maintenance. The main takeaway has always been that you should be able to run/exercise your code at every step and that you should use code to do that. No matter if it's an API or the whole program, that code you use to exercise it is what is key as it not only exercises the code it also helps you understand the problem.
> I think is a vastly superior approach to TDD, which assumes how an API is going to be used
I think this is a function of how much you're designing up front, not of TDD itself. The stuff you design, you write tests for and then build. How much you choose to design up front (maybe almost nothing) is up to you.
You see this pattern with almost anyone who is proficient at almost anything. Start with something in the simplest, smallest or most general way you can and master it, then build from there.
Mathematicians and physicists make a career out of this, but it works with almost anything, including sports.
Inside my particular mind, this practice is called "driving a wire through it", which is close enough to the concept of a "steel thread" that I can imagine where the author might have gotten the idea.
If you write tests first, you don't write a complete front to back "thread" first, right?
If you write tests first, you assume that the test is going to match how you're going to use the stuff in a real situation, which you generally don't know.
That complete front-to-back 'happy path' test is exactly what I would normally start with as a first test, and it should of course be representative of how you are going to use it in a real situation.
Not sure what kind of tests you write if they don't represent the actual expected behavior?
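As a sketch (plain JUnit, with a made-up OrderService standing in for whatever the real front door of the system is), that first happy-path test is just the simplest realistic request driven through the whole system:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// First test: exercise the system end to end, the same way a real caller would.
class OrderHappyPathTest
{
    @Test
    void placesASingleOrderEndToEnd()
    {
        OrderService service = new OrderService();
        assertEquals( "ORDER CONFIRMED: 1 x widget", service.placeOrder( "widget", 1 ) );
    }
}

// The simplest end-to-end implementation that satisfies the test; it grows from here.
class OrderService
{
    String placeOrder( String item, int quantity )
    {
        return "ORDER CONFIRMED: " + quantity + " x " + item;
    }
}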
You don't write production code unless there is a need for it.
The need is documented in a failing test case.
Where does the failing test case come from?
Hopefully from some other part of the code needing that code to be there. Or are you just conjuring up test cases out of thin air? If you're doing that, I'd venture you're not doing TDD.
And certainly doing a spike to get the lay of the land is very much part of XP, where TDD came from.
As is slicing your system vertically, so complete units of functionality within an incomplete system.
Rather than slicing horizontally, which is what you seem to be doing.