
Book review: Growing Object-Oriented Software, Guided by Tests

Not being fully satisfied with my TDD practices, I was reading The Failures of "Intro to TDD", which nicely sums up some of the unease I'm facing and offers a sensible way of handling it. While tweeting about the article, I was once more told to read Growing Object-Oriented Software, Guided by Tests, so I finally did, and here is my feedback on it 🙂

So, first of all, the pain points. Sorry for them, but there are some:

Reading plenty of code tied to business and technology choices, reworked time and time again, is an attention killer. Really.

The chosen use case comes with technological choices and implementation code that get in the way of the pure "book/TDD" code. Simply put, I struggled to keep going through these 10 chapters. I'm happy to have done so, though, since gems of knowledge are scattered through them, but I would have preferred them to be better presented.

Please note that there are code-heavy books I did enjoy reading, but those books were all about the code, like the Apache Wicket Cookbook. Even then, I usually ended up grabbing the code in an IDE to dig into it, which isn't applicable to this kind of "lengthy code as a real-world example, reworked time and time again".

Putting forward one's own framework in a general-purpose book isn't that great either.

The authors are behind jMock2 and chose to use and present it extensively in the book. While mocking certainly plays an important role in TDD and should be addressed in such a book, the authors in my opinion went too deep into the details of jMock2, especially since I'm a user of Mockito.

To be honest, I can't remember why I'm a Mockito user. But the authors don't present the pros and cons of the various mocking frameworks, so why should I care about yet another one? The book is about TDD, not TDD with jMock2, so the extra noise came as a surprise, and combined with the point above it amplified the "attention killer" effect of the code examples.

Good, done with the pain points; the good ones are way better 😉

The book is really interesting because one can feel the authors did (and still do) think a lot about testing: why to do it, how to do it. It's not some freelancer scratching the surface of a topic to push his business. They're fully into the subject, having weighed and tried various approaches and reporting on them. And, as often, the result is a bit different from what the noisy crowd of TDD/BDD fanatics says. So let's dig into the main points, though the book contains many more.

Acceptance test first!

While traditionally TDD is presented roughly as "write a unit test, make it fail, make it pass", Steve Freeman and Nat Pryce (the authors) say one must first write a (failing) acceptance test, highlighting the final goal of the current coding endeavor, and then write smaller unit tests along the way to make this acceptance test pass. Once the acceptance test passes, the task at hand should be done.

While apparently minor, this advice has significant consequences.

For example, what should be the first acceptance test of a new project? Simple: an end-to-end test! This test should exercise the application from the outside as far as possible, perhaps faking its third parties (for example, faking a web service). It may be hard to achieve, sure, but once done, it's crazy nice to have such end-to-end tests: you can easily make sure your application still starts, and check all the stacks, frameworks and glue code you've piled up.

This acceptance-test-first approach has another nice side effect: it resolves the question of how to TDD the higher-level code first. Indeed, when writing your initial acceptance test, the API is still to be decided, so it's a nice occasion to play with it. Along the way you should of course write unit tests, but the goal, the acceptance test, remains separate from the work in progress. Changing the unit tests feels fine when one still has the acceptance test telling the bigger picture.
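The outside-in flow can be sketched without any test framework. Everything below is hypothetical (RateService, PriceApp and the numbers are mine, not the book's): the acceptance test drives the application only through its public entry point, with the third party replaced by a fake.

```java
// Minimal outside-in sketch: the third-party boundary is faked,
// and the acceptance expectation is written before any unit test.

interface RateService {              // boundary to the real third party
    double rateFor(String currency);
}

class PriceApp {                     // the "application" under test
    private final RateService rates;
    PriceApp(RateService rates) { this.rates = rates; }
    double priceInCurrency(double basePrice, String currency) {
        return basePrice * rates.rateFor(currency);
    }
}

public class AcceptanceTestSketch {
    public static void main(String[] args) {
        // Fake the third party instead of calling the real web service.
        RateService fakeRates = currency -> "EUR".equals(currency) ? 0.5 : 1.0;
        PriceApp app = new PriceApp(fakeRates);

        // The end-to-end expectation, stated up front.
        double price = app.priceInCurrency(100.0, "EUR");
        if (price != 50.0) throw new AssertionError("expected 50.0, got " + price);
        System.out.println("acceptance test passed: " + price);
    }
}
```

The unit tests written along the way may churn; this outer test is the one that tells you when the feature is actually done.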

Only Mock Types That You Own

The chapter "Only Mock Types That You Own" nicely frames end-to-end tests and third-party code in general. The rule stuck in my mind and is really to the point. It is also a reminder that one should generally fake external tools rather than mock them.

So nothing too crazy for once, but still straight to the point, important, and a good example of the knowledge in the book. The authors even break this rule a few times in the example code, with valid reasons, which is an even nicer way of putting forward rules and recommendations 😉
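A minimal illustration of the rule, with names of my own choosing (the book's actual example is an auction sniper, but this adapter is a simplification): instead of mocking a third-party messaging type directly, the application depends on an interface it owns, and the test substitutes that.

```java
// "Only Mock Types That You Own": define the interface your domain needs,
// mock that, and test the adapter to the third party separately.

interface AuctionEvents {            // owned abstraction over the wire protocol
    void bid(int amount);
}

class ThirdPartyGateway implements AuctionEvents {
    // In real code this adapter wraps the third-party library; it is
    // covered by integration tests against the real thing, not by mocks.
    public void bid(int amount) { /* talk to the real server */ }
}

class Sniper {
    private final AuctionEvents auction;
    Sniper(AuctionEvents auction) { this.auction = auction; }
    void counterBid(int currentPrice) { auction.bid(currentPrice + 1); }
}

public class OwnedTypeMockSketch {
    public static void main(String[] args) {
        // A hand-rolled mock of the OWNED interface records the interaction.
        final int[] lastBid = {-1};
        Sniper sniper = new Sniper(amount -> lastBid[0] = amount);
        sniper.counterBid(100);
        if (lastBid[0] != 101) throw new AssertionError("expected 101");
        System.out.println("sniper bid " + lastBid[0]);
    }
}
```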

A quick side note: the authors don't explain the difference between mock, stub and fake. I guess it was obvious to them, but it wasn't to me (as a non-native English speaker on top, I guess) when I started looking at TDD, and I had to read The Well-Grounded Java Developer to find a nice overview, which I gladly quote here as a reminder for the reader (and me):

  • Dummy – An object that is passed around but never used. Typically used to fulfill the parameter list of a method.
  • Stub – An object that always returns the same canned response. May also hold some dummy state.
  • Fake – An actual working implementation (not of production quality or configuration) that can replace the real implementation.
  • Mock – An object that represents a series of expectations and provides canned responses.

And, in reference to the blog post opening this article, here is Test Double, another quote from The Well-Grounded Java Developer, itself quoting xUnit Test Patterns:

A Test Double (think Stunt Double) is the generic term for any kind of pretend object used in place of a real object for testing purposes.

Am I having fun? A little bit, yes; and it's also a nice occasion to mention that The Well-Grounded Java Developer was a sweet book too.
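The definitions above can be sketched as tiny hand-rolled test doubles in plain Java (the Clock port here is a made-up example of mine, not from either book):

```java
// Hand-rolled versions of the test double vocabulary.

interface Clock { long now(); }

class StubClock implements Clock {           // stub: same canned answer
    public long now() { return 42L; }
}

class FakeClock implements Clock {           // fake: working, non-production impl
    private long t = 0;
    public long now() { return t++; }        // time advances on every call
}

class MockClock implements Clock {           // mock: records and verifies expectations
    int calls = 0;
    public long now() { calls++; return 0L; }
    void verifyCalledOnce() {
        if (calls != 1) throw new AssertionError("expected 1 call, got " + calls);
    }
}
```

A dummy would simply be `new Clock() { public long now() { throw new UnsupportedOperationException(); } }`: passed around to satisfy a signature, never actually called.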

Simple test naming for the win 🙂

In my various workplaces we tried different test naming patterns, but it always felt a bit wrong. Old JUnit 3 required testFoo(); JUnit 4 just asks for @Test, so you're basically free. At a workshop, Jean Laurent Morlhon suggested xxxShouldYyyWhenZzz, for example createFooShouldReturnNullWhenCalledWithoutBar, or, if I remember correctly, the same with underscores, i.e. create_foo_should_return_null_when_called_without_bar. We tried the first one; the underscore variant seemed too verbose. Yet even without the underscores, this should/when business is verbose too, becoming unwanted repetition for the reader.

So, what do Steve Freeman and Nat Pryce suggest? Well, something close to "just describe the f****g functionality".

For example: createsNullFooWhenBarIsNull. They then suggest using TestDox (available as an IntelliJ plugin) to display the functionalities. With the previous example and a test class named FooFactoryTest, it renders FooFactory creates null foo when bar is null.

You get your list of functionalities for free, and it improves readability enormously. No more mandatory ceremony for each test: just the functionality, no more, no less, in a readable form. On top of that, if your list of functionalities starts to grow too much, it's easy to spot. I haven't used this extensively yet, but for once I'm convinced by the naming pattern. Let's see how it works out in real life.
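The rendering is simple enough to sketch in a few lines; note this is my rough approximation, the actual TestDox rules are surely more refined:

```java
// Rough sketch of TestDox-style rendering: strip the "Test" suffix from the
// class name and split the camelCase method name into words.
public class TestDoxSketch {
    static String render(String testClass, String methodName) {
        String subject = testClass.replaceAll("Test$", "");
        String words = methodName
                .replaceAll("([a-z])([A-Z])", "$1 $2")  // insert spaces at word breaks
                .toLowerCase();
        return subject + " " + words;
    }

    public static void main(String[] args) {
        System.out.println(render("FooFactoryTest", "createsNullFooWhenBarIsNull"));
        // FooFactory creates null foo when bar is null
    }
}
```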

Just one complaint, by the way: the authors don't really explain how they split end-to-end, integration, unit and acceptance tests. I guess it was shown somehow in the real-life code example, but it didn't connect with my brain, or maybe I missed it along the way. So I don't have a proper view of how I would organize all these nice tests, which turn out to be numerous. On top of that, I would love to keep acceptance tests somehow apart, in order to recognize them easily. Well, something to ponder (and to check again in the book, I guess).

Constructing Complex Test Data

They also wrote a full chapter on "Constructing Complex Test Data". I don't know about you, but working in healthcare, we have struggled more than once with complex test data: how best to reuse it, how to share it, and so forth.

The authors put forward a nice builder pattern, showcasing the common pitfalls and their solutions (immutability wins, as usual 😉), which nicely closes the topic of complex test data. It really rang a bell, and I'll surely use it at work.
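The gist of the pattern can be sketched like this (the Patient example and its fields are my own invention, echoing the healthcare context, not the book's code): an immutable target object, a mutable builder with sensible defaults, so each test only states the data it actually cares about.

```java
// Test data builder sketch: immutable product, fluent builder with defaults.

final class Patient {
    final String name;
    final int age;
    final String bloodType;
    Patient(String name, int age, String bloodType) {
        this.name = name; this.age = age; this.bloodType = bloodType;
    }
}

class PatientBuilder {
    private String name = "John Doe";    // defaults keep tests short
    private int age = 40;
    private String bloodType = "O+";

    static PatientBuilder aPatient() { return new PatientBuilder(); }
    PatientBuilder named(String name) { this.name = name; return this; }
    PatientBuilder aged(int age) { this.age = age; return this; }
    PatientBuilder withBloodType(String bt) { this.bloodType = bt; return this; }
    Patient build() { return new Patient(name, age, bloodType); }
}
```

A test about minors then reads like the requirement itself: `Patient minor = PatientBuilder.aPatient().aged(12).build();` with every irrelevant field filled in by the defaults.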

And plenty of other goodies

Well, I highlighted a few good points, but the book is basically full of them, so go read it! I won't spoil any more; I guess you got the point: I really liked the book and I'm looking forward to applying it (and reading it again).

That's all folks!

Side note: if anyone is crazy enough to read all of such a lengthy post, I would love to know, so drop a small comment ^^
