
I'll be using Uncle Bob's terms, so I'll call the use cases "interactors" and the domain entities "entities". As far as I understand, the most specific business rules, the ones most likely to change, should not live in the domain entities but in the application layer (in the interactors), while the business rules that are less likely to change (unless there is a foundational change to that entity) should go in the domain (in the entities).
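
For example, I picture something like this (just a sketch with made-up names, to show the kind of split I mean):

    # Entity: a rule that only changes if the very idea of an account changes.
    class BankAccount:
        def __init__(self, balance: float = 0.0):
            self.balance = balance

        def withdraw(self, amount: float) -> None:
            if amount > self.balance:
                raise ValueError("cannot withdraw more than the balance")
            self.balance -= amount


    # Interactor: application-specific policy that is much more likely to change.
    class WithdrawCash:
        DAILY_LIMIT = 500.0

        def execute(self, account: BankAccount, amount: float) -> None:
            if amount > self.DAILY_LIMIT:
                raise ValueError("above this application's daily withdrawal limit")
            account.withdraw(amount)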

So, I did some TDD katas and watched several TDD demos and conference talks about TDD, and it is a discipline that I would like to incorporate into my development skills. But I'm having a hard time trying to combine TDD with clean architecture; I think there is something I am not understanding well enough. I don't really know whether TDD has to fit into a clean architecture, or whether the clean architecture should emerge from the TDD (maybe in the refactor phase). Am I explaining myself clearly enough?

Just to give some context, imagine you start a web app project from scratch. What should you test? What should your first test be: an interactor test or a domain entity test? Should your entities emerge from TDDing the interactors? Do I have to know upfront whether what I am testing is a use case or an entity? Or should I just focus on the business rule being satisfied, regardless of whether the logic I'm testing ends up in the entity or the interactor?

In other words, should you start by applying TDD to the interactor business logic or should you start TDDing the business logic in the domain entities?

I hope someone can shed some light on this issue.

  • TDD does not have any opinion on where you start, as long as you start with something which can be tested in isolation.
    – JacquesB
    Commented Apr 12, 2022 at 6:04
  • The idea of clean architecture is that you have a separation of concerns, meaning that you don't have to know what the DB is in order to develop the service. This is actually what makes TDD very possible in clean architecture. You can write a test for some layer and make that test pass without implementing the details in the other layers. Also, since outer layers depend on inner layers, especially the core entities, you must have some entities in order to test the services (use cases, interactors). These are just DTOs. If you have business policy logic in the core, you can test it separately (a sketch of this idea follows these comments).
    – oren
    Commented Apr 12, 2022 at 8:39
  • TDD does not magically make some architecture emerge. TDD is a low-level implementation technique, not a high level design technique.
    – Doc Brown
    Commented Apr 13, 2022 at 10:36
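
A minimal sketch of that last point (all names are invented): the use case depends only on the entity and an abstract notion of a repository, so the test can pass in a trivial in-memory fake instead of a real database.

    class Invoice:  # core entity -- here little more than a DTO
        def __init__(self, customer_id: str, amount: float):
            self.customer_id = customer_id
            self.amount = amount


    class CreateInvoice:  # the use case / interactor under test
        def __init__(self, repository):
            self.repository = repository

        def execute(self, customer_id: str, amount: float) -> Invoice:
            if amount <= 0:
                raise ValueError("an invoice must have a positive amount")
            invoice = Invoice(customer_id, amount)
            self.repository.save(invoice)
            return invoice


    class FakeInvoiceRepository:  # test double standing in for the persistence layer
        def __init__(self):
            self.saved = []

        def save(self, invoice: Invoice) -> None:
            self.saved.append(invoice)


    def test_creating_an_invoice_stores_it():
        repository = FakeInvoiceRepository()
        CreateInvoice(repository).execute("c-42", 100.0)
        assert len(repository.saved) == 1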

3 Answers

5

I don't really know if TDD has to fit into a clean architecture or let the clean architecture emerge from the TDD

If an architecture were to emerge naturally, we wouldn't be calling it an architecture. Pretty much inherently, an architecture is a conscious implementation of a structure that facilitates the core goal.

Regardless of which architecture you use, you'd be building your application component by component, right? Even if, for example, your PersonService isn't as fleshed out in v0.1 as it will be in the future v1.0, it is still a component with observable behavior, and therefore you can write tests to confirm that it works as you expect it to. As the service gets more fleshed out in the future, so do its tests.

Just to give some context, imagine you start a web app project from scratch. What should you test? What should be the first test that you would do, an interactor test? A domain entity test?

You've skipped the step of actually having something that needs to be tested. Somewhat obviously, that is inevitably required in order to write tests for it.

Different testing strategies exist. You could develop your component and then write tests for it, or you could develop your component's interface, write tests to confirm its (as yet nonexistent) behavior (= red phase), and then write its implementation until all the tests pass (= green phase).

Edit: TDD specifically describes writing tests before your implementation. The previous paragraph was intended to point out that the core of this answer is applicable in general, not just for TDD.
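
As a rough sketch of that red phase, reusing the PersonService example from above (the rule it checks is made up for illustration; the green phase then consists of replacing the NotImplementedError bodies until the test passes):

    import unittest


    class PersonService:
        """The interface we decided on first; the behaviour does not exist yet."""

        def register(self, name: str) -> None:
            raise NotImplementedError  # to be implemented in the green phase

        def count(self) -> int:
            raise NotImplementedError


    class PersonServiceTests(unittest.TestCase):
        def test_registering_a_person_increases_the_count(self):
            service = PersonService()
            service.register("Ada")  # fails (red) until the implementation exists
            self.assertEqual(service.count(), 1)


    if __name__ == "__main__":
        unittest.main()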

Whichever testing strategy you take, it always starts with knowing what component you're going to create (be it the interface or already its implementation). This is a necessary precursor to figure out your actual test suite.

Without that step, your question doesn't really make sense. You're like a painter holding a pot of paint but not having any walls that need painting, asking if you can make a wall appear by waving your wet brush in the air where you want there to be a wall.

Should your entities emerge from tdding the interactors? Do I have to know upfront if what I am testing is a use case or an entity?

You're conflating your testing strategy with your actual requirements and (technical) analysis. These are two very different things and one does not inherently also provide the other.

Some projects may organically grow their requirements as they go; other projects will receive analyzed requirements feature by feature; and other projects might receive a fully fleshed out analysis that describes the entire requirements package. None of this has any impact on what TDD is and what it prescribes.

Sure, if you receive the entire analysis in a single go, you could theoretically write your entire test suite from the get go, but this is a discussion on the size of your tasks rather than the development process that you follow in a given task (regardless of its size).

In short, TDD is not the dogma of "if you test it, it will come".

  • First of all, thank you; everything you have said makes a lot of sense and has helped me. So you really do have to know which component you are going to test. I mean, you know in advance how the component should behave, and based on that you write the small tests to satisfy the expected behavior of that component. The way you want that component to behave is part of the architecture; it's your decision. For example, I expect domain entities to have certain validation logic according to "x" business rules, and that's why I'm going to write a test class to verify that the entity behaves as I expect. Commented Apr 12, 2022 at 11:48
  • @JordiPagès You are 100% on the money there.
    – Flater
    Commented Apr 12, 2022 at 12:46
0

I don't really know if TDD has to fit into a clean architecture or let the clean architecture emerge from the TDD (maybe in the refactor phase).

As a rule, the design of the underlying implementation happens during the "refactoring" phase, when we are actually designing the pieces that lie behind the interface.

BUT choosing where to start testing... that's something of an art.

If you review Growing Object Oriented Software, Guided By Tests, you'll find that Pryce, Freeman et al. start from the "outside". So in that case, I'd expect to see a test of the web app itself: an HTTP or REST client outside of the app process itself, verifying that you get the correct response.
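
A sketch of what such an outside test might look like, assuming the app is (or will be) reachable at a local URL; the endpoint and expected response are invented for illustration:

    import unittest
    import urllib.request


    class GreetingEndToEndTest(unittest.TestCase):
        def test_greeting_endpoint_returns_a_personalised_greeting(self):
            # Talks to the running app from the outside, over HTTP, and only
            # checks observable behaviour; it knows nothing about the internals.
            with urllib.request.urlopen("http://localhost:8000/greeting?name=Ada") as response:
                self.assertEqual(response.status, 200)
                self.assertIn(b"Hello, Ada", response.read())


    if __name__ == "__main__":
        unittest.main()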

In contrast, if you look at the original bowling exercise, you'll see that Koss and Martin start from the other direction:

Shall we start at the end of the dependency chain and work backwards? That will make testing easier

They begin with a guess as to how the design will eventually be decomposed, choose a promising leaf in the graph, and get to work. As a rule, the starting point is to write tests that evaluate a piece of the logic, rather than the app as a whole.
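
A sketch of that inside-out starting point, in the spirit of the bowling kata: a leaf of the guessed design (no web, no database), with only the first couple of tests written; spares and strikes would drive the design further.

    import unittest


    class Game:
        def __init__(self):
            self._rolls = []

        def roll(self, pins: int) -> None:
            self._rolls.append(pins)

        def score(self) -> int:
            return sum(self._rolls)  # just enough to pass the tests written so far


    class GameTest(unittest.TestCase):
        def test_gutter_game_scores_zero(self):
            game = Game()
            for _ in range(20):
                game.roll(0)
            self.assertEqual(game.score(), 0)

        def test_all_ones_scores_twenty(self):
            game = Game()
            for _ in range(20):
                game.roll(1)
            self.assertEqual(game.score(), 20)


    if __name__ == "__main__":
        unittest.main()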

Both approaches are "TDD", and both approaches work; they have somewhat different trade-offs.


As far as I can tell, there's nothing about The Dependency Rule that requires one starting approach or the other.

Beck wrote:

TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap

Controlling that gap requires continual adjustment; we expect that, over the lifetime of a project, the design will change (because we learn better ways to think about the elements of our design), and therefore we may need different test designs than we needed at the beginning.

0

Start with an interesting behavior.

That's what's worth testing. Write a test that proves you don't yet have this interesting behavior. Now write code that provides this behavior and makes the test pass.

Now you have behavior code and a test. What you don't have is a demo. For the demo, you write infrastructure code that will show off the interesting behavior when the boss launches the app. This infrastructure demo code will hopefully be uninteresting code that doesn't need a test. It just provides the behavior code with what it was getting from the test, plus a way to show its results.
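
A small sketch of that split (the discount rule and names are invented): the behaviour has a test, while the demo wiring is trivial glue that just feeds the behaviour its input and prints the result.

    # The interesting behaviour, and the test that proves it exists.
    def discounted_price(price: float, loyalty_years: int) -> float:
        return price * 0.9 if loyalty_years >= 3 else price


    def test_loyal_customers_get_ten_percent_off():
        assert discounted_price(100.0, loyalty_years=3) == 90.0


    def test_new_customers_pay_full_price():
        assert discounted_price(100.0, loyalty_years=1) == 100.0


    # Uninteresting infrastructure/demo code: no test, it only shows the behaviour off.
    if __name__ == "__main__":
        years = int(input("Years as a customer: "))
        print(discounted_price(100.0, loyalty_years=years))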

Consider architecture when you have multiple interesting behaviors that need to be integrated together. That's when it starts to become important. You need the layers of the architecture to play nicely with the multiple behaviors. As this settles in, some completed behaviors might need adjusting to work with the architecture you settle on. Thankfully, as you refactor, you have tests that prove you didn't break anything.

Do it this way and it's easy to avoid adding unneeded complexity, because you aren't blindly following a book. You're adding stuff when you can see that it's useful. Avoid deciding you're going to have stuff before you're sure why you need it. Don't decide not to have it either. Just keep putting off decisions by not writing code that makes design decisions for you.
