TDD offers many benefits, yet tests can sometimes be poorly designed and need refactoring to improve their structure. The creative nature of development means that both novices and seasoned veterans can fall into these pitfalls. Recognizing these traps and knowing how to navigate them are a developer's most effective defenses.

Common test smells to watch out for

  1. Exposing internal details:

    When a test needs to access the internal details of the object being tested, it's a clear indication that encapsulation is compromised.

    • An example of this would be making properties public to check state in a test.
    • Or relying too heavily on Mock assertions that methods were called in the correct order. Tests should be ignorant of internal details; they should only care about results.
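
    As a sketch of the contrast (the `ShoppingCart` class and its members here are hypothetical), compare a test that peeks at internals with one that asserts only on results:

    // Smell: InternalItems was made public purely so the test can peek at state
    [Test]
    public void AddItem_WhenCalled_ShouldAppendToInternalList()
    {
        var cart = new ShoppingCart();
        cart.AddItem("apple", quantity: 2);
        Assert.That(cart.InternalItems.Count, Is.EqualTo(1));
    }

    // Better: assert only on the observable result
    [Test]
    public void Total_WithTwoApplesAtOneFifty_ShouldReturnThree()
    {
        var cart = new ShoppingCart();
        cart.AddItem("apple", quantity: 2);
        Assert.That(cart.Total(), Is.EqualTo(3.00m));
    }

    The second test survives any internal restructuring of the cart, as long as the total is still correct.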

  2. Passing too much information:

    If the object being tested has many parameters being passed into its constructor, it's time to break it up into several new objects.

    • A good rule of thumb is that if the object has 5 or more items being passed into the constructor, it's time to explore the single responsibility principle of SOLID and break up the object.
    • If any of the data being passed in is state for the object to operate on, try and move towards stateless objects. This means passing the state into the method that needs to operate on it, not in via the constructor.
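
    As a rough sketch of that move (the `TaxCalculator` and `Order` types are hypothetical):

    // Stateful: the order is baked in at construction time
    var statefulCalculator = new TaxCalculator(order);
    var tax = statefulCalculator.Calculate();

    // Stateless: the same instance works for any order
    var statelessCalculator = new TaxCalculator();
    tax = statelessCalculator.Calculate(order);

    The stateless version is also easier to test: there is no hidden constructor state to arrange before each call.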

  3. Tests replicating production code:

    Code should not be replicated. If you find yourself replicating production code in tests, stop and reverse; you're about to fall into a trap.
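
    For instance (the `pricing.Discount` method is hypothetical), a test that copies the production formula will pass even when the formula itself is wrong:

    // Trap: the expected value is computed with the same expression as production code
    var expected = price * 0.9m;
    Assert.That(pricing.Discount(price), Is.EqualTo(expected));

    // Better: assert against an independently known, hard-coded value
    Assert.That(pricing.Discount(100m), Is.EqualTo(90m));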

  4. Highly coupled code:

    This is often a result of neglecting to reflect and refactor after each TDD cycle.

    • It can result in fragile tests - when a test fails to run or compile due to unrelated changes to the object being tested.
    • It's often driven by poor or hasty design choices, or a lack of a reward structure that promotes cleaner, more maintainable code over short-term optimized feature production.

  5. Test code duplication:

    Often, tests need to perform similar tasks, leading to rapid code duplication which can cause maintenance problems in the future.

    • To manage duplication, use builder patterns such as factory methods and test data builders, and centralize common setup and cleanup with SetUp and TearDown attributes.
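
    A minimal test data builder might look like the following sketch (the `Customer` type is hypothetical); each test then spells out only the detail it actually cares about:

    public class CustomerBuilder
    {
        // Sensible defaults; tests override only what matters to them
        private string _name = "Jane Doe";
        private string _email = "jane@mail.com";

        public CustomerBuilder WithEmail(string email)
        {
            _email = email;
            return this;
        }

        public Customer Build() =>
            new Customer { Name = _name, Email = _email };
    }

    // In a test: the invalid email is the only visible detail
    var customer = new CustomerBuilder().WithEmail("not-an-email").Build();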

  6. Test case abuse:

    Developers often aim to minimize their workload. A useful feature for reusing a test is the test case, which allows varying the input to a single test.

    • The danger here is that an entire method becomes tested with a single test that has many test cases. When this happens, the test name no longer accurately reflects the scenario being tested. The test becomes a blunt instrument that the developer uses to batter the code in the name of efficiency.
    • Test Cases are often a result of Triangulation and Fake It green bar patterns. They exist to consolidate tests related to a single concept. They aren't meant as an efficiency tool for developers to avoid 'duplication' in code. When in doubt, be verbose and create a new test.
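
    As an NUnit sketch of the contrast (the methods under test are hypothetical):

    // Reasonable: one concept (leap-year rules), only the inputs vary
    [TestCase(2000, true)]
    [TestCase(1900, false)]
    [TestCase(2024, true)]
    public void IsLeapYear_ShouldFollowGregorianRules(int year, bool expected)
    {
        Assert.That(Dates.IsLeapYear(year), Is.EqualTo(expected));
    }

    // Abuse: unrelated scenarios hide behind one vague name
    [TestCase("", false)]       // input validation
    [TestCase("admin", true)]   // authorization
    [TestCase("locked", false)] // account lockout
    public void Login_Works(string user, bool expected) { /* ... */ }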

  7. Avoid logic in your tests:

    When writing tests, it is important to keep the test logic simple and avoid using loops and conditionals whenever possible. Using simple statements reduces the chances of introducing bugs in the tests themselves, which can lead to false positives or false negatives.

    If loops or conditionals are used in the tests, it can be difficult to determine which path the test will take, making it harder to debug when things go wrong. Additionally, using loops in tests can result in shared state between tests, which can cause unexpected behavior and make the tests harder to maintain.

    By using simple statements in tests, you can ensure that each test is focused on a specific behavior or functionality, and that it is easy to understand and maintain. This can make it easier to identify and fix issues in the codebase, leading to more reliable and efficient software development.
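
    For example (the `parser` here is hypothetical), a loop can obscure which case failed and can even skip assertions entirely:

    // Smell: when TryParse fails, the assertion is silently skipped
    foreach (var input in new[] { "1", "2", "abc" })
    {
        if (int.TryParse(input, out var n))
            Assert.That(parser.Parse(input), Is.EqualTo(n));
    }

    // Better: one plain statement per case, in its own named test
    [Test]
    public void Parse_WhenDigits_ShouldReturnNumber()
    {
        Assert.That(parser.Parse("2"), Is.EqualTo(2));
    }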

  8. Don't test multiple concerns in the same test:

    When testing a method that has multiple possible end results, it is important to write separate tests for each possible outcome. Testing each outcome separately can help you to more easily identify the source of any bugs that may arise.

    Testing each outcome separately can also help to ensure that each potential result is being handled correctly, and that there are no unexpected side effects or interactions between different outcomes. By testing each outcome separately, you can also ensure that the tests are focused and targeted, making them easier to write, read, and maintain.

    Overall, by testing each possible end result of a method separately, you can improve the reliability and accuracy of your tests, and ultimately create more robust and efficient software applications.

    Pay attention when writing "and" or "or" in your test names!
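
    A sketch of what that warning sign looks like in practice (the `Withdraw` behavior is hypothetical):

    // Smell: the "And" in the name signals two concerns in one test
    [Test]
    public void Withdraw_ShouldReduceBalance_And_LogTransaction() { /* ... */ }

    // Better: one focused test per outcome
    [Test]
    public void Withdraw_WhenFundsAvailable_ShouldReduceBalance() { /* ... */ }

    [Test]
    public void Withdraw_WhenFundsAvailable_ShouldLogTransaction() { /* ... */ }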

  9. The Leaky Abstraction Trap:

    It is very easy to misuse a mock. A prevalent misapplication of this concept results in what's known as the 'Leaky Abstraction Trap'. This situation arises, for example, when a test asserts that methods were invoked in a specific order. Such a test design makes it challenging to modify the internal structure of the system without necessitating concurrent test refactoring.

    It's a trap to think that tests need to change along with structural refactorings in your code. In fact, a test should only have two reasons to change: a bug was discovered, or the requirements changed. If you find yourself refactoring your tests in parallel with code refactorings, it's a sign that you've leaked too many implementation details into your tests, thus making them brittle.

    While using Mocks is perfectly acceptable and quite common in Test-Driven Development (TDD), relying on them almost exclusively as your primary mechanism for asserting test failure or success can lead to expensive and brittle tests. Your tests should be insulated as much as possible from the internals of what they are testing.

    An example of this 'Leaky Abstraction Trap' is using Mocks to verify that an orchestration service called all the correct dependencies in the correct order. Instead, it's better to use Stubs to set up the expected scenario, and then ensure that the correct result was achieved with little or no reliance on asserting through Mocks. The focus should be on asserting that the correct result was achieved, rather than confirming that the correct methods were called in the correct order.

    An exception to this is the use of Mocks for testing services that don't return values and interact with external systems, like the email service example given below. In such cases, using Mocks for assertions is a good strategy as trying to assert without Mocks could lead to more expensive and brittle tests.

    public class ResetPasswordUseCase
    {
        private readonly IEmailService _emailService;
    
        public ResetPasswordUseCase(IEmailService emailService)
        {
            _emailService = emailService;
        }
    
        public void Execute(string emailAddress)
        {
            var message = new EmailMessage
            {
                Subject = "Password Reset",
                MailTo = emailAddress,
                Message = "Please follow the link ..."
            };
    
            _emailService.SendMessage(message);
        }
    }
    
    public interface IEmailService
    {
        void SendMessage(EmailMessage message);
    }
    
    public class EmailMessage
    {
        public string Subject { get; set; }
        public string MailTo { get; set; }
        public string Message { get; set; }
    }
    
    [TestFixture]
    public class ResetPasswordUseCaseTest
    {
        [Test]
        public void Execute_WithEmailAddress_ShouldSendPasswordResetEmail()
        {
            // Arrange
            var resetEmail = "jane@mail.com";
            var emailService = Substitute.For<IEmailService>();
            var usecase = new ResetPasswordUseCase(emailService);
            // Act
            usecase.Execute(resetEmail);
            // Assert
            emailService.Received(1).SendMessage(Arg.Any<EmailMessage>());
        }
    }
    

    Figure 1: Good use of mocks

    Instead of calling a real email service, we create a mock email service using NSubstitute to assert that the SendMessage method was called once. We could have also inspected the argument passed to SendMessage using Arg.Is<EmailMessage>(message => message.MailTo == resetEmail) instead of Arg.Any<EmailMessage>(). This would add an extra layer of surety to our test ensuring the method was called with the correct data.