Liskov substitution principle (LSP) – Architectural Principles

The Liskov Substitution Principle (LSP) states that in a program, if we replace an instance of a superclass (supertype) with an instance of a subclass (subtype), the program should not break or behave unexpectedly.

Imagine we have a base class called Bird with a Fly method, and we add the Eagle and Penguin subclasses. Since a penguin can’t fly, replacing an instance of the Bird class with an instance of the Penguin subclass might cause problems because the program expects all birds to be able to fly.

So, according to the LSP, our subclasses should behave so the program can still work correctly, even if it doesn’t know which subclass it’s using, preserving system stability. Here’s a minimal sketch of that violation (the Bird, Eagle, and Penguin names come from the example above; the member implementations are assumptions):
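
public abstract class Bird
{
    public virtual void Fly()
        => Console.WriteLine("Flying!");
}
public class Eagle : Bird { }
public class Penguin : Bird
{
    // Violates the LSP: code written against Bird expects Fly to work.
    public override void Fly()
        => throw new NotSupportedException("Penguins can't fly.");
}

Any code written against the Bird class, such as a helper that calls bird.Fly() on every Bird it receives, now fails unexpectedly when handed a Penguin. Before moving on with the LSP, let’s look at covariance and contravariance.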

Covariance and contravariance

We won’t go too deep into this, so we don’t move too far away from the LSP, but since the formal definition mentions them, we must understand these concepts at least minimally.

Covariance and contravariance represent specific polymorphic scenarios. They allow reference types to be converted into other types implicitly. They apply to generic type arguments, delegates, and array types. Chances are you will never need to remember this, as most of it is implicit, yet here’s an overview:

  • Covariance (out) enables us to use a more derived type (a subtype) instead of the supertype. Covariance is usually applicable to method return types. For instance, if a base class method returns an instance of a class, the equivalent method of a derived class can return an instance of a subclass.
  • Contravariance (in) is the reverse situation. It allows a less derived type (a supertype) to be used instead of the subtype. Contravariance is usually applicable to method argument types. If a method of a base class accepts a parameter of a particular class, the equivalent method of a derived class can accept a parameter of a superclass.

Let’s use some code to understand this more, starting with the model we are using:

public record class Weapon { }
public record class Sword : Weapon { }
public record class TwoHandedSword : Sword { }

This is a simple class hierarchy: the TwoHandedSword class inherits from the Sword class, and the Sword class inherits from the Weapon class.
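
Before the interface-based examples, here’s a quick sketch of both concepts using the built-in Func and Action delegates (this snippet is an addition that assumes the types above):

// Covariance (out): a delegate that returns a Sword can stand in
// for a delegate that returns a Weapon.
Func<Sword> getSword = () => new Sword();
Func<Weapon> getWeapon = getSword;

// Contravariance (in): a delegate that accepts any Weapon can stand
// in for a delegate that accepts only Swords.
Action<Weapon> useWeapon = weapon => { };
Action<Sword> useSword = useWeapon;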

Covariance

To demo covariance, we leverage the following generic interface:

public interface ICovariant<out T>
{
    T Get();
}

In C#, the out modifier explicitly specifies that the generic parameter T is covariant. Covariance applies to return types, hence the Get method that returns the generic type T.

Before testing this out, we need an implementation. Here’s a barebones one:

public class SwordGetter : ICovariant<Sword>
{
    private static readonly Sword _instance = new();
    public Sword Get() => _instance;
}

The generic argument, which fills the T parameter, is of type Sword, a subclass of Weapon. Since covariance means you can return (output) an instance of a subtype as its supertype, using the Sword subtype allows exploring this with the Weapon supertype. Here’s the xUnit fact that demonstrates covariance:

[Fact]
public void Generic_Covariance_tests()
{
    ICovariant<Sword> swordGetter = new SwordGetter();
    ICovariant<Weapon> weaponGetter = swordGetter;
    Assert.Same(swordGetter, weaponGetter);
    Sword sword = swordGetter.Get();
    Weapon weapon = weaponGetter.Get();
    var isSwordASword = Assert.IsType<Sword>(sword);
    var isWeaponASword = Assert.IsType<Sword>(weapon);
    Assert.NotNull(isSwordASword);
    Assert.NotNull(isWeaponASword);
}

The ICovariant<Weapon> weaponGetter = swordGetter; line represents covariance, showing that we can implicitly convert the ICovariant<Sword> subtype to the ICovariant<Weapon> supertype.

The code after that showcases what happens with that polymorphic change. For example, the Get method of the weaponGetter object returns a Weapon type, not a Sword, even if the underlying instance is a SwordGetter object. However, that Weapon is, in fact, a Sword, as the assertions demonstrate.

Next, let’s explore contravariance.
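
The book’s full contravariance sample is not part of this excerpt, so here is a minimal sketch that mirrors the covariance example (the IContravariant and WeaponSetter names are assumptions):

public interface IContravariant<in T>
{
    void Set(T value);
}
public class WeaponSetter : IContravariant<Weapon>
{
    private Weapon? _weapon;
    public void Set(Weapon weapon) => _weapon = weapon;
}

[Fact]
public void Generic_Contravariance_tests()
{
    IContravariant<Weapon> weaponSetter = new WeaponSetter();
    // Contravariance: the supertype setter is implicitly convertible
    // to a setter of the more derived Sword type.
    IContravariant<Sword> swordSetter = weaponSetter;
    Assert.Same(weaponSetter, swordSetter);
    swordSetter.Set(new TwoHandedSword());
}

Because WeaponSetter accepts (inputs) any Weapon, it can safely be used wherever an IContravariant<Sword> is expected.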

Open/Closed principle (OCP) – Architectural Principles-2

Now the EntityService class is composed of an EntityRepository instance, and there is no more inheritance. However, the two classes are still tightly coupled, and it is impossible to change the behavior of the EntityService this way without changing its code.

To fix this last issue, we can inject an EntityRepository instance into the class constructor, where we set our private field like this:

namespace OCP.DependencyInjection;
public class EntityService
{
    private readonly EntityRepository _repository;
    public EntityService(EntityRepository repository)
    {
        _repository = repository;
    }
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

With the preceding change, we broke the tight coupling between the EntityService and the EntityRepository classes. We can also control the behavior of the EntityService class from the outside by deciding what instance of the EntityRepository class we inject into the EntityService constructor. We could even go further by leveraging an abstraction instead of a concrete class; we explore this subsequently while covering the DIP.

As we just explored, the OCP is a simple yet super powerful principle that allows controlling an object from the outside. For example, we could create two instances of the EntityService class with different EntityRepository instances that connect to different databases. Here’s a rough example:

using OCP;
using OCP.DependencyInjection;
// Create the entity in database 1
var repository1 = new EntityRepository(/* connection string 1 */);
var service1 = new EntityService(repository1);
// Create the entity in database 2
var repository2 = new EntityRepository(/* connection string 2 */);
var service2 = new EntityService(repository2);
// Save an entity in two different databases
var entity = new Entity();
await service1.ComplexBusinessProcessAsync(entity);
await service2.ComplexBusinessProcessAsync(entity);

In the preceding code, assuming we implemented the EntityRepository class and configured repository1 and repository2 differently, the result of executing the ComplexBusinessProcessAsync method on service1 and service2 would create the entity in two different databases. The behavior change between the two instances happened without changing the code of the EntityService class; composition: 1, inheritance: 0.
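
As a teaser for the abstraction mentioned above (the IEntityRepository name is an assumption; we develop this idea fully while covering the DIP), the service could depend on an interface instead of the concrete class:

public interface IEntityRepository
{
    Task CreateAsync(Entity entity);
}
public class EntityService
{
    private readonly IEntityRepository _repository;
    public EntityService(IEntityRepository repository)
    {
        _repository = repository;
    }
    // ComplexBusinessProcessAsync stays the same as before.
}

With the interface in place, any implementation, including test doubles, can be injected without touching the EntityService class.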

We explore the Strategy pattern—the best way of implementing the OCP—in Chapter 5, Strategy, Abstract Factory, and Singleton. We revisit that pattern and also learn to assemble our program’s well-designed pieces and sew them together using dependency injection in Chapter 6, Dependency Injection.

Next, we explore the principle we can perceive as the most complex of the five, yet the one we will use the least.

Open/Closed principle (OCP) – Architectural Principles-1

Let’s start this section with a quote from Bertrand Meyer, the person who first wrote the term open/closed principle in 1988:

“Software entities (classes, modules, functions, and so on) should be open for extension but closed for modification.”

OK, but what does that mean? It means you should be able to change a class’s behavior from the outside without altering its code.

As a bit of history, the first appearance of the OCP in 1988 referred to inheritance, and OOP has evolved a lot since then. Inheritance is still useful, but you should be careful as it is easily misused. Inheritance creates direct coupling between classes. You should, most of the time, opt for composition over inheritance.

“Composition over inheritance” is a principle that suggests it’s better to build objects by combining simple, flexible parts (composition) rather than by inheriting properties from a larger, more complex object (inheritance).

Think of it like building with LEGO® blocks. It’s easier to build and adjust your creation if you put together small blocks (composition) rather than trying to alter a big, single block that already has a fixed shape (inheritance).

Next, we explore three versions of a business process to illustrate the OCP.

Project – Open Close

First, we look at the Entity and EntityRepository classes used in the code samples:

public record class Entity();
public class EntityRepository
{
    public virtual Task CreateAsync(Entity entity)
        => throw new NotImplementedException();
}

The Entity class represents a simple fictional entity with no properties; consider it anything you’d like. The EntityRepository class has a single CreateAsync method that inserts an instance of an Entity into a database (if it were implemented).

The code sample has few implementation details because they are irrelevant to understanding the OCP. Please assume we implemented the CreateAsync logic using your favorite database.

For the rest of the sample, we refactor the EntityService class, beginning with a version that inherits the EntityRepository class, breaking the OCP:

namespace OCP.NoComposability;
public class EntityService : EntityRepository
{
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await CreateAsync(entity);
        // Do more complex things here
    }
}

As the namespace implies, the preceding EntityService class offers no composability. Moreover, we tightly coupled it with the EntityRepository class. Since we just covered the composition over inheritance principle, we can quickly isolate the problem: inheritance.

As the next step to fix this mess, let’s extract a private _repository field to hold an EntityRepository instance instead:

namespace OCP.Composability;
public class EntityService
{
    private readonly EntityRepository _repository
        = new EntityRepository();
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

Single responsibility principle (SRP) – Architectural Principles-2

The ProductRepository class mixes public and private product logic. From that API alone, there are many ways an error could lead to leaking restricted data to public users, especially since the class exposes the private logic to public-facing consumers; someone else could make a mistake. The original class isn’t shown in this excerpt, but a minimal sketch of such a mixed API might look like this (the member names are assumptions):
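
namespace BeforeSRP;
public class ProductRepository
{
    // Public-facing reads
    public ValueTask<IEnumerable<Product>> GetAllPublicAsync()
        => throw new NotImplementedException();
    // Restricted reads and mutations exposed on the same API
    public ValueTask<IEnumerable<Product>> GetAllPrivateAsync()
        => throw new NotImplementedException();
    public ValueTask CreateAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask DeleteAsync(Product product)
        => throw new NotImplementedException();
}

We are ready to rethink the class now that we have identified the responsibilities. We know it has two responsibilities, so breaking the class into two sounds like an excellent first step. Let’s start by extracting a public API: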

namespace AfterSRP;
public class PublicProductReader
{
    public ValueTask<IEnumerable<Product>> GetAllAsync()
        => throw new NotImplementedException();
    public ValueTask<Product> GetOneAsync(int productId)
        => throw new NotImplementedException();
}

The PublicProductReader class now contains only two methods: GetAllAsync and GetOneAsync. When reading the name of the class and its methods, it is clear that the class handles only public product data. By lowering the complexity of the class, we made it easier to understand.

Next, let’s do the same for the private products:

namespace AfterSRP;
public class PrivateProductRepository
{
    public ValueTask<IEnumerable<Product>> GetAllAsync()
        => throw new NotImplementedException();
    public ValueTask<Product> GetOneAsync(int productId)
        => throw new NotImplementedException();
    public ValueTask CreateAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask DeleteAsync(Product product)
        => throw new NotImplementedException();
    public ValueTask UpdateAsync(Product product)
        => throw new NotImplementedException();
}

The PrivateProductRepository class follows the same pattern. It includes the read methods, named the same as in the PublicProductReader class, and the mutation methods that only users with private access can use.

We improved our code’s readability, flexibility, and security by splitting the initial class into two. However, one thing to be careful about with the SRP is not to over-separate classes. The more classes in a system, the more complex assembling the system can become, and the harder it can be to debug and follow the execution paths. On the other hand, many well-separated responsibilities should lead to a better, more testable system.

It is tough to define one hard rule that defines “one reason” or “a single responsibility”. However, as a rule of thumb, aim at packing a cohesive set of functionalities in a single class that revolves around its responsibility. You should strip out any excess logic and add missing pieces.

A good indicator of an SRP violation is when you don’t know how to name an element, which points towards the fact that the element should not reside there, that you should extract it, or that you should split it into multiple smaller pieces.

Using precise names for variables, methods, classes, and other elements is very important and should not be overlooked.

Another good indicator is when a method becomes too big, perhaps containing many if statements or loops. In that case, you can split that method into multiple smaller methods, classes, or any other construct that suits your requirements. That should make the code easier to read and make the initial method’s body cleaner. It often also helps you get rid of useless comments and improve testability. Here’s a small hypothetical sketch of such a split (the order-processing names are assumptions for illustration):
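
public record class Order(decimal[] LinePrices);
public class OrderProcessor
{
    // After the split, the method body reads like a summary of the steps.
    public async Task ProcessOrderAsync(Order order)
    {
        ValidateOrder(order);
        var total = CalculateTotal(order);
        await SaveOrderAsync(order, total);
    }
    private static void ValidateOrder(Order order)
    {
        if (order.LinePrices.Length == 0)
            throw new ArgumentException("The order is empty.", nameof(order));
    }
    private static decimal CalculateTotal(Order order)
        => order.LinePrices.Sum();
    private static Task SaveOrderAsync(Order order, decimal total)
        => Task.CompletedTask; // persistence omitted
}

Next, we explore how to change behaviors without modifying code, but before that, let’s look at interfaces.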

Keep it simple, stupid (KISS) – Architectural Principles

This is another straightforward principle, yet one of the most important. Like in the real world, the more moving pieces there are, the more chances something breaks. This principle is a design philosophy that advocates for simplicity in design. It emphasizes the idea that systems work best when they are kept simple rather than made complex.

Striving for simplicity might involve writing shorter methods or functions, minimizing the number of parameters, avoiding over-architecting, and choosing the simplest solution to solve a problem.

Adding interfaces, abstraction layers, and complex object hierarchies adds complexity, but do the added benefits outweigh the added complexity? If so, they are worth it; otherwise, they are not.

As a guiding principle, when you can write the same program with less complexity, do it. This is also why predicting future requirements can often prove detrimental, as it may inadvertently inject unnecessary complexity into your codebase for features that might never materialize.

We study design patterns in this book and design systems using them. We learn how to apply a high degree of engineering to our code, which can lead to over-engineering if done in the wrong context. Towards the end of the book, we circle back to the KISS principle when exploring the vertical slice architecture and request-endpoint-response (REPR) patterns.

Next, we delve into the SOLID principles, which are the key to flexible software design.

The SOLID principles

SOLID is an acronym representing five principles that extend the basic OOP concepts of Abstraction, Encapsulation, Inheritance, and Polymorphism. They add more details about what to do and how to do it, guiding developers toward more robust and flexible designs.

It is crucial to remember that these are just guiding principles, not rules that you must follow, no matter what. Think about what makes sense for your specific project. If you’re building a small tool, it might be acceptable not to follow these principles as strictly as you would for a crucial business application. In the case of business-critical applications, it might be a good idea to stick to them more closely. Still, it’s usually a smart move to follow them, no matter the size of your app. That’s why we’re discussing them before diving into design patterns.

The SOLID acronym represents the following:

  • Single responsibility principle
  • Open/Closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

By following these principles, your systems should become easier to test and maintain.

Important testing principles – Automated Testing

Finally, we can create a dedicated class that instantiates WebApplicationFactory manually. It leverages the other workarounds but makes the test cases more readable. By encapsulating the setup of the test application in a class, you improve reusability and reduce maintenance costs in most cases.

First, we need to change the Program class visibility by adding the following line to the Program.cs file:

public partial class Program { }

Now that we can access the Program class without the need to allow internal visibility to our test project, we can create our test application like this:

namespace MyMinimalApiApp;
public class MyTestApplication : WebApplicationFactory<Program> {}

Finally, we can reuse the same code to test our program but instantiate MyTestApplication instead of WebApplicationFactory<Program>, as shown in the following code:

namespace MyMinimalApiApp;
public class MyTestApplicationTest
{
    public class Get
    {
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Arrange
            await using var app = new MyTestApplication();
            var httpClient = app.CreateClient();
            // Act
            var result = await httpClient.GetAsync("/");
            // Assert
            Assert.Equal(HttpStatusCode.OK, result.StatusCode);
        }
    }
}

You can also leverage fixtures, but for the sake of simplicity, I decided to show you how to instantiate our new test application manually. If you prefer fixtures, a minimal sketch might look like this (an addition to the book’s sample; xUnit creates and disposes of the fixture for us):
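
namespace MyMinimalApiApp;
public class MyTestApplicationFixtureTest
    : IClassFixture<MyTestApplication>
{
    private readonly HttpClient _httpClient;
    public MyTestApplicationFixtureTest(MyTestApplication app)
    {
        _httpClient = app.CreateClient();
    }
    [Fact]
    public async Task Should_respond_a_status_200_OK()
    {
        // Act
        var result = await _httpClient.GetAsync("/");
        // Assert
        Assert.Equal(HttpStatusCode.OK, result.StatusCode);
    }
}

And that’s it. We have covered multiple ways to work around integration testing minimal APIs simplistically and elegantly. Next, we explore a few testing principles before moving to architectural principles in the next chapter.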

Important testing principles

One essential thing to remember when writing tests is to test use cases, not the code itself; we are testing features’ correctness, not code correctness. Of course, if the expected outcome of a feature is correct, that also means the codebase is correct. However, it is not always true the other way around; correct code may yield an incorrect outcome. Also, remember that code costs money to write, while features deliver value.

To help with that, test requirements should revolve around inputs and outputs. When specific values go into your subject under test, you expect particular values to come out. Whether you are testing a simple Add method where the ins are two or more numbers and the out is the sum of those numbers, or a more complex feature where the ins come from a form and the out is the record getting persisted in a database, most of the time, we are testing that inputs produce an output or an outcome.

Another concept is to divide those units into queries and commands. No matter how you organize your code, from a simple single-file application to a microservices architecture-based Netflix clone, all simple or compounded operations are queries or commands. Thinking about a system this way should help you test the ins and outs. We discuss queries and commands in several chapters, so keep reading to learn more.

Now that we have laid this out, what if a unit must perform multiple operations, such as reading from a database and then sending multiple commands? You can create and test multiple smaller units (individual operations) and another unit that orchestrates those building blocks, allowing you to test each piece in isolation. We explore how to achieve this throughout the book.

In a nutshell, when writing automated tests:

  • In the case of a query, we assert the output of the unit undergoing testing based on its input parameters.
  • In the case of a command, we assert the outcome of the unit undergoing testing based on its input parameters, as the sketch after this list illustrates.
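
Here’s a minimal xUnit sketch of both cases (the Calculator and UserRegistrar classes are hypothetical illustrations):

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}
public class UserRegistrar
{
    public List<string> Emails { get; } = new();
    public void Register(string email) => Emails.Add(email);
}
public class QueryAndCommandTests
{
    [Fact]
    public void Query_assert_the_output_based_on_inputs()
    {
        var sum = Calculator.Add(2, 3);
        Assert.Equal(5, sum);
    }
    [Fact]
    public void Command_assert_the_outcome_based_on_inputs()
    {
        var registrar = new UserRegistrar();
        registrar.Register("jane@example.com");
        Assert.Contains("jane@example.com", registrar.Emails);
    }
}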

We explore numerous techniques throughout the book to help you achieve that level of separation, starting with architectural principles in the next chapter.

Summary

This chapter covered automated testing, such as unit and integration tests. We also briefly covered end-to-end tests, but covering those in only a few pages is impossible. Nonetheless, what we learned about writing integration tests can also be used for end-to-end testing, especially in the REST API space.

We explored different testing approaches from a bird’s-eye view, tackled technical debt, and explored multiple testing techniques like black-box, white-box, and grey-box testing. We also peeked at a few formal ways to choose the values to test, like equivalence partitioning and boundary value analysis.

We then looked at xUnit, the testing framework used throughout the book, and a way of organizing tests. We explored ways to pick the correct type of test and some guidelines about choosing the right quantity for each kind of test. Then we saw how easy it is to test our ASP.NET Core web applications by running them in memory. Finally, we explored high-level concepts that should guide you in writing testable, flexible, and reliable programs.

Now that we have talked about testing, we are ready to explore a few architectural principles to help us increase programs’ testability. Those are a crucial part of modern software engineering and go hand in hand with automated testing.

Minimal hosting – Automated Testing

Unfortunately, we must use a workaround to make the Program class discoverable when using minimal hosting. Let’s explore a few workarounds that leverage minimal APIs, allowing you to pick the one you prefer.

First workaround

The first workaround is to use any other class in the assembly as the TEntryPoint of WebApplicationFactory<TEntryPoint> instead of the Program or Startup class. This makes what WebApplicationFactory does a little less explicit, but that’s all. Since I tend to prefer readable code, I do not recommend this.
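
For example (a sketch, where MyPublicClass stands in for any public type declared in the web assembly; the name is hypothetical):

namespace MyMinimalApiApp;
// WebApplicationFactory only uses TEntryPoint to locate the assembly
// hosting the application, so any public type from it works.
public class FirstWorkaroundTest
    : IClassFixture<WebApplicationFactory<MyPublicClass>>
{
    private readonly HttpClient _httpClient;
    public FirstWorkaroundTest(
        WebApplicationFactory<MyPublicClass> webApplicationFactory)
    {
        _httpClient = webApplicationFactory.CreateClient();
    }
}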

Second workaround

The second workaround is to add a line at the bottom of the Program.cs file (or anywhere else in the project) to change the autogenerated Program class visibility from internal to public. Here is the complete Program.cs file with that added line (the last line):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/", () => "Hello World!");
app.Run();
public partial class Program { }

Then, the test cases are very similar to the ones of the classic web application explored previously. The only difference is the program itself, as the two programs don’t do the same thing.

namespace MyMinimalApiApp;
public class ProgramTest : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _httpClient;
    public ProgramTest(
        WebApplicationFactory<Program> webApplicationFactory)
    {
        _httpClient = webApplicationFactory.CreateClient();
    }
    public class Get : ProgramTest
    {
        public Get(WebApplicationFactory<Program> webApplicationFactory)
            : base(webApplicationFactory) { }
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Act
            var result = await _httpClient.GetAsync("/");
            // Assert
            Assert.Equal(HttpStatusCode.OK, result.StatusCode);
        }
        [Fact]
        public async Task Should_respond_hello_world()
        {
            // Act
            var result = await _httpClient.GetAsync("/");
            // Assert
            var contentText = await result.Content.ReadAsStringAsync();
            Assert.Equal("Hello World!", contentText);
        }
    }
}

The only change is the expected result, as the endpoint returns the text/plain string Hello World! instead of a collection of strings serialized as JSON. The test cases would be identical if the two endpoints produced the same result.

Third workaround

The third workaround is to instantiate WebApplicationFactory manually instead of leveraging a fixture. We can use the Program class, which requires changing its visibility by adding the following line to the Program.cs file:

public partial class Program { }

However, instead of injecting the instance using the IClassFixture interface, we instantiate the factory manually. To ensure we dispose of the WebApplicationFactory instance, we also implement the IAsyncDisposable interface.

Here’s the complete example, which is very similar to the previous workaround:

namespace MyMinimalApiApp;
public class ProgramTestWithoutFixture : IAsyncDisposable
{
    private readonly WebApplicationFactory<Program> _webApplicationFactory;
    private readonly HttpClient _httpClient;
    public ProgramTestWithoutFixture()
    {
        _webApplicationFactory = new WebApplicationFactory<Program>();
        _httpClient = _webApplicationFactory.CreateClient();
    }
    public ValueTask DisposeAsync()
    {
        return ((IAsyncDisposable)_webApplicationFactory)
            .DisposeAsync();
    }
    // Omitted nested Get class
}

I omitted the test cases in the preceding code block because they are the same as the previous workarounds. The full source code is available on GitHub: https://adpg.link/vzkr.

Using class fixtures is more performant since the factory and the server get created only once per test class instead of being recreated for every test method.

Organizing your tests – Automated Testing

There are many ways of organizing test projects inside a solution. I tend to create one unit test project for each project in the solution and one or more integration test projects.

A unit test is directly related to a single unit of code, whether it’s a method or a class. It is straightforward to associate a unit test project with its respective code project (assembly), leading to a one-to-one relationship. One unit test project per assembly makes the tests portable and easier to navigate, even more so when the solution grows.

If you have a preferred way to organize yours that differs from what we are doing in the book, by all means, use that approach instead.

Integration tests, on the other hand, can span multiple projects, so having a single rule that fits all scenarios is challenging. One integration test project per solution is often enough, but sometimes we may need more than one, depending on the context.

I recommend starting with one integration test project and adding more as needed during development instead of overthinking it before getting started. Trust your judgment; you can always change the structure as your project evolves.

Folder-wise, at the solution level, creating the application and its related libraries in an src directory helps isolate the actual solution code from the test projects created under a test directory, like this:

Figure 2.7: The Automated Testing Solution Explorer, displaying how the projects are organized

That’s a well-known and effective way of organizing a solution in the .NET world.

Sometimes, doing that is not possible or not desirable. One such use case would be multiple microservices written under a single solution. In that case, you might want the tests to live closer to your microservices and not split them between src and test folders. So you could organize your solution by microservice instead, with one directory per microservice containing all of its projects, including tests.

Let’s now dig deeper into organizing unit tests.

Unit tests

How you organize your test projects can make the difference between hunting for your tests and finding them easily. Let’s look at the different aspects, from the namespace to the test code itself.

Namespace

I find it convenient to create unit tests in the same namespace as the subject under test. That helps get tests and code aligned without adding any additional using statements. To make it easier when creating files, you can change the default namespace used by Visual Studio when creating a new class in your test project by adding <RootNamespace>[Project under test namespace]</RootNamespace> to a PropertyGroup of the test project file (*.csproj), like this:

<PropertyGroup>
  …
  <RootNamespace>MyApp</RootNamespace>
</PropertyGroup>

Theories – Automated Testing-2

The third data attribute feeds three more sets of data to the test method. However, that data originates from the GetData method of the ExternalData class, sending 10 as an argument during the execution (the start parameter). To do that, we must specify the MemberType instance where the method is located so xUnit knows where to look. In this case, we pass the argument 10 as the second parameter of the MemberData constructor. However, in other cases, you can pass zero or more arguments there.

Finally, we are doing the same for the ExternalData.TypedData property, which is represented by the [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))] attribute. Once again, the only difference is that the property is defined using TheoryData instead of IEnumerable<object[]>, which makes its intent clearer.

When running the tests, the data provided by the [MemberData] attributes is combined, yielding the following result in the Test Explorer:

Figure 2.5: Member data theory test results

These are only a few examples of what we can do with the [MemberData] attribute.

I understand that’s a lot of condensed information, but the goal is to cover just enough to get you started. I don’t expect you to become an expert in xUnit by reading this chapter.

Last but not least, the [ClassData] attribute gets its data from a class implementing IEnumerable<object[]> or inheriting from TheoryData<…>. The concept is the same as the other two. Here is an example:

public class ClassDataTest
{
    [Theory]
    [ClassData(typeof(TheoryDataClass))]
    [ClassData(typeof(TheoryTypedDataClass))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual)
        {
            Assert.Equal(value1, value2);
        }
        else
        {
            Assert.NotEqual(value1, value2);
        }
    }
    public class TheoryDataClass : IEnumerable<object[]>
    {
        public IEnumerator<object[]> GetEnumerator()
        {
            yield return new object[] { 1, 2, false };
            yield return new object[] { 2, 2, true };
            yield return new object[] { 3, 3, true };
        }
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }
    public class TheoryTypedDataClass : TheoryData<int, int, bool>
    {
        public TheoryTypedDataClass()
        {
            Add(102, 104, false);
        }
    }
}

These are very similar to [MemberData], but we point to a type instead of pointing to a member.

In TheoryDataClass, implementing the IEnumerable<object[]> interface makes it easy to yield return the results. On the other hand, in the TheoryTypedDataClass class, by inheriting TheoryData, we can leverage a list-like Add method. Once again, I find inheriting from TheoryData more explicit, but either way works with xUnit. You have many options, so choose the best one for your use case.

Here is the result in the Test Explorer, which is very similar to the other attributes:

Figure 2.6: Test Explorer

That’s it for the theories—next, a few last words before organizing our tests.

Theories – Automated Testing-1

For more complex test cases, we can use theories. A theory contains two parts:

  • A [Theory] attribute that marks the method as a theory.
  • At least one data attribute that allows passing data to the test method: [InlineData], [MemberData], or [ClassData].

When writing a theory, your primary constraint is ensuring that the number of values matches the parameters defined in the test method. For example, a theory with one parameter must be fed one value. We look at some examples next.

You are not limited to only one type of data attribute; you can combine as many as you need to feed a theory with the appropriate data.

The [InlineData] attribute is the most suitable for constant values or smaller sets of values. Inline data is the most straightforward way of the three because of the proximity of the test values and the test method.

Here is an example of a theory using inline data:

public class InlineDataTest
{
    [Theory]
    [InlineData(1, 1)]
    [InlineData(2, 2)]
    [InlineData(5, 5)]
    public void Should_be_equal(int value1, int value2)
    {
        Assert.Equal(value1, value2);
    }
}

That test method yields three test cases in the Test Explorer, where each can pass or fail individually. Of course, since 1 equals 1, 2 equals 2, and 5 equals 5, all three test cases are passing, as shown here:

Figure 2.4: Inline data theory test results

We can also use the [MemberData] and [ClassData] attributes to simplify the test method’s declaration when we have a large set of data to test. We can also do that when it is impossible to instantiate the data in the attribute. These attributes also let us reuse the data in multiple test methods or encapsulate the data away from the test class.

Here is a medley of examples of the [MemberData] attribute usage:

public class MemberDataTest
{
    public static IEnumerable<object[]> Data => new[]
    {
        new object[] { 1, 2, false },
        new object[] { 2, 2, true },
        new object[] { 3, 3, true },
    };
    public static TheoryData<int, int, bool> TypedData => new TheoryData<int, int, bool>
    {
        { 3, 2, false },
        { 2, 3, false },
        { 5, 5, true },
    };
    [Theory]
    [MemberData(nameof(Data))]
    [MemberData(nameof(TypedData))]
    [MemberData(nameof(ExternalData.GetData), 10, MemberType = typeof(ExternalData))]
    [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual)
        {
            Assert.Equal(value1, value2);
        }
        else
        {
            Assert.NotEqual(value1, value2);
        }
    }
    public class ExternalData
    {
        public static IEnumerable<object[]> GetData(int start) => new[]
        {
            new object[] { start, start, true },
            new object[] { start, start + 1, false },
            new object[] { start + 1, start + 1, true },
        };
        public static TheoryData<int, int, bool> TypedData => new TheoryData<int, int, bool>
        {
            { 20, 30, false },
            { 40, 50, false },
            { 50, 50, true },
        };
    }
}

The preceding test case yields 12 results. If we break it down, the code starts by loading three sets of data from the Data property by decorating the test method with the [MemberData(nameof(Data))] attribute. This is how to load data from a member of the class in which the test method is declared.

Then, the TypedData property is very similar to the Data property but replaces IEnumerable<object[]> with a TheoryData<…> class, making it more readable and type-safe. Like with the first attribute, we feed those three sets of data to the test method by decorating it with the [MemberData(nameof(TypedData))] attribute. Once again, it is part of the test class.

I strongly recommend using TheoryData<…> by default.
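
Here’s a quick illustrative sketch of why (an addition, not from the book’s sample): with TheoryData<…>, a mistyped row fails at compile time, whereas IEnumerable<object[]> only fails when the test runs.

public class TheoryDataSafetyDemo
{
    public static TheoryData<int, int, bool> Safe => new()
    {
        { 1, 2, false },
        // { 1, "two", false } // does not compile: wrong types
    };
    public static IEnumerable<object[]> Unsafe => new[]
    {
        new object[] { 1, "two", false }, // compiles, but fails at run time
    };
}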