
Liskov substitution principle (LSP) – Architectural Principles

The Liskov Substitution Principle (LSP) states that in a program, if we replace an instance of a superclass (supertype) with an instance of a subclass (subtype), the program should not break or behave unexpectedly.

Imagine we have a base class called Bird with a function called Fly, and we add the Eagle and Penguin subclasses. Since a penguin can’t fly, replacing an instance of the Bird class with an instance of the Penguin subclass might cause problems because the program expects all birds to be able to fly.

So, according to the LSP, our subclasses should behave so the program can still work correctly, even if it doesn’t know which subclass it’s using, preserving system stability.

Before moving on with the LSP, let’s look at covariance and contravariance.

Covariance and contravariance

We won’t go too deep into this, so we don’t move too far away from the LSP, but since the formal definition mentions them, we need at least a minimal understanding of these concepts.

Covariance and contravariance represent specific polymorphic scenarios. They allow reference types to be converted into other types implicitly. They apply to generic type arguments, delegates, and array types. Chances are, you will never need to remember this, as most of it is implicit, yet here’s an overview:

  • Covariance (out) enables us to use a more derived type (a subtype) instead of the supertype. Covariance is usually applicable to method return types. For instance, if a base class method returns an instance of a class, the equivalent method of a derived class can return an instance of a subclass.
  • Contravariance (in) is the reverse situation. It allows a less derived type (a supertype) to be used instead of the subtype. Contravariance is usually applicable to method argument types. If a method of a base class accepts a parameter of a particular class, the equivalent method of a derived class can accept a parameter of a superclass.

Let’s use some code to understand this more, starting with the model we are using:

public record class Weapon { }
public record class Sword : Weapon { }
public record class TwoHandedSword : Sword { }

This is a simple class hierarchy: the TwoHandedSword class inherits from the Sword class, and the Sword class inherits from the Weapon class.

Covariance

To demo covariance, we leverage the following generic interface:

public interface ICovariant<out T>
{
    T Get();
}

In C#, the out modifier explicitly specifies that the generic parameter T is covariant. Covariance applies to return types, hence the Get method that returns the generic type T.

Before testing this out, we need an implementation. Here’s a barebones one:

public class SwordGetter : ICovariant<Sword>
{
    private static readonly Sword _instance = new();
    public Sword Get() => _instance;
}

The generic argument, which fills the T parameter, is of type Sword, a subclass of Weapon. Since covariance means you can return (output) an instance of a subtype as its supertype, using the Sword subtype allows exploring this with the Weapon supertype. Here’s the xUnit fact that demonstrates covariance:

[Fact]
public void Generic_Covariance_tests()
{
    ICovariant<Sword> swordGetter = new SwordGetter();
    ICovariant<Weapon> weaponGetter = swordGetter;
    Assert.Same(swordGetter, weaponGetter);
    Sword sword = swordGetter.Get();
    Weapon weapon = weaponGetter.Get();
    var isSwordASword = Assert.IsType<Sword>(sword);
    var isWeaponASword = Assert.IsType<Sword>(weapon);
    Assert.NotNull(isSwordASword);
    Assert.NotNull(isWeaponASword);
}

The ICovariant&lt;Weapon&gt; weaponGetter = swordGetter; line represents covariance, showing that we can implicitly convert the ICovariant&lt;Sword&gt; subtype to the ICovariant&lt;Weapon&gt; supertype.

The code after that showcases what happens with that polymorphic change. For example, the Get method of the weaponGetter object returns a Weapon type, not a Sword, even if the underlying instance is a SwordGetter object. However, that Weapon is, in fact, a Sword, as the assertions demonstrate.

Next, let’s explore contravariance.
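As a hedged sketch of what contravariance looks like, mirroring the covariant example above (the IContravariant and WeaponSetter names are illustrative assumptions, not the book’s original listing), a contravariant interface marks T with the in modifier and consumes it:

public interface IContravariant<in T>
{
    void Set(T value);
}

public class WeaponSetter : IContravariant<Weapon>
{
    private Weapon? _weapon;
    public void Set(Weapon value) => _weapon = value;
}

[Fact]
public void Generic_Contravariance_test()
{
    // Contravariance: IContravariant<Weapon> converts implicitly to
    // IContravariant<Sword> because a Sword is a valid input for a
    // setter that accepts any Weapon.
    IContravariant<Weapon> weaponSetter = new WeaponSetter();
    IContravariant<Sword> swordSetter = weaponSetter;
    Assert.Same(weaponSetter, swordSetter);
    swordSetter.Set(new Sword());
}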

Open/Closed principle (OCP) – Architectural Principles

Now the EntityService is composed of an EntityRepository instance, and there is no more inheritance. However, the two classes are still tightly coupled, and it is impossible to change the behavior of the EntityService this way without changing its code.

To fix our last issues, we can inject an EntityRepository instance into the class constructor, where we set our private field like this:

namespace OCP.DependencyInjection;
public class EntityService
{
    private readonly EntityRepository _repository;
    public EntityService(EntityRepository repository)
    {
        _repository = repository;
    }
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

With the preceding change, we broke the tight coupling between the EntityService and the EntityRepository classes. We can also control the behavior of the EntityService class from the outside by deciding what instance of the EntityRepository class we inject into the EntityService constructor. We could even go further by leveraging an abstraction instead of a concrete class; we explore this subsequently while covering the DIP.

As we just explored, the OCP is a simple yet powerful principle that allows controlling an object from the outside. For example, we could create two instances of the EntityService class with different EntityRepository instances that connect to different databases. Here’s a rough example:

using OCP;
using OCP.DependencyInjection;
// Create the entity in database 1
var repository1 = new EntityRepository(/* connection string 1 */);
var service1 = new EntityService(repository1);
// Create the entity in database 2
var repository2 = new EntityRepository(/* connection string 2 */);
var service2 = new EntityService(repository2);
// Save an entity in two different databases
var entity = new Entity();
await service1.ComplexBusinessProcessAsync(entity);
await service2.ComplexBusinessProcessAsync(entity);

In the preceding code, assuming we implemented the EntityRepository class and configured repository1 and repository2 differently, the result of executing the ComplexBusinessProcessAsync method on service1 and service2 would create the entity in two different databases. The behavior change between the two instances happened without changing the code of the EntityService class; composition: 1, inheritance: 0.
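To sketch the abstraction mentioned above (a hedged preview of the DIP; the IEntityRepository interface is an assumption, not the book’s final design), the same service could depend on an interface instead of the concrete class:

// A hypothetical abstraction; the DIP section explores this idea in depth.
public interface IEntityRepository
{
    Task CreateAsync(Entity entity);
}

public class EntityService
{
    private readonly IEntityRepository _repository;
    public EntityService(IEntityRepository repository)
    {
        _repository = repository;
    }
    public async Task ComplexBusinessProcessAsync(Entity entity)
    {
        // Do some complex things here
        await _repository.CreateAsync(entity);
        // Do more complex things here
    }
}

With this variation, any implementation—a SQL-backed repository or an in-memory test double—can be injected without touching the EntityService code.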

We explore the Strategy pattern—the best way of implementing the OCP—in Chapter 5, Strategy, Abstract Factory, and Singleton. We revisit that pattern and also learn to assemble our program’s well-designed pieces and sew them together using dependency injection in Chapter 6, Dependency Injection.

Next, we explore the principle we can perceive as the most complex of the five, yet the one we will use the least.

Keep it simple, stupid (KISS) – Architectural Principles

This is another straightforward principle, yet one of the most important. Like in the real world, the more moving pieces there are, the more chances something breaks. This principle is a design philosophy that advocates for simplicity in design. It emphasizes the idea that systems work best when they are kept simple rather than made complex.

Striving for simplicity might involve writing shorter methods or functions, minimizing the number of parameters, avoiding over-architecting, and choosing the simplest solution to solve a problem. Adding interfaces, abstraction layers, and complex object hierarchies adds complexity, but are the added benefits better than the underlying complexity? If so, they are worth it; otherwise, they are not.

As a guiding principle, when you can write the same program with less complexity, do it. This is also why predicting future requirements can often prove detrimental, as it may inadvertently inject unnecessary complexity into your codebase for features that might never materialize.

We study design patterns in the book and design systems using them. We learn how to apply a high degree of engineering to our code, which can lead to over-engineering if done in the wrong context. Towards the end of the book, we circle back on the KISS principle when exploring the vertical slice architecture and request-endpoint-response (REPR) patterns.

Next, we delve into the SOLID principles, which are the key to flexible software design.

The SOLID principles

SOLID is an acronym representing five principles that extend the basic OOP concepts of Abstraction, Encapsulation, Inheritance, and Polymorphism. They add more details about what to do and how to do it, guiding developers toward more robust and flexible designs.

It is crucial to remember that these are just guiding principles, not rules that you must follow, no matter what. Think about what makes sense for your specific project. If you’re building a small tool, it might be acceptable not to follow these principles as strictly as you would for a crucial business application. In the case of business-critical applications, it might be a good idea to stick to them more closely. Still, it’s usually a smart move to follow them, no matter the size of your app. That’s why we’re discussing them before diving into design patterns.

The SOLID acronym represents the following:

  • Single responsibility principle
  • Open/Closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

By following these principles, your systems should become easier to test and maintain.

Separation of concerns (SoC) – Architectural Principles


This chapter delves into fundamental architectural principles: pillars of contemporary software development practices. These principles help us create flexible, resilient, testable, and maintainable code. We can use these principles to stimulate critical thinking, fostering our ability to evaluate trade-offs, anticipate potential issues, and create solutions that stand the test of time by influencing our decision-making process and helping our design choices.

As we embark on this journey, we constantly refer to those principles throughout the book, particularly the SOLID principles, which improve our ability to build flexible and robust software systems.

In this chapter, we cover the following topics:

  • The separation of concerns (SoC) principle
  • The DRY principle
  • The KISS principle
  • The SOLID principles

We also revise the following notions:

  • Covariance
  • Contravariance
  • Interfaces

Separation of concerns (SoC)

As its name implies, the idea is to separate our software into logical blocks, each representing a concern. A “concern” refers to a specific aspect of a program. It’s a particular interest or focus within a system that serves a distinct purpose. Concerns could be as broad as data management, as specific as user authentication, or even more specific, like copying an object into another. The Separation of Concerns principle suggests that each concern should be isolated and managed separately to improve the system’s maintainability, modularity, and understandability.

The Separation of Concerns principle applies to all programming paradigms. In a nutshell, this principle means factoring a program into the correct pieces. For example, modules, subsystems, and microservices are macro-pieces, while classes and methods are smaller pieces.

By correctly separating concerns, we can prevent changes in one area from affecting others, allow for more efficient code reuse, and make it easier to understand and manage different parts of a system independently.Here are a few examples:

  • Security and logging are cross-cutting concerns.
  • Rendering a user interface is a concern.
  • Handling an HTTP request is a concern.
  • Copying an object into another is a concern.
  • Orchestrating a distributed workflow is a concern.
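
As a minimal sketch of the idea (the class names are illustrative assumptions, not from the book’s samples), separating concerns can be as simple as keeping business rules, persistence, and presentation in distinct types:

// Each class owns exactly one concern, so a change to persistence
// does not ripple into rendering or business rules.
public record class Invoice(decimal Amount);

public class InvoiceCalculator            // Business rule concern
{
    public decimal AddTax(Invoice invoice, decimal rate)
        => invoice.Amount * (1 + rate);
}

public class InvoiceRepository            // Persistence concern
{
    public void Save(Invoice invoice) { /* write to a database */ }
}

public class InvoiceRenderer              // Presentation concern
{
    public string Render(Invoice invoice)
        => $"Total: {invoice.Amount:C}";
}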

Before moving to the DRY principle, remember that it is imperative to consider concerns when dividing software into pieces to create cohesive units. A good separation of concerns helps create modular designs and face design dilemmas more effectively, leading to a maintainable application.

Writing ASP.NET Core integration tests – Automated Testing

When Microsoft built ASP.NET Core from the ground up, they fixed and improved so many things that I cannot enumerate them all here, including testability. Nowadays, there are two ways to structure a .NET program:

  • The classic ASP.NET Core Program and the Startup classes. This model might be found in existing projects (created before .NET 6).
  • The minimal hosting model introduced in .NET 6. This may look familiar to you if you know Node.js, as this model encourages you to write the start-up code in the Program.cs file by leveraging top-level statements. You will most likely find this model in new projects (created after the release of .NET 6).

No matter how you write your program, that’s the place to define the application’s composition and how it boots. Moreover, we can leverage the same testing tools more or less seamlessly.

In the case of a web application, the scope of our integration tests is often to call the endpoint of a controller over HTTP and assert the response. Luckily, in .NET Core 2.1, the .NET team added the WebApplicationFactory&lt;TEntry&gt; class to make the integration testing of web applications easier. With that class, we can boot up an ASP.NET Core application in memory and query it using the supplied HttpClient in a few lines of code. The class also provides extension points to configure the server, such as replacing implementations with mocks, stubs, or other test-specific elements.

Let’s start by booting up a classic web application test.

Classic web application

In a classic ASP.NET Core application, the TEntry generic parameter of the WebApplicationFactory<TEntry> class is usually the Startup or Program class of your project under test.

The test cases are in the Automated Testing solution under the MyApp.IntegrationTests project.

Let’s start by looking at the test code structure before breaking it down:

namespace MyApp.IntegrationTests.Controllers;
public class ValuesControllerTest : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _httpClient;
    public ValuesControllerTest(
        WebApplicationFactory<Startup> webApplicationFactory)
    {
        _httpClient = webApplicationFactory.CreateClient();
    }
    public class Get : ValuesControllerTest
    {
        public Get(WebApplicationFactory<Startup> webApplicationFactory)
            : base(webApplicationFactory) { }
        [Fact]
        public async Task Should_respond_a_status_200_OK()
        {
            // Omitted Test Case 1
        }
        [Fact]
        public async Task Should_respond_the_expected_strings()
        {
            // Omitted Test Case 2
        }
    }
}

The first piece of the preceding code that is relevant to us is how we get an instance of the WebApplicationFactory&lt;Startup&gt; class. We inject a WebApplicationFactory&lt;Startup&gt; object into the constructor by implementing the IClassFixture&lt;T&gt; interface (an xUnit feature). We could also use the factory to configure the test server, but since we don’t need to here, we only keep a reference to the HttpClient, preconfigured to connect to the in-memory test server.

Then, you may have noticed the nested Get class that inherits the ValuesControllerTest class. The Get class contains the test cases. By inheriting the ValuesControllerTest class, we can leverage the _httpClient field from the test cases we are about to see.

In the first test case, we use HttpClient to query the http://localhost/api/values URI, accessible through the in-memory server. Then, we assert that the status code of the HTTP response was a success (200 OK):

[Fact]
public async Task Should_respond_a_status_200_OK()
{
    // Act
    var result = await _httpClient
        .GetAsync("/api/values");
    // Assert
    Assert.Equal(HttpStatusCode.OK, result.StatusCode);
}

The second test case also sends an HTTP request to the in-memory server but deserializes the body’s content as a string[] to ensure the values are the same as expected instead of validating the status code:

[Fact]
public async Task Should_respond_the_expected_strings()
{
    // Act
    var result = await _httpClient
        .GetFromJsonAsync<string[]>("/api/values");
    // Assert
    Assert.Collection(result,
        x => Assert.Equal("value1", x),
        x => Assert.Equal("value2", x)
    );
}

As you may have noticed from the test cases, the WebApplicationFactory preconfigured the BaseAddress property for us, so we don’t need to prefix our requests with http://localhost.
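
The extension points mentioned earlier can be used through the real WithWebHostBuilder API; here is a hedged sketch (the IClock service and FakeClock class are illustrative assumptions, not part of the sample project):

public class ValuesControllerCustomizedTest
    : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _httpClient;
    public ValuesControllerCustomizedTest(
        WebApplicationFactory<Startup> webApplicationFactory)
    {
        // WithWebHostBuilder lets us override the app's composition
        // before the in-memory server boots.
        _httpClient = webApplicationFactory
            .WithWebHostBuilder(builder => builder.ConfigureServices(services =>
            {
                // Replace a real dependency with a test-specific one
                // (IClock/FakeClock are hypothetical examples).
                services.AddSingleton<IClock, FakeClock>();
            }))
            .CreateClient();
    }
}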

When running those tests, an in-memory web server starts. Then, HTTP requests are sent to that server, testing the complete application. The tests are simple in this case, but you can create more complex test cases in more complex programs.

Next, we explore how to do the same for minimal APIs.

Test class name – Automated Testing

By convention, I name test classes [class under test]Test.cs and create them in the same directory as in the original project. Finding tests is easy when following that simple rule since the test code is in the same location in the file tree as the code under test, but in two distinct projects.

Figure 2.8: The Automated Testing Solution Explorer, displaying how tests are organized

Test code inside the test class

For the test code itself, I follow a multi-level structure similar to the following:

  • One test class is named the same as the class under test.
  • One nested test class per method to test from the class under test.
  • One test method per test case of the method under test.

This technique helps organize tests by test case while keeping a clear hierarchy, leading to the following hierarchy:

  • Class under test
  • Method under test
  • Test case using that method

In code, that translates to the following:

namespace MyApp.IntegrationTests.Controllers;
public class ValuesControllerTest
{
    public class Get : ValuesControllerTest
    {
        [Fact]
        public void Should_return_the_expected_strings()
        {
            // Arrange
            var sut = new ValuesController();
            // Act
            var result = sut.Get();
            // Assert
            Assert.Collection(result.Value,
                x => Assert.Equal("value1", x),
                x => Assert.Equal("value2", x)
            );
        }
    }
}

This convention allows you to set up tests step by step. For example, by having the inner class (the Get nested class here) inherit from the outer class (the ValuesControllerTest class), you can create top-level private mocks or classes shared by all nested classes and test methods. Then, for each method to test, you can modify the setup or create other private test elements in the nested classes. Finally, you can do more configuration per test case inside the test method (the Should_return_the_expected_strings method here).

Don’t go too hard on reusability inside your test classes, as it can make tests harder to read from an external eye, such as a reviewer or another developer who needs to work in there. Unit tests should remain focused, small, and easy to read: a unit of code testing another unit of code. Too much reusability may lead to a brittle test suite.

Now that we have explored organizing unit tests, let’s look at integration tests.

Integration tests

Integration tests are harder to organize because they depend on multiple units, can cross project boundaries, and interact with various dependencies.

We can create one integration test project for most simple solutions or many for more complex scenarios. When creating one, you can name the project IntegrationTests or start with the entry point of your tests, like a REST API project, and name the project [Name of the API project].IntegrationTests. At this point, how to name the integration test project depends on your solution structure and intent. When you need multiple integration projects, you can follow a convention similar to unit tests and associate your integration projects one-to-one: [Project under test].IntegrationTests.

Inside those projects, it depends on how you want to attack the problem and the structure of the solution itself. Start by identifying the features under test. Name the test classes in a way that mimics your requirements, organize those into sub-folders (maybe a category or group of requirements), and code test cases as methods. You can also leverage nested classes, as we did with unit tests.

We write tests throughout the book, so you will have plenty of examples to make sense of all this if it’s not clear now.

Next, we implement an integration test by leveraging ASP.NET Core features.

Closing words – Automated Testing

Now that facts, theories, and assertions are out of the way, xUnit offers other mechanics to allow developers to inject dependencies into their test classes. These are named fixtures. Fixtures allow dependencies to be reused by all test methods of a test class by implementing the IClassFixture&lt;T&gt; interface. Fixtures are very helpful for costly dependencies, like creating an in-memory database. With fixtures, you can create the dependency once and use it multiple times. The ValuesControllerTest class in the MyApp.IntegrationTests project shows that in action.

It is important to note that xUnit creates an instance of the test class for every test run, so your dependencies are recreated every time if you are not using fixtures. You can also share the dependency provided by the fixture between multiple test classes by using ICollectionFixture&lt;T&gt;, [Collection], and [CollectionDefinition] instead. We won’t get into the details here, but at least you know it’s possible and know what types to look for when you need something similar.

Finally, if you have worked with other testing frameworks, you might have encountered setup and teardown methods. In xUnit, there are no particular attributes or mechanisms for handling setup and teardown code. Instead, xUnit uses existing OOP concepts:

  • To set up your tests, use the class constructor.
  • To tear down (clean up) your tests, implement IDisposable or IAsyncDisposable and dispose of your resources there.
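
As a minimal sketch of both ideas combined with a fixture (the DatabaseFixture name and its members are illustrative assumptions):

// A costly dependency created once per test class via IClassFixture<T>.
public class DatabaseFixture : IDisposable
{
    public DatabaseFixture() { /* create an in-memory database */ }
    public void Dispose() { /* delete the in-memory database */ }
}

public class MyServiceTest : IClassFixture<DatabaseFixture>, IDisposable
{
    private readonly DatabaseFixture _database;

    // Setup: the constructor runs before each test method.
    public MyServiceTest(DatabaseFixture database)
    {
        _database = database;
    }

    [Fact]
    public void A_test_using_the_shared_fixture()
    {
        Assert.NotNull(_database);
    }

    // Teardown: Dispose runs after each test method.
    public void Dispose() { /* clean up per-test resources */ }
}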

That’s it, xUnit is very simple and powerful, which is why I adopted it as my main testing framework several years ago and chose it for this book.

Next, we learn to write readable test methods.

Arrange, Act, Assert

Arrange, Act, Assert (AAA or 3A) is a well-known method for writing readable tests. This technique allows you to clearly define your setup (arrange), the operation under test (act), and your assertions (assert). One efficient way to use this technique is to start by writing the 3A as comments in your test case and then write the test code in between. Here is an example:

[Fact]
public void Should_be_equals()
{
    // Arrange
    var a = 1;
    var b = 2;
    var expectedResult = 3;
    // Act
    var result = a + b;
    // Assert
    Assert.Equal(expectedResult, result);
}

Of course, that test case cannot fail, but the three blocks are easily identifiable with the 3A comments.

In general, you want the Act block of your unit tests to be a single line, making the test focus clear. If you need more than one line, the chances are that something is wrong in the test or the design.

When the tests are very small (only a few lines), removing the comments might help readability. Furthermore, when you have nothing to set up in your test case, delete the Arrange comment to improve its readability further.

Next, we learn how to organize tests into projects, directories, and files.

Theories – Automated Testing

For more complex test cases, we can use theories. A theory contains two parts:

  • A [Theory] attribute that marks the method as a theory.
  • At least one data attribute that allows passing data to the test method: [InlineData], [MemberData], or [ClassData].

When writing a theory, your primary constraint is ensuring that the number of values matches the parameters defined in the test method. For example, a theory with one parameter must be fed one value. We look at some examples next.

You are not limited to only one type of data attribute; you can use as many as you need to suit your needs and feed a theory with the appropriate data.

The [InlineData] attribute is the most suitable for constant values or smaller sets of values. Inline data is the most straightforward way of the three because of the proximity of the test values and the test method.

Here is an example of a theory using inline data:

public class InlineDataTest
{
    [Theory]
    [InlineData(1, 1)]
    [InlineData(2, 2)]
    [InlineData(5, 5)]
    public void Should_be_equal(int value1, int value2)
    {
        Assert.Equal(value1, value2);
    }
}

That test method yields three test cases in the Test Explorer, where each can pass or fail individually. Of course, since 1 equals 1, 2 equals 2, and 5 equals 5, all three test cases are passing, as shown here:

Figure 2.4: Inline data theory test results

We can also use the [MemberData] and [ClassData] attributes to simplify the test method’s declaration when we have a large set of data to test. We can also use them when it is impossible to instantiate the data in the attribute, to reuse the data in multiple test methods, or to encapsulate the data away from the test class.

Here is a medley of examples of the [MemberData] attribute usage:

public class MemberDataTest
{
    public static IEnumerable<object[]> Data => new[]
    {
        new object[] { 1, 2, false },
        new object[] { 2, 2, true },
        new object[] { 3, 3, true },
    };
    public static TheoryData<int, int, bool> TypedData => new TheoryData<int, int, bool>
    {
        { 3, 2, false },
        { 2, 3, false },
        { 5, 5, true },
    };
    [Theory]
    [MemberData(nameof(Data))]
    [MemberData(nameof(TypedData))]
    [MemberData(nameof(ExternalData.GetData), 10, MemberType = typeof(ExternalData))]
    [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual)
        {
            Assert.Equal(value1, value2);
        }
        else
        {
            Assert.NotEqual(value1, value2);
    }
    }
    public class ExternalData
    {
        public static IEnumerable<object[]> GetData(int start) => new[]
        {
            new object[] { start, start, true },
            new object[] { start, start + 1, false },
            new object[] { start + 1, start + 1, true },
        };
        public static TheoryData<int, int, bool> TypedData => new TheoryData<int, int, bool>
        {
            { 20, 30, false },
            { 40, 50, false },
            { 50, 50, true },
        };
    }
}

The preceding test case yields 12 results. If we break it down, the code starts by loading three sets of data from the Data property by decorating the test method with the [MemberData(nameof(Data))] attribute. This is how to load data from a member of the class the test method is declared in.

Then, the second property is very similar to the Data property but replaces IEnumerable&lt;object[]&gt; with a TheoryData&lt;…&gt; class, making it more readable and type-safe. Like with the first attribute, we feed those three sets of data to the test method by decorating it with the [MemberData(nameof(TypedData))] attribute. Once again, it is part of the test class.

I strongly recommend using TheoryData<…> by default.
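
The [ClassData] attribute, mentioned earlier but not shown, works similarly; here is a brief sketch (the class name and values are illustrative), leveraging the fact that a TheoryData&lt;…&gt; subclass can serve as a [ClassData] source:

// A reusable, type-safe data source for [ClassData].
public class EqualityData : TheoryData<int, int, bool>
{
    public EqualityData()
    {
        Add(1, 1, true);
        Add(1, 2, false);
    }
}

public class ClassDataTest
{
    [Theory]
    [ClassData(typeof(EqualityData))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual) { Assert.Equal(value1, value2); }
        else { Assert.NotEqual(value1, value2); }
    }
}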

State Transition Testing – Automated Testing

We usually use State Transition Testing to test software with a state machine since it tests the different system states and their transitions. It’s handy for systems where the system behavior can change based on its current state, for example, a program with states like “logged in” or “logged out”.

To perform State Transition Testing, we need to identify the states of the system and then the possible transitions between the states. For each transition, we need to create a test case. The test case should test the software with the specified input values and verify that the software transitions to the correct state. For example, a user with the state “logged in” must transition to the state “logged out” after signing out.

The main advantage of State Transition Testing is that it tests sequences of events, not just individual events, which could reveal defects not found by testing each event in isolation. However, State Transition Testing can become complex and time-consuming for systems with many states and transitions.
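
As a minimal sketch of the sign-out transition described above (the Session class and its members are illustrative assumptions):

// A tiny state machine: "logged out" -> SignIn -> "logged in" -> SignOut.
public class Session
{
    public string State { get; private set; } = "logged out";
    public void SignIn() { if (State == "logged out") State = "logged in"; }
    public void SignOut() { if (State == "logged in") State = "logged out"; }
}

public class SessionTest
{
    [Fact]
    public void Should_transition_from_logged_in_to_logged_out_on_sign_out()
    {
        // Arrange: reach the "logged in" state.
        var session = new Session();
        session.SignIn();
        // Act: trigger the transition under test.
        session.SignOut();
        // Assert: the machine landed in the expected state.
        Assert.Equal("logged out", session.State);
    }
}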

Use Case Testing

This technique validates that the system behaves as expected when used in a particular way by a user. Use cases could have formal descriptions, be user stories, or take any other form that fits your needs.

A use case involves one or more actors executing steps or taking actions that should yield a particular result. A use case can include inputs and expected outputs. For example, when a user (actor) that is “signed in” (precondition) clicks the “sign out” button (action), then navigates to the profile page (action), the system denies access to the page and redirects the user to the sign-in page, displaying an error message (expected behaviors).

Use case testing is a systematic and structured approach to testing that helps identify defects in the software’s functionality. It is very user-centric, ensuring the software meets the users’ needs. However, creating test cases for complex use cases can be difficult. In the case of a user interface, the time to execute end-to-end tests of use cases can take a long time, especially as the number of tests grows.

It is an excellent approach to think of your test cases in terms of functionality to test, whether using a formal use case or just a line written on a napkin. The key is to test behaviors, not code.

Now that we have explored these techniques, it is time to introduce the xUnit library, ways to write tests, and how tests are written in the book. Let’s start by creating a test project.

Equivalence Partitioning – Automated Testing

This technique divides the input data of the software into different equivalence data classes and then tests these classes rather than individual inputs. An equivalence data class means that all values in that partition set should lead to the same outcome or yield the same result. Doing this allows for limiting the number of tests considerably.

For example, consider an application that accepts an integer value between 1 and 100 (inclusive). Using equivalence partitioning, we can divide the input data into two equivalence classes:

  • Valid
  • Invalid

To be more precise, we could further divide it into three equivalence classes:

  • Class 1: Less than 1 (Invalid)
  • Class 2: Between 1 and 100 (Valid)
  • Class 3: Greater than 100 (Invalid)

Then we can write three tests, picking one representative from each class (e.g., 0, 50, and 101) to create our test cases. Doing so ensures a broad coverage with minimal test cases, making our testing process more efficient.
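
Those representatives translate directly into a theory (the IsValid method is an illustrative assumption standing in for the application’s validation logic):

public class EquivalencePartitioningTest
{
    // Hypothetical validation rule: accept integers from 1 to 100 inclusive.
    private static bool IsValid(int value) => value >= 1 && value <= 100;

    [Theory]
    [InlineData(0, false)]   // Class 1: less than 1 (invalid)
    [InlineData(50, true)]   // Class 2: between 1 and 100 (valid)
    [InlineData(101, false)] // Class 3: greater than 100 (invalid)
    public void Should_validate_one_representative_per_class(
        int value, bool expected)
    {
        Assert.Equal(expected, IsValid(value));
    }
}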

Boundary Value Analysis

This technique focuses on the values at the boundary of the input domain rather than the center. It is based on the principle that errors are most likely to occur at the boundaries of the input domain. The input domain represents the set of all possible inputs for a system. The boundaries are the edges of the input domain, representing minimum and maximum values.

For example, if we expect a function to accept an integer between 1 and 100 (inclusive), the boundary values would be 1 and 100. With Boundary Value Analysis, we would create test cases for these values, values just outside the boundaries (like 0 and 101), and values just inside the boundaries (like 2 and 99).

Boundary Value Analysis is a very efficient testing technique that provides good coverage with a relatively small number of test cases. However, it’s unsuitable for finding errors within the boundaries or for complex logic errors. Boundary Value Analysis should be used on top of other testing methods, such as equivalence partitioning and decision table testing, to ensure the software is as defect-free as possible.
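
Those boundary cases can be sketched as a theory (again assuming a hypothetical IsValid rule for the 1 to 100 range):

public class BoundaryValueAnalysisTest
{
    // Hypothetical rule under test: accept integers from 1 to 100 inclusive.
    private static bool IsValid(int value) => value >= 1 && value <= 100;

    [Theory]
    [InlineData(0, false)]   // Just outside the lower boundary
    [InlineData(1, true)]    // Lower boundary
    [InlineData(2, true)]    // Just inside the lower boundary
    [InlineData(99, true)]   // Just inside the upper boundary
    [InlineData(100, true)]  // Upper boundary
    [InlineData(101, false)] // Just outside the upper boundary
    public void Should_validate_boundary_values(int value, bool expected)
    {
        Assert.Equal(expected, IsValid(value));
    }
}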

Decision Table Testing

This technique uses a decision table to design test cases. A decision table is a table that shows all possible combinations of input values and their corresponding outputs. It’s handy for complex business rules that can be expressed in a table format, enabling testers to identify missing and extraneous test cases.

For example, our system only allows access to a user with a valid username and password. Moreover, the system denies access to users when it is under maintenance. The decision table would have three conditions (username, password, and maintenance) and one action (allow access). The table would list all possible combinations of these conditions and the expected action for each combination. Here is an example:

Valid Username | Valid Password | System under Maintenance | Allow Access
True           | True           | False                    | Yes
True           | True           | True                     | No
True           | False          | False                    | No
True           | False          | True                     | No
False          | True           | False                    | No
False          | True           | True                     | No
False          | False          | False                    | No
False          | False          | True                     | No

The main advantage of Decision Table Testing is that it ensures we test all possible input combinations. However, it can become complex and challenging to manage when systems have many input conditions, as the number of rules (and therefore test cases) increases exponentially with the number of conditions.
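
The table above maps one-to-one to a theory (the CanAccess method is an illustrative assumption encoding the rule “valid username and password, and not under maintenance”):

public class DecisionTableTest
{
    // Hypothetical rule: allow access only with valid credentials
    // while the system is not under maintenance.
    private static bool CanAccess(
        bool validUsername, bool validPassword, bool underMaintenance)
        => validUsername && validPassword && !underMaintenance;

    [Theory]
    [InlineData(true, true, false, true)]   // The only "Yes" row
    [InlineData(true, true, true, false)]
    [InlineData(true, false, false, false)]
    [InlineData(true, false, true, false)]
    [InlineData(false, true, false, false)]
    [InlineData(false, true, true, false)]
    [InlineData(false, false, false, false)]
    [InlineData(false, false, true, false)]
    public void Should_allow_access_per_decision_table(
        bool validUsername, bool validPassword,
        bool underMaintenance, bool expected)
    {
        Assert.Equal(
            expected,
            CanAccess(validUsername, validPassword, underMaintenance));
    }
}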