Black-box testing – Automated Testing

Black-box testing is a software testing method where a tester examines an application’s functionality without knowing its internal structure or implementation details. This form of testing focuses solely on the inputs and outputs of the system under test, treating the software as a “black box” that we can’t see into.

The main goal of black-box testing is to evaluate the system’s behavior against expected results based on requirements or user stories. Developers writing the tests do not need to know the codebase or the technology stack used to build the software.

We can use black-box testing to assess the correctness of several types of requirements, like:

  1. Functional testing: This type of testing is related to the software’s functional requirements, emphasizing what the system does, a.k.a. behavior verification.
  2. Non-functional testing: This type of testing is related to non-functional requirements such as performance, usability, reliability, and security, a.k.a. performance evaluation.
  3. Regression testing: This type of testing ensures the new code does not break existing functionalities, a.k.a. change impact.

Next, let’s explore a hybrid between white-box and black-box testing.

Grey-box testing

Grey-box testing is a blend between white-box and black-box testing. Testers need only partial knowledge of the application’s internal workings and use a combination of the software’s internal structure and external behavior to craft their tests.

We implement grey-box testing use cases in Chapter 16, Request-Endpoint-Response (REPR). Meanwhile, let’s compare the three techniques.

White-box vs. Black-box vs. Grey-box testing

To start with a concise comparison, here’s a table that compares the three broad techniques:

| Feature | White-box testing | Black-box testing | Grey-box testing |
|---|---|---|---|
| Definition | Testing based on the internal design of the software | Testing based on the behavior and functionality of the software | Testing that combines the internal design and behavior of the software |
| Knowledge of code required | Yes | No | Partial |
| Types of defects found | Logic, data structure, architecture, and performance issues | Functionality, usability, performance, and security issues | Most types of issues |
| Coverage per test | Small; targeted at a unit | Large; targeted at a use case | Up to large; can vary in scope |
| Testers | Usually performed by developers | Testers can write the tests without specific technical knowledge of the application’s internal structure | Developers can write the tests, and so can testers with some knowledge of the code |
| When to use each style? | Write unit tests to validate complex algorithms or code that yields multiple results based on many inputs. These tests are usually very fast, so you can have many of them. | Write these if you have specific scenarios you want to test, like UI tests, or if testers and developers are two distinct roles in your organization. These usually run the slowest and require you to deploy the application to test it, so you want as few as possible to improve the feedback time. | Write these to avoid writing many black-box or white-box tests. Layer the tests to cover as much as possible with as few tests as possible. Depending on the application’s architecture, this type of test can yield optimal results for many scenarios. |

Next, let’s conclude by exploring a few advantages and disadvantages of each technique.

Unit testing – Automated Testing

Unit tests focus on individual units, like testing the outcome of a method. Unit tests should be fast and not rely on any infrastructure, such as a database. Those are the kinds of tests you want the most because they run fast, and each one tests a precise code path.

Unit tests should also help you design your application better because you use your code in the tests, so you become its first consumer, which leads you to find design flaws and improve your code. If you don’t like using your code in your tests, that is a good indicator that nobody else will.

Unit tests should focus on testing algorithms (the ins and outs) and domain logic, not the code itself; how you wrote the code should have no impact on the intent of the test. For example, you are testing that a Purchase method executes the logic required to purchase one or more items, not that you created the variable X, Y, or Z inside that method.

Don’t discourage yourself if you find it challenging; writing a good test suite is not as easy as it sounds.

Integration testing

Integration tests focus on the interaction between components, such as what happens when a component queries the database or when two components interact with each other.

Integration tests often require some infrastructure to interact with, which makes them slower to run. By following the classic testing model, you want integration tests, but you want fewer of them than unit tests. An integration test can be very close to an E2E test but without using a production-like environment.

We will break the test pyramid rule later, so always be critical of rules and principles; sometimes, breaking or bending them can be better. For example, having one good integration test can be better than N unit tests; don’t discard that fact when writing your tests. See also Grey-box testing.

End-to-end testing

End-to-end tests focus on application-wide behaviors, such as what happens when a user clicks on a specific button, navigates to a particular page, posts a form, or sends a PUT request to some web API endpoint. E2E tests are usually run on infrastructure to test your application and deployment.

Other types of tests

There are other types of automated tests. For example, we could do load testing, performance testing, regression testing, contract testing, penetration testing, functional testing, smoke testing, and more. You can automate tests for anything you want to validate, but some tests are more challenging to automate or more fragile than others, such as UI tests.

If you can automate a test in a reasonable timeframe, think ROI: do it! In the long run, it should pay off.

One more thing: don’t blindly rely on metrics such as code coverage. Those metrics make for cute badges in your GitHub project’s readme.md file but can lead you off track, resulting in you writing useless tests. Don’t get me wrong, code coverage is a great metric when used correctly, but remember that one good test can be better than a lousy test suite covering 100% of your codebase. If you are using code coverage, ensure you and your team are not gaming the system.

Writing good tests is not easy and comes with practice.

One piece of advice: keep your test suite healthy by adding missing test cases and removing obsolete or useless tests. Think about use case coverage, not how many lines of code are covered by your tests.

Before moving forward to testing styles, let’s inspect a hypothetical system and explore a more efficient way to test it.

Introduction to automated testing – Automated Testing


This chapter focuses on automated testing and how helpful it can be for crafting better software. It also covers a few different types of tests and the foundation of test-driven development (TDD). We also outline how testable ASP.NET Core is and how much easier it is to test ASP.NET Core applications than old ASP.NET MVC applications. This chapter overviews automated testing, its principles, xUnit, ways to sample test values, and more. While other books cover this topic more in-depth, this chapter covers the foundational aspects of automated testing. We use parts of this throughout the book, and this chapter ensures you have a strong enough base to understand the samples.

In this chapter, we cover the following topics:

  • An overview of automated testing
  • Testing .NET applications
  • Important testing principles

Introduction to automated testing

Testing is an integral part of the development process, and automated testing becomes crucial in the long run. You can always run your ASP.NET Core website, open a browser, and click everywhere to test your features. That’s a legitimate approach, but it is harder to test individual rules or more complex algorithms that way. Another downside is the lack of automation; when you first start with a small app containing a few pages, endpoints, or features, it may be fast to perform those tests manually. However, as your app grows, it becomes more tedious, takes longer, and increases the likelihood of making a mistake. Of course, you will always need real users to test your applications, but you want those tests to focus on the UX, the content, or some experimental features you are building instead of bug reports that automated tests could have caught early on.

There are multiple types of tests and techniques in the testing space. Here is a list of three broad categories that represent how we can divide automated testing from a code correctness standpoint:

  • Unit tests
  • Integration tests
  • End-to-end (E2E) tests

Usually, you want a mix of those tests, so you have fast unit tests testing your algorithms, slower tests that ensure the integrations between components are correct, and slow E2E tests that ensure the correctness of the system as a whole.

The test pyramid is a good way of explaining a few concepts around automated testing. You want different granularity of tests and a different number of tests depending on their complexity and speed of execution. The following test pyramid shows the three types of tests stated above. However, we could add other types of tests in there as well. Moreover, that’s just an abstract guideline to give you an idea. The most important aspect is the return on investment (ROI) and execution speed. If you can write one integration test that covers a large surface and is fast enough, this might be worth doing instead of multiple unit tests.

Figure 2.1: The test pyramid

I cannot stress this enough; the execution speed of your tests is essential to receiving fast feedback and knowing immediately that you have broken something with your code changes. Layering different types of tests allows you to execute only the fastest subset often, the not-so-fast occasionally, and the very slow tests infrequently. If your test suite is fast enough, you don’t even have to worry about it. However, if you have a lot of manual or E2E UI tests that take hours to run, that’s another story (and one that can cost a lot of money).

Finally, on top of running your tests using a test runner, like in Visual Studio, VS Code, or the CLI, a great way to ensure code quality and leverage your automated tests is to run them in a CI pipeline, validating code changes for issues.

Tech-wise, back when .NET Core was in pre-release, I discovered that the .NET team was using xUnit to test their code and that it was the only testing framework available. xUnit has become my favorite testing framework since, and we use it throughout the book. Moreover, the ASP.NET Core team made our life easier by designing ASP.NET Core for testability; testing is easier than before.

Why are we talking about tests in an architectural book? Because testability is a sign of a good design. It also allows us to use tests instead of words to prove some concepts. In many code samples, the test cases are the consumers, making the program lighter without building an entire user interface and keeping our focus on the patterns we are exploring instead of scattering it over some boilerplate UI code.

To ensure we do not deviate from the matter at hand, we use automated testing moderately in the book, but I strongly recommend that you continue to study it, as it will help improve your code and design.

Now that we have covered all that, let’s explore those three types of tests, starting with unit testing.

Command Description – Introduction

| Command | Description |
|---|---|
| dotnet restore | Restore the dependencies (a.k.a. NuGet packages) based on the .csproj or .sln file present in the current directory. |
| dotnet build | Build the application based on the .csproj or .sln file present in the current directory. It implicitly runs the restore command first. |
| dotnet run | Run the current application based on the .csproj file present in the current directory. It implicitly runs the build and restore commands first. |
| dotnet watch run | Watch for file changes. When a file has changed, the CLI updates the code from that file using the hot-reload feature. When that is impossible, it rebuilds the application and then reruns it (equivalent to executing the run command again). If it is a web application, the page should refresh automatically. |
| dotnet test | Run the tests based on the .csproj or .sln file present in the current directory. It implicitly runs the build and restore commands first. We cover testing in the next chapter. |
| dotnet watch test | Watch for file changes. When a file has changed, the CLI reruns the tests (equivalent to executing the test command again). |
| dotnet publish | Publish the current application, based on the .csproj or .sln file present in the current directory, to a directory or remote location, such as a hosting provider. It implicitly runs the build and restore commands first. |
| dotnet pack | Create a NuGet package based on the .csproj or .sln file present in the current directory. It implicitly runs the build and restore commands first. You don’t need a .nuspec file. |
| dotnet clean | Clean the build(s) output of a project or solution based on the .csproj or .sln file present in the current directory. |

Technical requirements

Throughout the book, we will explore and write code. I recommend installing Visual Studio, Visual Studio Code, or both to help with that. I use Visual Studio and Visual Studio Code. Other alternatives are Visual Studio for Mac, Rider, or any other text editor you choose.

Unless you install Visual Studio, which comes with the .NET SDK, you may need to install it. The SDK comes with the CLI we explored earlier and the build tools for running and testing your programs. Look at the README.md file in the GitHub repository for more information and links to those resources.

The source code of all chapters is available for download on GitHub at the following address: https://adpg.link/net6.

Summary

This chapter looked at design patterns, anti-patterns, and code smells. We also explored a few of them. We then moved on to a recap of a typical web application’s request/response cycle.

We continued by exploring .NET essentials, such as SDK versus runtime and app targets versus .NET Standard. We then dug a little more into the .NET CLI, where I laid down a list of essential commands, including dotnet build and dotnet watch run. We also covered how to create new projects. This has set us up to explore the different possibilities we have when building our .NET applications.

In the next two chapters, we explore automated testing and architectural principles. These are foundational chapters for building robust, flexible, and maintainable applications.

Note – Introduction

I’m sure .NET Standard libraries will stick around for a while. Not all projects will magically migrate from .NET Framework to .NET 5+, and people will want to continue sharing code between the two.

The next versions of .NET are built over .NET 5+, while .NET Framework 4.X will stay where it is today, receiving only security patches and minor updates. For example, .NET 8 is built over .NET 7, iterating over .NET 6 and 5.

Next, let’s look at some tools and code editors.

Visual Studio Code versus Visual Studio versus the command-line interface

How can one of these projects be created? .NET Core comes with the dotnet command-line interface (CLI), which exposes multiple commands, including new. Running the dotnet new command in a terminal generates a new project.

To create an empty class library, we can run the following commands:

md MyProject
cd MyProject
dotnet new classlib

That would generate an empty class library in the newly created MyProject directory.

The -h option helps discover available commands and their options. For example, you can use dotnet -h to find the available SDK commands or dotnet new -h to find out about options and available templates.

It is fantastic that .NET now has the dotnet CLI. The CLI enables us to automate our workflows in continuous integration (CI) pipelines, while developing locally, or through any other process. The CLI also makes it easier to write documentation that anyone can follow; writing a few commands in a terminal is way easier and faster than installing programs like Visual Studio and emulators.

Visual Studio Code is my favourite text editor. I don’t use it much for .NET coding, but I still do to reorganize projects, when it’s CLI time, or for any other task that is easier to complete using a text editor, such as writing documentation using Markdown, writing JavaScript or TypeScript, or managing JSON, YAML, or XML files. To create a C# project, a Visual Studio solution, or to add a NuGet package using Visual Studio Code, open a terminal and use the CLI.

As for Visual Studio, my favourite C# IDE, it uses the CLI under the hood to create the same projects, making it consistent between tools and just adding a user interface on top of the dotnet new CLI command.

You can create and install additional dotnet new project templates in the CLI or even create global tools. You can also use another code editor or IDE if you prefer. Those topics are beyond the scope of this book.

An overview of project templates

Here is an example of the templates that are installed (dotnet new --list):

Figure 1.1: Project templates

A study of all the templates is beyond the scope of this book, but I’d like to visit the few that are worth mentioning, some of which we will use later:

  • dotnet new console creates a console application
  • dotnet new classlib creates a class library
  • dotnet new xunit creates an xUnit test project
  • dotnet new web creates an empty web project
  • dotnet new mvc scaffolds an MVC application
  • dotnet new webapi scaffolds a web API application

Running and building your program

If you are using Visual Studio, you can always hit the play button, or F5, and run your app. If you are using the CLI, you can use one of the following commands (and more). Each of them also offers different options to control their behaviour. Add the -h flag with any command to get help on that command, such as dotnet build -h:

Getting started with .NET – Introduction

A bit of history: .NET Framework 1.0 was first released in 2002. .NET is a managed framework that compiles your code into an Intermediate Language (IL) named Microsoft Intermediate Language (MSIL). That IL code is then compiled into native code and executed by the Common Language Runtime (CLR). The CLR is now known simply as the .NET runtime.

After releasing several versions of .NET Framework, Microsoft never delivered on the promise of an interoperable stack. Moreover, many flaws were built into the core of .NET Framework, tying it to Windows. Mono, an open-source project, was developed by the community to enable .NET code to run on non-Windows OSes. Mono was used and supported by Xamarin, acquired by Microsoft in 2016. Mono enabled .NET code to run on other OSes like Android and iOS. Later, Microsoft started to develop an official cross-platform .NET SDK and runtime they named .NET Core.

The .NET team did a magnificent job building ASP.NET Core from the ground up, cutting out compatibility with the older .NET Framework versions. That brought its share of problems at first, but .NET Standard alleviated the interoperability issues between the old .NET and the new .NET. After years of improvements and two major versions in parallel (Core and Framework), Microsoft reunified most .NET technologies into .NET 5+ and the promise of a shared Base Class Library (BCL). With .NET 5, .NET Core simply became .NET, while ASP.NET Core remained ASP.NET Core. There is no .NET “Core” 4, to avoid any potential confusion with .NET Framework 4.X.

New major versions of .NET release every year now. Even-numbered releases are Long-Term Support (LTS) releases with free support for three years, and odd-numbered releases (Current) have free support for only 18 months.

The good thing behind this book is that the architectural principles and design patterns covered should remain relevant in the future and are not tightly coupled with the version of .NET you are using. Minor changes to the code samples should be enough to migrate your knowledge and code to new versions.

Next, let’s cover some key information about the .NET ecosystem.

.NET SDK versus runtime

You can install different binaries grouped under SDKs and runtimes. The SDK allows you to build and run .NET programs, while the runtime only allows you to run .NET programs.

As a developer, you want to install the SDK on your development environment. On the server, you want to install only the runtime. The runtime is lighter, while the SDK contains more tools, including the runtime.

.NET 5+ versus .NET Standard

When building .NET projects, there are multiple types of projects, but basically, we can separate them into two categories:

  • Applications
  • Libraries

Applications target a version of .NET, such as net5.0 and net6.0. Examples of that would be an ASP.NET Core application or a console application.

Libraries are bundles of code compiled together, often distributed as a NuGet package. .NET Standard class library projects allow sharing code between .NET 5+ and .NET Framework projects. .NET Standard came into play to bridge the compatibility gap between .NET Core and .NET Framework, which eased the transition. Things were not easy when .NET Core 1.0 first came out.

With .NET 5 unifying all the platforms and becoming the future of the unified .NET ecosystem, .NET Standard is no longer needed. Moreover, app and library authors should target the base Target Framework Moniker (TFM), for example, net8.0. You can also target netstandard2.0 or netstandard2.1 when needed, for example, to share code with .NET Framework. Microsoft also introduced OS-specific TFMs with .NET 5+, allowing code to use OS-specific APIs like net8.0-android and net8.0-tvos. You can also target multiple TFMs when needed.