
Conclusion – Automated Testing

White-box testing includes unit and integration tests. These tests run fast, and developers use them to improve the code and test complex algorithms. However, writing a large quantity of those tests takes time. Because such tests sit so close to the code, it is also easy to write brittle tests that are tightly coupled with the implementation, which increases the maintenance cost of the test suite and makes you prone to overengineering your application in the name of testability.

Black-box testing encompasses different types of tests that tend towards end-to-end testing. Since the tests target the external surface of the system, they are less likely to break when the system changes. Moreover, they are excellent at testing behaviors, and since each test covers an end-to-end use case, we need fewer of them, leading to lower writing time and maintenance costs. Testing the whole system has drawbacks, including the slowness of executing each test, so combining black-box testing with other types of tests is very important to find the right balance between the number of tests, test case coverage, and execution speed.

Grey-box testing is a fantastic mix of the two others; you can treat any part of the software as a black box, leverage your knowledge of its inner workings to mock or stub parts of the test case (for example, to assert that the system persisted a record in the database), and test end-to-end scenarios more efficiently. It brings the best of both worlds, significantly reducing the number of tests while considerably increasing the surface covered by each test case. However, applying grey-box testing to smaller units or heavily mocking the system may yield the same drawbacks as white-box testing. Integration tests or almost-E2E tests are good candidates for grey-box testing. We implement grey-box testing use cases in Chapter 16, Request-Endpoint-Response (REPR).
Meanwhile, let’s explore a few techniques to optimize test case creation, like testing a small subset of values to assert the correctness of our programs with an optimal number of tests.

Test case creation

Multiple ways exist to break down and create test cases to help find software defects with a minimal test count. Here are some techniques to help minimize the number of tests while maximizing the test coverage:

  • Equivalence Partitioning
  • Boundary Value Analysis
  • Decision Table Testing
  • State Transition Testing
  • Use Case Testing

I present the techniques theoretically. They apply to all sorts of tests and should help you write better test suites. Let’s have a quick look at each.
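Before the theory, here is a tiny sketch of how Equivalence Partitioning and Boundary Value Analysis shape a test suite. The shipping rule, the `Shipping.Fee` method, and the dollar amounts are all invented for this illustration: assume orders of $100 or more ship free, while smaller orders pay a flat $10 fee.

```csharp
using System;

public static class Shipping
{
    // Hypothetical rule for this sketch: orders of $100 or more ship free;
    // anything below pays a flat $10 fee.
    public static decimal Fee(decimal orderTotal) =>
        orderTotal >= 100m ? 0m : 10m;
}
```

The rule yields two equivalence partitions ([0, 100) and [100, ∞)), so one representative value per partition plus the values around the $100 boundary (99.99 and 100.00) cover the behavior with a handful of tests instead of dozens.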

Black-box testing – Automated Testing

Black-box testing is a software testing method where a tester examines an application’s functionality without knowing the internal structure or implementation details. This form of testing focuses solely on the inputs and outputs of the system under test, treating the software as a “black box” that we can’t see into.

The main goal of black-box testing is to evaluate the system’s behavior against expected results based on requirements or user stories. Developers writing the tests do not need to know the codebase or the technology stack used to build the software.

We can use black-box testing to assess the correctness of several types of requirements, like:

  1. Functional testing: This type of testing is related to the software’s functional requirements, emphasizing what the system does, a.k.a. behavior verification.
  2. Non-functional testing: This type of testing is related to non-functional requirements such as performance, usability, reliability, and security, a.k.a. performance evaluation.
  3. Regression testing: This type of testing ensures the new code does not break existing functionalities, a.k.a. change impact.

Next, let’s explore a hybrid between white-box and black-box testing.

Grey-box testing

Grey-box testing is a blend of white-box and black-box testing. Testers need only partial knowledge of the application’s internal workings and use a combination of the software’s internal structure and external behavior to craft their tests.

We implement grey-box testing use cases in Chapter 16, Request-Endpoint-Response (REPR). Meanwhile, let’s compare the three techniques.

White-box vs. Black-box vs. Grey-box testing

To start with a concise comparison, here’s a table that compares the three broad techniques:

| Feature | White-box testing | Black-box testing | Grey-box testing |
| --- | --- | --- | --- |
| Definition | Testing based on the internal design of the software | Testing based on the behavior and functionality of the software | Testing that combines the internal design and behavior of the software |
| Knowledge of code required | Yes | No | Partial |
| Types of defects found | Logic, data structure, architecture, and performance issues | Functionality, usability, performance, and security issues | Most types of issues |
| Coverage per test | Small; targeted on a unit | Large; targeted on a use case | Up to large; can vary in scope |
| Testers | Usually performed by developers | Testers can write the tests without specific technical knowledge of the application’s internal structure | Developers can write the tests, and so can testers with some knowledge of the code |
| When to use each style? | Write unit tests to validate complex algorithms or code that yields multiple results based on many inputs. These tests are usually high-speed, so you can have many of them. | Write these if you have specific scenarios you want to test, like UI tests, or if testers and developers are distinct roles in your organization. These usually run the slowest and require you to deploy the application to test it. You want as few as possible to improve the feedback time. | Write these to avoid writing black-box or white-box tests. Layer the tests to cover as much as possible with as few tests as possible. Depending on the application’s architecture, this type of test can yield optimal results for many scenarios. |

Let’s conclude next and explore a few advantages and disadvantages of each technique.

Technical debt – Automated Testing

Technical debt represents the corners you cut while developing a feature or a system. That happens no matter how hard you try because life is life, and there are delays, deadlines, budgets, and people, including developers (yes, that’s you and me).

The most crucial point is understanding that you cannot avoid technical debt altogether, so it’s better to embrace that fact and learn to live with it instead of fighting it. From that point forward, you can only try to limit the amount of technical debt you, or someone else, generate, and make sure you refactor some of it each sprint (or whatever unit of time fits your project/team/process).

One way to limit the piling up of technical debt is to refactor the code often, so factor the refactoring time into your estimates. Another way is to improve collaboration between all the parties involved. Everyone must work toward the same goal if you want your projects to succeed.

You will sometimes cut best practices short due to external forces like people or time constraints. The key is coming back to it as soon as possible to repay that technical debt, and automated tests are there to help you refactor that code and eliminate the debt elegantly. Depending on the size of your workplace, there will be more or fewer people between you and that decision.

Some of these things might be out of your control, so you may have to live with more technical debt than you had hoped. However, even when things are out of your control, nothing stops you from becoming a pioneer and working toward improving the enterprise’s culture. Don’t be afraid to become an agent of change and lead the charge.

Nevertheless, don’t let the technical debt pile up too high, or you may not be able to pay it back; at some point, that is where a project begins to break and fail. Don’t be mistaken; a project in production can be a failure. Delivering a product does not guarantee success, and I’m talking about the quality of the code here, not the amount of generated revenue (I’ll leave that to other people to evaluate).

Next, we look at different ways to write tests, requiring more or less knowledge of the inner workings of the code.

Testing techniques

Here, we look at different ways to approach our tests. Should we know the code? Should we test user inputs and compare them against the system’s results? How do we identify a proper sample of values to test? Let’s start with white-box testing.

White-box testing

White-box testing is a software testing technique that uses knowledge of the internal structure of the software to design tests. We can use white-box testing to find defects in the software’s logic, data structures, and algorithms.

This type of testing is also known as clear-box testing, open-box testing, transparent-box testing, glass-box testing, and code-based testing.

Another benefit of white-box testing is that it can help optimize the code. By reviewing the code to write tests, developers can identify and improve inefficient code structures, improving overall software performance. The developer can also improve the application design by finding architectural issues while testing the code.

White-box testing encompasses most unit and integration tests.
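As a minimal sketch of what white-box tests target, consider this hypothetical method (the name and thresholds are invented for the example). Knowing the implementation has three branches lets us write one targeted test per code path:

```csharp
using System;

public static class Sample
{
    // Three branches; white-box tests can target each path directly
    // because we can see the implementation.
    public static string Classify(int temperatureCelsius)
    {
        if (temperatureCelsius < 0) return "freezing";
        if (temperatureCelsius < 25) return "mild";
        return "hot";
    }
}
```

Knowledge of the branch boundaries (0 and 25) also tells us exactly which edge values are worth asserting.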

Next, we look at black-box testing, the opposite of white-box testing.

Testing approaches – Automated Testing

There are various approaches to testing, such as behavior-driven development (BDD), acceptance test-driven development (ATDD), and test-driven development (TDD). The DevOps culture brings a mindset that embraces automated testing in line with its continuous integration (CI) and continuous deployment (CD) ideals. We can enable CD with a robust and healthy suite of tests that gives a high degree of confidence in our code, high enough to deploy the program when all tests pass without fear of introducing a bug.

TDD

TDD is a software development method that states that you should write one or more tests before writing the actual code. In a nutshell, you invert your development flow by following the Red-Green-Refactor technique, which goes like this:

  1. You write a failing test (red).
  2. You write just enough code to make your test pass (green).
  3. You refactor the code to improve the design while ensuring all the tests still pass.
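In practice, the cycle above could look like this minimal sketch, where the `Calculator.Add` method and its test are invented for illustration:

```csharp
using System;

public static class Calculator
{
    // Step 2 (green): just enough code to make the failing test pass.
    public static int Add(int a, int b) => a + b;
}

public static class CalculatorTests
{
    // Step 1 (red): this test is written first and fails until Add exists.
    public static void Add_returns_the_sum_of_its_operands()
    {
        if (Calculator.Add(2, 3) != 5) throw new Exception("Test failed");
    }
}
// Step 3 (refactor): improve the design while keeping the test green.
```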

We explore the meaning of refactoring next.

ATDD

ATDD is similar to TDD but focuses on acceptance (or functional) tests instead of software units and involves multiple parties like customers, developers, and testers.

BDD

BDD is another complementary technique originating from TDD and ATDD. BDD focuses on formulating test cases around application behaviors using spoken language and involves multiple parties like customers, developers, and testers. Moreover, practitioners of BDD often leverage the given–when–then grammar to formalize their test cases. Because of that, BDD output is in a human-readable format, allowing stakeholders to consult such artifacts.

The given–when–then template defines the way to describe the behavior of a user story or acceptance test, like this:

  • Given one or more preconditions (context)
  • When something happens (behavior)
  • Then one or more observable changes are expected (measurable side effects)
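A unit test can mirror that grammar directly. The `ShoppingCart` class below is invented for this sketch; the comments map each phase of the test to the template:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class ShoppingCart
{
    private readonly List<decimal> _prices = new();
    public void Add(decimal price) => _prices.Add(price);
    public decimal Total => _prices.Sum();
}

public static class ShoppingCartTests
{
    public static void Given_an_empty_cart_when_adding_items_then_total_reflects_them()
    {
        // Given an empty cart (precondition/context)
        var cart = new ShoppingCart();

        // When two items are added (behavior)
        cart.Add(10m);
        cart.Add(5m);

        // Then the total reflects both items (measurable side effect)
        if (cart.Total != 15m) throw new Exception("Then clause failed");
    }
}
```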

ATDD and BDD are great areas to dig deeper into and can help design better apps; defining precise user-centric specifications can help build only what is needed, prioritize better, and improve communication between parties. For the sake of simplicity, we stick to unit testing, integration testing, and a tad of TDD in the book. Nonetheless, let’s go back to the main track and define refactoring.

Refactoring

Refactoring is about (continually) improving the code without changing its behavior. An automated test suite should help you achieve that goal and discover when you break something. Whether you do TDD or not, I recommend refactoring as often as possible; it helps clean your codebase, and it should also help you get rid of some technical debt at the same time.

Okay, but what is technical debt?
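Here is a minimal, invented sketch of refactoring: both methods compute the same total, but the second names the intermediate steps, improving readability without changing the behavior (the 10% tax rate is hypothetical):

```csharp
public static class Pricing
{
    // Before refactoring: one dense expression with duplicated logic.
    public static decimal TotalBefore(decimal price, int quantity) =>
        price * quantity + price * quantity * 0.1m;

    // After refactoring: same behavior, with the intent spelled out.
    public static decimal TotalAfter(decimal price, int quantity)
    {
        var subtotal = price * quantity;
        var tax = subtotal * 0.1m; // hypothetical 10% tax rate
        return subtotal + tax;
    }
}
```

A test asserting that both versions agree is exactly the safety net the paragraph above describes.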

Picking the right test style – Automated Testing

Next is a dependency map of a hypothetical system. We use that diagram to pick the most meaningful type of test possible for each piece of the program. In real life, that diagram will most likely be in your head, but I drew it out in this case. Let’s inspect that diagram before I explain its content:

Figure 2.2: Dependency map of a hypothetical system

In the diagram, the Actor can be anything from a user to another system. Presentation is the piece of the system that the Actor interacts with; it forwards the requests to the system itself (this could be a user interface). D1 is a component that has to decide what to do next based on the user input. C1 to C6 are other components of the system (these could be classes, for example). DB is a database.

D1 must choose between three code paths: interact with component C1, C4, or C6. This type of logic is usually a good subject for unit tests, ensuring the algorithm yields the correct result based on the input parameter. Why pick a unit test? We can quickly test multiple scenarios, edge cases, out-of-bound data cases, and more. We usually mock the dependencies away in this type of test and assert that the subject under test made the expected call on the desired component.

Then, if we look at the other code paths, we could write one or more integration tests for component C1, testing the whole chain in one go (C1, C5, and C3) instead of writing multiple mock-heavy unit tests for each component. If there is any logic that we need to test in components C1, C5, or C3, we can always add a few unit tests; that’s what they are for.

Finally, C4 and C6 both use C2. Depending on the code (which we don’t have here), we could write integration tests for C4 and C6, testing C2 simultaneously. Another way would be to unit test C4 and C6, and then write integration tests between C2 and the DB. If C2 has no logic, the latter could be the best and fastest option, while the former will most likely yield results that give you more confidence in your test suite in a continuous delivery model.

When it is an option, I recommend evaluating the possibility of writing fewer, more meaningful integration tests that assert the correctness of a use case over a suite of mock-heavy unit tests. Remember always to keep the execution speed in mind. That may seem to go “against” the test pyramid, but does it? If you spend less time (thus lower costs) testing more use cases (adding more value), that sounds like a win to me. Moreover, we must not forget that mocking dependencies tends to make you waste time fighting the framework or other libraries instead of testing something meaningful, and it can add up to a high maintenance cost over time.

Now that we have explored the fundamentals of automated testing, it is time to explore testing approaches and TDD, which is a way to apply those testing concepts.
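To make the D1-style unit test described above concrete, here is a sketch with invented names: a hand-rolled fake component is injected, and the test asserts that the dispatcher routed the call to the expected dependency.

```csharp
using System;

public interface IComponent { void Execute(); }

// Hand-rolled fake: records whether it was called, instead of doing real work.
public class FakeComponent : IComponent
{
    public bool WasCalled { get; private set; }
    public void Execute() => WasCalled = true;
}

// Plays the role of D1: picks a code path based on the input.
public class Dispatcher
{
    private readonly IComponent _c1, _c4, _c6;

    public Dispatcher(IComponent c1, IComponent c4, IComponent c6) =>
        (_c1, _c4, _c6) = (c1, c4, c6);

    public void Dispatch(int input)
    {
        if (input < 0) _c1.Execute();
        else if (input == 0) _c4.Execute();
        else _c6.Execute();
    }
}
```

The routing rule (negative, zero, positive) is arbitrary; the point is that the test asserts an interaction, not a return value.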

Getting started with .NET – Introduction

A bit of history: .NET Framework 1.0 was first released in 2002. .NET is a managed framework that compiles your code into an Intermediate Language (IL) named Microsoft Intermediate Language (MSIL). That IL code is then compiled into native code and executed by the Common Language Runtime (CLR). The CLR is now known simply as the .NET runtime.

After releasing several versions of .NET Framework, Microsoft never delivered on the promise of an interoperable stack. Moreover, many flaws were built into the core of .NET Framework, tying it to Windows. Mono, an open-source project, was developed by the community to enable .NET code to run on non-Windows OSes. Mono was used and supported by Xamarin, acquired by Microsoft in 2016, and enabled .NET code to run on other OSes like Android and iOS. Later, Microsoft started to develop an official cross-platform .NET SDK and runtime they named .NET Core.

The .NET team did a magnificent job building ASP.NET Core from the ground up, cutting out compatibility with the older .NET Framework versions. That brought its share of problems at first, but .NET Standard alleviated the interoperability issues between the old .NET and the new .NET. After years of improvements and two major versions in parallel (Core and Framework), Microsoft reunified most .NET technologies into .NET 5+ and the promise of a shared Base Class Library (BCL). With .NET 5, .NET Core simply became .NET, while ASP.NET Core remained ASP.NET Core. There is no .NET “Core” 4, to avoid any potential confusion with .NET Framework 4.X.

New major versions of .NET now release every year. Even-numbered releases are Long-Term Support (LTS) releases with free support for 3 years, and odd-numbered releases (Current) have free support for only 18 months.

The good thing behind this book is that the architectural principles and design patterns covered should remain relevant in the future and are not tightly coupled with the version of .NET you are using. Minor changes to the code samples should be enough to migrate your knowledge and code to new versions. Next, let’s cover some key information about the .NET ecosystem.

.NET SDK versus runtime

You can install different binaries grouped under SDKs and runtimes. The SDK allows you to build and run .NET programs, while the runtime only allows you to run them.

As a developer, you want to install the SDK on your development environment. On the server, you want to install only the runtime. The runtime is lighter, while the SDK contains more tools, including the runtime.
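To see what a given machine has installed, the .NET CLI can list SDKs and runtimes separately:

```shell
# List the SDKs installed (what a development machine needs).
dotnet --list-sdks

# List the runtimes installed (all a server needs to run an app).
dotnet --list-runtimes
```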

.NET 5+ versus .NET Standard

When building .NET projects, there are multiple types of projects, but basically, we can separate them into two categories:

  • Applications
  • Libraries

Applications target a version of .NET, such as net5.0 and net6.0. Examples of that would be an ASP.NET application or a console application.

Libraries are bundles of code compiled together, often distributed as a NuGet package. .NET Standard class library projects allow sharing code between .NET 5+ and .NET Framework projects. .NET Standard came into play to bridge the compatibility gap between .NET Core and .NET Framework, which eased the transition; things were not easy when .NET Core 1.0 first came out.

With .NET 5 unifying all the platforms and becoming the future of the unified .NET ecosystem, .NET Standard is no longer needed. Moreover, app and library authors should target the base Target Framework Moniker (TFM), for example, net8.0. You can also target netstandard2.0 or netstandard2.1 when needed, for example, to share code with .NET Framework. Microsoft also introduced OS-specific TFMs with .NET 5+, allowing code to use OS-specific APIs, like net8.0-android and net8.0-tvos. You can also target multiple TFMs when needed.
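As an illustration, a hypothetical class library project file could multi-target a base TFM and .NET Standard so the same code can also be consumed from .NET Framework projects:

```xml
<!-- Hypothetical library multi-targeting .NET 8 and .NET Standard 2.0. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- A semicolon-separated TargetFrameworks list produces one build per TFM. -->
    <TargetFrameworks>net8.0;netstandard2.0</TargetFrameworks>
  </PropertyGroup>
</Project>
```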

Important note about cookies – Introduction

The server sets cookies, and the client sends them back with every request-response cycle. This could kill your bandwidth or slow down your application if you pass too much information back and forth (in cookies or otherwise). One good example would be a serialized identity cookie that is very large.

Another example, unrelated to cookies but that created such a back-and-forth, was the good old Web Forms ViewState. This was a hidden field sent with every request. That field could become very large when left unchecked.

Nowadays, with high-speed internet, it is easy to forget about those issues, but they can significantly impact the user experience of someone on a slow network.

When the server decides to respond to the request, it returns headers and an optional body, following the same principles as the request. The first line indicates the status of the response: whether the request was successful. In our case, the status code was 200, which indicates success. Each server can add more or less information to its response. You can also customize the response with code.

Here is the response to the previous request:

HTTP/1.1 200 OK
Server: GitHub.com
Content-Type: text/html; charset=utf-8
Last-Modified: Wed, 03 Oct 2018 21:35:40 GMT
ETag: W/"5bb5362c-f677"
Access-Control-Allow-Origin: *
Expires: Fri, 07 Dec 2018 02:11:07 GMT
Cache-Control: max-age=600
Content-Encoding: gzip
X-GitHub-Request-Id: 32CE:1953:F1022C:1350142:5C09D460
Content-Length: 10055
Accept-Ranges: bytes
Date: Fri, 07 Dec 2018 02:42:05 GMT
Via: 1.1 varnish
Age: 35
Connection: keep-alive
X-Served-By: cache-ord1737-ORD
X-Cache: HIT
X-Cache-Hits: 2
X-Timer: S1544150525.288285,VS0,VE0
Vary: Accept-Encoding
X-Fastly-Request-ID: 98a36fb1b5642c8041b88ceace73f25caaf07746
<Response body truncated for brevity>

Now that the browser has received the server’s response, it renders the HTML webpage. Then, for each resource, it sends another HTTP call to its URI and loads it. A resource is an external asset, such as an image, a JavaScript file, a CSS file, or a font.

After the response, the server is no longer aware of the client; the communication has ended. It is essential to understand that to create a pseudo-state between requests, we need an external mechanism. That mechanism could be session state (which leverages cookies), cookies themselves, or some other ASP.NET Core mechanism, or we could create a stateless application. I recommend going stateless whenever possible. We write primarily stateless applications in this book.

Note

If you want to learn more about session and state management, I left a link in the Further reading section at the end of the chapter.

As you can imagine, the backbone of the internet is its networking stack. The Hypertext Transfer Protocol (HTTP) is the highest layer of that stack (layer 7). HTTP is an application-layer protocol built on the Transmission Control Protocol (TCP). TCP (layer 4) is the transport layer, which defines how data is moved over the network (for instance, the transmission of data, the amount of transmitted data, and error checking). TCP uses the Internet Protocol (IP) layer to reach the computer it tries to talk to. IP (layer 3) represents the network layer, which handles packet IP addressing.

A packet is a chunk of data that is transmitted over the wire. We could send a large file directly from a source to a destination machine, but that is not practical, so the network stack breaks down large items into smaller packets. For example, the source machine breaks a file into multiple packets and sends them to the target machine, which reassembles them back into the source file. This process allows numerous senders to use the same wire instead of waiting for the first transmission to finish. If a packet gets lost in transit, the source machine can also resend only that packet to the target machine.

Rest assured, you don’t need to understand every detail behind networking to program web applications, but it is always good to know that HTTP uses TCP/IP and chunks big payloads into smaller packets. Moreover, HTTP/1 limits the number of parallel requests a browser can open simultaneously. This knowledge can help you optimize your apps: for example, a high number of assets to load, their size, and the order in which they are sent to the browser can increase the page load time, the perceived page load time, or the paint time.

To conclude this subject without digging too deep into networking: HTTP/1 is older but foundational, while HTTP/2 is more efficient and supports streaming multiple assets over the same TCP connection. It also allows the server to send assets to the client before the client requests them, a feature called server push. If you find HTTP interesting, HTTP/2 is an excellent place to start digging deeper, as is the HTTP/3 proposed standard, which uses the QUIC transport protocol instead of TCP (RFC 9114). ASP.NET Core 7.0+ supports HTTP/3, which is enabled by default in ASP.NET Core 8.0.

Next, let’s quickly explore .NET.