Acceptance Test-Driven Development in .NET Core with SpecFlow

1/29/2020

Test-driven development is awesome: it gives you a safety net to rely on, so the code can be confidently refactored. TDD learners normally start with unit tests, which cover independent parts of the software in detail and are a great thing, but what guarantees that when you glue all the classes together they work as expected? To test that everything works together, we need different types of tests.

Before we go further into ATDD

You must have a basic understanding of TDD; if you are not familiar with it, learn the basics of the Red > Green > Refactor cycle first.

What is ATDD?

ATDD is acceptance test-driven development: the acceptance tests drive the implementation. I use them in the context of double-loop TDD, where the outer loop is the acceptance test:

Double-loop TDD cycle — Image Source

Instead of only the small TDD cycle of Red > Green > Refactor, we add an outer loop around it: first make the acceptance test fail, then run multiple TDD cycles of unit tests until the acceptance test passes.


The acceptance tests can be expressed as:

Plain test code

Example test Calculator

These are written in the same language as your unit tests. Because the scope can become fairly big, you can extract many methods out of the test to keep it human-readable.
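A plain-code acceptance test for the calculator might look like this (a minimal sketch; the `Calculator` class and its methods are illustrative names, not from the original project):

```csharp
using System.Collections.Generic;
using Xunit;

// Hypothetical system under test, for illustration only.
public class Calculator
{
    private readonly List<int> _entries = new List<int>();

    public void Enter(int number) => _entries.Add(number);

    public int Add()
    {
        var sum = 0;
        foreach (var n in _entries) sum += n;
        return sum;
    }
}

public class CalculatorAcceptanceTests
{
    [Fact]
    public void Adding_two_numbers_returns_their_sum()
    {
        // Arrange: the user has entered two numbers
        var calculator = new Calculator();
        calculator.Enter(50);
        calculator.Enter(70);

        // Act: the user presses add
        var result = calculator.Add();

        // Assert: the result is the sum of the entries
        Assert.Equal(120, result);
    }
}
```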

BDD language

Example test written in Specflow
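In SpecFlow, the same scenario is expressed as a Gherkin feature file, roughly like this (based on the default calculator example SpecFlow ships with; wording may vary by template version):

```gherkin
Feature: Calculator
	Simple calculator for adding two numbers

Scenario: Add two numbers
	Given I have entered 50 into the calculator
	And I have entered 70 into the calculator
	When I press add
	Then the result should be 120 on the screen
```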

Because it is written as Given/When/Then in a human-readable format, it is great for documenting the features of the application and a good way to show the business what you are trying to develop for them.

I find BDD tests more appropriate for acceptance tests because the scope is normally large, and BDD can present more data more clearly.

Test scope

When creating your acceptance tests, you will have to define how much of your system you are mocking, if anything at all:

Testing Scopes — Created using http://draw.io

End to end

These are black-box tests: they run against a deployed instance without internal knowledge of how the system works. That can be useful if you migrate technologies later.

This provides the best coverage you can get, but it comes with downsides: slow test execution, reliance on pre-existing data, and false positives caused by dependency failures.

Integration tests

These do not run against a deployed instance, but rather against some sort of bootstrapping of the application, using real dependencies like databases and external systems. Those dependencies can be mock instances, e.g. a mock of another team's API, but the communication with them is real, over HTTP. These tests suffer from some of the same issues as the end-to-end tests.

In-process, no dependencies called

Like the integration tests, we bootstrap the application entry point somehow, but without calling any real dependency: we mock all dependencies at the last level. In the diagram, for example, the SqlProductSearch class is outside the test coverage.

The benefit is that the tests run very fast; you can even run them continuously with tools like NCrunch. They also won't produce false positives from dependency failures. The downside is that you are mocking things, and sometimes when you plug in the real dependency, unexpected things happen.


Although there isn’t one right answer, I would evaluate, for each application and the team's experience, which scope and test structure to use. My personal preference is normally BDD with in-process tests, so that’s what I’m going to show next:

Setting up SpecFlow with .NET Core

First, we will need a console app and an xUnit test project in .NET Core with Visual Studio 2019:

Console App in Visual Studio 2019

Add a new XUnit project to it:

New xUnit project — Visual Studio 2019
Console plus the XUnit project example

Installing the SpecFlow Visual Studio extension:

To develop with SpecFlow it is useful to have IDE benefits like syntax highlighting and auto-formatting, so we need to install a Visual Studio extension:

After downloading it, Visual Studio needs to be closed so the installer is triggered:

After the installation is complete, re-open the solution. Right-click on the project, Add > New Item…

Now that the SpecFlow extension has been added, you can see a SpecFlow section when adding new items:

Package Dependencies

After installing the extension into Visual Studio, we still need to configure our test project to use SpecFlow with the correct test runner (xUnit, NUnit, MSTest etc.). We will do this tutorial with xUnit.

Right-click the project dependencies, then Manage NuGet Packages…

Search for SpecFlow and install the SpecFlow.xUnit package:

This got me an error:

So I updated the package xunit.core to 2.4.1, and the installation completed.

Code-behind

In previous versions of SpecFlow, there was the concept of a code-behind: a C# file that was a child of the .feature file. The Feature.cs used to be regenerated every time you changed the .feature file.

With the evolution of SpecFlow, those files are now generated at build time, by installing the package SpecFlow.Tools.MsBuild.Generation.

As those are generated at build time, I recommend you don't source-control them: add *.feature.cs to the .gitignore.

With all this in place, we can finally see and run our SpecFlow test in the Test Explorer.

Alternatively, those packages can be installed from the command line:
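For instance, from the test project folder (package names as used in this post; check NuGet for current versions):

```shell
# Add the xUnit flavour of SpecFlow to the test project
dotnet add package SpecFlow.xUnit

# Generate the *.feature.cs code-behind files at build time
dotnet add package SpecFlow.Tools.MsBuild.Generation
```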

Implementing simple SpecFlow tests:

SpecFlow comes with a default example for creating a calculator, which is useful to get a very basic example working:

As you can see in our scenario “Add two numbers”, all steps are highlighted in purple, which means SpecFlow doesn’t know what to do with them. To tell SpecFlow what code to run for each step, we need to generate step definitions:

Specflow — Generate step definitions menu item Specflow — Generate step definitions popup
The default generated Specflow steps file

Now the feature file should look like this:

The initial setup is a bit flaky, so you might need to restart Visual Studio and do a few rebuilds until it looks correct.

A couple of points here:

  • All steps in white means SpecFlow knows which code to execute for each step. You can right-click on them and choose “Go To Step Definition” to check it is mapping correctly.
  • The numbers in a darker grey indicate those are parameters to the step method.

SpecFlow knows how to receive the parameter by applying a regular expression to the binding string, so anything that comes into the (.*) position is converted to int, as defined in the method signature. You can receive multiple parameters into a method; they are passed based on their order, and the parameter names don't matter, so feel free to use meaningful names.

ScenarioContext.Current.Pending(); is the SpecFlow way to fail the test while it is pending implementation.
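The generated step definitions look roughly like this (a sketch of what the SpecFlow item template produces; the exact binding strings depend on your feature file):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    [Given(@"I have entered (.*) into the calculator")]
    public void GivenIHaveEnteredIntoTheCalculator(int number)
    {
        // Whatever matches (.*) is converted to int by the method signature
        ScenarioContext.Current.Pending();
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        ScenarioContext.Current.Pending();
    }

    [Then(@"the result should be (.*) on the screen")]
    public void ThenTheResultShouldBeOnTheScreen(int result)
    {
        ScenarioContext.Current.Pending();
    }
}
```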

So if we link each step to what the code should do, we get to something like this:

CalculatorSteps.cs Spec passed test picture
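Filled in, the steps end up looking something like this (a sketch; the `Calculator` class is a hypothetical implementation with `Enter` and `Add` methods):

```csharp
using TechTalk.SpecFlow;
using Xunit;

[Binding]
public class CalculatorSteps
{
    private readonly Calculator _calculator = new Calculator();
    private int _result;

    [Given(@"I have entered (.*) into the calculator")]
    public void GivenIHaveEnteredIntoTheCalculator(int number)
        => _calculator.Enter(number);

    [When(@"I press add")]
    public void WhenIPressAdd()
        => _result = _calculator.Add();

    [Then(@"the result should be (.*) on the screen")]
    public void ThenTheResultShouldBeOnTheScreen(int expected)
        => Assert.Equal(expected, _result);
}
```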

A more realistic SpecFlow test

Good, we got our first basic test running and passing! Yay!

Now let’s get closer to what a production test looks like. Imagine we have a simple requirement: we need to sync orders from a message queue into our back office:

The important bit here is our test scope: we don’t necessarily know upfront which classes we are going to need for syncing the order to the database, but we know our external dependencies:

  • OrderProcessor — entry point for our process
  • ISalesChannelQuery — gets the sales channel id based on its description
  • IOrderPersister — inserts the order into the database
  • ? — used where we don’t know our design yet
Specflow — Dependency test example
Specflow — Dependency test example default steps
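A feature file for that requirement could read along these lines (a sketch; the scenario wording, column names and values are illustrative):

```gherkin
Feature: Order sync
	Sync orders from the message queue into the back office

Scenario: Sync a new order
	Given the sales channel "Web Store" has the id 42
	When an order message is received with
		| OrderNumber | SalesChannel | Amount |
		| 1001        | Web Store    | 19.90  |
	Then the order 1001 is persisted with sales channel id 42
```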

The test perfectly describes what the software receives from the outside world and what it is expected to send to the outside world.

Another thing to notice is that we are using a few more advanced SpecFlow concepts: the Given takes two parameters, one of them a string (it's just regex like before).

In the “When” we are using a strange syntax with all those pipes: that’s a SpecFlow table. It's useful when you need to pass multiple parameters or multiple rows.

Double-loop TDD cycle — Image Source

As in double-loop TDD, we are going to make that acceptance test fail first, for the right reasons.

We get to something like this:

Specflow — Dependency test implemented steps

And we get the failure for the right reason:

Explaining the Steps implementation:

The “Given” is a simple class-collaboration technique: we are stubbing the sales channel query to return the correct id when needed.

The “When” is your “act”: we need to perform the action under test, which in this case is calling the software entry point. You can see a “table.CreateInstance”:

That works just like MVC model binding: it populates the properties by matching property names via reflection. If there were multiple rows in the table, you would use CreateSet instead of CreateInstance.

For the “Then”, you will see I’m not matching the parameter via reference equality; I’m checking each property individually. That’s because, unlike in unit tests, I don’t have access to stub the object creation.

Example of comparing properties in a Mock verify call
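Putting the three steps together, the implementation might look like this (a sketch using Moq; the interface methods, `OrderProcessor.Process` and the `Order`/`OrderMessage` properties are assumed names, not the post's actual code):

```csharp
using Moq;
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

[Binding]
public class OrderSyncSteps
{
    private readonly Mock<ISalesChannelQuery> _salesChannelQuery = new Mock<ISalesChannelQuery>();
    private readonly Mock<IOrderPersister> _orderPersister = new Mock<IOrderPersister>();

    [Given(@"the sales channel ""(.*)"" has the id (.*)")]
    public void GivenTheSalesChannelHasTheId(string description, int id)
        // Stub the query so the processor can resolve the channel id
        => _salesChannelQuery
            .Setup(q => q.GetIdByDescription(description))
            .Returns(id);

    [When(@"an order message is received with")]
    public void WhenAnOrderMessageIsReceivedWith(Table table)
    {
        // Binds the table columns to properties by name, like MVC model binding
        var message = table.CreateInstance<OrderMessage>();

        var processor = new OrderProcessor(_salesChannelQuery.Object, _orderPersister.Object);
        processor.Process(message);
    }

    [Then(@"the order (.*) is persisted with sales channel id (.*)")]
    public void ThenTheOrderIsPersistedWithSalesChannelId(int orderNumber, int salesChannelId)
        // No reference to the persisted object, so match on its properties
        => _orderPersister.Verify(p => p.Insert(It.Is<Order>(o =>
            o.OrderNumber == orderNumber &&
            o.SalesChannelId == salesChannelId)));
}
```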

Implementation

After having the spec nicely set up, the test drives the implementation, each failure pointing at what is missing. To not prolong the blog post too much, I won’t go through it step by step; I followed the steps explained here. You can still see the final result on GitHub.

Are my dependencies tested anyhow?

If you are using the in-process, no-dependencies strategy, how do you test your dependencies?

For example, to test that we can get the correct sales channel id, we would have to set up data in the database (or use pre-existing data), so we can check that when we query by the sales channel description, we get the expected id.
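Such a dependency test could be a separate, narrowly scoped integration test against a real (or containerised) database, along these lines (a sketch; the connection string, `SqlSalesChannelQuery` constructor and `GetIdByDescription` method are assumptions):

```csharp
using Xunit;

public class SqlSalesChannelQueryTests
{
    [Fact]
    public void Returns_the_id_for_a_known_sales_channel_description()
    {
        // Points at a test database seeded with a known "Web Store" channel
        var query = new SqlSalesChannelQuery(
            "Server=localhost;Database=BackOfficeTests;Trusted_Connection=True");

        var id = query.GetIdByDescription("Web Store");

        // The id the seed data assigned to that channel
        Assert.Equal(42, id);
    }
}
```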

Conclusion

For me, these in-memory acceptance tests feel like first discovering TDD: how could I have worked without them before? How did I test my application? Hours spent trying to set up data to check that specific scenarios worked, or fixing obvious bugs that these tests would have picked up.

I hope with this post you got to:

  • Get a basic idea of ATDD and its variations
  • Be able to bootstrap a SpecFlow test project with .NET Core
  • Get an entry point into applying ATDD in your day job

Cheers, for a better programming industry =)
