This post is part 3 of a 3-part series:
- Part 1: How to test a database in C# with TestContainers
- Part 2: Resetting your test database in C# with Respawn
- Part 3: Creating a clean test suite for C# integration tests
Overview
At this point, we have a test suite successfully running against a containerized database with our database provider of choice, and the database is cleaned and reseeded after every test. Functionally, there’s nothing left to do, but for maintainability, we can make our code a little cleaner so it’s more manageable in the long run.
Base class for database tests
Since all of our tests share some common requirements, such as setting up the DbContext, we can move that logic into a common base class. In this case, I’ve called it DatabaseTest:
[Collection(nameof(DatabaseTestCollection))]
public abstract class DatabaseTest : IAsyncLifetime
{
    private readonly Func<Task> _resetDatabase;
    protected readonly ProductContext Db;
    protected readonly Fixture Fixture;

    protected DatabaseTest(IntegrationTestFactory factory)
    {
        _resetDatabase = factory.ResetDatabase;
        Db = factory.Db;
        Fixture = new Fixture();
        Fixture.Customize(new NoCircularReferencesCustomization());
        Fixture.Customize(new IgnoreVirtualMembersCustomization());
    }

    protected async Task Insert<T>(T entity) where T : class
    {
        await Db.AddAsync(entity);
        await Db.SaveChangesAsync();
    }

    public Task InitializeAsync() => Task.CompletedTask;

    public Task DisposeAsync() => _resetDatabase();
}
Even though this class is fairly small, it pays off in the long run as you add more test classes and accumulate more shared setup logic.
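As a reminder, the [Collection(nameof(DatabaseTestCollection))] attribute refers to an xUnit collection definition set up in the earlier parts of this series. Assuming the IntegrationTestFactory name from those posts, that definition would look something like this:

```csharp
// Marker class: ties every test class annotated with
// [Collection(nameof(DatabaseTestCollection))] to a single shared
// IntegrationTestFactory instance for the lifetime of the collection.
[CollectionDefinition(nameof(DatabaseTestCollection))]
public class DatabaseTestCollection : ICollectionFixture<IntegrationTestFactory>
{
}
```

The class body stays empty on purpose; xUnit only uses it as a named attachment point for the fixture.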
Here’s the new FavoriteServiceTests from the previous post with DatabaseTest inherited:
public class FavoriteServiceTests : DatabaseTest, IAsyncLifetime
{
    private readonly FavoriteService _service;
    private Product _existingProduct = null!;
    private User _existingUser = null!;

    public FavoriteServiceTests(IntegrationTestFactory factory) : base(factory)
    {
        _service = new FavoriteService(Db);
    }

    [Fact]
    public async Task When_User_Has_Not_Favorited_Product_Yet_Then_Inserts_Favorite_Record()
    {
        var expectedFavorite = new ProductFavorite
        {
            ProductId = _existingProduct.Id,
            UserId = _existingUser.Id
        };

        await _service.FavoriteProduct(_existingProduct.Id, _existingUser.Id);

        var allFavorites = Db.ProductFavorite.ToList();
        allFavorites
            .Should().ContainSingle()
            .Which.Should().BeEquivalentTo(expectedFavorite, options => options
                .Excluding(x => x.Id)
                .Excluding(x => x.Product)
                .Excluding(x => x.User)
            );
    }

    [Fact]
    public async Task When_User_Has_Already_Favorited_Product_Then_Does_Not_Insert_Another_Favorite_Record()
    {
        await Insert(new ProductFavorite
        {
            ProductId = _existingProduct.Id,
            UserId = _existingUser.Id
        });

        await _service.FavoriteProduct(_existingProduct.Id, _existingUser.Id);

        var allFavorites = Db.ProductFavorite.ToList();
        allFavorites.Should().ContainSingle();
    }

    public new async Task InitializeAsync()
    {
        await SeedDb();
    }

    private async Task SeedDb()
    {
        _existingProduct = Fixture.Create<Product>();
        _existingUser = Fixture.Create<User>();
        await Insert(_existingUser);
        await Insert(_existingProduct);
    }
}
The change was pretty simple, and we can still provide our own IAsyncLifetime methods as needed; in this case, InitializeAsync does some database seeding specific to this class’s tests. Note the new keyword on InitializeAsync: the base class’s implementation isn’t virtual, so the derived class hides it rather than overriding it.
Other cleanup
Other than the new DatabaseTest class, I’ve also moved the common setup classes into their own folder called Setup. For these posts, that’s DatabaseTest.cs, IntegrationTestFactory.cs, and AutoFixtureExtensions.cs.
GitHub example
You can find a full working example of this at the following GitHub repository: https://github.com/danielwarddev/TestingWithDb
Hello Daniel.
I’ve been reading your blog for a little over a year now, and enjoy all the content you write regarding testing best practices.
I have a question regarding running integration tests using postgres databases.
As I understand it, xUnit runs tests within a collection fixture sequentially. So if I apply the collection fixture mentioned in this article across all test classes, all tests run sequentially. Even with Respawn enabled, is that faster than running them in parallel?
And now the million dollar question:
Would one theoretically be able to run every single test in parallel against a single postgres testcontainer?
I have been playing with that thought for a long long time.
Some ideas I had to avoid shared state / context bleed.
1. Inside the postgres testcontainer create separate logical databases per test, that each test can use.
2. Use the same logical database inside the postgres testcontainer, but create a separate schema per test.
You seem to be extremely knowledgeable when it comes to testing, so I would be happy if you could share your own experience regarding this.
The goal is to be able to write and use your test data as you see fit without worrying about that test data clashing with other tests due to shared state / context bleed.
Hello – great question!
You’re correct that putting all of your tests in a single collection will make them all run sequentially. By default, xUnit treats each class as a separate collection, and you can create your own collections using attributes. It runs all collections in parallel, but tests within a collection sequentially (see more here, under “Parallelism in Test Frameworks”: https://xunit.net/docs/running-tests-in-parallel.html).
As far as I can tell, xUnit doesn’t support running tests within a collection in parallel (and I’ve never tried to, either). You could approximate it yourself by putting each test in its own class/collection, but that seems like fighting against the framework.
I have to admit I’ve never felt the need to do that. You probably could accomplish what you want using different schema/table names and ensuring only one container gets spun up for your whole suite, but it would be a lot of work to create and maintain that part of your project. In my experience, spinning up multiple containers, each with its own database, hasn’t been a performance issue, so I recommend trying that first. That eliminates having to worry about state bleed while also not needing any custom implementation.
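For what it’s worth, the multiple-container approach falls out of xUnit’s defaults: if a test class takes the factory as a class fixture instead of joining a shared collection, each class gets its own factory (and therefore its own container), and xUnit runs the classes in parallel. A rough sketch, assuming the IntegrationTestFactory from parts 1 and 2 of this series:

```csharp
// No [Collection] attribute here, so this class is its own collection and
// runs in parallel with other test classes. IClassFixture gives it a
// dedicated IntegrationTestFactory (one container per test class).
public class ProductServiceTests : IClassFixture<IntegrationTestFactory>
{
    private readonly IntegrationTestFactory _factory;

    public ProductServiceTests(IntegrationTestFactory factory)
    {
        _factory = factory;
    }
}
```

Tests within the class still run sequentially, but with the database reset between them, that’s usually fast enough in practice.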