I have a unit test that fails randomly, even though I would expect it to pass every time.
This is the test code (as you can see, I'm using NUnit as the testing framework):
using NUnit.Framework;

[TestFixture]
public class TestClass
{
    [Test]
    public async Task Test()
    {
        var start = DateTimeOffset.UtcNow;
        await Task.Delay(TimeSpan.FromMilliseconds(500)).ConfigureAwait(false);
        var end = DateTimeOffset.UtcNow;
        Assert.GreaterOrEqual(end - start, TimeSpan.FromMilliseconds(500));
    }
}
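The only mitigation I can think of is to loosen the assertion with a small tolerance, roughly like the sketch below (the 15 ms value is an arbitrary guess on my part, not something I have verified), but I would rather understand why any tolerance is needed at all:

// Sketch of a possible workaround: allow a small tolerance on the measured
// delay. The 15 ms tolerance is an arbitrary guess.
[Test]
public async Task Test_WithTolerance()
{
    var tolerance = TimeSpan.FromMilliseconds(15); // arbitrary guess

    var start = DateTimeOffset.UtcNow;
    await Task.Delay(TimeSpan.FromMilliseconds(500)).ConfigureAwait(false);
    var end = DateTimeOffset.UtcNow;

    Assert.GreaterOrEqual(end - start, TimeSpan.FromMilliseconds(500) - tolerance);
}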
This is the csproj
file of the test project:
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <IsPackable>false</IsPackable>
    <AnalysisMode>All</AnalysisMode>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.4.1" />
    <PackageReference Include="NUnit" Version="3.13.3" />
    <PackageReference Include="NUnit3TestAdapter" Version="4.3.1" />
  </ItemGroup>

</Project>
The failure happens both on my Windows 10 development machine and on an Ubuntu 20.04.5 machine (running under WSL2 on the same Windows 10 host). There is a difference in the observed behavior, though: the test fails much more frequently on Linux. Failures on Windows are rare, but they do happen occasionally.
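To trigger the failure more often while I investigate, the only idea I have so far is to rerun the test body many times with NUnit's [Repeat] attribute, roughly like this sketch (the count of 100 is arbitrary):

// Sketch: repeat the same measurement many times so an early wake-up
// shows up within a single test run. The repeat count is arbitrary.
[Test]
[Repeat(100)]
public async Task Test_Repeated()
{
    var start = DateTimeOffset.UtcNow;
    await Task.Delay(TimeSpan.FromMilliseconds(500)).ConfigureAwait(false);
    var end = DateTimeOffset.UtcNow;

    Assert.GreaterOrEqual(end - start, TimeSpan.FromMilliseconds(500));
}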
Does anyone have an explanation for this weird behavior? Based on my knowledge and understanding, it doesn't make sense.