.NET Core code coverage - the simplest ever solution

Ever wanted to see the test coverage of your solution?

0. Add the coverlet.collector NuGet package to your test project.
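If the package isn't referenced yet, it can be added from the test project folder with the standard dotnet CLI command:
dotnet add package coverlet.collector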

1. Install the reporting tool:
dotnet tool install -g dotnet-reportgenerator-globaltool 

2. Run tests and store the results in a temp folder (you don't want those result files trashing your solution folder):
dotnet test --collect:"XPlat Code Coverage" --results-directory $env:TEMP\CodeCoverage;

Or, if you want to exclude some files from coverage, add a runsettings.xml file (a minimal example is shown below) and pass it in:
dotnet test --settings runsettings.xml --results-directory $env:TEMP\CodeCoverage;
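A minimal runsettings.xml for the coverlet collector could look like this (the exclude filters are just placeholders for your own assemblies and namespaces):

<?xml version="1.0" encoding="utf-8" ?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="XPlat Code Coverage">
        <Configuration>
          <Format>cobertura</Format>
          <Exclude>[*.Tests]*,[*]*.Migrations.*</Exclude>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>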

3. Generate the report:
Test results are stored in a folder named with a GUID, so the tricky part here is finding the most recently created folder:
reportgenerator -reports:((gci $env:TEMP\CodeCoverage | sort CreationTime -desc | select -f 1).FullName +"\coverage.cobertura.xml") -targetdir:$env:TEMP\CodeCoverResult -reporttypes:Html;

4. Open it:
start $env:TEMP\CodeCoverResult\index.html

Join all the steps in one PowerShell file and enjoy one-click code coverage, or use it in your CI/CD pipeline.
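A rough sketch of such a script (the file name run-coverage.ps1 is just a placeholder) could look like this:

# run-coverage.ps1 - run tests with coverage, build the HTML report and open it
$resultsDir = "$env:TEMP\CodeCoverage"
$reportDir  = "$env:TEMP\CodeCoverResult"
dotnet test --collect:"XPlat Code Coverage" --results-directory $resultsDir
# the run results land in a GUID-named subfolder, so take the newest one
$latestRun = Get-ChildItem $resultsDir | Sort-Object CreationTime -Descending | Select-Object -First 1
$coverageFile = Join-Path $latestRun.FullName "coverage.cobertura.xml"
reportgenerator -reports:$coverageFile -targetdir:$reportDir -reporttypes:Html
Start-Process "$reportDir\index.html"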
