Writing unit tests with GitHub Copilot

Introduction

A lot of developers do not like writing unit tests. Generative AI tools like GitHub Copilot can help you write them; at least, that is what these tools promise. In this blog post, I will write some unit tests using GitHub Copilot (with Chat) for a simple example, to check whether that promise is kept and what kind of changes I, as a developer, have to make to the generated code to end up with decent unit tests.

Example

For this blog post, I will use the probably familiar example of an order with order line items.
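A minimal sketch of these classes, based on how they are used later in this post (the exact property names are an assumption on my side), looks like this:

using System.Collections.Generic;

public class OrderLineItem
{
    // No constructor and no id, as noted later in this post
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class Order
{
    // Starts out as a public list; it is made private in the next step
    public List<OrderLineItem> OrderLineItems { get; set; } = new();
}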





First tests 

This code does not have a lot of logic to test, so let's add some logic to the Order class. I asked GitHub Copilot to generate the code for calculating the total price of an order. In this code, I also changed the public list into a private one.
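A sketch of the changed Order class, assuming the member names used later in this post (AddOrderLineItem, TotalPrice) and a read-only view of the line items; the exact implementation of the calculation is my reconstruction:

using System.Collections.Generic;
using System.Linq;

public class Order
{
    // The public list has been changed into a private one
    private readonly List<OrderLineItem> _orderLineItems = new();

    // Read-only access to the line items, used for asserting in the tests
    public IReadOnlyCollection<OrderLineItem> OrderLineItems => _orderLineItems;

    public void AddOrderLineItem(OrderLineItem orderLineItem)
    {
        _orderLineItems.Add(orderLineItem);
    }

    // Total price calculation: the sum of price times quantity per line item
    public decimal TotalPrice => _orderLineItems.Sum(item => item.Price * item.Quantity);
}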
 

I can now ask Copilot to generate a unit test for this code using the prompt: "Can you generate a xunit test for the order and orderlineitem class where the order has 2 orderlineitems".

Three unit tests were generated for this prompt: one for TotalPrice and two for AddOrderLineItem.
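The generated TotalPrice test looked roughly like this (a reconstruction based on the issues discussed below; the OrderLineItem constructor with an id is something Copilot invented, and the exact values differed):

[Fact]
public void TotalPrice_WithTwoOrderLineItems_ReturnsSum()
{
    // Arrange
    var order = new Order();
    // Copilot assumed a constructor with an id, which our OrderLineItem does not have
    order.AddOrderLineItem(new OrderLineItem(1, 10m, 3));
    order.AddOrderLineItem(new OrderLineItem(2, 25m, 2));

    // Act
    var totalPrice = order.TotalPrice;

    // Assert
    // The generated item data added up to 80, not to the asserted 70
    Assert.Equal(70m, totalPrice);
}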



Copilot did generate the code, but it did not take our own OrderLineItem into account: our class does not have a constructor and does not contain anything that looks like an id. Furthermore, if I calculate the total order price for the generated test myself, I get 80 and not 70. To get this generated code working, I have to change it. I do like the use of the Arrange, Act, Assert pattern in the generated code. This pattern is widely used in all kinds of test frameworks. If you are not familiar with it, here is a short summary: in the arrange section, everything needed for the unit test is set up; the actual call to the code under test is made in the act section; and the checks that everything was executed correctly are done in the assert section. Let's fix the generated code.

Fix generated code

In unit tests, I usually use factory methods to create the class to test (or subject under test, sut). I asked Copilot to generate a factory method for me, but at the first attempt it generated a separate factory class, and again the code did not take our own OrderLineItem definition into account. I updated the code to match ours and moved the generated code back into the unit test class. This code creates line items with a fixed, precalculated price and quantity. I could have used random numbers here, but then it would be impossible to check the calculated TotalPrice. I like my unit tests to be as specific as possible so that I get the most predictable results from them, so I chose the fixed price option.
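The factory method could look like this (the name and the exact values are my own choice, picked so that the expected total price is 70):

// Factory method inside the unit test class
private static Order CreateOrderWithTwoOrderLineItems()
{
    var order = new Order();
    order.AddOrderLineItem(new OrderLineItem { Price = 10m, Quantity = 3 }); // 30
    order.AddOrderLineItem(new OrderLineItem { Price = 20m, Quantity = 2 }); // 40
    return order;
}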


I can now use this method in the first test case.
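With the factory method in place, the TotalPrice test can be reduced to something like this:

[Fact]
public void TotalPrice_OrderWithTwoOrderLineItems_ReturnsSumOfLineItems()
{
    // Arrange
    var order = CreateOrderWithTwoOrderLineItems();

    // Act
    var totalPrice = order.TotalPrice;

    // Assert
    Assert.Equal(70m, totalPrice);
}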


The other unit tests are replaced as well.
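Rewritten against our own OrderLineItem, the two AddOrderLineItem tests could look like this (exactly what the second test asserts is an assumption on my side):

[Fact]
public void AddOrderLineItem_SingleItem_ItemIsAddedToTheOrder()
{
    // Arrange
    var order = new Order();
    var orderLineItem = new OrderLineItem { Price = 10m, Quantity = 3 };

    // Act
    order.AddOrderLineItem(orderLineItem);

    // Assert
    Assert.Single(order.OrderLineItems);
}

[Fact]
public void AddOrderLineItem_SingleItem_TotalPriceMatchesTheItem()
{
    // Arrange
    var order = new Order();
    var orderLineItem = new OrderLineItem { Price = 10m, Quantity = 3 };

    // Act
    order.AddOrderLineItem(orderLineItem);

    // Assert: the total price should reflect the single added item (10 * 3)
    Assert.Equal(30m, order.TotalPrice);
}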



In these unit tests, only one OrderLineItem is added. The use of Single was proposed by the code inspections in Visual Studio. The assert for the second test was generated by Copilot from the prompt placed before the code. The generated code is now fixed, but from a testing perspective, I am still not satisfied with the unit tests.

Add additional test

I want to be sure that the calculation of TotalPrice also works if no items have been added to the list. I asked Copilot to generate a new unit test for this case.
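Assuming an order without any line items should simply have a total price of 0, the extra test looks like this:

[Fact]
public void TotalPrice_OrderWithoutOrderLineItems_ReturnsZero()
{
    // Arrange
    var order = new Order();

    // Act
    var totalPrice = order.TotalPrice;

    // Assert
    Assert.Equal(0m, totalPrice);
}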


Given the code, I am now satisfied with the unit tests. Let's run them.


In any case, I now have four unit tests for my code. This code is still a bit simple, though. Let's add some complexity, since most production code is more complex than this simple example.

Extra complexity

In a normal system, a discount is given for orders over a certain amount. Let's add this to our code as well. For this example, I choose to give a 5% discount for orders over 50 and a 10% discount for orders over 100.
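A sketch of the changed TotalPrice calculation; whether the boundary values 50 and 100 themselves get the discount is an assumption here (I read "over" as strictly greater than), and that is exactly the behaviour the tests below will pin down:

public decimal TotalPrice
{
    get
    {
        // Calculate the total price of all order line items
        var total = _orderLineItems.Sum(item => item.Price * item.Quantity);

        // Apply a discount: 10% for orders over 100, 5% for orders over 50
        // ("over" is read here as strictly greater than the boundary value)
        if (total > 100)
        {
            total *= 0.90m;
        }
        else if (total > 50)
        {
            total *= 0.95m;
        }

        return total;
    }
}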


The code for calculating the discount was generated by Copilot based on the prompts in the code. Of course, I asked Copilot to generate unit tests for this code as well. It generated two unit tests: one for over 50 and one for over 100.


 

Once again, the generated code does not match the code in Order, and the total is not calculated correctly, so we have to fix the code.

Fix generated code and include some test knowledge in it

Let's first fix the test case for a total price above 50. The value of 50 should be considered a boundary value for testing. According to the test theory of equivalence classes, we should pick a test case in the middle of the test range. Since the next boundary value is 100, I take a value in the middle of that range; for this test case, I choose 70. The 5% discount is applied in this case, so I should get 66.50 as the total price. Precision is not an issue for this result, but you should take precision into account when asserting calculated values.
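The fixed test case, assuming a small helper that builds an order with a single line item priced at 1, so that the quantity equals the undiscounted total (the helper is my own shorthand):

private static Order CreateOrderWithUndiscountedTotal(int quantity)
{
    var order = new Order();
    order.AddOrderLineItem(new OrderLineItem { Price = 1m, Quantity = quantity });
    return order;
}

[Fact]
public void TotalPrice_OrderTotalOf70_AppliesFivePercentDiscount()
{
    // Arrange
    var order = CreateOrderWithUndiscountedTotal(70);

    // Act
    var totalPrice = order.TotalPrice;

    // Assert: 70 * 0.95 = 66.50
    Assert.Equal(66.50m, totalPrice);
}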


So, I now have a good test case for the 5% discount code. However, the test code does not yet check the behaviour around the boundary value itself. According to the theory, we should check value - 1, the value itself and value + 1 to ensure the correct behaviour around the boundary. So, in this case, we should check 49, 50 and 51.
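Using the same helper, the three boundary tests could look like this; the expected values for 50 assume the strictly-greater-than reading of "over 50" sketched earlier:

[Fact]
public void TotalPrice_OrderTotalOf49_AppliesNoDiscount()
{
    var order = CreateOrderWithUndiscountedTotal(49);

    Assert.Equal(49m, order.TotalPrice);
}

[Fact]
public void TotalPrice_OrderTotalOf50_AppliesNoDiscount()
{
    // 50 is not "over 50", so no discount is expected here
    var order = CreateOrderWithUndiscountedTotal(50);

    Assert.Equal(50m, order.TotalPrice);
}

[Fact]
public void TotalPrice_OrderTotalOf51_AppliesFivePercentDiscount()
{
    var order = CreateOrderWithUndiscountedTotal(51);

    // 51 * 0.95 = 48.45
    Assert.Equal(48.45m, order.TotalPrice);
}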







For the boundary value 50 we now have sufficient test cases, so let's focus on the boundary value of 100. Given the equivalence classes, we should also include a case well above 100; for this test case, I choose 200. And, of course, we need test cases for 99, 100 and 101. I could write four different unit tests for these cases. To show you that there is an alternative way to test them, however, let's use the [Theory] attribute instead of [Fact]. A Theory allows you to pass test values into the test method. For this scenario, I have to pass the quantity and the expected total price to the method.
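The parameterised test method then takes the quantity and the expected total price as arguments (a sketch; the data attributes are added in the next step):

[Theory]
public void TotalPrice_AroundBoundaryOf100_AppliesExpectedDiscount(int quantity, decimal expectedTotalPrice)
{
    // Arrange
    var order = CreateOrderWithUndiscountedTotal(quantity);

    // Act
    var totalPrice = order.TotalPrice;

    // Assert
    Assert.Equal(expectedTotalPrice, totalPrice);
}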



I do have to pass the parameters to the test case. I can use InlineData for this.
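With InlineData, the attributes on top of the test method could look like this; the expected values again assume the strictly-greater-than discount rule, and xUnit converts the double literals to the decimal parameter:

[Theory]
[InlineData(99, 94.05)]   // just below the boundary: 5% discount
[InlineData(100, 95.00)]  // on the boundary: still only 5% with the assumed rule
[InlineData(101, 90.90)]  // just above the boundary: 10% discount
[InlineData(200, 180.00)] // well above the boundary: 10% discount
public void TotalPrice_AroundBoundaryOf100_AppliesExpectedDiscount(int quantity, decimal expectedTotalPrice)
{
    // Body as in the previous sketch
}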

Normally, I would not surround the data with a region; in my actual code I only added one to allow for clean screenshots of the code.

As an alternative to using InlineData, I could also use the MemberData option.
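A sketch of the MemberData variant, using a static array with the test values as described below (the member name is my own choice):

// The static array with the test values, referenced by name in the attribute
public static readonly object[][] BoundaryOf100TestData =
{
    new object[] { 99, 94.05m },
    new object[] { 100, 95.00m },
    new object[] { 101, 90.90m },
    new object[] { 200, 180.00m },
};

[Theory]
[MemberData(nameof(BoundaryOf100TestData))]
public void TotalPrice_AroundBoundaryOf100_AppliesExpectedDiscount(int quantity, decimal expectedTotalPrice)
{
    // Body as in the InlineData version
}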
 

For this option to work, I need to set up a static array with the values and pass the name of this array to the MemberData attribute. With either option, I now have coverage for the boundary case of 100.

After adding these tests, we now have good enough coverage for this code. 


Conclusion

GitHub Copilot did help me write these tests, but only to some extent. If I want to write good unit tests, I still have to pay close attention to the test cases generated by Copilot and supplement them with the test cases that I know are needed to get good enough test coverage.

The code for this blog can be found here. For the different steps, separate branches have been used. 
