
Writing Tests with GitHub Copilot

Using AI assistants like GitHub Copilot can be a game-changer when it comes to writing tests for your code.

For this example, we will use GitHub Copilot to help us write tests for the epicweb/invariant utility with Node's built-in test runner.

Here's a step-by-step guide on how to use GitHub Copilot to write your tests.


Transcript

00:00 I have this package called "epicweb/invariant" and what it does is it allows you to call this invariant function with some condition and if that condition is false, then it will throw an error with this message. It's pretty simple, there's not a whole lot to it.
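For reference, here's roughly how that looks in use (a sketch; the local import path is an assumption for illustration):

```ts
// Rough sketch of using the invariant utility described above.
// The import path is an assumption, not pulled from the package's docs.
import { invariant } from './index.ts'

function greet(name: string | null) {
  // If the condition is falsy, invariant throws an error with this message
  invariant(name, 'name is required')
  return `Hello, ${name}`
}
```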

00:15 We have an invariantResponse version of this as well, which will throw a Response instead, and you can customize the response. It's a pretty useful utility, we use it everywhere. Not very complicated, but it still needs to be tested. So in the index test right here, I've actually deleted all the tests

00:32 and I also deleted the git information from this project so that GitHub Copilot isn't trying to talk to GitHub and be like, "Hey, what are the tests here?" and just fill in all the tests. So let's get the test going. And we're also using the Node built-in test runner. So we're using tsx so we can run TS files, but you could just replace this with node.

00:51 This is just the built-in test runner here as well, and that's going to be important for us too. So to get started, I'm going to keep this index TS file open because right now, GitHub Copilot will use the open tabs that you have as part of the context for the code generation that it's doing.

01:09 That may not be the case in the future, and different AI assistants will have different mechanisms for including things in the context. Some will probably just pull the whole project into context. But the point is that it's important for you to understand how these AI assistants get the full context, the full picture of what you're trying to do

01:29 and for the time being, Copilot needs to have those files open. You don't typically have to think about this very much because as you're working throughout the day, you're probably going to have the right files open anyway, so it's not normally a problem. Okay, so we're going to ask GitHub Copilot to do something.

01:48 We're going to say, "Please write a test for the invariant utility." And I'm going to leave it kind of vague so that we can see what it generates and talk about how to improve it. So here it's doing an import, that's fine. The "describe," "it," and "expect" are coming from the globals that you get from

02:06 pretty much most JavaScript-based testing frameworks. But we're not using a testing framework, we're using the built-in testing utilities from Node. So I'm actually going to improve this and we're going to say,

02:18 "Please write a test for invariant using the built-in testing utilities from Node." Like, "node assert" and "node test." And we'll see what it generates for us there.

02:37 Okay, so it's bringing in "strict as assert," that is not quite what I want. So, getting a little bit closer though, we can actually edit this prompt. So, "built-in testing utilities from Node."

02:51 How about we say, "Using assert from node:assert" and "test from node:test." And we'll see if it can improve on this here. Okay, so now we've got "test" from "node:test." And let's improve this further.

03:10 "Do not use strict and update the import from assert to node assert." Because that's the way I like my Node imports to look. "Assert" is not a named import.

03:30 And there we go, awesome. And here, let's see what happens if I just say "thanks." Awesome, let's accept that. Okay, so we've got our invariant, and our test, and our assert. And our test should be passing, yes, it is passing.
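The accepted test ends up looking something like this (a plausible reconstruction, not Copilot's exact output; the test title and message string are assumptions):

```ts
// Note the default import of assert from 'node:assert',
// rather than the `import { strict as assert }` form generated at first.
import assert from 'node:assert'
import { test } from 'node:test'
import { invariant } from './index.ts'

test('invariant throws an error when the condition is false', () => {
  assert.throws(() => invariant(false, 'Condition must be truthy'))
})
```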

03:47 An important note here is that you need to verify that you're actually testing what you think you are. This has nothing to do with AI, by the way, but I would be remiss if I didn't mention it: make sure that the test can fail. And that's actually maybe even more important when you've got an intern working for you, your AI assistant, to make sure that they're doing their job.

04:06 So the job is, make sure that this test verifies that it's working. So to verify that the test is doing what it's supposed to do, go break the implementation. And if the test still passes, then it's not doing its job. Okay, great. So let's prompt the AI in a little different way. We're going to use a comment this time.

04:25 And I'm going to say, "Verify the message argument can be a function." And then we'll just let Copilot autofill that for us as well. So it throws, "Invariant false, condition must be truthy." Boom. Awesome. Okay, so I'm happy with that.
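That comment produces something along these lines (a sketch; the exact test title and message string are assumptions):

```ts
// Sketch of the test Copilot filled in from the comment above.
import assert from 'node:assert'
import { test } from 'node:test'
import { invariant } from './index.ts'

test('invariant accepts a function for the message argument', () => {
  assert.throws(
    // The message argument is a function that returns the message
    () => invariant(false, () => 'Condition must be truthy'),
    // When given an object, assert.throws checks these properties on the thrown error
    { message: 'Condition must be truthy' },
  )
})
```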

04:43 I'm not going to keep this comment around. Sometimes it can be useful, but I'm looking at this and thinking, that's literally what the message, or the title of the test, is for. So I'm not going to hang on to that at all. So one thing that I want to improve about this is I can see that we've got this invariant error.

05:02 So it's saying, "Hey, this is going to throw an invariant error," but I don't know what the message is. So check this out. I can come over here and change the message to "Yo!" or YOLO or something, whatever I want, and my tests are still passing. An important aspect of this utility is that it's going to throw with the message you give it.

05:21 So let's verify that it's throwing the message here. And this is why it's important that you review what the AI has generated for you. So let's say, "Please verify the message of the thrown invariant error matches the message

05:39 given to invariant. Use a variable to do this comparison." Okay, and let's see what our AI assistant is going to generate for us. So here's our error message saying it throws, and here's our error message.

05:58 We're going to use the callback form here, where we get the thrown error, and we can say, return error.message equals errorMessage. That works. That's fine. So we'll stick with that. Then we break the message with "Yo!" again, and now our test is failing. So that is good.
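The message-matching test ends up looking something like this (a sketch; the exact title and message string are assumptions):

```ts
// Sketch of the message-matching test using the callback form of assert.throws.
import assert from 'node:assert'
import { test } from 'node:test'
import { invariant } from './index.ts'

test('the thrown error message matches the message given to invariant', () => {
  const errorMessage = 'this is the message'
  assert.throws(
    () => invariant(false, errorMessage),
    // Callback form: return true when the thrown error is the one we expect
    (error: unknown) => error instanceof Error && error.message === errorMessage,
  )
})
```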

06:18 And it's cool that you can just select the part of the test that you want to modify and tell it exactly the changes that you want to make, and it will make them. Another way that I would have done this is I might have said, "Use a try catch instead

06:35 of the second argument to throws." And we'll see how it updates that. So we've got a try/catch. We've got an assert.fail right there. That's nice. And we've got the instanceof InvariantError check and assert.fail. Okay.

06:52 Maybe we could say, "Don't use assert.fail." I don't know. That's actually probably fine. Let's accept this. Looks pretty good. I'm just bothered by the fact that TypeScript is like, "There's no way in the world that this line could ever evaluate." But it's fine. Okay, great.
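Here's roughly what that try/catch version looks like (a sketch; the version in the video checks instanceof against the package's InvariantError class, while plain Error is used below so the sketch doesn't depend on whether that class is exported):

```ts
// Sketch of the try/catch variant of the message-matching test.
import assert from 'node:assert'
import { test } from 'node:test'
import { invariant } from './index.ts'

test('the thrown error message matches (try/catch variant)', () => {
  const errorMessage = 'this is the message'
  try {
    invariant(false, errorMessage)
    // If invariant doesn't throw, fail the test. TypeScript may flag this line
    // as unreachable, which is the complaint mentioned above.
    assert.fail('invariant should have thrown')
  } catch (error) {
    assert.ok(error instanceof Error, 'expected an Error to be thrown')
    assert.equal((error as Error).message, errorMessage)
  }
})
```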

07:08 So that should give you a pretty good idea of how to use Copilot to write your tests. Make sure that you have the thing that you're trying to test in context. And feel free to continue to provide additional feedback and make it feel like it's actually

07:26 a chat, like you're actually having a conversation with a coworker. It's best to treat your AI assistant as somebody that you're pairing with, just remotely, so you can only communicate via text. Maybe that will change in the future, or actually, in fact, there are already some tools that will allow you to talk vocally with your AI assistant if that's

07:45 your thing. Maybe it will be all of our thing in the future. But the idea is, make it feel conversational, like you're actually having a conversation with your assistant, and you can create a lot more a lot faster. So there you go. That is writing tests with your AI assistant.