
Writing Unit Tests for Utility Functions


Transcript

00:00 So I'm going to create this file right next to it. I like to co-locate my lower-level tests like this. So we're going to say misc.test.tsx, or .ts. There we go. But we have a lot of utilities in here, so I'm going to differentiate this one just a little bit by adding another dot in here.

00:19 And we'll say error-message, so it's misc.error-message.test.ts. And actually, this is totally legit. I will pretty often have multiple files that test different aspects of a utilities file like this, especially if they require a different setup or something like that. That can be really useful.

00:36 OK, so with this now, we can import test and expect from Vitest. All right. And then we'll add our test: 'Error object returns message'.

00:51 So if we just do a regular error of some kind, then we're going to expect that getErrorMessage for that error should be the same thing. So let's run Vitest: npm test. And that will run Vitest on our tests.
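A rough sketch of what that first test might look like (the file name, the './misc' import path, and the getErrorMessage export name are assumptions based on the transcript, not the workshop's exact source):

```ts
// misc.error-message.test.ts (sketch)
import { expect, test } from 'vitest'
import { getErrorMessage } from './misc'

test('Error object returns message', () => {
  // A regular error of some kind; the hard-coded string is revisited below
  const error = new Error('some message')
  expect(getErrorMessage(error)).toBe('some message')
})
```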

01:11 And there we go. That test totally works. And this is a very important thing, you need to do this: make sure your test can fail, because it's very possible that you're not testing the thing you think you are. And it's even better than breaking the test itself: go into the source code and see if you can break it there. Return 'hi'.
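For context, here is roughly the shape a getErrorMessage utility like this tends to have. This is a hedged reconstruction rather than the workshop's exact source, with a note on where you would temporarily break it to prove the test is really running that code:

```ts
// misc.ts (sketch of the utility under test)
export function getErrorMessage(error: unknown) {
  if (typeof error === 'string') return error
  if (
    error &&
    typeof error === 'object' &&
    'message' in error &&
    typeof error.message === 'string'
  ) {
    // Temporarily change this to `return 'hi'` and the test should fail
    return error.message
  }
  console.error('Unable to get error message for error', error)
  return 'Unknown Error'
}
```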

01:30 And if you can break it from the source code, then you're like, yes, I am running that line of code for sure. So that helps a lot. Definitely recommend it. OK, now there's one other thing I want to change about this. And that is, what if somebody was silly

01:44 and said, if the error.message is this exact string? That would be a silly thing to do. And then we have return 'hi'. OK, this is just an example.
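To make that failure mode concrete, here is a purely hypothetical, deliberately busted implementation that a test with a hard-coded message would not catch:

```ts
// Hypothetical broken version: special-cases the exact string the test used
export function getErrorMessage(error: unknown) {
  if (error instanceof Error && error.message === 'some message') {
    return error.message
  }
  return 'hi' // wrong for every real input, yet the hard-coded test stays green
}
```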

02:02 Our function is totally busted in a terrible way. But there's not really anything communicating to the reader of the test that this message is arbitrary, that it's not some sort of special string that gets treated specially. And so for that reason, I actually

02:19 generate as much of my test data as possible. So we're going to use Faker to generate, maybe not a sentence, maybe just a couple of words. We'll generate two words there. And then we'll make sure that the error message is the same.

02:35 So here, the way I want to do this is we'll say, this is our message. And then we expect this to be the message. And there we go. We successfully failed. And so we can now fix this. And there we go.
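A sketch of the same test using generated data (assuming @faker-js/faker is available in the project; note that faker.lorem.words(2) produces two words, while faker.lorem.word(2) would produce a single two-letter word, which is the mix-up corrected later in the video):

```ts
import { faker } from '@faker-js/faker'
import { expect, test } from 'vitest'
import { getErrorMessage } from './misc'

test('Error object returns message', () => {
  // Generated data tells readers there is nothing special about this string
  const message = faker.lorem.words(2)
  expect(getErrorMessage(new Error(message))).toBe(message)
})
```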

02:54 Now our test should be passing. And we won't have that problem anymore. And we communicate to readers of the test that, yes, that is actually any message. Whatever the message is, that's what this function is doing for us. OK, so let's handle this case now.

03:11 If the type of the error is a string. And so we'll say test. And actually, you know what, before we do that, let's take a quick opportunity to run npx vitest with no watch.

03:27 We need, good grief, no watch and coverage. So this will run all of our tests, of which we only have one, and it will include a coverage report. So we can actually open coverage/index.html right there.
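For reference, that coverage report comes from Vitest's coverage support. A minimal configuration sketch (the workshop's actual config may differ, and the v8 provider needs @vitest/coverage-v8 installed):

```ts
// vitest.config.ts (sketch)
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8', // requires @vitest/coverage-v8
      reporter: ['text', 'html'], // 'html' writes coverage/index.html
    },
  },
})
```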

03:45 That coverage/index.html will open up in our browser. There's our dad joke. So here is what the coverage report looks like. We'll look in these details. And here's misc. And we only have one test, so there's not a lot of coverage. But you'll notice in this coverage report, it's going to show us, hey, this part is not being covered right now, it's highlighted.

04:05 So we are covering this statement right here, this if statement, but we're not falling into the consequent. And so that's why that's being highlighted. So we're only covering half of the branch. We are covering this, because we end up in here, but we actually don't get past this. So when I see a coverage report like this,

04:23 I'm constantly just thinking, OK, so that's great. What are the use cases that I'm missing? You want to look at the code and then think about the use cases, because what's actually important is the use cases, not the particular lines of code,

04:40 necessarily, because you can sometimes overfit to those particular lines. So this coverage report can be pretty helpful if you're trying to figure out, OK, what other tests do I need to write? And the other tests we need to write include: when the error is a string, it should return that string. And then the other would be a situation

04:59 where we're passing in just something totally random that the function doesn't know what to do with. So let's cover those pieces of code as well. Coming over here, we're going to say, let's run our test again in watch mode. Sweet.

05:17 So 'the string returns itself', and that's going to be our function call here. Then, yeah, there we go. That works perfectly. Thanks, Copilot. The string returns itself. And of course, you want to check that, and yep, that works. So we're good.
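A sketch of that second test, continuing in the same file and again using generated data:

```ts
test('String returns itself', () => {
  const message = faker.lorem.words(2)
  // A plain string error should be passed straight through
  expect(getErrorMessage(message)).toBe(message)
})
```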

05:35 And then let's have a test. Oh, and you know what? I just realized that we're generating a two-letter word. We want words, two words. There we go. That should be better. OK, so this last one is going to be 'undefined falls back to unknown'. And so then we can just say, Copilot's

05:55 going to get this one for me. I know it, Copilot. Yes, thank you. So if we pass in undefined, or pass in some random object or whatever, undefined works fine, then the result should be unknown error. Actually, I think 'Unknown Error' is what we return. Yep, 'Unknown Error'.
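And a sketch of the last test. The exact fallback text ('Unknown Error') is an assumption about what the utility returns:

```ts
test('undefined falls back to "Unknown Error"', () => {
  expect(getErrorMessage(undefined)).toBe('Unknown Error')
})
```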

06:14 And that gets our test passing. If I change this, then that's going to fail, so that's good. And what's interesting is we've got this standard error output. Ooh, that's not good. I don't like seeing that. We'll talk about that more later. But let's run that coverage report again to make sure that we're covering everything

06:33 that we were hoping. And I can just open that up again. And boom, we're covering the whole function. We can look at this line, or this if statement is executed three times, apparently. So that's interesting. So there you go. That is our first unit test.

06:51 We've got some more work to do, but hopefully that was a good time for you.