Rate Limiting · Solution

Safeguarding Your Server: Adding Rate Limit Configuration

Transcript

00:00 We want this rate limit to apply before we even get into our Remix code, so that we use as few resources as possible handling rate-limited requests. That means all of this happens before Remix is even called, in our server index file right here, which is our Express server.

00:17 We have some static file handling and response compression up here. All of that is really fast and not a problem, and users will be requesting static resources a lot anyway, so we don't want to rate limit them. So we'll add our limit after all of that, but before we get into the real

00:37 resource-intensive stuff like database queries, where we jump into Remix. So right here is a good place to add our rate limiting middleware. We'll call app.use with rateLimit, which comes from the express-rate-limit package, and then configure it as laid out here.
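In the server file, that might look roughly like this. It's a minimal sketch: the file name server/index.ts and the surrounding compression/static middleware are assumptions based on a typical Remix + Express setup, and the rate limit options get filled in over the next steps.

```ts
// server/index.ts (sketch): cheap middleware first, then rate limit
// everything that follows, before the Remix request handler.
import express from 'express'
import compression from 'compression'
import rateLimit from 'express-rate-limit'

const app = express()

app.use(compression())
app.use(express.static('public')) // static assets: fast, and not rate limited

// everything registered after this point is subject to the rate limit
app.use(rateLimit({ /* options covered below */ }))

// ...the Remix request handler (database queries, etc.) comes after this
```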

00:56 First is windowMs, which we'll set to a 60-second window. As time moves forward, the window moves forward with it, and as long as a client has made fewer than our max number of

01:12 requests in that window, they're good to go. Then we set max, which we'll set to a thousand. That's probably a little generous; we wouldn't catch somebody hitting the refresh button a bajillion times. You'll adjust this based on the needs of your specific application.

01:31 It will vary widely based on whether you're integrating with a third party that makes requests on behalf of a bunch of different users, or you're just a regular web app accepting regular traffic. So a thousand might be a bit much for our particular use case, but you also don't want to get in the way of

01:48 regular users. It's definitely a balance and a trade-off. Next we enable the standard headers and disable the legacy headers. The standard headers communicate the rate limit to clients, which is very useful, especially if a third party is talking to your APIs: you want to be able to let them know, hey, you're getting

02:08 close to your rate limit, or here's how much you've got left. The legacy headers are unnecessary because we've got the standard ones. And with that, we now have our rate limit. It's going to be difficult to reproduce, though, because our max is so high.
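Put together, the options described so far look something like this (a sketch; the values are the ones from this lesson):

```ts
app.use(
  rateLimit({
    windowMs: 60 * 1000, // rolling 60-second window
    max: 1000, // requests allowed per client within that window
    standardHeaders: true, // send the standard RateLimit-* headers
    legacyHeaders: false, // omit the older X-RateLimit-* headers
  }),
)
```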

02:24 So I'm going to lower it to five so we can see what it actually looks like. I make requests one, two, three, four, five, and then six, and we get "Too many requests, please try again later." You can, of course, customize that page if you'd like.
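If you want to reproduce that check yourself, here's a rough sketch, assuming the app is running locally on port 3000 and max is temporarily lowered to five:

```ts
// with max temporarily set to 5, the sixth request inside the
// 60-second window should come back as a 429
for (let i = 1; i <= 6; i++) {
  const res = await fetch('http://localhost:3000/')
  console.log(i, res.status, res.statusText) // 200 five times, then 429 Too Many Requests
}
```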

02:40 Okay, we do have one last bit, and that is to make sure that when process.env.TESTING is defined, we use a very large limit. I want to keep the

02:58 rate limit middleware in place because I want my tests to go through it, but I never want my tests to actually be rate limited: my tests really are bots, and they're allowed to hit my server a lot. So that's what we do right here. We say our

03:15 maxMultiple is something outrageous like 10,000 if we're testing, and otherwise one, and then we multiply our max by that multiple. That way, when we're running our tests, we basically never hit the maximum, but in production we definitely still could.
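Here's a sketch of that testing escape hatch. The variable name maxMultiple and the env variable TESTING follow what's described here; how TESTING gets set is up to your test setup.

```ts
// keep the middleware in the request path during tests, but make the
// limit effectively unreachable
const maxMultiple = process.env.TESTING ? 10_000 : 1

app.use(
  rateLimit({
    windowMs: 60 * 1000,
    max: 1000 * maxMultiple, // 1,000 normally; 10,000,000 under test
    standardHeaders: true,
    legacyHeaders: false,
  }),
)
```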

03:34 And now I can hit the server a bunch of times and it works just fine. So that is our initial rate limit: a good baseline that helps protect the rest of your app from being hammered by bots.

03:51 Now, a lot of the hosts you can deploy your application to will handle general DDoS attacks, where someone sends a swarm of bots at you. They typically will protect you from

04:09 this sort of thing. But having rate limiting built in yourself can be really helpful for handling those kinds of attacks as well. It's a good idea to put in place, and especially so for your more sensitive endpoints, which we'll do in a future step. But that's getting us

04:28 started with a pretty straightforward rate limit middleware.