
Set Up the Consul Lease Type

One of the issues with the static lease configuration is that if the primary instance goes down, no other instance can take over as primary, so the app goes down with it.

A more resilient option is to use the Consul lease type, which promotes a different instance to primary if the current primary fails.


Transcript

0:00 Earlier, I mentioned that there are a couple of different options for your lease configuration. The reason for this is that static lease configuration is easy: this one instance is always going to be the primary instance.

0:15 The problem is, if that instance ever falls over, whether because of an error or during a deploy, you're going to have some downtime. We'd much rather have the primary instance change over time, so that if it goes down, another instance can take over as the primary.

0:36 That said, in many cases we still don't want the primary to be just any of our instances anywhere in the world. Maybe this doesn't apply to you, but in my case, most of my users are in one part of the world. I want the app to be fast for users elsewhere too, but I always want the primary instance to be in that main region.

0:57 What we're going to do is set up Consul, a service that runs alongside our app and is in charge of keeping track of which instance is the primary. Anytime the primary instance falls over, the Consul service tells another instance, hey, you need to be the new primary now. That instance takes over, and we don't have any downtime.

1:21 We're going to switch over from our static configuration to Consul leasing. There's one thing we need to change in our Dockerfile, and that is we need to add ca-certificates, because the Consul service relies on this package.
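
As a rough sketch, assuming a Debian-based image (your Dockerfile's existing package-install step may differ or already install other packages alongside it), the change might look like this:

```dockerfile
# Sketch: install ca-certificates so the LiteFS Consul client
# can verify TLS certificates when talking to Consul.
RUN apt-get update && apt-get install -y ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```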

1:39 Then, we also need to update our fly.toml to set enable_consul = true inside of our experimental configuration, because this is technically still an experimental feature. You may want to double-check the docs to make sure this is still necessary at the time you're running through this.
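
In fly.toml, that setting (still experimental at the time of recording) looks like this:

```toml
# fly.toml
[experimental]
  enable_consul = true
```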

1:55 The rest of our configuration is going to be inside of our LiteFS config. The first thing we'll change, of course, is the lease type: we're no longer doing static, we're doing consul. Then we have the advertise-url, the URL this instance advertises so that it can be a candidate for primary.

2:14 We're going to specify ${HOSTNAME}.vm.${FLY_APP_NAME}.internal on port 20202. I don't know how you say that without it being 2-0-2-0-2. This is a unique URL that is accessible within the Fly network; .internal is not actually a top-level domain except within Fly's network. That's how different virtual machines can communicate with each other.

2:44 Then our consul configuration specifies the URL. We can simply go with the default of ${FLY_CONSUL_URL}, an environment variable that Fly exposes to us. Then the key is a unique identifier for this app's lease in Consul. We're just going to say litefs/${FLY_APP_NAME} to keep things pretty simple.
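
Putting those pieces together, the lease section of the LiteFS config might look like this (a sketch matching the values described above; confirm the exact keys against the LiteFS docs):

```yaml
# litefs.yml
lease:
  type: "consul"
  # URL other nodes use to reach this node; *.internal addresses
  # only resolve inside Fly's private network.
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"
  consul:
    # FLY_CONSUL_URL is provided by Fly when Consul is enabled
    url: "${FLY_CONSUL_URL}"
    # Unique key under which this app's primary lease is stored
    key: "litefs/${FLY_APP_NAME}"
```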

3:02 With that, that's everything we need to do to switch from a static configuration to Consul. Of course, we're going to want to commit all of this, so let's commit with the message "enable consul" and push. With that pushed, we can go over to our actions and watch that deploy.
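
Assuming deploys are triggered from GitHub Actions on push, as in this workshop, that's just:

```sh
git add .
git commit -m "enable consul"
git push
```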

3:20 Here we have enable consul, and that deploy process should just take a moment. We can go over to our logs and speed this up to take a look. All right, here we go. We've got our shutdown and restart, and lots of these logs should be very familiar, but there are a couple of things that are a little different I wanted to look at.

3:41 Now we're using Consul to determine the primary. We've specified our key, we've got that advertise URL, and all of that is in here now. We've got our LiteFS mounted and the HTTP server listening on its port. Right here is interesting: it says primary lease acquired.

4:02 It acquired the primary lease from the Consul service that was running for us, now it's connected to the cluster, and now it can run the sub-process. It basically says, hey Consul, I can be the primary, and Consul's like, well, sweet, there are no other primaries right now, so I'm going to let you be the primary.

4:17 Alternatively, it'd say, oh, actually, there's already a primary node, so you're going to be a replica. And it'd be like, okay, that's fine, because I'm just a candidate. I was not elected.

4:27 We have our Prisma output here, where it found the migration but didn't need to apply it because that migration already existed in the database. Now, when I come and refresh, the count is still four, and we can still increment it. Everything is working perfectly well running behind LiteFS, using Consul as our lease configuration to determine the primary.

4:51 In review, to make this work, we just made a couple of changes. First, we added ca-certificates to our Dockerfile, because the Consul service needs that. In fly.toml, we enabled Consul in the experimental options. Then in the LiteFS config, we added some configuration to our lease: we changed the type to consul.

5:09 We added an advertise URL. We specified the consul config's URL as ${FLY_CONSUL_URL}, an environment variable that is exposed to us, and the key as litefs/${FLY_APP_NAME} to keep it unique. That's how you get LiteFS running with the Consul lease strategy.