Deploy Next JS on ECS Fargate With CDK & Codepipeline

Published: 14 Dec 2020 · Updated: 14 Dec 2020

This guide is not for statically generated Next JS apps; it only applies if you want to deploy a server side rendered Next JS app.

A statically generated Next JS site can simply be deployed to any CDN service and you are done.

But if you want server side rendering with Next JS, then you have very few good options.

As recommended by the creators of Next JS, Vercel is the best place to deploy.

But Vercel has limits on its lambda functions. These limits are not that small, but I don't like that there are limits to begin with.

I may not hit those limits often, but I would really hate to turn away customers/visitors while my site is having its 5 minutes of fame and getting millions of visitors. I am willing to pay premium charges just for those once in a lifetime heavy spikes.

I could set something like that up with Vercel, but I would have to contact enterprise sales, and I am just starting out and not ready for that yet.

AWS, on the other hand, lets me create an unlimited scaling Next JS app without contacting enterprise sales.

Another reason is more personal to me: I hate making simple things complicated.

I mostly use AWS for serverless databases and other services. When you try to connect two competing cloud providers, they offer you very little support, and you lose your mind trying to do a simple task. Or maybe it's just me.

So that is another main reason I am deploying Next JS to Fargate.

One advantage Fargate has over a lambda deployment on Vercel is that there are no more lambda cold starts.

Sure, Fargate is more expensive than lambda and more complicated to set up, but it creates a better user experience, so all the hard work you put in is worth it. And of course you get unlimited scaling baked in.

Also, by putting Next JS inside docker you can deploy it to any container platform, not just AWS or Vercel.

So we will start by building the Next JS docker container.


Before we get started with Next JS, I recommend developing the CDK project inside a development docker container. That way we no longer have the "this works on my computer" problem, and it will not mess with your existing workflow.

Check out my blog post for a detailed video explanation. Then check out the CDK development container settings on github, and the Next JS development container settings on github.

So let's get started building our Next JS app first.

View the complete source code on github.

We are testing deployment of Next JS, not development of Next JS. So I am going to deploy the starter project created by npx create-next-app next-client --use-npm

Then add the following Dockerfile inside next-client.
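Here is a minimal sketch of what that Dockerfile can look like, based on the description that follows; the exact node tag is an assumption.

```dockerfile
# Official node alpine image (the exact tag is an assumption)
FROM node:14-alpine

WORKDIR /app

# Install only production dependencies to keep the image small
COPY package.json package-lock.json ./
RUN npm install --production

# Copy the source and build the Next JS app
COPY . .
RUN npm run build

# The app listens on port 80 (see the package.json change below)
EXPOSE 80
CMD ["npm", "start"]
```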

Here I am starting with the official node alpine image. However, I wouldn't recommend starting with an alpine image for production.

Always start with what you know. If you have been using Ubuntu for many years, start with that and eventually move to alpine down the road, because alpine has its own unique quirks that take significant time to learn.

The upside of using alpine is smaller size and better security.

Now, in this case size doesn't matter much. The cost of storage has gone down significantly, and the alternative OS images are not very large. All linux OS images have shrunk, some more than others, and size doesn't affect performance that much.

The same goes for security. The security provided by other linux images is good enough; they are not terrible compared to alpine. That said, because alpine has fewer moving pieces, it does eliminate more security threats than other linux distributions.

Also, most container registries provide container scanning to find vulnerabilities, and most scanners don't work well with alpine images but do work well with other popular linux images. So in some regards you will have better security with a popular linux OS than with alpine.

So again, when starting out, always start with what you know.

The rest of the Dockerfile is very standard. I am using the --production flag when installing dependencies so no dev dependencies are installed, which further reduces the size of the image.

And to keep things very simple I am using port 80. That reminds me: you should also run the next app on port 80, as follows.

```json
{
  "scripts": {
    "start": "next start -p 80"
  }
}
```

This is very important: in package.json, change the start script to run the app on port 80.

Now we need to build the docker image and store it in a container registry. You could store this container in a public or private docker hub registry, but we are going to store it in the AWS public and private registries.

If you want to push to docker hub instead, use the following commands.

```shell
docker login
docker build -t your_username/next .
docker push your_username/next
```

I have tested deploying to Fargate with a public docker hub repo, and it works perfectly fine, but I have switched to a public AWS container repo.

Unfortunately, creating a public ECR repo is not supported by AWS CDK. You can create one with the AWS CLI, but my second preference after CDK is the very user friendly AWS console, so I created the public container repo in the console.

You will have to update the container often, so I set up a Taskfile.yml that builds and stores the container as follows.

I should mention that I created the public container repo in region us-east-1.

Here I am taking advantage of parallel execution: while the docker image is building, I can log in with the AWS CLI. The prerequisite for this step is that you have configured the AWS CLI with aws configure.

Notice that deps is an array of publicLogin and publicBuild; those two tasks run in parallel. After both are complete, the image is pushed to the remote repo and the local copy is deleted. The login and build commands can be found in the ECR section of the AWS console.

Whenever you want to upload the container, just change PUBLIC_VERSION and run task public to push the new version.
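The Taskfile described above can be sketched like this; the repo URI and version are placeholders you copy from your own ECR console.

```yaml
version: '3'

vars:
  PUBLIC_VERSION: prod-v1
  # Placeholder - copy your public repo URI from the ECR console
  PUBLIC_REPO: public.ecr.aws/abc123xyz/next

tasks:
  publicLogin:
    cmds:
      - aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

  publicBuild:
    dir: next-client
    cmds:
      - docker build -t {{.PUBLIC_REPO}}:{{.PUBLIC_VERSION}} .

  public:
    # deps run in parallel: login while the image builds
    deps: [publicLogin, publicBuild]
    cmds:
      - docker push {{.PUBLIC_REPO}}:{{.PUBLIC_VERSION}}
      # delete the local copy after pushing
      - docker rmi {{.PUBLIC_REPO}}:{{.PUBLIC_VERSION}}
```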

If you want to use a private ECR repository, you can create it with CDK as follows.

Just like an S3 bucket, the default removal policy of a private ECR repo is RETAIN, so after you delete the stack the container repository will be orphaned. I want to destroy the repo when I am done with this project.

The important thing to remember is that this behaves the same as S3: you cannot delete a container repo that has images in it, just like you cannot delete an S3 bucket with files in it. And if you delete the stack while the repo still has images, your cloudformation stack will be stuck in a pending state for 4-6 hours. So if you don't want to waste those hours, remember to delete the images before deleting the stack.

Also, this is a private repo and you will pay for storage beyond the free tier, so I have set 2 lifecycle rules.

  1. From list of images with prefix prod, keep only latest 5 images and delete older images.
  2. Delete any image that is older than 30 days

You can explicitly set the order of each lifecycle rule, or CDK can set the order automatically from the position in the array, which is why the order of the array matters.

I have also set images to be scanned for vulnerabilities as they arrive at the repository.

At the end I print the repo URI, which we need to insert into the Taskfile.yml for building our image, as follows.
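Putting the pieces above together, the repository construct can be sketched like this (CDK v1 style imports; construct ids and the repository name are assumptions):

```typescript
import * as ecr from '@aws-cdk/aws-ecr'
import { CfnOutput, Duration, RemovalPolicy } from '@aws-cdk/core'

const ecrRepo = new ecr.Repository(this, 'nextJSRepo', {
  repositoryName: 'next-js',
  // Scan images for vulnerabilities as they arrive at the repository
  imageScanOnPush: true,
  // Default is RETAIN - destroy so the repo is deleted with the stack
  removalPolicy: RemovalPolicy.DESTROY,
  lifecycleRules: [
    // Rule order matters: rules are applied in array order
    // 1. from images with prefix prod, keep only the latest 5
    { tagPrefixList: ['prod'], maxImageCount: 5 },
    // 2. delete any image older than 30 days
    { maxImageAge: Duration.days(30) }
  ]
})

// Print the repo URI to insert into Taskfile.yml
new CfnOutput(this, 'ecrRepoUri', {
  value: ecrRepo.repositoryUri
})
```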

Again, this is the same as the public image build, and the commands are copied from the ECR section of the AWS console.

Now deploy this image to Fargate with CDK as follows.

The VPC will be created automatically with good defaults. But every cloudformation stack has a limit of 500 resources, and the default for maxAzs is 3, so I have manually set it to 2, just to save one resource.

Then you create the cluster in the new VPC. Container insights are off by default; they are very helpful, so I have enabled them.

To assign a domain to Fargate we need the hosted zone.

In the Fargate props, the defaults for cpu, desired count and memory are good, but I am explicitly setting them to their default values because these values determine your monthly bill. You don't want to leave them blank and hope the defaults will always be right; always set billing related values explicitly.

If you set a domain name, you must also set the protocol to HTTPS and provide the domain zone. Redirect HTTP will block all unsecure connections and redirect them to HTTPS. And by assigning a public IP you are exposing this container to the internet.

Then inside the task image options, select the image from public docker hub, public ECR or private ECR. The pipeline can only be set up with private ECR for now.

Finally, set auto scaling limits with a minimum of 1 and a maximum of 20. You can even set the maximum to 2,000 if you want; just make sure you get notified if billing exceeds a certain amount, so you can confirm whether a spike is a programming mistake or real traffic.

The target utilization is set to 70% because once you cross it, Fargate has to download the image and spin up a new container, which takes a few minutes. That is why I have not set it to 90% or above.
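The whole Fargate service described above can be sketched like this (CDK v1 style; example.com, the construct ids, and the ecrRepo variable from the earlier repository section are assumptions):

```typescript
import * as ec2 from '@aws-cdk/aws-ec2'
import * as ecs from '@aws-cdk/aws-ecs'
import { ApplicationLoadBalancedFargateService } from '@aws-cdk/aws-ecs-patterns'
import { ApplicationProtocol } from '@aws-cdk/aws-elasticloadbalancingv2'
import { HostedZone } from '@aws-cdk/aws-route53'

// maxAzs lowered to 2 to save a resource against the 500 resource stack limit
const vpc = new ec2.Vpc(this, 'fargateVpc', { maxAzs: 2 })

const cluster = new ecs.Cluster(this, 'fargateCluster', {
  vpc,
  // off by default, but very helpful
  containerInsights: true
})

// hosted zone for the domain (example.com is a placeholder)
const website_domain = 'example.com'
const domainZone = HostedZone.fromLookup(this, 'hostedZone', {
  domainName: website_domain
})

const fargateService = new ApplicationLoadBalancedFargateService(this, 'nextJSService', {
  cluster,
  // billing related values - set explicitly even though these are the defaults
  cpu: 256,
  memoryLimitMiB: 512,
  desiredCount: 1,
  // domainName requires HTTPS and a domain zone
  domainName: website_domain,
  domainZone,
  protocol: ApplicationProtocol.HTTPS,
  redirectHTTP: true,
  assignPublicIp: true,
  taskImageOptions: {
    image: ecs.ContainerImage.fromEcrRepository(ecrRepo, 'prod-v2')
  }
})

// scale between 1 and 20 tasks at 70% target CPU utilization
const scaling = fargateService.service.autoScaleTaskCount({
  minCapacity: 1,
  maxCapacity: 20
})
scaling.scaleOnCpuUtilization('cpuScaling', {
  targetUtilizationPercent: 70
})
```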

Now, to redirect www traffic to the root domain, you can add an HTTPS redirect as follows.

```typescript
new HttpsRedirect(this, 'wwwToNonWww', {
  recordNames: ['www.example.com'],
  targetDomain: website_domain,
  zone: domainZone
})
```

View Source code on github

Container update without pipeline

Just change the image version in CDK and deploy. It will take 5 or more minutes, but the new image will be deployed on Fargate.

```typescript
taskImageOptions: {
  // image from docker hub
  // image: ContainerImage.fromRegistry('apoorvmote/next:prod-v2'),
  // image from public ecr registry
  // image: ContainerImage.fromRegistry('public.ecr.aws/abc123xyz/next:prod-v2'),
  // image from private ecr registry
  image: ContainerImage.fromEcrRepository(ecrRepo, 'prod-v2')
}
```

Container update with Codepipeline

First, let's create a codecommit repo for storing the Next JS project as follows.

```typescript
const nextRepo = new codecommit.Repository(this, 'nextJSSourceCode', {
  repositoryName: 'next-blog',
  description: 'Pipeline source code'
})

new CfnOutput(this, 'sourceCodeUrl', {
  value: nextRepo.repositoryCloneUrlSsh
})
```

I am printing the SSH url so I can add it to the Next JS project.

Then I am going to create the codebuild project as follows.

Very important: to build a docker container you need to set privileged: true in the environment. Without it you cannot access the docker cli in the codebuild container.

Our project uses the official node image from docker hub, and unfortunately docker has recently introduced pull limits based on IP address. The codebuild service is used by millions of users, so you will never stay under the 100 anonymous pulls limit. You therefore have to log in to docker hub before you can pull the official node image, so I am passing the docker username and password as environment variables.

I have set the build image to standard 5.0, which includes Ubuntu 20.04 with node 14.x LTS.

The default compute type is decided automatically by the build image you choose; for standard 5.0 the default is small, so you don't have to add it. But the compute type decides your cost, so I recommend always setting it explicitly.

And don't forget to give codebuild push access to your private ECR repo.
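A sketch of that codebuild project, matching the settings above (CDK v1 style; the construct id is an assumption, and for real use the docker hub credentials should come from Secrets Manager rather than plaintext):

```typescript
import * as codebuild from '@aws-cdk/aws-codebuild'

const containerBuild = new codebuild.PipelineProject(this, 'containerBuild', {
  environment: {
    // Very important: privileged is required to use the docker cli
    privileged: true,
    // Ubuntu 20.04 with node 14.x LTS
    buildImage: codebuild.LinuxBuildImage.STANDARD_5_0,
    // small is the default for standard 5.0, set explicitly because it decides cost
    computeType: codebuild.ComputeType.SMALL
  },
  environmentVariables: {
    // docker hub credentials to get around the anonymous pull limit
    // (plaintext placeholders for illustration only)
    DOCKER_USER: { value: 'your-docker-username' },
    DOCKER_PASSWORD: { value: 'your-docker-password' }
  },
  // pick up buildspec.yml from the repo source files
  buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec.yml')
})

// give codebuild push access to the private ecr repo
ecrRepo.grantPullPush(containerBuild)
```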

Here we have set codebuild to pick up buildspec.yml from the repo source files, so you need to add that file as follows.
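A sketch of that buildspec.yml, following the shape of the AWS docs sample with the changes described below; the REPOSITORY_URI value is a placeholder you copy from the earlier cdk output.

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      - aws --version
      # login to the private ecr registry
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
      # login to docker hub to pull the official node image
      - echo $DOCKER_PASSWORD | docker login --username $DOCKER_USER --password-stdin
      # placeholder - copy the uri from the earlier cdk output
      - REPOSITORY_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/next-js
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      # prod- prefix added to the image tag
      - IMAGE_TAG=prod-${COMMIT_HASH:=latest}
  build:
    commands:
      - docker build -t $REPOSITORY_URI:$IMAGE_TAG ./next-client
  post_build:
    commands:
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      # container name must match the fargate task definition (default is web)
      - printf '[{"name":"web","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
```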

The above file is taken directly from the aws docs; I have not created it from scratch, but I have made a few changes.

Interesting fact: the first pre build command checks aws --version, and codebuild is using version 1 at the time of writing. To deploy a public ECR image you need at least aws-cli 2.1.7, so if you plan to deploy a public ECR image with codebuild, upgrade the aws cli first. And if you have both aws cli v1 and v2 installed, the first aws on the path wins, so make sure to update the path; otherwise you will still be using aws cli v1 after installing v2.

After the aws ecr login I have added a docker login so we can pull the official node image from docker hub; the credentials come from the environment variables.

I also updated REPOSITORY_URI with the output from the earlier cdk deploy, and added the prod- prefix to IMAGE_TAG.

On the last line of post build I have set the container name to web. This is the container name set in the fargate task definition; the default name is web, so if you set a custom name in CDK, make sure to use the same name here.

Then create a bucket as follows

```typescript
const artifactBucket = new Bucket(this, 'containerBuildArtifactBucket', {
  bucketName: 'example-pipeline-artifact',
  removalPolicy: RemovalPolicy.DESTROY
})
```

I created this bucket explicitly so I can delete it when I am done with this project. Again, make sure all files inside the bucket are deleted before you delete the bucket from the stack.

Then add the pipeline as follows.

Here I have created 2 artifacts: one for the git output and one for the container output.

Important: when getting the codecommit source, make sure to pull the branch main, not master, because by default Next JS creates a branch called main.

Then simply build and deploy directly to ECS. I have changed the deploy timeout to 30 minutes; the default timeout is 60 minutes, and in normal operation it deploys in under 5 minutes.
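The pipeline described above can be sketched like this (CDK v1 style; artifact names and action names are assumptions, and nextRepo, containerBuild, artifactBucket and fargateService come from the earlier sections):

```typescript
import * as codepipeline from '@aws-cdk/aws-codepipeline'
import * as actions from '@aws-cdk/aws-codepipeline-actions'
import { Duration } from '@aws-cdk/core'

// two artifacts: one for the git output, one for the container output
const gitOutput = new codepipeline.Artifact('nextJSSourceCode')
const buildOutput = new codepipeline.Artifact('containerImage')

new codepipeline.Pipeline(this, 'nextJSPipeline', {
  artifactBucket,
  stages: [
    {
      stageName: 'Source',
      actions: [
        new actions.CodeCommitSourceAction({
          actionName: 'getSourceCode',
          repository: nextRepo,
          // Next JS creates a branch called main, not master
          branch: 'main',
          output: gitOutput
        })
      ]
    },
    {
      stageName: 'Build',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'buildContainer',
          project: containerBuild,
          input: gitOutput,
          outputs: [buildOutput]
        })
      ]
    },
    {
      stageName: 'Deploy',
      actions: [
        new actions.EcsDeployAction({
          actionName: 'deployToFargate',
          service: fargateService.service,
          input: buildOutput,
          // default timeout is 60 minutes
          deploymentTimeout: Duration.minutes(30)
        })
      ]
    }
  ]
})
```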

View Source code on github

Bonus: deploy to fargate without load balancer

A load balancer costs roughly ~$20 per month, and it is recommended for scaling with high traffic. But when starting out you may want to run some tests without one, so here is how you can do it.

In the security group, open port 80 for unsecure HTTP traffic. And in the container, I have connected port 80 of the host to port 80 of the container.

The rest of the code is very similar to before.
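A sketch of the load-balancer-free service, under the assumption that vpc, cluster and ecrRepo come from the earlier sections (construct ids are assumptions):

```typescript
import * as ec2 from '@aws-cdk/aws-ec2'
import * as ecs from '@aws-cdk/aws-ecs'

// open port 80 for unsecure HTTP traffic
const securityGroup = new ec2.SecurityGroup(this, 'nextJSSecurityGroup', { vpc })
securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80))

const taskDefinition = new ecs.FargateTaskDefinition(this, 'nextJSTask', {
  cpu: 256,
  memoryLimitMiB: 512
})

const container = taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromEcrRepository(ecrRepo, 'prod-v2')
})

// connect port 80 of the host to port 80 of the container
container.addPortMappings({ containerPort: 80, hostPort: 80 })

new ecs.FargateService(this, 'nextJSServiceNoLB', {
  cluster,
  taskDefinition,
  securityGroups: [securityGroup],
  // public IP so the task is reachable from the internet
  assignPublicIp: true
})
```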

After this code is deployed, go to the aws console and open the running task inside the fargate service. There you will find the public IP; copy that IP address into the browser and your container will show up.

Updating the image for this container is the same as before: either change the image version in the CDK file and deploy with cdk, or use codepipeline as shown earlier.

View Source code on github

If you have questions then tweet me @apoorvmote. I would really like to hear your brutally honest feedback.
