Next.js is a popular framework for building fast and user-friendly web applications using React. It supports features such as server-side rendering, static site generation, code splitting, and incremental static regeneration. These features require a build process that transforms your source code into optimized files that can be served by a web server.
However, this build process also generates some files that are not needed for running the application, such as source maps, development dependencies, and configuration files. These files can increase the size of your Docker image, which can affect the performance and security of your application. For example, a larger image can take longer to download and start, consume more disk space and memory, and expose more attack surface to potential hackers.
# Ok, so what does a normal Dockerfile for Next.js look like?
Like this! But don't just copy-paste it, as we need to work on it a bit first.
```dockerfile
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .
RUN npm run build

CMD npm start
```
This will:
- Download the Node.js 18 image
- Copy over your `package.json` and `package-lock.json` files
- Install your dependencies
- Copy over the rest of your files
- Build your Next.js application
- Start the production server
This will get the job done. You can actually test that this works by running:
```bash
docker build -t nextjs-app .
```
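If you want to confirm the image actually serves your app, you can also run it (assuming the default Next.js port of 3000 and the `nextjs-app` tag from the build command above):

```bash
docker run --rm -p 3000:3000 nextjs-app
```

Then open http://localhost:3000 in your browser.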
But Next.js doesn't need all these files to run. In fact, most of your source code is just completely unused past the build step. Not to mention all those locally installed dependencies! Don't believe me? Run this in your terminal to get the size of the image you've just built:
```bash
docker image ls nextjs-app
```
I don't know about you, but I'm using the official example Next.js app for this, and I'm getting well over 1GB!
# Cool. So what can we do about this? 🤔
Well, first of all, be a dear and create a `.dockerignore` file at your project root. This tells Docker to skip copying certain files or folders from your filesystem when building the image. Put this in it:
```
Dockerfile
.dockerignore
node_modules
npm-debug.log
README.md
.next
.git
```
Now, rebuild your Docker image, and just like that we're already down to 730MB!
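If you want to verify the numbers yourself, rebuild and check the size with the same commands as before:

```bash
docker build -t nextjs-app .
docker image ls nextjs-app
```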
# That still feels like a lot...
Thankfully, multi-stage builds exist! These will enable your Next.js Dockerfile to create a smaller and more efficient Docker image that contains only the files that are necessary for running your application. This can improve the performance and security of your application, as well as make it easier (and faster) to deploy and update.
Let's fix our previous Dockerfile:
```dockerfile
FROM node:18-alpine AS builder
WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .
RUN npm run build

FROM node:18-alpine AS runner
WORKDIR /app

COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

CMD npm start
```
Here's what changed:
- We've labeled our first stage `builder`. This means we can now reference it in a later stage without redoing all of its steps.
- We've added a second stage called `runner`. This is the one that actually ends up in the final image.
- Then, we copy over just the files we need for our Next.js application to run. That's our `.next`, `public`, and `node_modules` folders, plus our `package.json` file that contains our scripts.
- Finally, we run the production server!
Rebuilding our Docker image shows that we managed to trim off just a bit over 100MB more!
# Can we keep going? 🚀
Hell yeah. This part requires some configuration though. To start, open up your `next.config.js` and change it to this:
```ts
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',
};

module.exports = nextConfig;
```
This makes Next.js automatically trace your `import` and `require` statements at build time and copy over only the files needed for a production deployment. This might break stuff. It definitely did on some of my larger projects, and there's not much you can do about it besides reverting to the previous example.
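Before touching the Dockerfile, you can sanity-check the standalone output locally. This is just a quick smoke test assuming a default project setup; static assets won't be served until `.next/static` and `public` sit next to the standalone server, which the Dockerfile below takes care of:

```bash
# Build with output: 'standalone' enabled, then run the traced server directly
npm run build
node .next/standalone/server.js
```

Here's our updated Dockerfile this time around: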
```dockerfile
FROM node:18-alpine AS deps
WORKDIR /app

RUN apk add --no-cache libc6-compat
COPY package*.json ./
RUN npm install

FROM node:18-alpine AS builder
WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:18-alpine AS runner
WORKDIR /app

ENV NODE_ENV production

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

RUN mkdir .next
RUN chown nextjs:nodejs .next

COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000
ENV HOSTNAME "0.0.0.0"

CMD ["node", "server.js"]
```
Phew, that's a lot of changes.
- We now have a dedicated `deps` stage just for installing our dependencies.
- We're using `apk` in the `deps` stage to install some libraries that Next.js depends on to work properly.
- We create a new non-root user for running the app in our new `runner` stage.
- We copy over anything that's in the `public` folder.
- We copy over our standalone Next.js build output.
- We run a minimal version of the production server.
For most newer codebases built on reasonably modern packages, you shouldn't have any issues running this. If this approach works for you, you can triple your space savings compared to the previous Dockerfile example. That's roughly a five-fold reduction in image size from where we started.
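If you're curious where the remaining megabytes live, `docker history` breaks the image down layer by layer (using the same `nextjs-app` tag as before):

```bash
docker history nextjs-app
```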
# But I don't use npm... 😬
I get you, I loooove pnpm. But most guides I've read pretty much only cover npm or yarn.
No more. You can use this catch-all Dockerfile instead, which adapts to use `npm`, `yarn`, or `pnpm`, based on what kind of lockfile you have in your project root:
```dockerfile
FROM node:18-alpine AS deps
WORKDIR /app

RUN apk add --no-cache libc6-compat

# Copy only the files needed to install dependencies
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./

# Install dependencies with the preferred package manager
RUN \
  if [ -f package-lock.json ]; then npm ci; \
  elif [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi

FROM node:18-alpine AS builder
WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules

# Copy the rest of the files
COPY . .

# Run build with the preferred package manager
RUN \
  if [ -f yarn.lock ]; then yarn build; \
  elif [ -f package-lock.json ]; then npm run build; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm build; \
  else echo "Lockfile not found." && exit 1; \
  fi

FROM node:18-alpine AS runner
WORKDIR /app

ENV NODE_ENV production

# Add nextjs user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy public assets
COPY --from=builder /app/public ./public

# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000
ENV HOSTNAME "0.0.0.0"

CMD ["node", "server.js"]
```
And that should do it! In the next few guides in this series, we'll talk about creating a `docker-compose.yaml` file for working locally on your Next.js application and adding Turborepo integration.
Until next time, happy coding!