Step-by-Step Guide
How to build a static site and self-host it from home, from zero to deployed, no prior experience needed.
Before You Start
You'll need four things. Don't worry about getting everything perfect; you can always adjust later.
- Node.js 22 or later installed on your development machine. Download it from nodejs.org or install via nvm (recommended, since it lets you switch between Node versions easily).
- Docker and Docker Compose installed on the server. Follow the official Docker docs for your OS; on Ubuntu it's a few apt commands.
- A machine to use as a server. Any old laptop, desktop, or mini PC will do. It just needs to stay on and be connected to your network. This guide uses a Mac Mini running Ubuntu, but any Linux distribution works.
- A domain name pointed to Cloudflare DNS. Cloudflare's free plan is more than enough. You'll need this for the tunnel and TLS certificates.
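Before diving in, it's worth confirming the toolchain is actually in place. A quick sketch; the guard simply skips anything not installed on the machine you run it on:

```bash
# Check prerequisite versions; report anything missing
for cmd in node npm docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    printf '%s: ' "$cmd"; "$cmd" --version
  else
    echo "$cmd: not found (you'll need it later)"
  fi
done
```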
Step 1: Create the Next.js Project
We'll scaffold a new Next.js project, configure it for static export, and set up the styling. By the end of this step you'll have a site that builds to plain HTML files.
Start by creating a new project. The --app flag uses the App Router (the default in Next.js 15), and --tailwind sets up Tailwind CSS automatically.
```bash
npx create-next-app@latest resume --typescript --tailwind --app
```

Open next.config.ts and add the static export configuration. This single line is what makes the whole approach work: it tells Next.js to generate flat HTML files instead of requiring a Node.js server.
```ts
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export",
  images: { unoptimized: true },
};

export default nextConfig;
```

Tailwind CSS 4 uses CSS custom properties for theming. Define your design tokens in globals.css; these variables are available everywhere in your app and make it easy to maintain a consistent look.
```css
/* globals.css */
@import "tailwindcss";

@theme {
  --color-bg: #0d1117;
  --color-bg-inset: #010409;
  --color-surface: #161b22;
  --color-surface-hover: #1c2128;
  --color-border: #30363d;
  --color-border-hover: #484f58;
  --color-text: #e6edf3;
  --color-text-secondary: #8b949e;
  --color-text-muted: #6e7681;
  --color-accent: #58a6ff;
  --color-accent-dim: rgba(56, 139, 253, 0.15);
}
```

next/font/google handles font loading with zero layout shift. Declare your fonts in layout.tsx, and they're automatically subset and self-hosted; no external requests to Google Fonts at runtime.
```tsx
// layout.tsx
import { DM_Sans, Instrument_Serif } from "next/font/google";

const dmSans = DM_Sans({
  subsets: ["latin"],
  variable: "--font-body",
  weight: ["300", "400", "500", "600", "700"],
});

const instrumentSerif = Instrument_Serif({
  subsets: ["latin"],
  variable: "--font-display",
  weight: "400",
  style: ["normal", "italic"],
});

// In the <html> tag:
<html className={`${dmSans.variable} ${instrumentSerif.variable}`}>
  <body className="font-[family-name:var(--font-body)]">
    {children}
  </body>
</html>
```

```bash
# Test the build
npm run build

# You should see an "out/" directory with static HTML files
ls out/
```

Why static export?
A statically exported site is just files: there's nothing to crash, nothing to patch, and nothing to scale. You can host it on Nginx, Caddy, S3, GitHub Pages, or even serve it from a USB stick. The trade-off is you can't use Next.js features that need a server (API routes, ISR, middleware), but for a personal site those aren't needed.
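Because the output is plain files, you can preview it before Docker enters the picture with any static file server. A minimal sketch using Python's built-in server, run in the background so it can be started and stopped in one go (npx serve out works just as well):

```bash
# Serve the exported site in the background for a quick preview
python3 -m http.server 8000 --directory out &
SERVER_PID=$!

# Fetch the homepage to confirm it serves
sleep 1
curl -s http://localhost:8000/ | head -n 5

# Stop the preview server
kill "$SERVER_PID" 2>/dev/null || true
```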
Step 2: Set Up the Server
Any old machine will work. This guide uses a Mac Mini Late 2012 with 16 GB RAM and a 1 TB SSD, but a used laptop or a Raspberry Pi would be just fine. The goal is a headless Linux box you can manage over SSH.
Download Ubuntu Server from ubuntu.com and flash it to a USB drive using Rufus (Windows) or Balena Etcher (Mac/Linux). Boot the machine from USB, follow the installer, and choose "Ubuntu Server (minimized)"; you don't need a desktop environment.
```bash
# Download Ubuntu Server from https://ubuntu.com/download/server
# Flash to USB with Balena Etcher or Rufus
# Boot from USB, follow the installer
# Choose "Ubuntu Server (minimized)"
```

After the install finishes, make sure SSH is enabled so you can manage the machine remotely. From this point on, you can unplug the monitor and keyboard; everything is done through the terminal.

```bash
# Install SSH server (if not already installed)
sudo apt update && sudo apt install -y openssh-server

# Check it's running
sudo systemctl status ssh

# From your workstation, connect (substitute your username and the server's IP):
ssh your-user@<server-ip>
```

Give the server a static IP address so it's always reachable at the same address on your local network. Edit the netplan configuration file and apply the changes.
```bash
# Edit netplan config
sudo nano /etc/netplan/01-netcfg.yaml
```

```yaml
# Example netplan configuration
network:
  version: 2
  ethernets:
    enp1s0: # your interface name
      dhcp4: false
      addresses:
        - 192.168.1.100/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses:
          - 1.1.1.1
          - 8.8.8.8
```

```bash
# Apply the changes
sudo netplan apply
```

Practical tips
Enable unattended-upgrades so security patches are installed automatically; you don't want to SSH in every week just to run apt update. Give the machine a memorable hostname so you can reach it by name. And consider the economics: the whole setup costs about $2/month in electricity. You own the hardware and the full stack.
```bash
# Enable automatic security updates
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Set a friendly hostname
sudo hostnamectl set-hostname miniserver
```

Step 3: Containerize the Site
We'll package the site into a Docker container using a multi-stage build. The final image contains only Nginx and your static files: no Node.js runtime, under 25 MB total.
Create a Dockerfile with two stages. The first (builder) installs dependencies and runs the Next.js build. The second (runner) copies only the generated static files into an Nginx Alpine image. This is what keeps the production image tiny.
```dockerfile
# Dockerfile
FROM node:22-alpine AS builder
WORKDIR /app
# Copy the lockfile too, and use npm ci for reproducible installs
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine AS runner
COPY --from=builder /app/out /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```

Create an nginx.conf file that serves the static files with gzip compression and cache headers. The try_files directive handles client-side routing: if a file isn't found directly, it tries adding .html or falling back to index.html.
```nginx
# nginx.conf
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    gzip on;
    gzip_types text/plain text/css application/json
               application/javascript text/xml
               application/xml text/javascript
               image/svg+xml;
    gzip_min_length 256;

    location /_next/static/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location / {
        try_files $uri $uri.html $uri/ /index.html;
    }
}
```

Create a .dockerignore file to prevent node_modules, .next, and out/ from being copied into the build context. Without this, builds are slower and you might get unexpected conflicts.
```
# .dockerignore
node_modules
.next
out
```

Create a docker-compose.yml on the server that defines the service. The build context points to your source code, and the container connects to the shared Docker network we'll create in the next step.
```yaml
# docker-compose.yml
services:
  resume:
    build:
      context: /path/to/your/source-code
      dockerfile: Dockerfile
    container_name: resume
    restart: unless-stopped
    networks:
      - server-net

networks:
  server-net:
    external: true
```

Deploy with a single command. The --no-cache flag ensures Docker doesn't use cached layers, so file changes are always picked up. The whole process takes about 30 seconds.
```bash
docker compose build --no-cache && docker compose up -d
```

Good to know
If you ever need to debug, docker exec -it resume sh drops you into the running container's shell. To check logs, use docker logs resume. And if something goes wrong, docker compose down && docker compose up -d gives you a clean restart.
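A quick smoke test, assuming the container is up: fetch the homepage from inside the container itself (BusyBox wget ships with the nginx:alpine base image, so nothing extra is needed):

```bash
# Fetch the homepage from inside the running container
docker exec resume wget -qO- http://localhost/ | head -n 5
```

If you see the opening lines of your HTML, Nginx is serving the exported files correctly.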
Step 4: Set Up Networking
The final piece: getting traffic from the internet to your container without opening any ports on your home network. We'll use a Docker bridge network, Caddy as a reverse proxy, and a Cloudflare Tunnel.
Create a shared Docker bridge network. All containers on the server will join this network, which lets them find each other by name (e.g., the Caddy container can reach the resume container as "resume:80") while keeping traffic isolated from the host.
```bash
docker network create server-net
```

Caddy runs as another container on the same network. It acts as a reverse proxy, routing requests by domain name to the right container. The best part: Caddy automatically provisions and renews TLS certificates from Let's Encrypt. Zero certificate management.
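That name-based lookup is easy to see in action. A sketch using a throwaway container on the same network (this assumes the resume container from Step 3 is already running; curlimages/curl is just curl packaged as an image):

```bash
# Any container joined to server-net can reach "resume" by name
docker run --rm --network server-net curlimages/curl \
  -s -o /dev/null -w "%{http_code}\n" http://resume:80
```

A 200 status here confirms container-to-container routing works before Caddy is even involved.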
```caddyfile
# Caddyfile
your-domain.com {
    reverse_proxy resume:80
}
```

```yaml
# docker-compose.yml for Caddy
services:
  caddy:
    image: caddy:alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - server-net

volumes:
  caddy_data:
  caddy_config:

networks:
  server-net:
    external: true
```

A Cloudflare Tunnel creates an outbound-only connection from your server to Cloudflare's edge network. This means zero inbound ports are open on your router: no port forwarding, no dynamic DNS, no firewall rules to maintain. Cloudflare handles DNS, DDoS protection, and edge caching for free.
```bash
# Install cloudflared
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb
sudo dpkg -i cloudflared.deb

# Authenticate with Cloudflare
cloudflared tunnel login

# Create a tunnel
cloudflared tunnel create my-tunnel

# Route your domain to the tunnel
cloudflared tunnel route dns my-tunnel your-domain.com

# Create the config file
mkdir -p ~/.cloudflared
```

```yaml
# ~/.cloudflared/config.yml
tunnel: <your-tunnel-id>
credentials-file: /home/your-user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: your-domain.com
    # cloudflared runs on the host, not inside Docker, so it reaches
    # Caddy through the port published on localhost. originServerName
    # lets TLS verification succeed against Caddy's certificate.
    service: https://localhost:443
    originRequest:
      originServerName: your-domain.com
  - service: http_status:404
```

```bash
# Run as a system service
sudo cloudflared service install
sudo systemctl start cloudflared
sudo systemctl enable cloudflared
```

Why Nginx and Caddy?
They serve different roles. Nginx lives inside each container as a lightweight file server; it makes each app self-contained and portable. Caddy runs in front of everything, routing traffic and handling TLS. You could simplify by having Caddy do both, but this approach scales better when you host multiple apps. Another bonus: Cloudflare Tunnel works even behind CGNAT, so you don't need a static IP or even a public IP to host from home.
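To make the "scales better" point concrete, here's a hypothetical Caddyfile with a second app (the blog container name and subdomain are invented for illustration). Each new site is just one more block here and one more container joined to server-net:

```caddyfile
# Caddyfile serving two apps on the same server
your-domain.com {
    reverse_proxy resume:80
}

blog.your-domain.com {
    reverse_proxy blog:80
}
```

You'd also route the new hostname through the tunnel (cloudflared tunnel route dns my-tunnel blog.your-domain.com) and add a matching ingress rule, and Caddy handles TLS for it the same way.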
You're Live
That's it. You have a static site running in a Docker container, served by Nginx, reverse-proxied by Caddy, and tunneled to the internet through Cloudflare, all from a machine on your shelf. No cloud bills, no open ports, full control. Now make it yours.
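One closing cheat sheet: the end-to-end checks and the redeploy loop you'll use from here on (your-domain.com and my-tunnel as named in the earlier steps):

```bash
# End-to-end: your site, fetched through Cloudflare
curl -sI https://your-domain.com | head -n 5

# Tunnel status
cloudflared tunnel info my-tunnel

# Redeploy after changing the site (run on the server)
docker compose build --no-cache && docker compose up -d
```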