
Sync a Folder to S3 with Include and Exclude Filters

Use aws s3 sync with --exclude, --include, --dryrun, and --delete to deploy a directory to an S3 bucket safely.

Published Apr 8, 2026

aws s3 sync is the canonical way to push a local directory to an S3 bucket. It's idempotent, content-aware (only uploads what changed), and supports include/exclude patterns for filtering files. This snippet covers the deploy pattern I use for static sites and SPA build outputs.

Tested on AWS CLI v2.17, macOS and Linux.

When to Use This

  • Deploying a static site or SPA build output (dist/, build/, out/) to an S3 bucket
  • Pushing a documentation site to a versioned prefix
  • Mirroring a local folder to S3 for backup
  • Migrating files between two prefixes (S3-to-S3 sync also works)

Don't use this when you need atomic uploads (S3 sync is per-object, not transactional) or when the destination is a shared bucket with other writers.
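The S3-to-S3 case uses the same verb with two bucket URLs and never touches local disk. A sketch with made-up bucket and prefix names — the command is built in an array so you can preview exactly what will run before executing it:

```shell
# Hypothetical bucket/prefixes — substitute your own.
SRC="s3://my-bucket/releases/v1/"
DEST="s3://my-bucket/releases/v2/"

# Build the command once, preview it, then execute the same array.
cmd=(aws s3 sync "$SRC" "$DEST" --dryrun)
printf '%s\n' "${cmd[*]}"
# "${cmd[@]}"   # uncomment to actually run it
```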

Code

# Dry run first — shows exactly what will change without touching anything
aws s3 sync ./dist s3://$BUCKET/$PREFIX \
  --exclude "*.map" \
  --exclude "*.DS_Store" \
  --exclude "node_modules/*" \
  --delete \
  --dryrun
 
# Real run — drop --dryrun
aws s3 sync ./dist s3://$BUCKET/$PREFIX \
  --exclude "*.map" \
  --exclude "*.DS_Store" \
  --exclude "node_modules/*" \
  --delete \
  --cache-control "public, max-age=31536000, immutable"

The --exclude and --include patterns use shell-style globs. Filters are applied in order, so a later --include can re-add files matched by an earlier --exclude. The --delete flag removes anything in the destination that no longer exists locally, which is what makes this a true "sync" rather than a plain upload.
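Quoting those patterns is not optional: unquoted, the shell expands the glob against your local working directory before aws ever sees it. A quick local demo in a scratch directory, no AWS involved:

```shell
# Scratch dir so the demo doesn't depend on your working directory.
demo="$(mktemp -d)"
touch "$demo/app.js" "$demo/app.js.map"
cd "$demo"

quoted=(--exclude "*.map")    # aws would receive the literal pattern *.map
unquoted=(--exclude *.map)    # the shell already expanded it to app.js.map
echo "quoted:   ${quoted[*]}"
echo "unquoted: ${unquoted[*]}"
```

With the pattern unquoted, aws receives --exclude app.js.map — it silently excludes one file instead of every .map file.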

Usage

A real deploy script for a Next.js static export, with cache headers:

#!/usr/bin/env bash
set -euo pipefail
 
BUCKET="my-site-prod"
PREFIX="assets"
 
# 1) Long-lived cache for hashed assets
aws s3 sync ./out/_next s3://$BUCKET/$PREFIX/_next \
  --cache-control "public, max-age=31536000, immutable" \
  --delete
 
# 2) No cache for HTML — must always be revalidated
#    (filter to HTML only, so --content-type doesn't mislabel other files)
aws s3 sync ./out s3://$BUCKET/$PREFIX \
  --exclude "*" \
  --include "*.html" \
  --cache-control "public, max-age=0, must-revalidate" \
  --content-type "text/html" \
  --delete
 
# 3) Everything else (favicon, robots.txt, images) with default headers
aws s3 sync ./out s3://$BUCKET/$PREFIX \
  --exclude "_next/*" \
  --exclude "*.html" \
  --delete

This pattern gives you immutable caching for hashed JS/CSS bundles and instant cache busting for HTML — the canonical static-site setup.

Gotchas

  • --delete is destructive. Always run with --dryrun before your first real sync against a new bucket.
  • Glob patterns are POSIX-style, not regex. Use * for wildcard, not .*. And remember to quote them so the shell doesn't expand them locally.
  • Order matters for --exclude and --include. They're processed left to right. To upload only .html files, use --exclude "*" followed by --include "*.html".
  • Content-Type detection is wrong sometimes. The CLI guesses MIME types from extensions, but for files without extensions or unusual ones (.webmanifest), pass --content-type explicitly.
  • sync doesn't compare content hashes. It only checks size and modified time, so identical files with different timestamps will be re-uploaded unless you also pass --size-only.

Related

  • Generate S3 Presigned URLs (coming soon) — share private files temporarily
  • Authenticate Docker with ECR (coming soon) — another common AWS CLI auth flow
  • AWS CLI s3 sync reference — official docs for every flag

Frequently Asked Questions

What does aws s3 sync actually do?

aws s3 sync compares the contents of a local folder against an S3 bucket and only uploads files that are missing or changed (based on size and modified time). It does not re-upload identical files, which makes it cheap and fast for repeat deploys.

Is it safe to run aws s3 sync with --delete?

Only if the local folder is the source of truth. The --delete flag removes any S3 object that is not in the local folder, which is what you want for static site deploys but disastrous for shared buckets. Always run with --dryrun first.
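To make "dry run first" the default rather than a habit, you can wrap the sync in a small guard script. This is a sketch, not a real deploy tool: the aws shell function below is a stub so the flow prints its commands instead of running them — remove the stub (and set your real bucket) to use it:

```shell
# Stub for illustration only — remove this line to call the real CLI.
aws() { echo "would run: aws $*"; }

BUCKET="${BUCKET:-my-site-prod}"   # assumed bucket name
args=(s3 sync ./dist "s3://$BUCKET" --exclude "*.map" --delete)

# Always preview first…
preview="$(aws "${args[@]}" --dryrun)"
echo "$preview"

# …and only do the real sync when explicitly asked.
if [ "${CONFIRM:-no}" = "yes" ]; then
  aws "${args[@]}"
else
  echo "preview only — rerun with CONFIRM=yes to sync for real"
fi
```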