How we handle 200 MB video uploads in a serverless Next.js app without choking the database
Video uploads in serverless environments have three failure modes: timeouts on large files, connection pool exhaustion, and broken uploads on flaky mobile connections. Here's how we solved all three in Proofly's Next.js API route.
TL;DR
Set `export const maxDuration = 60` on your upload route to prevent Vercel's 10-second default timeout from killing large uploads. Validate file size and MIME type before reading the file into memory. Normalize MediaRecorder's codec-suffixed MIME types (video/webm;codecs=vp9,opus → video/webm) before storing. Upload to R2, not to your database. Check plan limits before burning bandwidth.
When we first shipped Proofly's testimonial upload endpoint, it was a standard Next.js API route that accepted a multipart/form-data POST and wrote the file to cloud storage. It worked fine in development. In production, it fell over in three different ways within the first two weeks.
Here's what broke and exactly how we fixed it.
Failure 1: 10-second timeouts killing large uploads
Vercel's default function timeout is 10 seconds. For a customer on a 10 Mbps upload connection, a 100 MB video takes 80 seconds minimum just for the network transfer. The function was timing out before the upload even finished streaming into the handler.
The fix is a single export:
```ts
export const runtime = "nodejs";
export const maxDuration = 60; // seconds
```
runtime = "nodejs" opts out of the Edge Runtime, which has its own limits and doesn't support the Node-specific APIs we use for file handling. maxDuration = 60 raises the timeout to 60 seconds. On Vercel's Pro and Enterprise plans, you can push this higher, but 60 seconds covers a 200 MB file at 27 Mbps — fast enough for most customers recording from a laptop on a home connection.
The 200 MB cap itself is set as a constant:
```ts
export const MAX_VIDEO_BYTES = 200 * 1024 * 1024; // 200 MB
```
We check file size before reading the file into memory, so a customer who accidentally uploads a raw 4K recording gets a 413 immediately rather than a timeout:
```ts
if (file.size > MAX_VIDEO_BYTES) {
  return NextResponse.json(
    { error: "Video exceeds the 200 MB size limit" },
    { status: 413 },
  );
}
```
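For context, `file` comes straight out of the multipart form at the top of the route handler, before any of these checks run. Here's a minimal sketch of that part, assuming the video is posted under a `file` field (the field name and error copy are illustrative, not the exact production code):

```ts
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const form = await request.formData();
  const file = form.get("file");

  // Reject anything that isn't a real, non-empty file before touching its bytes.
  if (!(file instanceof File) || file.size === 0) {
    return NextResponse.json(
      { error: "No video file was provided" },
      { status: 400 },
    );
  }

  // ...size check, MIME normalization, request lookup, plan check, upload...
}
```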
Failure 2: MediaRecorder's codec-suffixed MIME types
The MediaRecorder API in Chrome emits files with a MIME type that includes the codec: `video/webm;codecs=vp9,opus`. Safari uses `video/mp4` or `video/mp4;codecs=avc1`. Firefox uses `video/webm`.
When we validated the uploaded file's `file.type` against an allowlist of `video/webm`, `video/mp4`, and `video/quicktime`, we were rejecting every Chrome-recorded video because `video/webm;codecs=vp9,opus` didn't match `video/webm`.
The fix is MIME normalization before validation:
```ts
// MediaRecorder commonly emits MIMEs like `video/webm;codecs=vp9,opus`.
// Normalize to the bare type before validating + persisting.
const rawType = file.type || "video/webm";
const contentType = rawType.split(";")[0].trim().toLowerCase();

if (!ALLOWED_VIDEO_MIME_TYPES.has(contentType)) {
  return NextResponse.json(
    { error: "Unsupported video format. Use MP4, MOV, or WebM." },
    { status: 415 },
  );
}
```
We strip everything after the first semicolon, trim whitespace, and lowercase before checking against the allowlist. This handles every variant we've seen MediaRecorder emit across Chrome, Safari, and Firefox.
The normalized `contentType` (not the raw MIME) is what gets stored in the database and sent as `ContentType` to R2. When the browser plays the video back later, the clean MIME type is what it receives.
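The allowlist itself is just a Set of bare types, checked after normalization. Ours looks roughly like this; the exact members are an assumption based on the formats this post mentions (MP4, MOV, WebM, MKV):

```ts
// Bare MIME types only; codec suffixes are stripped before this check runs.
export const ALLOWED_VIDEO_MIME_TYPES = new Set([
  "video/mp4",        // Safari MediaRecorder, most phone cameras
  "video/webm",       // Chrome and Firefox MediaRecorder
  "video/quicktime",  // .mov files from iOS and macOS
  "video/x-matroska", // .mkv screen recordings
]);
```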
Failure 3: Burning bandwidth before checking plan limits
Our original implementation uploaded the video to R2 and then checked whether the user's plan allowed more uploads. That meant a free-plan user who'd hit their cap (5 video testimonials) would upload their 150 MB file over a slow connection, wait through the full transfer, and then get a 403.
We moved the plan check to happen before reading the file into memory, right after we confirm the request is active:
```ts
// Enforce the Sketch (free) plan cap before we burn bandwidth uploading.
const plan = await getUserPlan(requestRow.userId);
if (plan === "sketch") {
  const used = await countBillableTestimonials(requestRow.userId);
  if (used >= SKETCH_TESTIMONIAL_LIMIT) {
    return NextResponse.json(
      {
        error:
          "This studio is full — they've reached their plan's testimonial limit.",
      },
      { status: 403 },
    );
  }
}
```
The order matters: validate input → check request exists → check plan limits → read file into memory → upload to R2 → insert database row. Anything that can fail cheaply should fail before the expensive operations.
How the R2 upload works
We don't stream the file from the request body directly to R2. Instead, we buffer the entire file into a `Uint8Array` and send it in a single `PutObjectCommand`. Streaming would be more memory-efficient, but streaming uploads with `@aws-sdk/client-s3` require careful handling of the duplex stream to avoid backpressure issues in serverless environments. For files under 200 MB, buffering is fine.
```ts
const buffer = new Uint8Array(await file.arrayBuffer());
```
The R2 object key follows a consistent layout:
```
testimonials/<requestSlug>/<randomId>.<ext>
```
The random ID is 16 bytes from `crypto.getRandomValues`, encoded as 32 hex chars. Objects are stored with `CacheControl: "public, max-age=31536000, immutable"` — once a video is uploaded, the URL never changes, so we can cache it forever at the CDN layer.
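Put together, key construction looks roughly like this. The `EXT_BY_MIME` map and the `requestRow.slug` field are illustrative names, not the exact code; the hex encoding matches the 16-bytes-to-32-chars scheme described above:

```ts
// Illustrative sketch: derive the extension from the normalized MIME type
// and build the object key under testimonials/<requestSlug>/.
const EXT_BY_MIME: Record<string, string> = {
  "video/mp4": "mp4",
  "video/webm": "webm",
  "video/quicktime": "mov",
  "video/x-matroska": "mkv",
};

// 16 random bytes, hex-encoded to 32 characters.
const bytes = crypto.getRandomValues(new Uint8Array(16));
const randomId = Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");

const ext = EXT_BY_MIME[contentType] ?? "mp4";
const key = `testimonials/${requestRow.slug}/${randomId}.${ext}`;
```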
```ts
await r2.send(
  new PutObjectCommand({
    Bucket: env.r2.bucket,
    Key: key,
    Body: buffer,
    ContentType: contentType,
    ContentLength: size,
    CacheControl: "public, max-age=31536000, immutable",
  }),
);
```
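The `r2` client in that call is a plain `S3Client` from `@aws-sdk/client-s3` pointed at Cloudflare's S3-compatible endpoint. Roughly, with our `env` shape standing in for however you load configuration:

```ts
import { S3Client } from "@aws-sdk/client-s3";

// R2 speaks the S3 API: region is always "auto" and the endpoint is per-account.
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${env.r2.accountId}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: env.r2.accessKeyId,
    secretAccessKey: env.r2.secretAccessKey,
  },
});
```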
The thumbnail path
Customers can optionally submit a client-generated thumbnail alongside the video — a JPEG or PNG poster frame captured from the video on their device before upload. This avoids us having to run ffmpeg on every upload to generate a poster frame server-side.
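Capturing that poster frame on the client takes a hidden `<video>` element and a canvas. Here's a sketch of the browser-side piece, assuming you already have the recorded `Blob` (this is not the exact widget code):

```ts
// Draw the first decodable frame of the recorded video onto a canvas
// and export it as a JPEG blob to send alongside the upload.
async function captureThumbnail(videoBlob: Blob): Promise<Blob | null> {
  const url = URL.createObjectURL(videoBlob);
  const video = document.createElement("video");
  video.src = url;
  video.muted = true;
  video.playsInline = true;

  // Wait until the browser has decoded enough to render the current frame.
  await new Promise<void>((resolve, reject) => {
    video.onloadeddata = () => resolve();
    video.onerror = () => reject(new Error("Could not decode recorded video"));
  });

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")?.drawImage(video, 0, 0);
  URL.revokeObjectURL(url);

  // toBlob is callback-based; wrap it so callers can simply await the result.
  return new Promise((resolve) => canvas.toBlob(resolve, "image/jpeg", 0.8));
}
```

The resulting blob goes into the same `FormData` as the video, under the `thumbnail` field the server reads below.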
We accept the thumbnail as a separate field in the same multipart form:
```ts
const thumbnailFile = form.get("thumbnail");

if (thumbnailFile instanceof File && thumbnailFile.size > 0) {
  const t = (thumbnailFile.type || "image/jpeg")
    .split(";")[0]
    .trim()
    .toLowerCase();

  if (
    ALLOWED_THUMBNAIL_MIME_TYPES.has(t) &&
    thumbnailFile.size <= MAX_THUMBNAIL_BYTES // 2 MB
  ) {
    thumbnailBuffer = new Uint8Array(await thumbnailFile.arrayBuffer());
    thumbnailContentType = t;
  }
}
```
If the client doesn't send a thumbnail (or sends something we can't use), the thumbnailUrl column is null and the embed falls back to a plain play button overlay on a black background.
The database write: last, not first
The database row is created only after both the video and thumbnail uploads to R2 succeed. If R2 throws, no row is written and the customer gets an error they can retry. This avoids orphaned database rows that point to files that don't exist — a class of inconsistency that's tedious to clean up and hard to detect.
```ts
const [testimonial] = await db.insert(schema.testimonials).values({
  requestId: requestRow.id,
  userId: requestRow.userId,
  kind: "video",
  customerName,
  customerEmail,
  customerTitle,
  videoUrl: video.url,
  thumbnailUrl: thumbnail?.url ?? null,
  videoLayout: requestRow.videoLayout,
  rating: typeof rating === "number" ? rating : null,
  status: "pending",
}).returning();
```
After the insert, the owner gets a notification email — fire-and-forget, wrapped in a void call so an email failure doesn't propagate back to the customer as an error.
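In code that amounts to a single statement: the `void` operator makes it explicit we're not awaiting, and a `.catch` keeps a failed send from turning into an unhandled rejection (the helper and variable names here are illustrative):

```ts
// Fire-and-forget: don't block the customer's response on email delivery,
// and swallow failures so they never surface as a 500.
void sendNewTestimonialEmail({
  to: ownerEmail, // illustrative; looked up from the request owner
  testimonialId: testimonial.id,
}).catch((err) => {
  console.error("Failed to send testimonial notification email", err);
});
```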
The complete order of operations
- Parse and validate form data (Zod schema; see the sketch after this list)
- Check file exists and is non-empty
- Check file size is under 200 MB
- Normalize and validate MIME type
- Look up the testimonial request by slug
- Confirm request status is "active"
- Check plan limits (free plan cap)
- Buffer video into memory
- Buffer thumbnail into memory (if provided)
- Upload video to R2
- Upload thumbnail to R2 (if provided)
- Insert testimonial row (status: "pending")
- Send notification email (fire-and-forget)
- Return 201 with testimonial ID
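The Zod step at the top of that list covers the text fields only; the file itself is validated separately as shown earlier. A minimal sketch of what that schema might look like (field names mirror the insert above; the exact constraints are illustrative):

```ts
import { z } from "zod";
import { NextResponse } from "next/server";

// Text fields arrive as FormData strings, so coerce and validate them up front.
const submissionSchema = z.object({
  customerName: z.string().min(1).max(200),
  customerEmail: z.string().email(),
  customerTitle: z.string().max(200).optional(),
  rating: z.coerce.number().int().min(1).max(5).optional(),
});

const parsed = submissionSchema.safeParse({
  customerName: form.get("customerName"),
  customerEmail: form.get("customerEmail"),
  customerTitle: form.get("customerTitle") ?? undefined,
  rating: form.get("rating") ?? undefined,
});

if (!parsed.success) {
  return NextResponse.json({ error: "Invalid submission" }, { status: 400 });
}
```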
Every step that can fail cheaply fails before the steps that are expensive. That's the design principle that made this endpoint reliable.
Frequently asked
Why Cloudflare R2 instead of S3?
Zero egress fees. When your testimonial embed script fetches video URLs and a visitor plays a video, that's egress. On S3, you pay per GB out. R2 charges nothing for egress, which matters a lot once you have thousands of videos being served from landing pages. The API is S3-compatible, so switching from S3 to R2 is mostly a credentials change.
What happens if a 200 MB upload times out halfway through?
The video isn't stored — we only create the database row after the R2 upload succeeds. The customer sees an error and can retry. We considered resumable uploads with chunking, but in practice the failure rate on uploads under 200 MB over a stable connection is low enough that we haven't needed them. Mobile connections are more problematic — we expose the error clearly and encourage desktop recording for long testimonials.
Do you transcode the uploaded video?
Not automatically. We store the original file as-is (MP4, MOV, WebM, or MKV) with public, immutable cache headers on R2. The browser handles playback. If we need to normalize for a specific output format in the future — say, HLS for adaptive bitrate — we'd add a background job, but for testimonial use cases the raw upload plays fine in every browser.