
Why I moved this blog from Vercel to Cloudflare Workers

By Alberto Montalesi
This article was written with AI assistance (Claude Sonnet 4.6).

I migrated this blog off Vercel this week. Here is what pushed me to do it and how the technical side went.

The trigger: Vercel's April 2026 security incident

On April 19, Vercel disclosed a security incident. According to Vercel's bulletin, the incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. That access was then used to take over the employee's Vercel Google Workspace account, which enabled access to some Vercel environments and environment variables that were not marked as sensitive.

Vercel said a limited subset of customers had non-sensitive environment variables compromised and recommended rotating exposed credentials immediately.

None of this was catastrophic for a small personal blog. But it was enough to finally do something about a problem I had been putting off: the cost.

The cost problem

Vercel is genuinely good for developer experience. But the pricing gets uncomfortable fast for personal projects with unpredictable traffic. The Hobby plan has bandwidth and function invocation limits that are easy to hit if a post picks up any traction. The Pro tier is $20/month, and useful things like log drains and more granular analytics are either locked behind paid tiers or charged separately on top.

Cloudflare Workers is essentially free for this kind of site. The free tier covers 100,000 requests per day. The paid plan starts at $5/month and includes a large amount of usage before overages. For a blog, the math is not close.

Also: Contentlayer was dead

The site had been using Contentlayer for MDX processing. The project has been unmaintained for a while. The main repo went quiet and the community fork (contentlayer2) was barely keeping pace with Next.js. I was already looking at replacing it.

So the migration was really two things at once: swap the runtime from Vercel to Cloudflare, and replace Contentlayer with Velite.

The new stack

  • Velite for content and MDX processing
  • OpenNext Cloudflare (@opennextjs/cloudflare) to adapt the Next.js build output for the Workers runtime
  • Wrangler for deployment

The site is still a Next.js app. OpenNext does the work of making that compatible with Cloudflare's environment.

Replacing Contentlayer with Velite

Contentlayer used "document types" with computed fields to define your content schema. Velite has a similar concept but uses a Zod-style schema API and is actively maintained.

Before (Contentlayer):

// contentlayer.config.ts
import { defineDocumentType, type ComputedFields } from 'contentlayer/source-files'
// readingTime and extractTocHeadings are project helpers
// (a reading-time estimator and a table-of-contents extractor)

const computedFields: ComputedFields = {
  readingTime: { type: 'json', resolve: (doc) => readingTime(doc.body.raw) },
  slug: {
    type: 'string',
    resolve: (doc) => doc._raw.flattenedPath.replace(/^.+?(\/)/, ''),
  },
  toc: { type: 'string', resolve: (doc) => extractTocHeadings(doc.body.raw) },
}

export const Blog = defineDocumentType(() => ({
  name: 'Blog',
  filePathPattern: 'blog/**/*.mdx',
  contentType: 'mdx',
  fields: {
    title: { type: 'string', required: true },
    date: { type: 'date', required: true },
    tags: { type: 'list', of: { type: 'string' }, default: [] },
    // ...
  },
  computedFields,
}))

After (Velite):

// velite.config.ts
import { defineCollection, s } from 'velite';

const allBlogs = defineCollection({
  name: 'Blog',
  pattern: 'blog/**/*.mdx',
  schema: s
    .object({
      title: s.string(),
      date: s.isodate(),
      tags: s.array(s.string()).default([]),
      rawContent: s.raw(),
      path: s.path(),
      body: s.mdx(),
      // ...
    })
    .transform(async (data) => {
      const { rawContent, ...rest } = data;
      return {
        ...rest,
        body: renderCompiledMdxToHtml(rest.body),
        slug: rest.path.replace(/^[^/]+\//, ''),
        readingTime: readingTime(rawContent),
        toc: await extractTocHeadings(rawContent),
      };
    }),
});
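The slug transform in that config strips the leading collection folder from the path Velite provides. A quick illustration (the file names here are made up):

```typescript
// Velite's `path` field includes the collection folder, e.g. "blog/my-post".
// The regex drops everything up to and including the first slash.
const toSlug = (path: string) => path.replace(/^[^/]+\//, '');

console.log(toSlug('blog/why-i-moved-to-cloudflare')); // "why-i-moved-to-cloudflare"
console.log(toSlug('blog/nested/post')); // "nested/post" (only the first segment is removed)
```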

One difference worth noting: Velite compiles MDX to a code string rather than a React component. I had to write a small helper to turn that into static HTML at build time:

// data/mdx-to-html.ts
import { createElement, type ComponentType } from 'react';
import { renderToStaticMarkup } from 'react-dom/server';
import * as jsxRuntime from 'react/jsx-runtime';
// mdxComponents is the project's map of custom MDX components (imported elsewhere)

type CompiledMdxModule = { default: ComponentType<{ components?: unknown }> };

export function renderCompiledMdxToHtml(code: string) {
  const fn = new Function(code) as (runtime: typeof jsxRuntime) => CompiledMdxModule;
  const MDXContent = fn(jsxRuntime).default;

  return renderToStaticMarkup(
    createElement(MDXContent, { components: mdxComponents })
  );
}

This runs during the build. The output is plain HTML strings, which is what gets stored and eventually served.
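For completeness, here is roughly how that stored HTML ends up in a page. This is a sketch, not the blog's actual code; the route, import path, and field names are illustrative:

```tsx
// app/blog/[slug]/page.tsx (illustrative sketch)
import { allBlogs } from '../../../.velite'; // wherever Velite writes its generated output

export default function BlogPost({ params }: { params: { slug: string } }) {
  const post = allBlogs.find((p) => p.slug === params.slug);
  if (!post) return null;

  // `post.body` is the HTML string pre-rendered at build time,
  // so the page injects it directly instead of executing MDX per request.
  return <article dangerouslySetInnerHTML={{ __html: post.body }} />;
}
```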

Wiring Velite into the webpack build

Velite needs to run before Next.js compiles. The way to do that is a small webpack plugin:

// next.config.js
class VeliteWebpackPlugin {
  static started = false;
  apply(compiler) {
    compiler.hooks.beforeCompile.tapPromise('VeliteWebpackPlugin', async () => {
      if (VeliteWebpackPlugin.started) return;
      VeliteWebpackPlugin.started = true;
      const dev = compiler.options.mode === 'development';
      const { build } = await import('velite');
      await build({ watch: dev, logLevel: 'warn' });
    });
  }
}

The static started flag prevents duplicate builds during hot reloads in dev mode.
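For the plugin to actually run, it also has to be registered in the Next.js webpack hook. A minimal sketch of that wiring in next.config.js, assuming the class above:

```javascript
// next.config.js (continued)
module.exports = {
  webpack: (config) => {
    // Run Velite before each compilation (deduplicated by the static flag).
    config.plugins.push(new VeliteWebpackPlugin());
    return config;
  },
};
```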

OpenNext config

There isn't much to configure in open-next.config.ts:

import { defineCloudflareConfig } from "@opennextjs/cloudflare";

const config = defineCloudflareConfig({
  // R2 incremental cache is available but not enabled here
});

export default {
  ...config,
  buildCommand: "npm run build:next",
};

And the Wrangler config (with domain-specific values anonymized):

{
  "main": ".open-next/worker.js",
  "name": "my-blog-worker",
  "compatibility_date": "2026-04-18",
  "compatibility_flags": ["nodejs_compat", "global_fetch_strictly_public"],
  "assets": {
    "directory": ".open-next/assets",
    "binding": "ASSETS"
  },
  "services": [
    {
      "binding": "WORKER_SELF_REFERENCE",
      "service": "my-blog-worker"
    }
  ],
  "images": {
    "binding": "IMAGES"
  }
}

The WORKER_SELF_REFERENCE service binding is used by OpenNext for caching. The IMAGES binding wires up Cloudflare's image optimization.

Build scripts

The build pipeline changed to put OpenNext in front:

{
  "build": "opennextjs-cloudflare build",
  "build:next": "cross-env INIT_CWD=$PWD next build && node ./scripts/postbuild.mjs",
  "deploy": "opennextjs-cloudflare build && opennextjs-cloudflare deploy",
  "preview": "opennextjs-cloudflare build && opennextjs-cloudflare preview"
}

opennextjs-cloudflare build calls build:next internally, then wraps the output for the Workers runtime. Having build:next as a separate named script is what makes the buildCommand in open-next.config.ts work.

Dev mode emulation

OpenNext needs one extra line to emulate Cloudflare bindings locally during next dev:

// next.config.js
if (process.env.NODE_ENV === 'development') {
  import('@opennextjs/cloudflare').then((m) => m.initOpenNextCloudflareForDev());
}

Without it, ASSETS and IMAGES bindings are not available and things break in local development.
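With that in place, server code can reach the emulated bindings the same way it would in production. A hedged sketch using OpenNext's getCloudflareContext helper; the route and the use of the ASSETS binding are illustrative:

```typescript
// app/api/asset/route.ts (illustrative)
import { getCloudflareContext } from "@opennextjs/cloudflare";

export async function GET(request: Request) {
  // In `next dev` these bindings come from the local emulation set up above;
  // on Cloudflare they are the real Workers bindings from the Wrangler config.
  const { env } = getCloudflareContext();
  return env.ASSETS.fetch(request);
}
```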

How long did it take?

The migration took most of a weekend. The majority of that time went into Velite, specifically understanding how it handles MDX differently from Contentlayer and getting the HTML rendering helper right. Once that clicked, the Cloudflare side was straightforward.

The migration itself was done entirely with AI, using Claude Sonnet 4.6 and Codex 5.4. I mostly guided the process and reviewed the output rather than writing the config files by hand.

Performance

The site is noticeably faster. Time to first byte dropped significantly since pages are now served from Cloudflare's edge rather than a single Vercel region. Lighthouse scores went up across the board, mostly from the improved response times and the fact that Cloudflare handles image optimization natively.

Cost and security

The cost rounds to zero. The security concern is also largely addressed: fewer things sitting in a third-party dashboard that could be exposed in a breach.

Should you do it?

There are still rough edges in the OpenNext Cloudflare adapter. It is moving fast and some Next.js features are not fully supported yet. For a content-heavy blog without heavy server-side logic it works well, but I would check the compatibility matrix before going this route with something more complex.

If you are running a Next.js site on Vercel and the April breach made you think harder about your platform dependencies, or you are just watching the billing go up, the OpenNext route is worth a look. The main cost is one weekend and some patience with Velite's documentation.

This blog was the easy case, a mostly static site with no complex server-side logic. I am also in the process of moving my other, larger projects off Vercel, and that migration is a different story. I will write that up as a part two once it is done.