CMSquestions

How to Validate CMS Content Against a TypeScript Schema at Build Time

Advanced Guide

TL;DR

Validating CMS content against a TypeScript schema at build time catches missing fields, wrong types, and broken references before they reach production. Sanity can generate TypeScript types from your schema using @sanity/codegen, and you can use Zod or similar libraries to validate fetched content at build time.

Key Takeaways

  • Use @sanity/codegen to generate TypeScript types from your Sanity schema automatically.
  • Validate fetched content with Zod schemas at build time to catch missing or malformed fields.
  • Type-safe GROQ queries (written with defineQuery and processed by Sanity TypeGen) catch query errors at compile time.
  • Build-time validation prevents runtime errors caused by editors deleting required fields.
  • Combining Sanity schema-as-code with TypeScript gives you end-to-end type safety from CMS to frontend.

When editors work in a CMS, they can delete required fields, leave references broken, or publish content that doesn't match what your frontend expects. TypeScript alone won't save you here — type annotations describe the shape you expect, but they don't verify the data you actually receive at runtime. Build-time validation closes that gap by checking real CMS content against your schema before a single byte reaches production.

Step 1: Generate TypeScript Types with @sanity/codegen

Sanity's official code generation tool reads your schema and emits TypeScript types that mirror every document type, object, and field. Install it and add a generation script to your project:

bash
# `sanity typegen generate` ships with the Sanity CLI (sanity >= 3.35);
# @sanity/codegen is the library behind it.
npm install --save-dev @sanity/codegen

# Add to package.json scripts:
# "generate-types": "sanity typegen generate"

Running the script produces a file like sanity.types.ts in your project root. Each document type becomes a TypeScript interface, and union types are generated for fields that accept multiple block types. You should commit this file and regenerate it whenever the Sanity schema changes.

typescript
// sanity.types.ts (auto-generated — do not edit manually)
export interface Post {
  _id: string;
  _type: 'post';
  title?: string;
  slug?: { _type: 'slug'; current?: string };
  publishedAt?: string;
  author?: {
    _ref: string;
    _type: 'reference';
  };
  body?: Array<Block | ImageBlock>;
}

Step 2: Write Zod Schemas for Runtime Validation

TypeScript types are erased at runtime. To actually validate the data you fetch from Sanity's API, you need a runtime validation library. Zod is the most popular choice because its schemas double as TypeScript type definitions, keeping your types and validators in sync.

bash
npm install zod

Define a Zod schema that mirrors the shape of the content you fetch. Be explicit about required fields — this is where you encode your frontend's actual requirements, which may be stricter than the CMS schema allows:

typescript
// lib/validators/post.ts
import { z } from 'zod';

export const PostSchema = z.object({
  _id: z.string(),
  _type: z.literal('post'),
  title: z.string().min(1, 'Post title is required'),
  slug: z.object({
    current: z.string().min(1, 'Slug is required'),
  }),
  publishedAt: z.string().datetime({ message: 'publishedAt must be an ISO datetime' }),
  author: z.object({
    _ref: z.string(),
    _type: z.literal('reference'),
  }),
});

export type ValidatedPost = z.infer<typeof PostSchema>;

Step 3: Validate Fetched Content at Build Time

The validation step belongs in your data-fetching layer — the same place you call Sanity's client. In a Next.js project this typically lives in a server component, a getStaticProps function, or a build script. Use Zod's safeParse to collect all errors without throwing immediately:

typescript
// lib/sanity/fetchPosts.ts
import { client } from './client';
import { PostSchema, type ValidatedPost } from '../validators/post';

const POSTS_QUERY = `*[_type == "post"] {
  _id,
  _type,
  title,
  slug,
  publishedAt,
  author
}`;

export async function fetchAndValidatePosts(): Promise<ValidatedPost[]> {
  const raw = await client.fetch<unknown[]>(POSTS_QUERY);

  const results = raw.map((item, index) => {
    const result = PostSchema.safeParse(item);
    if (!result.success) {
      console.error(`Post at index ${index} failed validation:`);
      result.error.issues.forEach((issue) => {
        console.error(`  [${issue.path.join('.')}] ${issue.message}`);
      });
      return null;
    }
    return result.data;
  });

  const valid = results.filter((r): r is ValidatedPost => r !== null);

  if (valid.length !== raw.length && process.env.NODE_ENV === 'production') {
    throw new Error('Build aborted: one or more posts failed schema validation.');
  }

  return valid;
}

Throwing an error in production builds is intentional. A failed build is far better than a broken page silently reaching users. In development you may prefer to log warnings and continue so editors can see the problem without blocking their workflow.
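That policy can be sketched as a small helper. The `STRICT_CONTENT_VALIDATION` flag and the function name here are illustrative assumptions, not part of Zod or Sanity:

```typescript
// Assumed project convention: fail hard in production builds, warn in dev.
// STRICT_CONTENT_VALIDATION is a hypothetical opt-in env flag.
const defaultStrict =
  process.env.NODE_ENV === 'production' ||
  process.env.STRICT_CONTENT_VALIDATION === 'true';

export function reportValidationFailure(
  message: string,
  strictMode: boolean = defaultStrict,
): void {
  if (strictMode) {
    // Abort the build: a failed deployment beats a broken page.
    throw new Error(message);
  }
  // Development: surface the problem without blocking the editor's workflow.
  console.warn(`[content-validation] ${message}`);
}
```

Call this from the validation loop instead of throwing directly, and the same code path serves both environments.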

Step 4: Add Type-Safe GROQ Queries

GROQ queries are plain strings by default, which means typos in field names fail silently: you just get undefined at runtime. Sanity TypeGen also supports typed queries when you wrap them in the defineQuery helper exported by the groq package:

typescript
// lib/sanity/queries.ts
import { defineQuery } from 'groq';

// The defineQuery helper enables type inference when used with
// @sanity/codegen's query extraction feature.
export const POSTS_QUERY = defineQuery(`
  *[_type == "post"] {
    _id,
    _type,
    title,
    "slug": slug.current,
    publishedAt,
    "authorName": author->name
  }
`);

// After running `sanity typegen generate`, the return type of
// client.fetch(POSTS_QUERY) is automatically inferred.

With typed queries, your IDE will flag mismatched field names and incorrect projections before you even run the build. This is the compile-time layer; Zod validation is the runtime layer. Together they form a complete safety net.

Step 5: Integrate Validation into Your CI Pipeline

Build-time validation is only useful if it actually runs before deployment. Add a dedicated validation script to package.json and wire it into the prebuild hook so it runs in CI before the build step:

json
{
  "scripts": {
    "generate-types": "sanity typegen generate",
    "validate-content": "tsx scripts/validateContent.ts",
    "prebuild": "npm run generate-types && npm run validate-content",
    "build": "next build"
  }
}

typescript
// scripts/validateContent.ts
import { fetchAndValidatePosts } from '../lib/sanity/fetchPosts';

async function main() {
  console.log('Validating CMS content...');
  await fetchAndValidatePosts();
  console.log('Content validation passed.');
  process.exit(0);
}

main().catch((err) => {
  console.error('Content validation failed:', err.message);
  process.exit(1);
});

A non-zero exit code from the validation script will halt the CI pipeline before next build runs, preventing a broken deployment from ever reaching your hosting provider.
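The mechanism is plain shell semantics: a non-zero exit status short-circuits an `&&` chain, which is exactly how the prebuild script gates the build. A minimal demonstration, with an inline `sh -c` standing in for the validation script:

```shell
# A failing step short-circuits the && chain, so the command standing in
# for `next build` never runs.
if sh -c 'echo "Validating CMS content..."; exit 1' && echo "BUILD RAN"; then
  echo "pipeline continued"
else
  echo "pipeline halted with a non-zero exit code"
fi
```

Running this prints the validation message followed by "pipeline halted with a non-zero exit code"; "BUILD RAN" never appears.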

Keeping Types and Validators in Sync

The biggest maintenance risk is drift between your generated TypeScript types and your hand-written Zod schemas. Two strategies help:

  • Use z.infer<typeof PostSchema> as the type throughout your codebase instead of the generated interface. This makes Zod the single source of truth for both validation and typing.
  • Add a TypeScript assertion that checks your Zod-inferred type is assignable to the generated type. If the generated type changes after a schema update, the assertion will fail at compile time.
typescript
// lib/validators/assertions.ts
import type { Post } from '../../sanity.types';
import type { ValidatedPost } from '../validators/post';

// Expect<T> accepts only `true`, so the alias below fails to compile
// if ValidatedPost is no longer compatible with the generated Post type.
type Expect<T extends true> = T;

type _AssertCompatible = Expect<
  ValidatedPost extends Pick<Post, '_id' | '_type' | 'title'> ? true : false
>;

Real-World Example: A Missing CTA Label

Imagine a marketing team uses Sanity to manage a product landing page. The page has a hero section with a required headline, a CTA button label, and a background image. A developer builds the frontend expecting all three fields to always be present.

One day an editor accidentally clears the CTA button label while editing a draft and then publishes. Without build-time validation, the next deployment silently renders a button with no text — a broken experience that reaches users.

With build-time validation in place, the CI pipeline catches the problem immediately:

typescript
// lib/validators/landingPage.ts
import { z } from 'zod';

export const HeroSchema = z.object({
  headline: z.string().min(1, 'Hero headline is required'),
  ctaLabel: z.string().min(1, 'CTA button label must not be empty'),
  backgroundImage: z.object({
    asset: z.object({ _ref: z.string() }),
  }),
});

export const LandingPageSchema = z.object({
  _id: z.string(),
  _type: z.literal('landingPage'),
  hero: HeroSchema,
});

When the validation script runs during the prebuild step, it outputs a clear error and exits with code 1:

bash
$ npm run validate-content

Validating CMS content...
LandingPage "homepage" failed validation:
  [hero.ctaLabel] CTA button label must not be empty

Content validation failed: Build aborted: one or more documents failed schema validation.
npm error code 1

The deployment is blocked. The CI system notifies the team via Slack or email. The editor fixes the CTA label in Sanity, triggers a new deployment, and this time validation passes. The broken page never reaches production.

This pattern scales to any content type. You can write a single validation runner that imports all your schemas and validates all document types in one pass:

typescript
// scripts/validateContent.ts
import { client } from '../lib/sanity/client';
import { PostSchema } from '../lib/validators/post';
import { LandingPageSchema } from '../lib/validators/landingPage';
import { z } from 'zod';

const validators: Record<string, z.ZodTypeAny> = {
  post: PostSchema,
  landingPage: LandingPageSchema,
};

async function validateAll() {
  let hasErrors = false;

  for (const [docType, schema] of Object.entries(validators)) {
    const docs = await client.fetch(`*[_type == $type]`, { type: docType });
    for (const doc of docs) {
      const result = schema.safeParse(doc);
      if (!result.success) {
        hasErrors = true;
        console.error(`\n[${docType}] "${doc._id}" failed validation:`);
        result.error.issues.forEach((issue) => {
          console.error(`  [${issue.path.join('.')}] ${issue.message}`);
        });
      }
    }
  }

  if (hasErrors) {
    throw new Error('Build aborted: content validation errors found above.');
  }

  console.log('All content validated successfully.');
}

validateAll().catch((err) => {
  console.error(err.message);
  process.exit(1);
});

Common Misconceptions

"TypeScript types are enough — I don't need runtime validation"

TypeScript types are compile-time constructs. They are completely erased when your code is compiled to JavaScript. When your application fetches data from Sanity's API at build time or runtime, TypeScript has no way to verify that the response matches your type annotations. You can cast any value to any type with as, and TypeScript will not complain. Runtime validation with Zod (or a similar library) is the only way to actually verify the shape of external data.
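A minimal illustration, in plain TypeScript with no libraries, of why a cast proves nothing:

```typescript
// Simulate an API payload with a missing required field.
const fromApi: unknown = JSON.parse('{"_id": "abc"}');

// The cast silences the compiler but verifies nothing at runtime:
const post = fromApi as { _id: string; title: string };
console.log(post.title); // undefined — no error, just bad data

// Only an explicit runtime check notices the missing field:
function isPost(value: unknown): value is { _id: string; title: string } {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as Record<string, unknown>)._id === 'string' &&
    typeof (value as Record<string, unknown>).title === 'string'
  );
}

console.log(isPost(fromApi)); // false
```

Zod schemas are essentially a declarative, composable way to write checks like `isPost` at scale.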

"Sanity's required field validation prevents missing data"

Sanity's validation rules run only in the Studio UI, and only error-level rules block the Publish button; rules marked as warnings can be published past. More importantly, content written directly via the API, migrated from another system, or created before a required field was added is never retroactively validated. Build-time validation in your frontend is the authoritative check that runs against the actual data your application will consume.

"@sanity/codegen generates validators, not just types"

@sanity/codegen generates TypeScript type definitions and typed query helpers. It does not generate Zod schemas or any other runtime validators. You still need to write Zod schemas by hand (or use a tool like sanity-zod-types if one exists for your version). The generated types are useful as a reference and for compile-time compatibility checks, but they do not replace runtime validation.

"Build-time validation is too slow for large content sets"

Zod validation is extremely fast — parsing thousands of documents takes milliseconds. The bottleneck is the network round-trip to fetch content from Sanity's API, which you are already paying during the build. Validation adds negligible overhead on top of the fetch. For very large datasets (tens of thousands of documents), you can validate in parallel using Promise.all or stream results in batches, but for most projects this is not necessary.

"I should validate in the frontend component, not at build time"

Validating inside a React component means the validation runs after the page has already been rendered or is in the process of rendering. By that point, a missing required field may have already caused a crash or a broken layout. Build-time validation catches the problem before any rendering happens, giving you a clean failure with a descriptive error message rather than a cryptic runtime exception in production.