robots.txt
Place a static robots.txt file in /app/ to tell search engine crawlers which URLs they can access on your site.
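A minimal static file might look like this (the paths and sitemap URL below are illustrative, not part of the convention):

robots.txt:
User-Agent: *
Allow: /
Disallow: /private/

Sitemap: https://acme.com/sitemap.xml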
Alternatively, generate the file programmatically by adding a robots.js or robots.ts file that returns a Robots object.
robots.js is a special Route Handler that is cached by default unless it relies on a dynamic API or a dynamic configuration option.
robots.ts:
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        userAgent: 'Googlebot', // use '*' to target all agents
        allow: ['/'],
        disallow: '/private/',
      },
      {
        userAgent: ['Applebot', 'Bingbot'],
        disallow: ['/'],
      },
    ],
    sitemap: 'https://acme.com/sitemap.xml',
  }
}
Output robots.txt:
User-Agent: Googlebot
Allow: /
Disallow: /private/

User-Agent: Applebot
Disallow: /

User-Agent: Bingbot
Disallow: /

Sitemap: https://acme.com/sitemap.xml
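Because the file is generated by code, the rules can differ per deployment. The sketch below is an illustrative assumption, not part of the convention: it uses the VERCEL_ENV environment variable (any build-time flag your setup exposes would do) to block all crawlers outside production.

robots.ts:
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  // Assumption: VERCEL_ENV marks production deployments; swap in whatever
  // environment flag your setup uses.
  const isProduction = process.env.VERCEL_ENV === 'production'

  if (!isProduction) {
    // Keep preview and development deployments out of search indexes.
    return {
      rules: { userAgent: '*', disallow: '/' },
    }
  }

  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: '/private/',
    },
    sitemap: 'https://acme.com/sitemap.xml',
  }
}

Since the environment variable is read at build time rather than through a dynamic API, the handler remains cached, consistent with the note above.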