# /search and /profile are NOT disallowed here on purpose: both pages
# serve a noindex meta tag at the HTML level, and disallowing them in
# robots.txt prevents Googlebot from crawling them, which means Google
# never sees the noindex meta and happily lists the bare URL in the
# index anyway ("Indexed, though blocked by robots.txt"). Letting
# Google crawl the pages and see the meta is the canonical way to keep
# them out of the index.
#
# /api/ stays disallowed: those endpoints return JSON, not HTML, so
# the noindex meta route doesn't apply. If any /api/ URL ever surfaces
# in GSC, switch to an X-Robots-Tag: noindex response header instead.

User-agent: *
Disallow: /api/

Sitemap: https://www.dtcetc.com/sitemap-index.xml