Stop search engines from crawling a website by using a simple Nginx location directive to serve the robots.txt file.
This is a straightforward solution, as it returns a predefined response with a 200 OK status code for every /robots.txt request, regardless of whether the file exists on disk.
location = /robots.txt {
    add_header Content-Type text/plain;
    return 200 "User-agent: *\nDisallow: /\n";
}
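For context, a minimal sketch of a complete server block using this directive is shown below; the listen port and server_name are placeholders for this example, so adjust them to your environment.

# Minimal sketch of a server block using the directive above.
# The listen port and server_name are placeholders.
server {
    listen 80;
    server_name example.com;

    # Answer every /robots.txt request directly from the configuration.
    location = /robots.txt {
        add_header Content-Type text/plain;
        return 200 "User-agent: *\nDisallow: /\n";
    }
}

After checking the configuration with nginx -t and reloading Nginx (nginx -s reload), every request for /robots.txt will receive the deny-all rules with a 200 OK status.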
You can also use this approach to define a default robots.txt file.
location = /robots.txt {
    try_files /robots.txt @robots.txt;
}

location @robots.txt {
    add_header Content-Type text/plain;
    return 200 "User-agent: *\nDisallow: /\n";
}
This ensures that the default content is served only when the application does not provide its own robots.txt file.
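Note that this variant depends on the document root, because try_files resolves /robots.txt relative to the root directive. A minimal sketch in context is shown below; the root path, listen port, and server_name are placeholders.

# Sketch of the fallback variant in context; root, listen, and server_name
# are placeholders. try_files checks for $document_root/robots.txt on disk.
server {
    listen 80;
    server_name example.com;
    root /var/www/app/public;

    location = /robots.txt {
        # Serve the application's own robots.txt if it exists,
        # otherwise jump to the named fallback location below.
        try_files /robots.txt @robots.txt;
    }

    location @robots.txt {
        add_header Content-Type text/plain;
        return 200 "User-agent: *\nDisallow: /\n";
    }
}

If the application later ships its own robots.txt under the document root, Nginx will start serving that file automatically, with no configuration change required.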