Hey PythonAnywhere team. My website kenyanresources.co.ke is having a strange robots.txt issue: the file returns HTTP 200 OK when accessed directly (verified via curl and in the browser at both kenyanresources.co.ke/robots.txt and kenyanresources.co.ke/static/robots.txt), yet Google Search Console consistently reports "Failed to fetch."

The file exists at /home/DanielGitau/kenyanresources/staticfiles/robots.txt with 644 permissions and is mapped in PythonAnywhere's static files configuration (URL: /robots.txt, Path: /home/DanielGitau/kenyanresources/staticfiles/). There are no errors in the server logs, and the file contains only standard directives (User-agent: * / Allow: /).

Why would Googlebot fail to fetch the file when it's clearly accessible to regular requests? Could this be a PythonAnywhere-specific issue with how Googlebot's requests are handled, or should I serve robots.txt through a Django view instead of a static file mapping? Any insights would be appreciated.
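For reference, if I did switch to the Django-view approach, this is roughly what I have in mind (a minimal sketch; the view name and urls module are just placeholders, and I assume I'd need to remove the existing /robots.txt static-file mapping so the request actually reaches Django rather than being served by the static handler):

```python
# urls.py (project-level) -- sketch of serving robots.txt from a Django view
from django.http import HttpResponse
from django.urls import path
from django.views.decorators.http import require_GET


@require_GET
def robots_txt(request):
    # Same directives currently in staticfiles/robots.txt
    lines = [
        "User-agent: *",
        "Allow: /",
    ]
    return HttpResponse("\n".join(lines) + "\n", content_type="text/plain")


urlpatterns = [
    path("robots.txt", robots_txt),
    # ... existing URL patterns ...
]
```

The only real difference from the static file is that the response goes through the WSGI app, so I could at least see Googlebot's requests in the access log if it ever reaches the server.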