I know that the robots.txt file is used to keep third-party web crawlers from indexing a site's content.
However, if the goal of this file is to delimit or protect a private area of the site, what is the point of trying to hide that content with robots.txt when everything can be seen in the GitHub repository anyway?
My question also extends to the examples that use a custom domain.
Is there any motivation for using a robots.txt file on GitHub Pages? Yes or no? And why?
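
For reference, the kind of robots.txt I have in mind is a minimal sketch like the one below (the /private/ path is just a placeholder, not a real directory of mine):

    User-agent: *
    Disallow: /private/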
Alternative 1
For the content to be effectively hidden, you would have to pay for the site, that is, use a private repository.
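
As a rough illustration (the user, repository, and file names here are hypothetical), a Disallow rule only asks crawlers to stay away; with a public repository the same file stays readable directly from the repository source:

    # robots.txt asks crawlers to skip /private/, but anyone can still fetch the file:
    curl https://raw.githubusercontent.com/example-user/example-user.github.io/main/private/secret.html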