I have a web page on which users can fill in some data, and to do so they need to be logged in. When I created the sitemap.xml
using xml-sitemaps.com, it generated several locs that require login first. Something like:
<loc> https://www.example.com/login/?next=fill-form/ </loc>
This page doesn't have any content either, so I thought it would be a good idea to prevent search engines from crawling it.
I was wondering what the right way is to prevent search engines from crawling it:
adding the tag below in the head
section,
<meta name="robots" content="noindex, nofollow">
or disallowing the page by adding its URL to the robots.txt
file?
Also, what's the difference between the two?
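For context, the robots.txt option I have in mind would be something like the following (assuming the login URL from the sitemap above, served at the site root):

```
User-agent: *
Disallow: /login/
```

whereas the meta tag would go in the head of the login page itself.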