I'm new to programming, so please don't judge me if I say something stupid.
I was wondering if there is any way to trick web crawlers, so that some of a website's content is different for a human visitor than for a web spider.
So here's an idea I had.
Every time a visitor enters a page, a script will try to identify the user's gender through the Facebook API. If there is a response (i.e. the user is logged in to Facebook in the same browser), then some extra code will be printed into the page with PHP. If it's a crawler, there will be no response, so that code will never exist in the source of that page.
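A rough sketch of what I mean, assuming a hypothetical `is_logged_in_to_facebook()` check (the real detection would need Facebook's JavaScript SDK on the client, since PHP on the server can't see the visitor's Facebook session by itself):

```php
<?php
// Hypothetical detection: assume the client-side Facebook check
// already ran and reported its result back, e.g. via a cookie.
function is_logged_in_to_facebook(): bool
{
    return isset($_COOKIE['fb_logged_in'])
        && $_COOKIE['fb_logged_in'] === '1';
}

if (is_logged_in_to_facebook()) {
    // Only visitors who appear to be logged-in humans get this markup.
    echo '<div class="members-only">Extra content for humans</div>';
}
// A crawler (no Facebook session, no cookie) receives the page
// without that block in its HTML at all.
```

This is just a sketch of the idea, not a working detector: the cookie name and the helper function are made up for illustration.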
I know that PHP is a server-side language, so web crawlers don't have permission to scan that code. If I'm not right, please correct me.
Thank you.