
I am developing a web page for my latest project. A bit late, it struck me that I have to optimise it for search engines.

I guess I can guess the answer, but I don't like guessing...

When the user clicks a link, I use jQuery to get new content and add it to the page dynamically. Does Google crawl the .js part in some way, or does it only use the links I can see when doing view source?

Can the robots find the files I am fetching using .js?

Nicsoft

2 Answers


No, web crawlers do not execute the JavaScript on your pages. You'll need a plain HTML fallback for crawlers and for users without JavaScript.
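A minimal sketch of what such a fallback can look like (the URL and link text here are purely illustrative):

    <!-- Crawlers and users without JavaScript simply follow the href;
         a script can later enhance this link to load the same URL via AJAX -->
    <a href="/articles/my-article.html">Read the article</a>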

Andrea

Google's indexing bot does not parse JavaScript, so those links (or the content that is loaded when they are clicked) will not be indexed if they do not exist on the page with JavaScript disabled.

Read up on "Deep Web" (specifically "Deep Resources")

This is quite a big problem with AJAX, and while the solution is not complicated, it will basically double your programming (and possibly your design) workload:

You need to make sure that the link does indeed point to a content page when JavaScript is off. If JavaScript is enabled, you can write your handler to stop the DOM event from travelling up the tree and causing the page to navigate (in jQuery this is event.preventDefault()) and do your AJAX load instead.
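A minimal sketch of that pattern, assuming jQuery 1.7+, a link with class "ajax-link" that points at a real HTML page, and a container with id "content" (all of these names are illustrative):

    <!-- Works as a normal link for crawlers and users without JavaScript -->
    <a class="ajax-link" href="/articles/my-article.html">Read the article</a>
    <div id="content"></div>

    // With JavaScript enabled, intercept the click and load the same URL via AJAX
    $('.ajax-link').on('click', function (event) {
        event.preventDefault();                     // stop the normal navigation
        $('#content').load($(this).attr('href'));   // fetch the crawlable page into the container
    });

Either way the href points at content a crawler can index, so the same URL serves both the bot and the AJAX request.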

Paul Sullivan
  • event.preventDefault is something I use a lot. Not sure I followed, so just to be clear: you mean that I should put the link in my form so Google can find it, but then call .preventDefault in my .js so I can do my thing there? The link should point to some HTML-generating file? Would that really work for Google bots? – Nicsoft Apr 24 '12 at 06:41