
One piece of SEO advice we got was to move all JavaScript to external files, so that the code could be removed from the page text. For fixed scripts this is not a problem, but some scripts need to be generated because they depend on a ClientID that is generated by ASP.NET. Can I use the ScriptManager (from ASP.NET AJAX or from Telerik) to send such a script to the browser, or do I need to write my own component for that?

So far I have only found ways to combine fixed files and/or embedded resources (which are also fixed).
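
To illustrate, this is roughly the kind of registration I mean (the control name and the focus call are just made-up examples, not my actual code): the script text has to be built at runtime because it contains the generated ClientID, so it currently ends up inline in the page:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class MyPage : Page
    {
        // Made-up control; assume it is declared in the .aspx markup.
        protected TextBox myTextBox;

        protected void Page_Load(object sender, EventArgs e)
        {
            // The script depends on the generated ClientID, so it cannot live
            // in a fixed external .js file as-is.
            string script = string.Format(
                "document.getElementById('{0}').focus();",
                myTextBox.ClientID);

            // This emits the script into the page itself - exactly what the
            // SEO advice says to avoid.
            ScriptManager.RegisterStartupScript(
                this, GetType(), "focusScript", script, true);
        }
    }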

scunliffe
Hans Kesting

3 Answers


How about registering the ClientIDs in an inline JavaScript array/hash, and having your external JS file iterate through that?
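
A rough sketch of what I mean, assuming a couple of made-up controls (myTextBox, myButton) and a made-up hash name (clientIds); only the tiny hash is generated per page, while all the real logic stays in a fixed external .js file:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class MyPage : Page
    {
        // Made-up controls; assume they are declared in the .aspx markup.
        protected TextBox myTextBox;
        protected Button myButton;

        protected void Page_Load(object sender, EventArgs e)
        {
            // The only generated script is a small hash of ClientIDs.
            string ids = string.Format(
                "var clientIds = {{ textBox: '{0}', button: '{1}' }};",
                myTextBox.ClientID, myButton.ClientID);

            ScriptManager.RegisterStartupScript(
                this, GetType(), "clientIds", ids, true);

            // The fixed external file (referenced with a normal <script src>)
            // can then iterate the hash without knowing the generated IDs:
            //   for (var key in clientIds) {
            //       var el = document.getElementById(clientIds[key]);
            //       // ... wire up behaviour ...
            //   }
        }
    }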

James McCormack

Spiderbots do not read JavaScript blocks. This advice is plain wrong.

Diodeus - James MacFarlane
  • The reason given was to improve the content-to-code ratio: removing the script text from the HTML file saves the spider from having to read and ignore it. – Hans Kesting Aug 11 '09 at 13:48

Some JavaScript can break W3C validators (and possibly cause issues with some spiderbots). You can reduce this by placing these comment markers around the code inside your script blocks:

<script type="text/javascript">
<!--

... your JavaScript code and functions ...

// -->
</script>

Mark Redman
  • I disagree, and so do many others: http://stackoverflow.com/questions/204813/does-it-still-make-sense-to-use-html-comments-on-blocks-of-javascript – Diodeus - James MacFarlane Aug 11 '09 at 14:06
  • Which part do you disagree with? On the basis that I have used this to "fix" W3C validation, it is correct. On that basis I am saying it may also fix parsing by other spiderbots, in this case SEO-related bots. If you don't agree with that then I guess it confirms my doubt (i.e. when I said "possibly"). – Mark Redman Aug 11 '09 at 15:00
  • Note: this code helps with W3C validation when you have some HTML written out by JavaScript (e.g. when showing fallback content when Flash is not available). In that case the HTML is sometimes escaped to construct a string in JavaScript, which breaks the validation because the validation "bot" thinks it's seeing bad HTML. This is what I am referring to. – Mark Redman Aug 11 '09 at 15:10
  • SEO has nothing to do with validation. If you want to see what the BOTs see, use Lynx (http://en.wikipedia.org/wiki/Lynx_%28web_browser%29). Bots do not see code written by JavaScript because they don't execute JavaScript; they scrape the page. – Diodeus - James MacFarlane Aug 11 '09 at 16:31
  • If a bot scrapes a page and sees HTML strings within JavaScript, it may be that the bot sees this as HTML. If the strings are escaped, the bot may read this as invalid HTML. This is a problem with validating pages that have HTML inside JavaScript strings. This has to do with validation, not SEO. Agreed? On the basis that SEO uses a bot to parse HTML, it "may"/"possibly" have the same issue. I don't know whether having a validated page is better for SEO or not; I am just commenting on how the bot may fail to parse the HTML. – Mark Redman Aug 11 '09 at 16:42
  • The bot finds the script tag and ignores everything in between; whether the stuff in between is HTML or not is irrelevant. – Diodeus - James MacFarlane Aug 11 '09 at 17:30