
I would like to know when I should include external scripts or write them inline with the HTML code, in terms of performance and ease of maintenance.

What is the general practice for this?

Real-world scenario: I have several HTML pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:

  • write the bits of code that configure this script inline?
  • include all bits in one file that's shared among all these HTML pages?
  • include each bit in a separate external file, one for each HTML page?

Thanks.

Braiam
Dan

19 Answers


At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.

(Why performance? Because if the code is separate, it can be cached more easily by browsers.)

JavaScript doesn't belong in the HTML code, and if it contains special characters (such as <, >) it even creates problems.

Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
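As a minimal sketch of the external approach (the file and form names here are illustrative), the script is fetched once and then served from the browser cache on every page that includes it:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- external: cacheable across pages, and no <, > escaping problems in the markup -->
    <script src="validation.js"></script>
  </head>
  <body>
    <form id="signup">...</form>
  </body>
</html>
```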

Konrad Rudolph
  • Agreed. And I'll add that you can put the <script> inclusion tag AT THE END of the HTML code, as Yahoo recommends for performance reasons too (http://developer.yahoo.com/performance/rules.html#postload) – Bite code Sep 27 '08 at 11:57
  • @zach: putting a script tag in your HTML is *not* obtrusive JS. @konrad: you can easily overcome the < > problems by wrapping your code in a CDATA section. – nickf Oct 14 '08 at 22:17
  • @Nick: most problems can be overcome. Better not to generate them in the first place, though. – Konrad Rudolph Oct 15 '08 at 08:16
  • @nickf There is no such thing as CDATA in HTML, and nobody ever serves their pages as XHTML even if they write XHTML in the DOCTYPE, because if they did their pages would not load in IE. – Daniel Cassidy Jul 07 '11 at 13:18
  • @Daniel I actually did this for one website, only non-supporting browsers (= MSIE) got a HTML doctype. That said, Firefox’ CDATA support was extremely buggy some time ago (no idea whether that’s changed). – Konrad Rudolph Jul 07 '11 at 13:22
  • Sometimes you get better performance when inlining. Look at the source of [google.com](http://www.google.com/). They know what they're doing. – callum Apr 12 '12 at 10:04
  • @callum Google has a different use-case from 99.999999% of websites. Of course they measure *extremely* carefully and even the smallest difference matters. But just because they found that in their particular use-case, inlining works better (probably because the script changes very frequently?) doesn’t mean that we can derive a general rule from that, or even that we should disregard the “conventional” rule (to externalise scripts). – Konrad Rudolph Apr 12 '12 at 10:16
  • @KonradRudolph - Agreed, no general rule should be derived from Google's approach. I'm just saying it's a hint that it might be worth questioning the **rule** in your answer. Anyway, I think the reason Google does it is to reduce HTTP requests, and this might benefit more than 0.000001% of sites. Bandwidth is getting higher but round trip times are staying the same. Removing a whole serial HTTP request is sometimes better than the caching benefit of external JS. Depends on the size of your JS of course. – callum Apr 13 '12 at 10:28
  • @callum While this is true, the point about caching still remains and stays important. Reducing roundtrips is only important if your visitors don’t return (and then you won’t get enough page hits to make it matter) or if your content changes so often that caching the script files has no benefit. – Konrad Rudolph Apr 13 '12 at 10:30
  • Then how are you going to have page-specific stuff? – GorillaApe Apr 16 '12 at 15:44
  • @Parhs Load page-specific scripts. Or, if it’s a very small configuration, just have it inline. – Konrad Rudolph Apr 16 '12 at 16:29
  • -1 this is a naive statement; there are certainly times where performance is greater inline vs an HTTP request (even cached). – Chris Marisic Aug 08 '14 at 13:44
  • @Chris … this has been discussed to death in the comments above. – Konrad Rudolph Aug 08 '14 at 13:51
  • @Konrad Rudolph: Google does the opposite of whatever it recommends to you; look at their websites. It's Google, deal with it! – machineaddict Mar 25 '15 at 12:36
  • @machineaddict Exactly: it’s Google. You are not Google. Their guidelines are not for you. Also, I suggest you read the existing comments before commenting yourself the next time. – Konrad Rudolph Mar 25 '15 at 12:56
  • Isn't the document cached just as easily as the javascript file? If you had a large number of pages each one would have to be cached but if you only had one page putting it inline saves the request. There's probably a point where the number of pages and size of the script makes saving the request less beneficial than caching a js file. – dev_willis Jun 11 '16 at 14:29
  • @Dave *If* the page doesn't change. The whole point of having separate files is to be able to cache some files when others change. – Konrad Rudolph Jun 11 '16 at 15:17
  • Disagree. A small script could and should be inlined, performance-wise. More external files, more calls. – Vladd Aug 16 '16 at 11:55
  • @Vladd What do you mean by “more calls”? More HTTP requests/round-trips? Sure. And for high-traffic websites (Google, Stack Overflow, a handful more) that is totally an important metric. For 99.999% websites on the Internet, it’s utterly irrelevant. – Konrad Rudolph Aug 16 '16 at 12:30
  • @Konrad, I completely disagree. The problem is not the "amount" of HTTP calls, it's the latency each call takes. At least in my experience, the only thing that the end user really feels in the end is the round trips. Unless all your clients live next door to the data center where your JS is stored, or you are developing an intranet app, for 99.999% of websites that time is very relevant. What is usually irrelevant is the 1 to 300 kB of extra data you would need to transfer over the wire that could have been cached if the JS were separate – Yuval Perelman Apr 11 '17 at 22:44
  • @user2033402 If your end users feel the latency of loading external JavaScript, there’s probably an error in your server configuration (probably something that prevents concurrent loading). This shouldn’t (and indeed, doesn’t usually) happen. – Konrad Rudolph Apr 12 '17 at 09:38
  • Could be, or I'm just living in Israel, the main server is in the US, and our clients are spread over all corners of the globe. We could of course invest money in replicating our website in many locations, but we like money and prefer to keep it for ourselves. Yes, we could put our scripts on CDNs and remember to update them; we could do many things that take money and time. Or we could just use inline scripts while we have no real reason not to. "Shouldn't happen" in the general case is a funny phrase. I would like to refer you to Gleno's answer; it looks like he came to the same conclusion – Yuval Perelman Apr 12 '17 at 17:50
  • @YuvalPerelman If the js is external and cached, the latency for the cached bits is ~0 on other than the first visit. With HTTP/2 many of the previous best practices don't apply anymore either, so which protocol is used should also be considered. – jinglesthula May 02 '18 at 17:14
  • @KonradRudolph I am interested in what you said in your answer about external having a maintenance advantage. Can you elaborate? – jinglesthula May 16 '18 at 23:04
  • @jinglesthula Why do you choose to ignore the first visit, which in many cases would be the only one? The first visit is important, sometimes the most important. About maintenance: it's much easier to maintain a script that is saved in a separate file; it gives you separation of concerns. HTML files (or PHP, cshtml, etc.) tend to be large and hard to navigate; if you have scripts inside, it can be dreadful. When you need to change something, it's hard to find it, and the only way to update it is to update everything together. Up to a certain degree, granularity is very convenient. – Yuval Perelman Aug 21 '18 at 13:45
  • @YuvalPerelman I agree - the first visit is very important. I'm merely pointing out that on subsequent visits cached assets aren't fetched again, which is worth considering. Definitely, you need to consider your users' behavior. If 90% of your visitors only request the page once, that's very different than if 90% visit the page 2+ times. – jinglesthula Aug 21 '18 at 16:00
  • re: maintenance advantage, I re-read and saw that this was referring to 2008 (before babel/es6 modules/react etc., and when HTML files really did typically contain large amounts of code). Now we can collocate all code for a concern - behavior (js), markup, and styles - in very small component files (certainly easier to maintain than when spread out over different files in different directories, and for js/css often mixed in with code for disparate concerns) so the thought of even trying to embed js in an HTML file is becoming a moot point. "js in my .html file? No! Put it with the markup!" :D – jinglesthula Aug 21 '18 at 16:11

Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or, in general, shorter than the HTTP overhead you would incur by making those files external), it's better performance-wise to keep them inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.

Naturally, this all becomes irrelevant the moment your code is longer than a couple of lines and not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
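Applied to the asker's scenario, that might look like one shared external plugin file plus a one-line inline configuration per page (a sketch: the call is modeled on the jQuery Validation plugin, and the file names and selectors are made up):

```html
<!-- shared across all pages, cached after the first request -->
<script src="jquery.js"></script>
<script src="jquery.validate.js"></script>

<!-- page-specific one-liner: shorter than the overhead of another HTTP request -->
<script>$("#signup-form").validate({ rules: { email: { required: true } } });</script>
```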

Horst Gutmann
  • That's one of my concerns. Having a separate HTTP request for a few lines of code seems wasteful. – Dan Sep 26 '08 at 13:01
  • Could you perhaps post a sample configuration for your code? IMO if it's under 300 characters and absolutely page-specific, inline it. – Horst Gutmann Sep 26 '08 at 15:22
  • @Dan bear in mind that the separate request only happens the first time. If you expect your users to be loading the page more than once, cached external (even for a few lines) is clearly faster than waiting for the bytes for those few lines over the wire on the n=2+ page loads. – jinglesthula May 02 '18 at 17:16
  • @HorstGutmann how does having the file external aid with maintainability? I personally prefer external js whenever possible, but is there something objective that makes it easier to maintain? – jinglesthula May 16 '18 at 23:12
  • @jinglesthula one example would be if you want to use the same code (or something quite similar) on multiple pages. Having the same code live in multiple places hinders maintainability. Sure, there are ways around that, but let's just assume the most naive implementation here. It's also far easier to statically analyse code when it lives in dedicated files (eslint et al.). – Horst Gutmann May 18 '18 at 19:51

If you only care about performance, most of the advice in this thread is flat out wrong, and it is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your HTML can be quite dramatic.

To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending HTTP content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (Razor, PHP, etc.) on the server. This may sound difficult, but then I'm just explaining it wrong, because it's near trivial.

As you may have guessed, this static portion should contain all JavaScript, inlined and minified. It would look something like this:

<!DOCTYPE html>
<html>
    <head>
        <script>/* ...inlined jquery, angular, your code... */</script>
        <style>/* ditto css */</style>
    </head>
    <body>
        <!-- inline all your templates, if applicable -->
        <script type='template-mime' id='1'></script>
        <script type='template-mime' id='2'></script>
        <script type='template-mime' id='3'></script>

Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving it somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20ms and 60ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate the transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
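A hedged sketch of that flushing idea, framework-agnostic and with all names invented here; a mock response object stands in for the real one to show the order in which the chunks go out:

```javascript
// Stage 1 (<static>): a string constant, written before any server-side work.
// Stage 2 (<dynamic>): whatever the templating / DB work produces.
const STATIC_SHELL =
  "<!DOCTYPE html><html><head><script>/* inlined, minified libs */</script></head><body>";

function handleRequest(res, renderDynamicPart) {
  res.write(STATIC_SHELL);             // flushed immediately: client starts parsing and executing JS
  const dynamic = renderDynamicPart(); // meanwhile the server does its templating, DB queries...
  res.write(dynamic);                  // stage 2 arrives while the client is still busy
  res.end("</body></html>");
}

// Mock response object, recording the chunks in order:
const chunks = [];
const mockRes = {
  write: (s) => chunks.push(s),
  end: (s) => chunks.push(s),
};
handleRequest(mockRes, () => "<div>server-rendered content</div>");
// chunks[0] is the static shell, sent before any rendering began
```

With a real Node.js response object the same `write`/`end` calls apply; the key design point is simply that nothing slow happens before the first `write`.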

It takes about 50ms for the browser (Chrome; the rest maybe 20% slower) to process inline jQuery + SignalR + Angular + ng-animate + ng-touch + ng-routes + Lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency + 100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all the JS code and templates, and we can start executing DOM transforms.

You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to CDNs or your own servers, the browser would have to open more connections and delay execution. Since this execution time is basically free (as the server side is talking to the database), it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external JS executes faster, we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.

I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency and execution often dominate. The minified libraries I listed add up to 300 kB of data, and that's just 68 kB gzipped, or 200ms of download on a 2 Mbit 3G/4G phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.

In short, right now (2014), it's best to inline all scripts, styles and templates.

EDIT (MAY 2016)

As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.

Gleno
  • I didn't get the *which is now your amortized window for server-side processing of the portion* part. The server processes whatever it needs and only then serves the entire rendered HTML (head+body); what other server processing is needed after that? – BornToCode Oct 29 '15 at 11:37
  • @BornToCode The idea is to give client something to do at the same time the server side has something to do. Because the client libraries need to be interpreted - it's better to get that process started before doing *any* computation on the server. The amortized window is the time it takes the client to process the JS. You get that window for free, if you orchestrate a 2-stage rocket. – Gleno Nov 02 '15 at 06:12

Externalizing javascript is one of the yahoo performance rules: http://developer.yahoo.com/performance/rules.html#external

While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).

Joeri Sebrechts
  • I think Yahoo also recommends adding all the JavaScript into one HTTP call too. This doesn't mean that the scripts should all be in the same file during development though – Paul Shannon Sep 26 '08 at 11:59
  • Also, as noted above, HTTP/2 changes the "1 call" practice as well. – jinglesthula May 16 '18 at 23:00

I think the page-specific, short-script case is the only defensible case for inline script.

Gene T

Actually, there's a pretty solid case for using inline JavaScript. If the JS is small enough (a one-liner), I tend to prefer it inline because of two factors:

  • Locality. There's no need to navigate to an external file to validate the behaviour of some JavaScript.
  • AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this, but I find that if the JS is small enough it is preferable to just put it inline.
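The delegation idea behind jQuery's live/delegate can be sketched in plain JS (names invented; events are simulated with plain objects instead of real DOM events): instead of binding to each element, which is lost when AJAX replaces that element, you bind once to a stable ancestor and check the target at event time.

```javascript
// Returns a handler that only fires when the event target matches a test,
// so elements created after binding are still handled.
function makeDelegate(matchesSelector, handler) {
  return function (event) {
    if (matchesSelector(event.target)) handler(event);
  };
}

const clicks = [];
const onClick = makeDelegate(
  (el) => el.className === "validate-btn", // stand-in for a CSS selector test
  (ev) => clicks.push(ev.target.id)
);

// An element "created" after binding still triggers the handler,
// because the match happens at event time, not at bind time:
onClick({ target: { id: "new-button", className: "validate-btn" } });
onClick({ target: { id: "other", className: "plain" } });
// clicks is now ["new-button"]
```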
Miguel Ping

Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
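For example, a response header like the following (a sketch; adjust the sources to your site) only allows scripts loaded from your own origin, and inline script blocks are refused unless you explicitly relax the policy:

```
Content-Security-Policy: script-src 'self'
```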

chiborg

I would take a look at the required code and divide it into as many separate files as needed. Each JS file would hold only one "logical set" of functions, e.g. one file for all login-related functions.

Then during site development, on each HTML page you include only those that are needed. When you go live with your site, you can optimize by combining every JS file a page needs into one file.
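The go-live step can be as simple as concatenation (a sketch; the file names are illustrative, and in practice you would also minify):

```shell
# Per-feature files during development:
printf 'function login() {}\n'    > login.js
printf 'function validate() {}\n' > validation.js

# Going live: combine into a single file the page includes once
cat login.js validation.js > site.bundle.js

wc -l < site.bundle.js
```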

Gene

The only defense I can offer for inline JavaScript is that when using strongly typed views with .NET MVC you can refer to C# variables mid-JavaScript, which I've found useful.
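For instance, in a Razor view the server can interpolate a C# value straight into an inline script before the page is sent, which a static external file can't do (the model property here is hypothetical):

```
<script>
    var currentUserId = @Model.UserId;  /* substituted server-side by Razor */
</script>
```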

Austin_G

Three considerations:

  • How much code do you need (sometimes libraries are a first-class consumer)?
  • Specificity: is this code only functional in the context of this specific document or element?
  • Any code inside the document makes it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting ...
reformed
roenving

On the point of keeping JavaScript external:

ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when web-server compression is turned on, downloading one slightly larger file will have a better compression ratio than many smaller files (also less HTTP overhead, fewer round trips, etc.). I guess this saves on the initial page load; then browser caching kicks in, as mentioned above.

ASP.NET aside, this screencast explains the benefits in more detail: http://www.asp.net/learn/3.5-SP1/video-296.aspx

Brendan Kowitz

External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript mixed into PHP code and HTML; it looks like a big mess to me.

Clutch

Another hidden benefit of external scripts is that you can easily run them through a syntax checker like JSLint. That can save you from a lot of heartbreaking, hard-to-find IE6 bugs.

Ken

In your scenario, it sounds like keeping the shared code in one external file used by all the pages would be good for you. I agree with everything said above.

mattlant

During early prototyping, keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.

I'd even dare to say that if you can't place all your JavaScript externally, then you have a bad design on your hands, and you should refactor your data and scripts.

Robert Gould

Google has included load times in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, which may influence your page ranking if you have too much included. In any case, different strategies may affect your ranking.

Kees Hessels

Well, I think you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.

Zak Sheikh

Internal JS pros:

  • It's easier to manage and debug.
  • You can see what's happening.

Internal JS cons:

  • People can change it around, which can really annoy you.

External JS pros:

  • No changing around.
  • You can look more professional (or at least that's what I think).

External JS cons:

  • Harder to manage.
  • It's hard to know what's going on.


Always try to use external JS, as inline JS is always difficult to maintain.

Moreover, it is considered professional practice to use external JS, since the majority of developers recommend it.

I myself use external JS.

Daniel Puiu