
I have a list of URLs that I need to check, to see if they still work or not. I would like to write a bash script that does that for me.

I only need the returned HTTP status code, i.e. 200, 404, 500 and so forth. Nothing more.

EDIT: Note that there is an issue if the page says "404 not found" but returns a 200 OK message. It's a misconfigured web server, but you may have to consider this case.

For more on this, see Check if a URL goes to a page containing the text "404"

asked by Manu; edited by codeforester
  • To be fair, my script's "bug" is only when the server returns HTTP code 200 but the body text says "404 not found", which is a misbehaving webserver. – PhilR Aug 04 '12 at 17:53
  • The exit status of wget will be 0 if the response code was 200, 8 if 404, 4 if 302... You can use the $? variable to access the exit status of the previous command. – Casey Watson Dec 18 '13 at 22:14
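Building on the comment above about wget's exit status: a minimal sketch of acting on `$?`, assuming GNU wget's documented exit codes (0 = success, 4 = network failure, 8 = server error response); the function name is made up for illustration, and the mapping of specific HTTP codes to 8 vs 4 in the comment is the commenter's claim.

```shell
#!/bin/bash
# Sketch: map wget's documented exit codes to labels.
# 0 = no problems, 4 = network failure, 8 = server issued an error
# response (e.g. 404 or 500); other codes cover local/protocol errors.
classify_wget_status() {
  case "$1" in
    0) echo "ok" ;;
    4) echo "network-failure" ;;
    8) echo "server-error-response" ;;
    *) echo "other-failure" ;;
  esac
}

# Real usage would be something like:
#   wget --spider -q "http://example.com/"; classify_wget_status "$?"
classify_wget_status 8   # prints: server-error-response
```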

9 Answers


Curl has a specific option, --write-out, for this:

$ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
200
  • -o /dev/null throws away the usual output
  • --silent throws away the progress meter
  • --head makes a HEAD HTTP request, instead of GET
  • --write-out '%{http_code}\n' prints the required status code

To wrap this up in a complete Bash script:

#!/bin/bash
while read -r url; do
  curl -o /dev/null --silent --head --write-out "%{http_code} $url\n" "$url"
done < url-list.txt

(Eagle-eyed readers will notice that this runs one curl process per URL, which incurs fork and TCP connection overhead. It would be faster to pass multiple URLs to a single curl invocation, but there isn't space to write out the monstrous repetition of options that curl requires to do that.)

– PhilR
  • Very nice. Can I execute that command on every url in my file? – Manu May 26 '11 at 10:40
  • @Manu: Yes, I've edited my answer to show one possible way of wrapping up the curl command. It assumes url-list.txt contains one URL per line. – PhilR May 26 '11 at 10:49
  • If you're wanting to POST, you can't use --head. In that case, the rest still applies but it would look like: curl -o /dev/null -s -w '%{http_code}\n' --data "key=value" – dma Dec 22 '13 at 19:58
  • I don't know why the script from the above answer always gives me 000 in the output, but when I run the command just once, without the loop, it works... – Karol F Aug 09 '16 at 07:08
  • @KarolFiturski I had the same problem (which you've probably since fixed, but just in case anyone else stumbles across this...): in my case I had carriage returns at the line ends of my input file, causing the URLs to be like `http://example.com/\r` when going through the loop – Jordan Robinson Feb 21 '17 at 11:16
  • I had this issue and I was able to fix it by switching the line endings from the Windows type to the Linux type. – Tristan Jul 13 '17 at 01:40
  • Added an example for Fish shell. @see https://github.com/fish-shell/fish-shell/issues/1147#issuecomment-364630947 – Elijah Lynn Feb 10 '18 at 06:50
  • During my testing, an empty trailing line was necessary; otherwise, the last line would not be tested. – Caleb Apr 25 '23 at 19:14
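Several of the comments above trace 000 results back to Windows (CRLF) line endings. A hedged sketch of normalizing the list before looping; the file contents are placeholders, and the real curl call is left commented out so the example is self-contained:

```shell
#!/bin/bash
# Strip carriage returns so URLs don't end up as "http://example.com/\r".
list=$(mktemp)
printf 'http://example.com/\r\nhttp://example.org/\r\n' > "$list"

tr -d '\r' < "$list" | while IFS= read -r url; do
  # The real check from the answer would go here:
  # curl -o /dev/null --silent --head --write-out "%{http_code} $url\n" "$url"
  echo "would check: $url"
done
```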
wget --spider -S "http://url/to/be/checked" 2>&1 | grep "HTTP/" | awk '{print $2}'

This prints only the status code for you.

– user551168
  • +1 Shows multiple codes when a URL is redirected, each on a new line. – Ashfame Apr 24 '12 at 21:55
  • Had to get rid of the --spider for it to work with the request that I was trying to make, but it works. – amitavk Aug 06 '15 at 15:10
  • You can use `--max-redirect=0` if you do not want multiple codes: `wget --max-redirect=0 --spider -S "https://miles4migrants.org/ukraine2canada/s" 2>&1 | grep "HTTP/" | awk '{print $2}'` – Eugen Konkov Sep 28 '22 at 14:30

Extending the answer already provided by PhilR: adding parallelism to it is a no-brainer in bash if you use xargs for the call.

Here is the code:

xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' < url.lst

-n1: use just one value (from the list) as argument to the curl call

-P10: Keep 10 curl processes alive at any time (i.e. 10 parallel connections)

Check the --write-out option in curl's manual for more data you can extract with it (times, etc.).

In case it helps someone this is the call I'm currently using:

xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee results.csv

It just outputs a bunch of data into a CSV file that can be imported into any office tool.

– estani
  • Parallelism, file input and CSV. Exactly what I was looking for. – Agey Mar 02 '16 at 13:45
  • Brilliant, made my day. – xlttj May 03 '17 at 06:39
  • This is awesome, just what I was looking for, thank you sir. One question, how could one include the page title of the page in the csv results? – MitchellK Jul 24 '17 at 12:47
  • @estani - https://stackoverflow.com/users/1182464/estani how could one include getting the page title of a page into the .csv file. Sorry for repost, forgot to tag you so you would get notified about this question. Many thanks. – MitchellK Jul 24 '17 at 13:32
  • @MitchellK this is not handling the contents of the http call at all. If the "page title" (whatever that is) is in the url, then you could add it. If not, you need to parse the whole page to extract the "title" of it (assuming you mean a html page retrieved by the http). Look for other answers at stack overflow or ask that specific question. – estani Jul 26 '17 at 13:02
  • the output is not the same as the phil script – acgbox Aug 31 '19 at 00:06
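As a small follow-on (not part of the original answer): once results.csv exists, awk can filter it for failures, since the second semicolon-separated field is %{http_code}. The sample rows below are made up for illustration:

```shell
#!/bin/bash
# Filter the semicolon-separated results for non-200 rows.
results=$(mktemp)
cat > "$results" <<'EOF'
https://example.com/;200;0.1;0.01;0.02;1234;12345
https://example.com/missing;404;0.1;0.01;0.02;0;0
https://example.com/err;500;0.2;0.01;0.02;0;0
EOF

awk -F';' '$2 != 200' "$results"   # prints the 404 and 500 rows
```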

This relies on widely available wget, present almost everywhere, even on Alpine Linux.

wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'

The explanations are as follows:

--quiet

Turn off Wget's output.

Source - wget man pages

--spider

[ ... ] it will not download the pages, just check that they are there. [ ... ]

Source - wget man pages

--server-response

Print the headers sent by HTTP servers and responses sent by FTP servers.

Source - wget man pages

What they don't say about --server-response is that those headers are printed to standard error (stderr), hence the `2>&1` to redirect them to standard output.

With the output on standard output, we can pipe it to awk to extract the HTTP status code. That code is:

  • the second ($2) non-blank group of characters: {$2}
  • on the very first line of the header: NR==1

And because we want to print it... {print $2}.
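The extraction can be exercised without any network access by feeding awk a canned status line (the header below is invented):

```shell
#!/bin/bash
# Same awk program as in the answer, run on a canned response header.
printf 'HTTP/1.1 301 Moved Permanently\nLocation: https://example.com/\n' |
  awk 'NR==1{print $2}'   # prints: 301
```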

– Salathiel Genese

Use curl to fetch the HTTP header only (not the whole file) and parse it:

$ curl -I  --stderr /dev/null http://www.google.co.uk/index.html | head -1 | cut -d' ' -f2
200
– dogbane
  • curl tells me 200 when wget says 404 ... :( – Manu Jun 22 '11 at 15:19
  • The `-I` flag causes curl to make a HTTP HEAD request, which is treated separately from a normal HTTP GET by some servers and can thus return different values. The command should still work without it. – lambshaanxy Dec 28 '12 at 10:52

wget -S -i *file* will get you the headers from each url in a file.

Filter through grep for the status code specifically.

– colinross

I found a tool called "webchk", written in Python, that returns the status code for a list of URLs: https://pypi.org/project/webchk/

Output looks like this:

▶ webchk -i ./dxieu.txt | grep '200'
http://salesforce-case-status.dxi.eu/login ... 200 OK (0.108)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.389)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.401)

Hope that helps!

– Yura Loginov

Due to https://mywiki.wooledge.org/BashPitfalls#Non-atomic_writes_with_xargs_-P (output from parallel jobs in xargs risks being mixed), I would use GNU Parallel instead of xargs to parallelize:

cat url.lst |
  parallel -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile

In this particular case it may be safe to use xargs because the output is so short; the problem with using xargs is rather that if someone later changes the code to do something bigger, it will no longer be safe. Or if someone reads this question and thinks they can replace curl with something else, that may also not be safe.
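For completeness, one common way to make plain xargs -P safe regardless of output size is to give every job its own output file and concatenate afterwards; the item names and paths below are placeholders, with echo standing in for the real curl call:

```shell
#!/bin/bash
# Each parallel job writes to a private file, so nothing can interleave.
tmpdir=$(mktemp -d)
printf 'alpha\nbeta\ngamma\n' > "$tmpdir/items.txt"

# A real run would invoke curl instead of echo inside the sh -c.
xargs -P4 -I{} sh -c 'echo "would check: $1" > "$2/$1.out"' _ {} "$tmpdir" \
  < "$tmpdir/items.txt"

cat "$tmpdir"/*.out   # one complete line per item, never mixed
```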

Example url.lst:

https://fsfe.org
https://www.fsf.org/bulletin/2010/fall/gnu-parallel-a-design-for-life
https://www.fsf.org/blogs/community/who-actually-reads-the-code
https://publiccode.eu/
– Ole Tange
  • What is the format of 'url.lst'? How are URLs separated? – Andrew May 30 '23 at 13:05
  • Thanks, but for some reason it returns the response status code only for the last url in the list; all urls above it get status code 000... Did you try it yourself? Is this working code for you? – Andrew May 30 '23 at 14:58
  • @Andrew Code is working for me. I get: `: 200` for each of them. – Ole Tange May 30 '23 at 16:13
  • @Andrew I just tried putting in \r\n as newline, and then I get your 000. So you probably have an additional \r in your file. Try: -d '\r\n' – Ole Tange May 30 '23 at 16:15
  • You are right, the infamous newlines between Windows and Linux :) I have fixed it and now it works for me too. However, I have noticed that the order of results in the output file is not the same as the order of urls in the input file... interesting. – Andrew May 30 '23 at 17:26
  • @Andrew If you want that: --keep-order – Ole Tange May 30 '23 at 19:22
  • Thanks, works great. I also managed to achieve a similar result using curl's native parallel options: curl --parallel --parallel-immediate --config config.txt --retry 3 --retry-delay 5 -s -w "%{url}: %{response_code}\n" > outfile. Wonder which one is better / uses more resources – Andrew May 30 '23 at 21:23
  • @Andrew GNU Parallel rarely outperforms tools that have their own parallelization: the tools know how they can cut corners, whereas GNU Parallel has to run the full program every time. – Ole Tange May 31 '23 at 08:21

Keeping in mind that curl is not always available (particularly in containers), there are issues with this solution:

wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'

which will return an exit status of 0 even if the URL doesn't exist.

Alternatively, here is a reasonable container health-check for using wget:

wget -S --spider -q -t 1 "${url}" 2>&1 | grep "200 OK" > /dev/null

While it may not give you the exact status code, it will at least give you a valid, exit-code-based health response (even with redirects on the endpoint).
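For instance, in a container this line might back a Dockerfile HEALTHCHECK; the URL, port and timings below are placeholders, not from the original answer:

```dockerfile
# Sketch only: adjust the URL and timings for your service.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -S --spider -q -t 1 "http://localhost:8080/health" 2>&1 \
      | grep "200 OK" > /dev/null || exit 1
```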

– Rowy