234

I am making a website with articles, and I need the articles to have "friendly" URLs, based on the title.

For example, if the title of my article is "Article Test", I would like the URL to be http://www.example.com/articles/article_test.

However, article titles (like any string) can contain special characters that can't appear literally in a URL. For instance, I know that ? or # need to be replaced, but I don't know all the others.

What characters are permissible in URLs? What is safe to keep?

Peter Mortensen
Paulo
  • There was a similar question, [here](http://stackoverflow.com/questions/522466/what-makes-a-friendly-url). Check it out, you may find some useful answers there also (there were quite a lot of them). – Rook Mar 29 '09 at 22:07
  • I reworded the question to be more clear. The question and answers are useful and of good quality. (48 people, including me, have favorited it) In my opinion, it should be reopened. – Jonathan Allard Nov 17 '20 at 21:53

13 Answers

290

To quote section 2.3 of RFC 3986:

Characters that are allowed in a URI, but do not have a reserved purpose, are called unreserved. These include uppercase and lowercase letters, decimal digits, hyphen, period, underscore, and tilde.

  unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

Note that RFC 3986 lists fewer reserved punctuation marks than the older RFC 2396.
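In Python terms (a sketch; the helper name is mine), the unreserved set is easy to check with a character class:

```python
import re

# RFC 3986 "unreserved" set: ALPHA / DIGIT / "-" / "." / "_" / "~"
UNRESERVED = re.compile(r"[A-Za-z0-9._~-]")

def is_unreserved(ch):
    """True if ch may appear anywhere in a URI without percent-encoding."""
    return bool(UNRESERVED.fullmatch(ch))

print(is_unreserved("~"))  # True
print(is_unreserved("?"))  # False
```

Anything outside this set should be percent-encoded before being put into a URL.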

Peter Mortensen
Skip Head
  • @Skip Head, does "characters" include Latin encoded characters like `ç` and `õ`? – Mohamad Jun 10 '11 at 19:34
  • @Mohamad: No, ASCII only, although UTF-8 support is getting better. – Dietrich Epp Jun 19 '11 at 12:58
  • @Dietrich Epp, thank you. I guess it shouldn't matter if the URL is for decoration and SEO purposes, like: www.mysite.com/[postId]/post-title-with-ç-and-õ – Mohamad Jun 19 '11 at 15:22
  • @Mohamad: The last part there will get changed under the hood to `post-title-with-%C3%A7-and-%C3%B5`, but it will still display in the user's location bar as `post-title-with-ç-and-õ`. – Dietrich Epp Jun 19 '11 at 16:35
  • @Dietrich Epp, that's really interesting. I had no idea. Does that impact SEO? Would you recommend I replace such characters with their nearest English equivalents? My readers are all Portuguese who use such characters! – Mohamad Jun 19 '11 at 18:53
  • Your readers are Portuguese, so use Portuguese characters. – Dietrich Epp Jun 19 '11 at 19:49
  • The referenced document is quite old, as is this post. Is this still valid, or is there an updated document? – prasingh May 31 '19 at 07:05
  • What about commas? `,` – Protector one Jan 18 '21 at 11:52
143

There are two sets of characters you need to watch out for: reserved and unsafe.

The reserved characters are:

  • ampersand ("&")
  • dollar ("$")
  • plus sign ("+")
  • comma (",")
  • forward slash ("/")
  • colon (":")
  • semi-colon (";")
  • equals ("=")
  • question mark ("?")
  • 'At' symbol ("@")
  • pound ("#").

The characters generally considered unsafe are:

  • space (" ")
  • less than and greater than ("<>")
  • open and close brackets ("[]")
  • open and close braces ("{}")
  • pipe ("|")
  • backslash ("\")
  • caret ("^")
  • percent ("%")

I may have forgotten one or more, which leads to me echoing Carl V's answer. In the long run you are probably better off using a "white list" of allowed characters and then encoding the string rather than trying to stay abreast of characters that are disallowed by servers and systems.
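As a sketch of that whitelist-and-encode approach in Python (the function name is mine; `urllib.parse.quote` percent-encodes everything outside the `safe` set you pass):

```python
from urllib.parse import quote

def encode_segment(text):
    # safe="" means even "/" gets percent-encoded; only RFC 3986
    # unreserved characters (letters, digits, "-", ".", "_", "~") pass through.
    return quote(text, safe="")

print(encode_segment("article test?"))  # article%20test%3F
```

With this approach you never need to track the full list of reserved and unsafe characters yourself; anything outside the whitelist is encoded.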

Oded Breiner
Gary.Ray
  • `#` is a reserved character used for bookmarks on a specific page, created by having one HTML element with a matching name-attribute or id-attribute (sans `#`-symbol). – TheLonelyGhost Aug 12 '14 at 14:00
  • Question mark shows up here as both reserved and unsafe - I think of it as only reserved, but I may be incorrect – Jonathan Basile May 26 '15 at 02:02
  • Others seem to disagree that the tilde `~` is unsafe. Are you sure it is? – drs Jun 15 '15 at 14:04
  • A whitelist is not so good if handling languages other than English. Unicode just has too many OK code points. Therefore, blacklisting the unsafe ones is likely to be the easiest to implement in regular expressions. – Patanjali Nov 26 '15 at 07:04
  • Tilde `~` seems to be safe: "Characters that are allowed in a URI but do not have a reserved purpose are called unreserved. These include uppercase and lowercase letters, decimal digits, hyphen, period, underscore, and tilde. unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"", from ietf.org/rfc/rfc3986.txt – jorgefpastor Jun 18 '16 at 18:23
  • I've made a working regex based off this answer here: https://regex101.com/r/9VBu66/1 with the following notes. 1. The first part blacklists non-ASCII characters, so you'd need to remove that if you want to support Unicode and 2. I don't blacklist `/` because I am allowing subdirectories. This is the regex I'm using: `/([^\x00-\x7F]|[&$\+,:;=\?@#\s<>\[\]\{\}|\\\^%])+/` – andyvanee Dec 02 '20 at 20:34
  • % percent is always unsafe, as it is used _precisely_ for encoding unsafe characters. – Guillermo Prandi Dec 11 '20 at 19:27
  • I think there are three sets. The third being non-ASCII characters. – Peter Mortensen Jan 13 '21 at 22:19
61

Always Safe

In theory and by the specification, these are safe basically anywhere, except the domain name. Percent-encode anything not listed, and you're good to go.

    A-Z a-z 0-9 - . _ ~ ( ) ' ! * : @ , ;

Sometimes Safe

Only safe when used within specific URL components; use with care.

    Paths:     + & =
    Queries:   ? /
    Fragments: ? / # + & =
    

Never Safe

According to the URI specification (RFC 3986), all other characters must be percent-encoded. This includes:

    <space> <control-characters> <extended-ascii> <unicode>
    % < > [ ] { } | \ ^
    

If maximum compatibility is a concern, limit the character set to A-Z a-z 0-9 - _ . (with periods only for filename extensions).

Keep Context in Mind

Even if valid per the specification, a URL can still be "unsafe", depending on context: for example, a file:/// URL containing invalid filename characters, or a query component containing "?", "=", and "&" when they are not used as delimiters. Correct handling of these cases is generally up to your scripts and can be worked around, but it's something to keep in mind.
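One way to respect those per-component rules is to encode each part separately before assembling the URL. A Python sketch (the host and values here are made up for illustration):

```python
from urllib.parse import quote, urlencode, urlunsplit

# Encode the path segment and the query values independently,
# each with the treatment appropriate to its component.
segment = quote("rock & roll", safe="")  # path segment: encode everything unsafe
query = urlencode({"q": "a=b&c"})        # key/value pairs are escaped individually
url = urlunsplit(("https", "www.example.com", "/articles/" + segment, query, ""))
print(url)
# https://www.example.com/articles/rock%20%26%20roll?q=a%3Db%26c
```

Because each component is encoded before assembly, "&" inside a value can never be confused with the "&" that delimits query parameters.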

Peter Mortensen
Beejor
  • Could you provide any sources for your second claim ("Sometimes Safe")? In particular, I believe you are wrong in saying that `=` is not safe for queries. For example, [FIQL](https://tools.ietf.org/html/draft-nottingham-atompub-fiql-00) accepts equal signs and describes itself as being "URI-friendly" and "optimised and intended for use in the query component". In my interpretation, RFC 3986 explicitly allows "=", "&", "+" and others in queries. – DanielM Nov 26 '19 at 10:40
  • @DanielM "?", "=", and "&" are valid in queries per spec, though in practice they're widely used for parsing name-value pairs within the query. So they can be unsafe as part of the names/values themselves. Whether or not this constitutes "unsafe" may be a matter of opinion. – Beejor Jan 05 '20 at 20:03
  • Some sources, as requested. (1) RFC 3986, Sec 3.4: "[...] query components are often used to carry identifying information in the form of 'key=value' pairs [...]" (2) WhatWG URL Spec, Sec. 6.2: "Constructing and stringifying a URLSearchParams object is fairly straightforward: [...] `params.toString() // "key=730d67"`" (3) PHP Manual, http-build-query: "Generate URL-encoded query string. [...] The above example will output: `0=foo&1=bar[...]`" (4) J. Starr, Perishable Press: "When building web pages, it is often necessary to add links that require parameterized query strings." – Beejor Jan 05 '20 at 20:05
  • @Beejor : I am constructing a URL & I use '-' and ';' during construction. It is not a web app but a mobile app. Not a web developer & hence, would I be safe if I use the above two chars in Path property? https://learn.microsoft.com/en-us/dotnet/api/system.uribuilder.path?view=netframework-4.8 – Filip Feb 15 '20 at 00:27
  • @karsnen Those are valid URL characters. Though if used to reference paths on a local filesystem, keep in mind that some systems disallow certain characters in filenames. For example, "file:///path/to/my:file.ext" would be invalid on Mac. – Beejor Feb 17 '20 at 07:15
  • An apostrophe (') is not safe; it gets converted to %27: "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~()!*:@,;" – Dean Jenkins Dec 08 '22 at 13:31
47

It is best to keep only some characters (a whitelist) instead of removing certain characters (a blacklist).

You can technically allow any character, just as long as you properly encode it. But, to answer in the spirit of the question, you should only allow these characters:

  1. Lower case letters (convert upper case to lower)
  2. Numbers, 0 through 9
  3. A dash - or underscore _
  4. Tilde ~

Everything else has a potentially special meaning. For example, you may think you can use +, but it can be replaced with a space. & is dangerous, too, especially if using some rewrite rules.

As with the other comments, check out the standards and specifications for complete details.
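A minimal Python sketch of that whitelist (the function name is illustrative, not from any standard):

```python
import re

def whitelist_slug(title):
    # Keep only the characters allowed above: a-z, 0-9, "-", "_", "~".
    slug = title.lower()                        # 1. convert upper case to lower
    slug = re.sub(r"[^a-z0-9_~-]+", "-", slug)  # everything else becomes one "-"
    return slug.strip("-")

print(whitelist_slug("Article Test"))  # article-test
```

Note that runs of disallowed characters collapse into a single hyphen, so "C++ & You!" becomes "c-you" rather than "c----you-".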

mklement0
carl
  • A period, I discovered today, is a bad choice of character to use for a URL-safe Base64 encoder, because there will be those rare cases where your encoded data may produce two consecutive dots (".."), which is significant in that it refers to the parent directory. – pohl May 03 '11 at 21:54
  • @pohl: that's only a problem if your URL is used as a file path, either in your code or if your web server actually tries to map the URL to files before forwarding the request to a script (unfortunately very common). – André Caron May 06 '11 at 22:01
  • Actually, in our case using it as a file path would be ok, since in unix files are allowed to have multiple, and even consecutive, dots in their names. For us, the problem arose in a monitoring tool called Site Scope which has a bug (perhaps a naive regex) and it was reporting spurious false downtimes. For us, we are stuck on an old version of Site Scope, the admin team refuses to pay for an upgrade, and one very important client has Site Scope (not an equivalent) written into their contract. Admittedly, most won't find themselves in my shoes. – pohl May 07 '11 at 01:48
  • Thank god that someone posted a list without much blabbering. As for dot (.) - as @pohl said, do not use it! Here is another weird case on IIS (don't know if this happens on other Web Servers): if it is at the end of your URL you'll most likely get a 404 error (it'll try to search for [/pagename]. page) – nikib3ro Jun 01 '12 at 19:27
  • Can you rephrase *"You are best keeping"*? – Peter Mortensen Jan 26 '21 at 11:44
  • @PeterMortensen. *You are best keeping X* is fairly normal English syntax. Rephrase it as *It is best for you to keep X* if you're more comfortable with that. – TRiG Jul 27 '22 at 10:35
20

Looking at RFC3986 - Uniform Resource Identifier (URI): Generic Syntax, your question revolves around the path component of a URI.

    foo://example.com:8042/over/there?name=ferret#nose
     \_/   \______________/\_________/ \_________/ \__/
      |           |            |            |        |
   scheme     authority       path        query   fragment
      |   _____________________|__
     / \ /                        \
     urn:example:animal:ferret:nose

Citing section 3.3, valid characters for a URI segment are of type pchar:

pchar = unreserved / pct-encoded / sub-delims / ":" / "@"

Which breaks down to:

ALPHA / DIGIT / "-" / "." / "_" / "~"

pct-encoded

"!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="

":" / "@"

Or in other words: a segment may contain any of the characters listed above, either directly or percent-encoded; characters outside that set, including /, ?, #, [ and ], must be percent-encoded.

This understanding is backed by RFC1738 - Uniform Resource Locators (URL).
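The pchar grammar above translates directly into a validation regex; here is a Python sketch (the names are mine):

```python
import re

# pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
PCHAR = r"(?:[A-Za-z0-9._~!$&'()*+,;=:@-]|%[0-9A-Fa-f]{2})"
SEGMENT_RE = re.compile(PCHAR + r"*\Z")

def is_valid_segment(segment):
    """True if the string is a valid URI path segment per RFC 3986."""
    return bool(SEGMENT_RE.match(segment))

print(is_valid_segment("article_test"))  # True
print(is_valid_segment("a/b"))           # False ("/" separates segments)
```

Percent-escapes are accepted only as a full "%" plus two hex digits, so a stray "%" on its own is rejected, exactly as the grammar requires.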

Community
Philzen
  • This is a great example of a theoretically correct answer, that leads to trouble when applied to the real world we actually live in. It is true that most of those characters will not cause a problem most of the time. But there exist in the real world things like proxies, routers, gateways, relays, etc., all of which "love" to inspect and interact with URLs in ways that disregard the theoretical standard. To avoid these pitfalls, you're pretty much limited to escaping everything except alphanumerics, dash, underscore, and period. – deltamind106 Dec 14 '15 at 18:29
  • @deltamind106 Can you provide examples and/or references to clarify which of those characters being safe according to the RFCs are in fact not? I'd prefer to stick to the facts backed by standards in my answer, and I'm happy to update my answer if you can pinpoint any facts I may have neglected. – Philzen Dec 14 '15 at 18:41
  • @deltamind106 I'd suggest we try to get products to follow the standards rather than tell devs not to. I consider your warning merited, but we should do our part in reporting non-compliance to vendors if necessary. – Lo-Tan May 11 '16 at 18:19
  • @Philzen : I am constructing a URL & I use '-' and ';' during construction. It is not a web app but a mobile app. Not a web developer & hence, would I be safe if I use the above two chars in Path property? https://learn.microsoft.com/en-us/dotnet/api/system.uribuilder.path?view=netframework-4.8 – Filip Feb 15 '20 at 00:27
  • @karsnen Yes, of course `-` and `;` are safe; that's what my answer and the RFC clearly state. – Philzen Feb 22 '20 at 19:09
12

From the context you describe, I suspect that what you're actually trying to make is something called an 'SEO slug'. The best general known practice for those is:

  1. Convert to lower-case
  2. Convert entire sequences of characters other than a-z and 0-9 to one hyphen (-) (not underscores)
  3. Remove 'stop words' from the URL, i.e. not-meaningfully-indexable words like 'a', 'an', and 'the'; Google 'stop words' for extensive lists

So, as an example, an article titled "The Usage of !@%$* to Represent Swearing In Comics" would get a slug of "usage-represent-swearing-comics".
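Those three steps can be sketched in Python like this (the stop-word set here is a tiny illustrative subset; real lists are much longer):

```python
import re

STOP_WORDS = {"a", "an", "the", "of", "to", "in"}  # tiny illustrative subset

def seo_slug(title):
    # 1. lower-case; 2. collapse runs of non-alphanumerics to one hyphen;
    # 3. drop stop words.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return "-".join(w for w in slug.split("-") if w not in STOP_WORDS)

print(seo_slug("The Usage of !@%$* to Represent Swearing In Comics"))
# usage-represent-swearing-comics
```

This reproduces the example above: the title collapses to "usage-represent-swearing-comics".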

chaos
  • Is it really a good approach to remove these "stop words" from the URL? Would search engines penalize a website because of this? – Paulo Mar 30 '09 at 02:40
  • Search engines are generally believed to only acknowledge some portion of the URL and/or to give reduced significance to later portions, so by removing stop words you're maximizing the number of keywords you embed in your URL that have a chance of actually ranking. – chaos Mar 30 '09 at 03:50
  • @chaos Do you still recommend stripping stop words, if you take into account this: http://www.seobythesea.com/2008/08/google-stopword-patent/ Also, can you recommend a good list of stop words? This is the best list I've found so far: http://www.link-assistant.com/seo-stop-words.html – nikib3ro Jun 01 '12 at 19:53
  • @kape123 That doesn't look like a very good list to me. "c" and "d" are programming languages, and a lot of those other words also look significant. I'd probably just strip the basic ones: a, and, is, on, of, or, the, with. – mpen Feb 02 '16 at 16:37
11

unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

LKK
  • Doesn't "ALPHA" imply "DIGIT"? I assume ALPHA is short for "alphanumeric", and alphanumeric means uppercase, lowercase and digits. – Luc Jun 04 '13 at 13:30
  • Actually, alpha doesn't imply alphanumeric. Alpha and numeric are two distinct things, and alphanumeric is the combination of those things. He could have written his answer like so: ALPHANUMERIC / "-" / "." / "_" / "~" – MacroMan Sep 03 '13 at 10:32
  • The ABNF notation for 'unreserved' in RFC 3986 lists them separately. – Patanjali Nov 26 '15 at 07:05
7

From an SEO perspective, hyphens are preferred over underscores. Convert to lowercase, remove all apostrophes, then replace all non-alphanumeric strings of characters with a single hyphen. Trim excess hyphens off the start and finish.
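That recipe (apostrophes removed before hyphenation, so contractions stay intact) could look like this in Python; a sketch under the assumptions above, with an invented function name:

```python
import re

def hyphen_slug(title):
    s = title.lower().replace("'", "")  # "Don't" -> "dont", not "don-t"
    s = re.sub(r"[^a-z0-9]+", "-", s)   # non-alphanumeric runs -> one hyphen
    return s.strip("-")                 # trim excess hyphens at both ends

print(hyphen_slug("Don't Stop Me Now!"))  # dont-stop-me-now
```

Removing apostrophes first is the detail that distinguishes this from a plain whitelist: otherwise every contraction would gain a spurious hyphen.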

mpen
  • Why are hyphens preferred over underscores? What is the explanation? – Peter Mortensen Jan 26 '21 at 11:44
  • @PeterMortensen https://studiohawk.com.au/blog/dash-or-underscore-in-url-heres-how-its-affecting-your-seo/ or maybe better: https://www.ecreativeim.com/blog/index.php/2011/03/30/seo-basics-hyphen-or-underscore-for-seo-urls/ "Google treats a hyphen as a word separator, but does not treat an underscore that way. Google treats an underscore as a word joiner, so red_sneakers is the same as redsneakers to Google" – mpen Jan 26 '21 at 20:24
  • As per the latest Google guidelines, Google treats a hyphen as a word separator, but does not treat an underscore that way. – Milan Soni Nov 17 '22 at 05:26
6

The format for a URI is defined in RFC 3986. See section 3.3 for details.

joschi
  • 12,746
  • 4
  • 44
  • 50
3

I had a similar problem. I wanted to have pretty URLs and reached the conclusion that I have to allow only letters, digits, - and _ in URLs.

That is fine, but then I wrote some nice regex and realized that it failed to recognize UTF-8 letters as letters in .NET, and I was stuck. This appears to be a known problem for the .NET regex engine. So I got to this solution:

private static string GetTitleForUrlDisplay(string title)
{
    if (string.IsNullOrEmpty(title))
    {
        return string.Empty;
    }

    // Send every character outside the whitelist through CharacterTester,
    // convert spaces to hyphens, trim leading/trailing hyphens, and
    // collapse runs of hyphens into one.
    string slug = Regex.Replace(title, @"[^A-Za-z0-9_-]", new MatchEvaluator(CharacterTester));
    slug = slug.Replace(' ', '-').Trim('-');
    return Regex.Replace(slug, "[-]+", "-").ToLower();
}


/// <summary>
/// All characters that do not match the pattern get to this method. This is
/// useful for Unicode characters, which the pattern above does not match, so we
/// use char.IsLetterOrDigit(), which handles them correctly. We keep (and
/// lower-case) what we approve and return "-" for everything else.
/// </summary>
/// <param name="m"></param>
/// <returns></returns>
private static string CharacterTester(Match m)
{
    string x = m.ToString();
    if (x.Length > 0 && char.IsLetterOrDigit(x[0]))
    {
        return x.ToLower();
    }
    else
    {
        return "-";
    }
}
Peter Mortensen
  • .NET regexes support Unicode quite well actually. You have to use Unicode character classes, e.g. \p{L} for all letters. See http://msdn.microsoft.com/en-us/library/20bw873z.aspx#CategoryOrBlock – TheCycoONE Jun 26 '13 at 18:49
1

I found it very useful to encode my URL to a safe one when I was returning a value through Ajax/PHP to a URL which was then read by the page again.

PHP output with URL encoder for the special character &:

// PHP returning the success information of an Ajax request
echo str_replace('&', '%26', $_POST['name']) . " category was changed";

// JavaScript sending the value to the URL
window.location.href = 'time.php?return=updated&val=' + msg;

// JavaScript/PHP executing the function printing the value of the URL,
// now with the text normally lost in space because of the reserved & character.

setTimeout("infoApp('updated','<?php echo $_GET['val'];?>');", 360);
Peter Mortensen
DIY-Forum
0

I think you're looking for something like "URL encoding": encoding a URL so that it's "safe" to use on the web.

Here's a reference for that. If you don't want any special characters, just remove any that require URL encoding:

HTML URL Encoding Reference

Peter Mortensen
Andy White
-4

Between 3 and 50 characters. It can contain lowercase letters, numbers, and the special characters dot (.), dash (-), underscore (_), and at sign (@).

Ramji