
I have a list of domains e.g.

  • site.co.uk

  • site.com

  • site.me.uk

  • site.jpn.com

  • site.org.uk

  • site.it

The domain names can also contain 3rd- and 4th-level labels, e.g.

  • test.example.site.org.uk

  • test2.site.com

I need to try to extract the 2nd-level domain, which in all these cases is site


Any ideas? :)

RadiantHex

6 Answers


There is no way to reliably get that. Subdomains are arbitrary, and there is a monster list of domain suffixes that grows every day. Your best bet is to check against that monster list of suffixes and keep your copy of it up to date.

list: http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1
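To illustrate the check-against-the-list approach, here is a minimal sketch using a tiny hardcoded subset in place of the full downloaded file (the `SUFFIXES` set and `second_level` helper are illustrative, not part of the answer):

```python
# Toy subset standing in for the full effective_tld_names.dat list
SUFFIXES = {"com", "co.uk", "me.uk", "org.uk", "jpn.com", "it"}

def second_level(domain):
    """Return the label just left of the longest known suffix."""
    labels = domain.split(".")
    # the earliest starting index gives the longest candidate suffix
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES:
            return labels[i - 1] if i > 0 else None
    return None

print(second_level("test.example.site.org.uk"))  # site
print(second_level("site.jpn.com"))              # site
```

The real difficulty is not this lookup but, as the answer says, maintaining the suffix list itself.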

CrayonViolent

Following @kohlehydrat's suggestion:

import urllib.request

class TldMatcher(object):
    # use class vars for lazy loading
    MASTERURL = "http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1"
    TLDS = None

    @classmethod
    def loadTlds(cls, url=None):
        url = url or cls.MASTERURL

        # grab master list and decode it to text
        lines = urllib.request.urlopen(url).read().decode('utf-8').splitlines()

        # strip comments and blank lines
        lines = [ln for ln in (ln.strip() for ln in lines)
                 if ln and not ln.startswith('//')]

        cls.TLDS = set(lines)

    def __init__(self):
        if TldMatcher.TLDS is None:
            TldMatcher.loadTlds()

    def getTld(self, url):
        best_match = None
        chunks = url.split('.')

        # try progressively longer suffixes, keeping the longest one
        # that appears in the list (either literally or as a '*' rule)
        for start in range(len(chunks) - 1, -1, -1):
            test = '.'.join(chunks[start:])
            startest = '.'.join(['*'] + chunks[start + 1:])

            if test in TldMatcher.TLDS or startest in TldMatcher.TLDS:
                best_match = test

        return best_match

    def get2ld(self, url):
        tld = self.getTld(url)
        if tld is None:
            return None
        urls = url.split('.')
        return urls[-1 - len(tld.split('.'))]


def test_TldMatcher():
    matcher = TldMatcher()

    test_urls = [
        'site.co.uk',
        'site.com',
        'site.me.uk',
        'site.jpn.com',
        'site.org.uk',
        'site.it'
    ]

    errors = 0
    for u in test_urls:
        res = matcher.get2ld(u)
        if res != 'site':
            print("Error: found '{0}', should be 'site'".format(res))
            errors += 1

    if errors == 0:
        print("Passed!")
    return errors == 0
Hugh Bothwell

Using the Python tld package:

https://pypi.python.org/pypi/tld

$ pip install tld

from tld import get_tld, get_fld

print(get_tld("http://www.google.co.uk"))  # 'co.uk'
print(get_fld("http://www.google.co.uk"))  # 'google.co.uk'
Artur Barseghyan

The problem lies in mixing first- and second-level extractions.

A trivial solution:

Build a list of possible site suffixes, ordered from most specific to most general, e.g. "co.uk", "uk", "co.jp", "jp", "com".

Then check whether any suffix matches at the end of the domain. If one matches, the part just before it is the site.
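A minimal sketch of that ordered-suffix idea (the `SUFFIXES` list and `site_label` helper are illustrative placeholders, not from the answer):

```python
# Most specific suffixes first, so "co.uk" is tried before "uk"
SUFFIXES = ["co.uk", "org.uk", "uk", "co.jp", "jp", "com"]

def site_label(domain):
    for suf in SUFFIXES:
        if domain.endswith("." + suf):
            # the label immediately left of the matched suffix is the site
            rest = domain[: -len(suf) - 1]
            return rest.split(".")[-1]
    return None

print(site_label("test.example.site.org.uk"))  # site
```

The ordering matters: trying "uk" before "co.uk" would wrongly report "co" as the site for site.co.uk.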

mmv-ru

The only possible way is via a list of all the possible suffixes (here like .com or .co.uk). You would then scan through this list and check the domain against it. I don't see any other way, at least not without accessing the internet at runtime.
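A quick illustration of why a fixed split (for example, always taking the second-to-last label) cannot work without such a list:

```python
# Taking the second-to-last label works for .com but breaks for .co.uk
for domain in ["site.com", "site.co.uk"]:
    print(domain.split(".")[-2])  # prints: site, then co (wrong)
```

Nothing in the dot-separated structure itself distinguishes a registrable label from a multi-part suffix; only the list can.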

kohlehydrat
  • You need the list even when accessing the Internet at run time. The decision to sell third-level or second-level domains to end users is made by the authority for the ccTLD. I think some even have some reserved second-level domains, and sell third-level domains on those and second-level domains elsewhere. Of course, you also need to *maintain* the list, because these things do change (and that's before you account for new ccTLDs being created) – Quentin Feb 06 '11 at 23:32
  • Thank you! Any idea where I could grab a list? Feels like mission impossible :S – RadiantHex Feb 06 '11 at 23:33

@Hugh Bothwell

In your example you are not handling exception domains like parliament.uk; they are represented in the file with a "!" prefix (e.g. !parliament.uk).

I made some changes to your code, and also made it look more like the PHP function I used before.

I also added the possibility of loading the data from a local file.

I also tested it with some domains, such as:

  • niki.bg, niki.1.bg
  • parliament.uk
  • niki.at, niki.co.at
  • niki.us, niki.ny.us
  • niki.museum, niki.national.museum
  • www.niki.uk - due to the "*" rule in Mozilla's file, this is reported as OK.
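A minimal sketch of honoring those "!" exception rules, using a toy subset of the file (the `RULES` set and `match_suffix` helper are illustrative, not the code from the linked repo):

```python
# Toy subset: "*.uk" makes any second-level name a suffix,
# while "!parliament.uk" exempts parliament.uk from that wildcard.
RULES = {"uk", "*.uk", "!parliament.uk", "com"}

def match_suffix(domain, rules=RULES):
    labels = domain.split(".")
    # exception rules win: the suffix is everything right of this label
    for i in range(len(labels)):
        if "!" + ".".join(labels[i:]) in rules:
            return ".".join(labels[i + 1:])
    # otherwise take the longest normal or wildcard match
    best = ""
    for i in range(len(labels)):
        cand = ".".join(labels[i:])
        wild = ".".join(["*"] + labels[i + 1:])
        if (cand in rules or wild in rules) and len(cand) > len(best):
            best = cand
    return best or None

print(match_suffix("parliament.uk"))  # uk
print(match_suffix("www.niki.uk"))    # niki.uk
```

So parliament.uk yields the suffix "uk" (and hence "parliament" as the site) despite the "*.uk" wildcard, which is exactly the case the plain longest-match code misses.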

Feel free to contact me @ github so I can add you as co-author there.

GitHub repo is here:

https://github.com/nmmmnu/TLDExtractor/blob/master/TLDExtractor.py

Nick