0

To learn some more advanced Python, I tasked myself with creating a Python script that navigates to a website (imdb.com, in this case), enters a word (already stored in a variable), and then collects the first 5 titles that come up into an array, which is then printed to the console.

My question is: Is something like this even possible? Are there libraries/frameworks that make this possible?

If it's possible, where would I start? Web scraping isn't new to me, but web scraping in Python is. All I really need is guidance toward the correct path. 25(ish) minutes of Google searching turned up somewhat vague answers that only confused me more.

  • 2
    Yes, of course it's possible. Python has built-in libraries for handling the HTTP protocol (google urllib2 or httplib), but there's also a wonderful 3rd-party library that greatly simplifies HTTP calls – 'requests'. If you're a beginner, I strongly recommend you use it. – Konrad Wąsowicz Apr 16 '14 at 11:58
  • There's also http://scrapy.org/, a scraping framework written in Python :) It is the main tool used by a rather large job-offer scraping company I know. – Ambroise Apr 24 '14 at 14:28

4 Answers

2

You should definitely go the requests way. Making a request is as easy as:

import requests
r = requests.get('https://github.com/timeline.json')

(taken from requests' docs)

You simply have to take the site URL of your choice (http://www.imdb.com/find) and pass the params ({'q': 'search_term'}) to the get method. Then you can access r.text and parse the results with an HTML parser (check BeautifulSoup). Storing the first 5 results and displaying them in the console should be a breeze.
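
As a minimal sketch combining the two (the 'result_text' class name is an assumption about IMDB's markup at the time of writing, so inspect the page yourself before relying on it):

import requests
from bs4 import BeautifulSoup

search_term = 'liam'  # the word stored in a variable, per the question
r = requests.get('http://www.imdb.com/find', params={'q': search_term})

soup = BeautifulSoup(r.text, 'html.parser')
# Each search result sits in a <td class="result_text"> (an assumption; verify in your browser)
titles = [td.a.get_text() for td in soup.find_all('td', class_='result_text')[:5]]
print(titles)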

linkyndy
  • 17,038
  • 20
  • 114
  • 194
  • I was just kind of sold on Selenium and didn't bother looking too deep into requests (ignorant, I know!) when I probably should have. In terms of ease, I'd assume requests is easier, correct? I'm really just looking for something relatively easy to implement. – user1547154 Apr 16 '14 at 13:03
  • It's as easy as what I've written above. I don't think it can become even easier :) – linkyndy Apr 16 '14 at 15:31
  • 1
    `requests` is definitely the way to go: far simpler than wrangling the built-in libraries. As for Selenium, in some scraping cases, it's needed, but it's really the last resort, and not needed here. [This answer](http://stackoverflow.com/a/7744369/1678416) discussing IMDB might also be a useful reference. – Steven Maude Apr 20 '14 at 23:52
1

It is possible. You can use Selenium to navigate through websites: http://docs.seleniumhq.org/, and to find the correct elements you can use XPath. There are good browser add-ons for testing XPaths.
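
A rough sketch of what that looks like (the element id and the XPath here are assumptions about IMDB's markup, so verify them with your browser's inspector):

from selenium import webdriver

driver = webdriver.Firefox()  # opens a real browser window
driver.get('http://www.imdb.com')
box = driver.find_element_by_id('navbar-query')  # id of the search box (an assumption)
box.send_keys('liam')
box.submit()
# XPath for the result links (also an assumption about the markup)
titles = [e.text for e in driver.find_elements_by_xpath('//td[@class="result_text"]/a')[:5]]
print(titles)
driver.quit()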

w5e
  • 199
  • 1
  • 12
  • Would it be easier to use the built-in libraries, the 3rd-party "requests" library (as mentioned elsewhere in this question), or Selenium? I feel like I don't need anything too big, as this really is just a simple program that I only plan on creating to show myself I can do it. – user1547154 Apr 16 '14 at 12:14
  • At this point Selenium is nice, because it starts a real web browser and you can see what your script is doing. Once you no longer want to see it, you can use http://phantomjs.org/ as the webdriver. Selenium is easy to use and well documented; don't be afraid of using it. – w5e Apr 16 '14 at 12:19
  • If you have problems with libraries and importing the right things, you can use PyCharm; it will help you. – w5e Apr 16 '14 at 12:22
  • Sweet, thanks a ton! Reading through it, Selenium sounds like exactly what I needed. – user1547154 Apr 16 '14 at 12:30
  • 1
    Selenium is mainly used for testing, and using it for such a simple task is a bit of overhead. – linkyndy Apr 16 '14 at 12:37
0

You can use the third-party framework called Beautiful Soup, and it is easy to use.

Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:

  • Beautiful Soup provides a few simple methods and Pythonic idioms for navigating, searching, and modifying a parse tree: a toolkit for dissecting a document and extracting what you need. It doesn't take much code to write an application.
  • Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings, unless the document doesn't specify an encoding and Beautiful Soup can't detect one. Then you just have to specify the original encoding.
  • Beautiful Soup sits on top of popular Python parsers like lxml and html5lib, allowing you to try out different parsing strategies or trade speed for flexibility.
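
A tiny sketch of what using it looks like (the HTML snippet here is made up for illustration):

from bs4 import BeautifulSoup

html = '<td class="result_text"><a href="/title/tt0371746/">Iron Man</a></td>'
soup = BeautifulSoup(html, 'html.parser')
link = soup.find('a')
print(link.get_text(), link['href'])  # Iron Man /title/tt0371746/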

sundar nataraj
  • 8,524
  • 2
  • 34
  • 46
0

I strongly second the answer suggesting Python requests, a lightweight solution for what you are trying to accomplish.

You can try something like:

import requests
r = requests.get('http://www.imdb.com/find?ref_=nv_sr_fn&q=liam&s=all')
print(r.content)

Looks like for IMDB, you can alter the q= parameter in the URL to change the results. If I wanted X-Men instead of Liam, I could keep the same URL and just replace q=liam with q=xmen. For easier parsing, check out BeautifulSoup. If that's not your style, and you want to get some regex practice, try using Python regular expressions to pull out the data you want.
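
For instance, a rough regex sketch (the pattern leans on an assumed <td class="result_text"> in IMDB's markup, and regexes on HTML are fragile, so treat this as practice rather than a robust approach):

import re
import requests

r = requests.get('http://www.imdb.com/find?ref_=nv_sr_fn&q=xmen&s=all')
# Pull the text of each result link; the surrounding markup is an assumption
titles = re.findall(r'<td class="result_text">\s*<a[^>]*>([^<]+)</a>', r.text)
print(titles[:5])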