I am crawling a sitemap.xml, and my objective is to find all the URLs along with a running count of them.
Below is the structure of the XML:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://www.htcysnc.com/m/designer-sarees</loc>
<lastmod>2014-09-01</lastmod>
<changefreq>hourly</changefreq>
<priority>0.9</priority>
</url>
<url>
<loc>http://www.htcysnc.com/m/anarkali-suits</loc>
<lastmod>2014-09-01</lastmod>
<changefreq>hourly</changefreq>
<priority>0.9</priority>
</url>
</urlset>
Below is my code:
from BeautifulSoup import BeautifulSoup
import requests
import gzip
from StringIO import StringIO

def crawler():
    count = 0
    url = "http://www.htcysnc.com/sitemap/sitemap_product.xml.gz"
    old_xml = requests.get(url)
    # The sitemap is gzip-compressed, so decompress it before parsing
    new_xml = gzip.GzipFile(fileobj=StringIO(old_xml.content)).read()
    #new_xml = old_xml.text
    final_xml = BeautifulSoup(new_xml)
    item_to_be_found = final_xml.findAll('loc')
    for i in item_to_be_found:
        count = count + 1
        print i
        print count

crawler()
My output looks like this:
<loc>http://www.htcysnc.com/elegant-yellow-green-suit-seven-east-p63703</loc>
1
<loc>http://www.htcysnc.com/elegant-orange-pink-printed-suit-seven-east-p63705</loc>
2
I need the output as plain links, without the &lt;loc&gt; and &lt;/loc&gt; wrappers. I have tried the replace method, but it throws an error (each item returned by findAll is a Tag object, not a string, so string methods like replace fail on it).
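The extraction itself can be sketched without BeautifulSoup or a network request. The following is a minimal, self-contained Python 3 sketch that swaps in the standard library's xml.etree.ElementTree and inlines the sample sitemap from the question; the key point it illustrates is that reading the element's text (rather than printing the element/tag object itself) yields the bare URL. With BeautifulSoup, the equivalent would be printing `i.string` instead of `i`.

```python
# Stdlib-only sketch (Python 3, xml.etree.ElementTree in place of
# BeautifulSoup); the sample sitemap is inlined, so no download is needed.
import xml.etree.ElementTree as ET

SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://www.htcysnc.com/m/designer-sarees</loc>
</url>
<url>
<loc>http://www.htcysnc.com/m/anarkali-suits</loc>
</url>
</urlset>"""

def extract_locs(xml_text):
    # The sitemap namespace has to be passed explicitly for findall to match
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    # .text on each element yields the URL without the <loc>...</loc> tags
    return [loc.text for loc in root.findall(".//sm:loc", ns)]

urls = extract_locs(SITEMAP)
for count, url in enumerate(urls, start=1):
    print(url)
    print(count)
```

Running this prints each bare URL followed by its running count, matching the desired output.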