
I'm building a personal web scraper and I'm still in the dev phase, but I want to start saving data. The problem is that I cannot PUT or POST from the notebook level, which also means I cannot iterate through a big list of dictionaries/JSON objects.

I can, however, do it manually via Postman by just pasting the body and sending it.

Below is my code, which currently returns:

The pastebin URL is:{"message": "Could not parse request body into json: Unrecognized token 'ID': was expecting 'null', 'true', 'false' or NaN\n at [Source: (byte[])"ID=0&District=London"; line: 1, column: 4]"}

import requests 

# defining the api-endpoint 
API_ENDPOINT = ##############

# data to be sent to api 
body = {
  "ID": 0,
  "District": "London"
}

# sending post request and saving response as response object 
r = requests.post(url = API_ENDPOINT, data = body) 

# extracting response text 
pastebin_url = r.text 
print("The pastebin URL is:%s"%pastebin_url) 

Related question - can I use urllib instead of requests here?

madej

1 Answer


With data=body, requests sends the dictionary form-encoded (ID=0&District=London), which is exactly what the API fails to parse as JSON. You can try either of the following instead:

r = requests.post(url = API_ENDPOINT, json = body) 

or

import json
r = requests.post(url = API_ENDPOINT,  headers={"Content-Type":"application/json"}, data = json.dumps(body)) 
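
To extend this to the big list of dictionaries mentioned in the question, here is a minimal sketch, assuming the API takes one record per request; the endpoint URL and the records list are placeholders:

import requests

API_ENDPOINT = "https://example.com/records"  # placeholder, replace with your real endpoint

records = [
    {"ID": 0, "District": "London"},
    {"ID": 1, "District": "Manchester"},
]

for record in records:
    # json= serializes the dict to JSON and sets Content-Type: application/json
    r = requests.post(API_ENDPOINT, json=record)
    r.raise_for_status()  # stop early if the API rejects a record
    print(r.text)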
Marcin
  • Thank you so much! The first one worked! What would it look like with urllib instead of requests? – madej Jul 17 '20 at 21:49
  • @mad_edge No problem. But if you already have requests, why not use it? – Marcin Jul 17 '20 at 22:20
  • that's because requests is not available as an import in AWS Lambda; it needs to be included as a layer, unlike urllib – madej Jul 17 '20 at 22:39
  • @mad_edge Examples of doing this are [here](https://stackoverflow.com/questions/36484184/python-make-a-post-request-using-python-3-urllib). – Marcin Jul 17 '20 at 22:52
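
For the urllib question in the comments, a minimal sketch of the standard-library equivalent (urllib.request ships with Python, so no Lambda layer is needed); the endpoint URL is again a placeholder:

import json
import urllib.request

API_ENDPOINT = "https://example.com/records"  # placeholder, replace with your real endpoint

body = {"ID": 0, "District": "London"}

req = urllib.request.Request(
    API_ENDPOINT,
    data=json.dumps(body).encode("utf-8"),         # urllib expects bytes
    headers={"Content-Type": "application/json"},  # tell the API this is JSON
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))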