
I've built a program to fill up a databank and, for the time being, it's working. Basically, the program makes a request to the app I'm using (via REST API), returns the data I want, and then manipulates it into an acceptable form for the databank.

The problem is that the GET requests make the algorithm too slow, because I'm accessing the details of particular entries, so for each entry I have to make 1 request. I have close to 15000 requests to do, and each row in the bank takes about 1 second to create.

Is there any way to make these requests faster? How can I improve the performance of this method? And by the way, any tips on measuring the performance of the code?

Thanks in advance!!

Here's the code:

# Imports (adjust the model/module paths to your own project layout)
import json
from datetime import datetime, timedelta

import requests
from django.core import serializers

from myapp.models import modelExample, ModelExample, Model_Example2
from myapp import formaters

# Retrieving all the IDs I want to get the detailed info for
abc_ids = serializers.serialize('json', modelExample.objects.all(), fields=('id',))
abc_ids = json.loads(abc_ids)
abc_ids_size = len(abc_ids)

# Had to declare these variables up here because they are used at the end of the code
# in the functions that create and update the bank, and Python was complaining about
# referencing them before assignment. Picked arbitrary default values for them.
age = 0
time_to_won = 0
data = '2016-01-01 00:00:00'

# First loop -> request the detailed info of each ABC entry
for x in range(0, abc_ids_size):

    id = abc_ids[x]['fields']['id']
    response = requests.get(
        'https://api.example.com/v3/abc/' + str(id) + '?api_token=123123123')

    info = response.json()
    dealx = dict(info)

    # Second loop -> picking the info I want to update and create in the bank
    for key, result in dealx['data'].items():
        # Relevant only for ModelExample -> UPDATE
        if key == 'age':
            result = dict(result)
            age = result['total_seconds']
        # Relevant only for ModelExample -> UPDATE
        elif key == 'average_time_to_won':
            result = dict(result)
            time_to_won = result['total_seconds']

        # Relevant for Model_Example2 -> CREATE
        # Storing a date here to use further ahead in a datetime manipulation
        if key == 'add_time':
            data = str(result)

        elif key == 'time_stage':

            # Each stage has a total of seconds that the user stayed in it.
            y = result['times_in_stages']
            # The user can be in any stage he wants, there's no rule about the order,
            # but there's a record of the order he chose.
            z = result['order_of_stages']

            # Creating a list to fill with all the stages' info and use in the bulk_create.
            data_set = []
            index = 0

            # Setting the number of repetitions based on the number of stages in the list.
            for elemento in range(0, len(z)):
                data_set_i = {}
                # The index defines the order of the stages.
                index = index + 1

                for key_1, result_1 in y.items():
                    if int(key_1) == z[elemento]:
                        data_set_i['stage_id'] = int(z[elemento])
                        data_set_i['index'] = int(index)
                        data_set_i['abc_id'] = id

                        # Datetime manipulation
                        if result_1 == 0 and index == 1:
                            data_set_i['add_date'] = data

                        # I know that I totally repeated the code here, I was trying to get this part shorter
                        # but I could not get it right.
                        elif result_1 > 0 and index == 1:
                            data_t = datetime.strptime(data, "%Y-%m-%d %H:%M:%S")
                            data_sum = data_t + timedelta(seconds=result_1)
                            data_sum += timedelta(seconds=3)
                            data_nova = str(data_sum.year) + '-' + str(formaters.DateNine(
                                data_sum.month)) + '-' + str(formaters.DateNine(data_sum.day)) + ' ' + str(
                                data_sum.hour) + ':' + str(formaters.DateNine(data_sum.minute)) + ':' + str(
                                formaters.DateNine(data_sum.second))
                            data_set_i['add_date'] = str(data_nova)

                        else:
                            data_t = datetime.strptime(data_set[elemento - 1]['add_date'], "%Y-%m-%d %H:%M:%S")
                            data_sum = data_t + timedelta(seconds=result_1)
                            data_sum += timedelta(seconds=3)
                            data_nova = str(data_sum.year) + '-' + str(formaters.DateNine(
                                data_sum.month)) + '-' + str(formaters.DateNine(data_sum.day)) + ' ' + str(
                                data_sum.hour) + ':' + str(formaters.DateNine(data_sum.minute)) + ':' + str(
                                formaters.DateNine(data_sum.second))
                            data_set_i['add_date'] = str(data_nova)

                        data_set.append(data_set_i)

    Model_Example2_List = [Model_Example2(**vals) for vals in data_set]
    Model_Example2.objects.bulk_create(Model_Example2_List)

    ModelExample.objects.filter(abc_id=id).update(age=age, time_to_won=time_to_won)
  • Somebody needs to update that API to retrieve information in bulk, not just one item at a time. This would improve the performance significantly. – Rohan Jul 12 '16 at 03:02

1 Answer


If the bottleneck is in your network requests, there isn't much you can do except perhaps use gzip or deflate compression, but with requests ..

The gzip and deflate transfer-encodings are automatically decoded for you.

If you want to be doubly sure, you can add the following header to the GET request.

{ 'Accept-Encoding': 'gzip,deflate'}
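For example, with requests the header can be passed via the headers argument (the URL and token below are the placeholders from your question):

import requests

# Explicitly ask for compressed responses; requests sends an Accept-Encoding
# header by default, so this just makes it visible in the code.
response = requests.get(
    'https://api.example.com/v3/abc/1?api_token=123123123',
    headers={'Accept-Encoding': 'gzip,deflate'},
)
info = response.json()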

The other alternative is to use threading and have many requests operate in parallel, a good option if you have lots of bandwidth and multiple cores.
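A minimal sketch of that idea with concurrent.futures, assuming the same endpoint as in your code; the fetch_detail helper, the pool size of 10, and the placeholder ID list are illustrative, not part of your program:

from concurrent.futures import ThreadPoolExecutor

import requests

API_TOKEN = '123123123'  # placeholder token from the question

def fetch_detail(abc_id):
    # Each worker thread performs one GET and returns the parsed JSON.
    response = requests.get(
        'https://api.example.com/v3/abc/' + str(abc_id) + '?api_token=' + API_TOKEN,
        headers={'Accept-Encoding': 'gzip,deflate'},
        timeout=30,
    )
    response.raise_for_status()
    return abc_id, response.json()

ids = [1, 2, 3]  # in the real code these come from modelExample

# Ten worker threads keep several requests in flight at once; tune the pool
# size to whatever the API and its rate limits will tolerate.
with ThreadPoolExecutor(max_workers=10) as pool:
    for abc_id, info in pool.map(fetch_detail, ids):
        # process `info` exactly as in the original loop body
        pass

One option is to keep the database writes (bulk_create and update) in the main thread, so only the HTTP calls run in the worker threads.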

Lastly, there are lots of different ways to profile Python, including the cProfile + KCachegrind combo.
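For instance, a common workflow is to run the script under cProfile and read the dump with pstats (the script and output file names below are just examples); the same .prof file can also be converted for KCachegrind with pyprof2calltree:

# Run the whole script under the profiler and write the stats to a file:
#     python -m cProfile -o stats.prof fill_bank.py

import pstats

stats = pstats.Stats('stats.prof')
stats.sort_stats('cumulative').print_stats(20)  # 20 biggest cumulative-time calls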

– e4c5
  • Thank you very much for the answer =) I tried adding gzip and deflate to my request headers, but I didn't feel any change. I'll dig deeper into both parameters to see if I'm missing something. Anyway, parallel processing seems like the best option right now. Guess I'll have to be a little patient with this particular code. – João Menezes Jul 12 '16 at 16:28