
I am facing a Heroku 30-second request timeout (H12) with my Django web app. The view is slow because it builds its context by calling the Reddit API on every request.

Here is the code for the same.

import praw
from django.shortcuts import render

def home(request):
    reddit = praw.Reddit(client_id='myclientid', client_secret='mysecretcode',
                         user_agent='user agent name')
    top_posts = reddit.subreddit('AskReddit').top(time_filter="day", limit=7)

    titles = []
    urls = []
    for post in top_posts:
        titles.append(post.title)
        urls.append(post.url)

    # One extra API round trip per post to fetch its top-level comments.
    comments_per_post = []
    for url in urls:
        submission = reddit.submission(url=url)
        submission.comments.replace_more(limit=0)
        comments = []
        for i, comment in enumerate(submission.comments):
            if i == 10:  # stop after ten comments; avoids IndexError on short threads
                break
            comments.append(comment.body)
        comments_per_post.append(comments)

    top_EarthPorn = reddit.subreddit('EarthPorn').top(limit=100)
    EarthPorn_links = [post.url for post in top_EarthPorn]
    request.session['EarthPorn_links'] = EarthPorn_links

    context = {
        'titles': titles,
        'urls': urls,
        'comments': comments_per_post,
    }
    return render(request, template_name='base.html', context=context)
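One way to stop hitting Reddit on every request is to cache the assembled context and rebuild it only when it is older than an hour. Below is a minimal, framework-free sketch of that idea; the `TTLCache` class and `get_context` function are illustrative names, not part of Django or PRAW, and in a real deployment you would more likely use `django.core.cache` or a scheduled background task:

```python
import time

class TTLCache:
    """Caches the result of an expensive callable for ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = None  # None means nothing cached yet

    def get(self, refresh):
        """Return the cached value, calling refresh() only when stale."""
        now = time.time()
        if self._fetched_at is None or now - self._fetched_at >= self.ttl:
            self._value = refresh()
            self._fetched_at = now
        return self._value

# The expensive Reddit calls would run at most once per hour.
context_cache = TTLCache(ttl_seconds=3600)

def get_context():
    # Placeholder for the PRAW calls in the view above.
    return {'titles': [], 'urls': [], 'comments': []}

context = context_cache.get(get_context)
```

Note this in-process cache is reset whenever the Heroku dyno restarts, which is why a shared cache backend or a database is the usual choice.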

How do I make sure the context dict data is rebuilt every hour or so as a background process? Which libraries can one use to achieve this?

Shivam Anand
  • You've just leaked your credentials. Please invalidate them **_immediately_**. They are forever compromised, and you need to generate new ones. Editing them out of your question is _**not enough**_. – ChrisGPT was on strike Dec 05 '20 at 21:49
  • @chris noted. on it, ty. https://stackoverflow.com/a/30312778/930271, as an aside, will this answer be correct for my use case? – Shivam Anand Dec 05 '20 at 21:55
  • That is a possible approach, but it won't help with your timeout problem. That example synchronously calls an external API, waits for a response, builds its own response, and returns it to the end user, like yours currently does. – ChrisGPT was on strike Dec 06 '20 at 19:09
  • You might want to think about a background task that runs on a schedule (e.g. hourly?) that retrieves whatever data you want from Reddit and saves it into a local database. Then you could use your locally-cached data instead of hitting Reddit every time you load that page. – ChrisGPT was on strike Dec 06 '20 at 19:10
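The scheduled-refresh approach suggested in the comments is commonly implemented with Celery beat. A sketch of the schedule entry is below; it assumes Celery is already configured for the project with the `CELERY` settings namespace, and the `myapp.tasks.refresh_reddit_data` task path is hypothetical:

```python
# settings.py -- run the refresh task every hour
CELERY_BEAT_SCHEDULE = {
    'refresh-reddit-data': {
        'task': 'myapp.tasks.refresh_reddit_data',  # hypothetical task path
        'schedule': 3600.0,  # seconds
    },
}

# myapp/tasks.py -- the task itself would look roughly like:
#
# from celery import shared_task
#
# @shared_task
# def refresh_reddit_data():
#     # call PRAW here and save the results to the database or cache;
#     # the view then reads the locally stored data instead of Reddit
#     ...
```

On Heroku specifically, the free Heroku Scheduler add-on running a Django management command hourly is a simpler alternative to running a separate beat process.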

1 Answer


I think this should work:

Put this at the end of your settings.py file:

SESSION_EXPIRE_SECONDS = 1500  # 1500 seconds = 25 minutes

So the session will expire after 25 minutes. (Note that SESSION_EXPIRE_SECONDS is provided by the django-session-timeout package, not by core Django.)

RiveN