
I've built a Python/Daphne/Django/Graphene server and did everything I could to optimize my queries: database results are cached in Redis, so they are basically free. Even so, with a very small GraphQL schema I still see >200 ms of overhead inside graphql-core, evidently spent parsing my query.

I come from a REST-API-centric worldview, and in this case the stack decisions were out of my hands; I'm just trying to make it work. Queries that a normal Django view would return in ~20 ms still take ~250 ms.

Given that I'm consistently sending the same few queries, it would be fantastic to skip parsing the same query over and over again.

So I'm curious what people running GraphQL in Python do to make their services perform. To begin with, I'd like to know:

  1. Should I just expect to live with that query overhead?
  2. Can I improve things by switching from Daphne to Uvicorn, or even to mod_wsgi (I'm not doing anything async)?
  3. Is there any way to circumvent repeated parsing of the same query? (A rough sketch of what I have in mind follows this list.)
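
To make (3) concrete, something like the sketch below is what I have in mind. It's untested and written against graphql-core 3 directly rather than through graphene-django's view, because I haven't found the right place to hook into that; execute_cached is just a hypothetical helper name.

    from graphql import parse, validate, execute

    # Hypothetical module-level cache: query text -> parsed DocumentNode.
    _document_cache = {}

    def execute_cached(schema, query, variable_values=None, operation_name=None):
        """Parse and validate each distinct query text only once."""
        document = _document_cache.get(query)
        if document is None:
            document = parse(query)            # the step I want to stop repeating
            errors = validate(schema, document)
            if errors:
                raise errors[0]
            _document_cache[query] = document
        # Only execution (i.e. the resolvers) runs on repeat requests.
        return execute(
            schema,                            # graphene's schema.graphql_schema
            document,
            variable_values=variable_values,
            operation_name=operation_name,
        )

If there is a supported way to plug something like this into graphene-django, or if it already caches parsed documents somewhere that I've missed, that alone would probably solve my problem.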

Thanks for your time and assistance.

Tyler Gannon
  • Did you make any progress on this? I have a similar situation: an internal tool with large queries that don't change getting parsed and validated over and over again, and I would love to cut out the overhead. – azundo Jun 08 '21 at 06:37
  • Unfortunately we get very little response regarding Graphene usage on Stack Overflow. I'd love an answer to this too. – Rafael Santos Feb 10 '22 at 09:45
  • My answer has been to use Apollo if I want a GraphQL service. If I were to do it again, I'd likely weigh the option of a middleware that runs after authentication, hashes the contents of a GraphQL query, and uses that hash as a cache key, bypassing Graphene wherever possible (roughly as sketched after these comments). – Tyler Gannon Apr 04 '22 at 20:42
  • How would you invalidate a cache, if you go that route? @TylerGannon – Nobody Apr 20 '22 at 20:01
  • @Nobody no special answer there. In the ad-hoc solution I put together last year, each cached query has a companion function that computes a cache key from the query arguments; it's the developer's responsibility to expire a given cache key when one of its arguments changes in value or state. – Tyler Gannon Apr 21 '22 at 15:59
  • By the way, I propose this as a workaround, not an element of a good design. In retrospect, I might have done well to ask the Graphene developers why they used a pure-Python GraphQL parser rather than a Python extension wrapping something written in, e.g., Rust. Extensions are a hassle, but not as bad as 0.25 s response times on basic queries. – Tyler Gannon Apr 21 '22 at 16:16
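
For reference, a rough sketch of the middleware idea from the comment above: it hashes the raw request body, combines that with the user to form a cache key, and serves the cached JSON response without ever reaching Graphene. This is hypothetical and untested; GraphQLResponseCacheMiddleware and the path/TTL values are made up, it assumes every POST to the endpoint is an idempotent query (mutations would need to be excluded), and cache invalidation remains the application's job, as noted in the comments.

    import hashlib

    from django.core.cache import cache
    from django.http import HttpResponse

    CACHE_TTL = 60              # seconds; tune per deployment
    GRAPHQL_PATH = "/graphql/"  # adjust to the actual endpoint

    class GraphQLResponseCacheMiddleware:
        """Place after AuthenticationMiddleware so request.user is available."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            if request.method != "POST" or request.path != GRAPHQL_PATH:
                return self.get_response(request)

            # Key on the user plus the raw body (query, variables, operationName).
            user = getattr(request, "user", None)
            user_part = str(user.pk) if user is not None and user.is_authenticated else "anon"
            body_hash = hashlib.sha256(request.body).hexdigest()
            key = f"gql:{user_part}:{body_hash}"

            cached = cache.get(key)
            if cached is not None:
                # Serve the stored JSON without invoking the GraphQL view at all.
                return HttpResponse(cached, content_type="application/json")

            response = self.get_response(request)
            if response.status_code == 200:
                cache.set(key, response.content, CACHE_TTL)
            return response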

0 Answers