
Does anyone have experience using a long-running Magento process to mitigate bootstrap overhead? For example, a typical Magento API call to an order or customer resource can take 1s or more, with potentially half of that time spent on general Magento overhead rather than on the API resource in question.

So, what if a Magento PHP process were spun up and kept in memory, waiting for API requests, so that it could handle them without having to load Magento each time?
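
Roughly what I'm picturing is something like this (a sketch only; the port, line-based JSON protocol, and hard-coded order lookup are placeholders, not a real design):

    <?php
    // Sketch only: bootstrap Magento once, then answer requests in a loop.
    // Per-request cleanup and memory growth are the real problems to solve.
    require_once 'app/Mage.php';
    Mage::app('admin'); // pay the bootstrap cost a single time

    $server = stream_socket_server('tcp://127.0.0.1:9999', $errno, $errstr);
    if (!$server) {
        die("Could not bind: $errstr ($errno)\n");
    }

    while ($conn = stream_socket_accept($server, -1)) {
        $request = json_decode(fgets($conn), true);

        // Placeholder dispatch: a real worker would route to the proper
        // API resource model instead of hard-coding an order load.
        $order = Mage::getModel('sales/order')
            ->loadByIncrementId($request['order_id']);

        fwrite($conn, json_encode($order->getData()) . "\n");
        fclose($conn);

        // Registered objects, collection caches, etc. accumulate between
        // iterations unless they're explicitly cleaned up.
    }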

Most of my searches for long-running PHP scripts turn up questions about troubleshooting scripts that take longer than expected to run because of the amount of data they're processing, so I'm finding it difficult to find good resources on this kind of thing, if it's even possible.

UPDATE: To be a bit more specific about my needs:

  • I already have memcached in place for simple GETs that we can safely cache server-side.
  • It's write operations that I'd like to optimize now.
  • We're using the REST API, so there's no WSDL loading to be concerned with.
  • Hypervigilance toward potential memory leaks would be important... – Roscius Aug 27 '12 at 19:52
  • Typically you have to implement the web server in PHP (I'm sure there are a number of implementations out there), bootstrap Magento, then fork off a process for each request. There's a lot of memory/resource management that would need to be considered. APC and memcache are often used as mechanisms to mimic this behavior. You're probably looking for something along the lines of application scope (as in Java/.NET app servers). – beeplogic Aug 27 '12 at 20:46
  • Thanks @beeplogic. Any links to benchmarks on APC/memcache? We're already using memcache for API responses that can safely be fully cached (mostly GETs), but we aren't using it for POSTs. I'm going to look into APC further. As for memcache, could we actually cache a Magento model and then load it from memcache in a standalone PHP file, without any of the Magento loading overhead, and use it? That seems impossible, but awesome. – kalenjordan Aug 27 '12 at 21:16
  • I don't know of any links regarding performance, but I'm sure they're out there. It should be possible to store some of the objects in APC, although objects holding resources or certain XML objects will throw errors, as those don't support serialization. In those cases you'd have to rewrite the models and implement __sleep()/__wakeup() to handle serialization. – beeplogic Aug 28 '12 at 00:14
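
For reference, a rough sketch of the APC idea beeplogic describes: cache the model's data array rather than the object itself, so nothing unserializable is involved. The cache key and TTL below are arbitrary, and Magento still has to be bootstrapped for Mage::getModel() to work:

    <?php
    // Illustrative only: cache a loaded model's data in APC so a later
    // request can skip the load. Storing whole Magento objects is risky
    // because their resource/connection members don't serialize, so this
    // caches the plain getData() array instead.
    $cacheKey = 'customer_' . $customerId;

    $data = apc_fetch($cacheKey, $hit);
    if (!$hit) {
        $customer = Mage::getModel('customer/customer')->load($customerId);
        $data = $customer->getData();     // plain array: safe to serialize
        apc_store($cacheKey, $data, 300); // 5-minute TTL
    } else {
        // Rehydrate a fresh model from the cached array. Note it was never
        // load()'ed, so save() semantics differ from a loaded model.
        $customer = Mage::getModel('customer/customer')->setData($data);
    }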

1 Answer


You may want to look into proc_open(), though you'll need to do a lot of the process management that the OS normally handles for you.

However, if the problem is speed, and not just wanting a means to pipe/fork to make use of the available hardware, I would look into simply finding bottlenecks throughout the system and caching before diving into this: WSDL caching, DB normalization, opcode caching, or even memcache or reverse-proxy caching. Alan does have WSDL caching in his Mercury API product (http://store.pulsestorm.net/products/mercury-api).

I have used proc_open() before when importing over 500k customer records (through Magento's model stack, I may add) with addresses into Magento on a 32-core system in less than 8 hours using this same approach. One PHP file acted as the main entry point, and new processes based on chunks of data were forked out to a secondary PHP file that did the actual importing.

I leveraged a small script for multi-threading on the import I mentioned. This isn't an exact answer to your question, since it isn't specific to your API overhead problem, but hopefully it offers some insight into the possibilities:
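
Something along these lines; this is a sketch of the fan-out pattern rather than the original script, and import_chunk.php and the paths are invented names:

    <?php
    // Parent script: split the input into chunks, then keep up to
    // $maxWorkers copies of a worker running via proc_open(). Each worker
    // bootstraps Magento itself and imports one chunk.
    $chunks     = glob('/tmp/import/chunk_*.csv'); // pre-split input files
    $maxWorkers = 32;
    $running    = array();

    while ($chunks || $running) {
        // Launch workers up to the limit.
        while ($chunks && count($running) < $maxWorkers) {
            $chunk = array_shift($chunks);
            $spec  = array(
                0 => array('pipe', 'r'),
                1 => array('file', '/tmp/import/log', 'a'),
                2 => array('file', '/tmp/import/err', 'a'),
            );
            $proc = proc_open(
                'php import_chunk.php ' . escapeshellarg($chunk),
                $spec,
                $pipes
            );
            if (is_resource($proc)) {
                $running[] = $proc;
            }
        }

        // Reap finished workers so new ones can start.
        foreach ($running as $i => $proc) {
            $status = proc_get_status($proc);
            if (!$status['running']) {
                proc_close($proc);
                unset($running[$i]);
            }
        }
        usleep(100000); // avoid busy-waiting
    }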

B00MER
  • Thanks @Boomer! So I had actually read recently about Alan's WSDL caching gains, pretty intense, but we're using the REST API, so I'm not sure that applies to us. DB normalization: is there much that can be done, given that Magento itself imposes quite a bit of the existing DB structure? The custom tables we've created are fairly minimal. Opcode caching: going to check that out. Memcache / reverse proxy: we already have this in place for simple GETs that we can cache server-side, but it's writes that we want to focus on a bit more. – kalenjordan Aug 27 '12 at 21:20
  • You may want to look into insert buffering: http://dev.mysql.com/doc/refman/5.0/en/innodb-insert-buffering.html – B00MER Aug 27 '12 at 21:57