
I am reorganizing an existing PHP application to separate data access (private API calls) from the application itself.

The purpose of doing this is to allow for another application on the intranet to access the same data without duplicating the code to run the queries and such. I am also planning to make it easier for developers to write code for the current web application, while only a few team members would be adding features to the API.

Currently the application has a structure like this (this is only one of many pages):

  • GET /notes.php - gets the page for the user to view notes (main UI page)
  • GET /notes.php?page=view&id=6 - get the contents of note 6
  • POST /notes.php?page=create - create a note
  • POST /notes.php?page=delete - delete a note
  • POST /notes.php?page=append - append to a note

The reorganized application will have a structure like this:

  • GET /notes.php
  • Internal GET /api/notes/6
  • Internal POST /api/notes
  • Internal DELETE /api/notes/6
  • Internal PUT /api/notes (or perhaps PATCH, depending on whether a full representation will be sent)

In the web application I was thinking of making HTTP requests to URLs under https://localhost/api/, but that seems really expensive. Here is some code to illustrate what I mean:

// GET notes.php

switch ($_GET['page']) {
    case 'view':
        $data = \Requests::get(
            "https://localhost/api/notes/{$_GET['id']}",
            array(),
            array('auth' => ... )
        );
        // do things with $data if necessary and send back to browser
        break;
    case 'create':
        $response = \Requests::post( ... );
        if ($response->status_code === 201) {
            // things
        }
        break;
    // etc...
}

I read this discussion and one of the members posted:

Too much overhead, do not use the network for internal communications. Instead use much more readily available means of communication between different processes or what have you. This depends on the system it's running on of course... Now you can mimic REST if you like, but do not use HTTP or the network for internal stuff. That's like throwing a whale into a mini toilet.

Can someone explain how I can achieve this? Both the web application and API are on the same server (at least for now).

Or is the HTTP overhead aspect just something of negligible concern?

Making HTTP requests directly from the JavaScript/browser to the API is not an option at the moment due to security restrictions.

I've also looked at the two answers in this question, but it would be nice if someone could elaborate on them.

rink.attendant.6
  • Go for https://www.rabbitmq.com/ if you want to separate two applications but want them to be able to talk properly - and fast - with each other. – floriank Jul 20 '15 at 22:30
  • @burzum why not [ZeroMQ](http://zero.mq), if we're going to throw such suggestions out there? On topic: using REST, even for internal stuff, **is not bad**. You can always split your app into two apps, one that is entirely RESTful and another that consumes it. You can scale it to multiple machines. HTTP is a widely used protocol, and many systems are able to speak it. What benefit do you really get if you build a monolithic, crappy app that can only work with itself? Would it not be better if you didn't have to spend time in the future in order to use something you already built? – N.B. Jul 20 '15 at 22:36
  • @N.B. well, we can go further on this: http://stackoverflow.com/questions/731233/activemq-or-rabbitmq-or-zeromq-or - pick whatever you like. And HTTP *is* indisputably slower than *MQ (and others). We have a similar system running here and will switch to a *MQ-based solution in the near future. Also, almost any system and language can speak WebSockets as well (there are libs), and AMQP, MQTT, and STOMP aren't a big problem either. The implementation is also not that hard. So if you're aiming for good performance I wouldn't go for HTTP except for prototyping. – floriank Jul 20 '15 at 22:47
  • We don't have to go anywhere further. If you think that your time and the time of your co-workers is less valuable than using hardware that costs peanuts these days, then by all means, do waste your time - it's your time and your money. I prefer to work smart and efficiently. Also, if I want performance I'll always use Mongrel2 and ZeroMQ; I can have the speed that RabbitMQ can never achieve, with all the benefits of a simply built REST service that I can consume from anywhere. I really hate spending time stupidly, and doing the same work twice... well, to each his own. – N.B. Jul 20 '15 at 22:56

2 Answers


The HTTP overhead will be significant, as every internal call has to go through a full request/response cycle. This includes HTTP server overhead, executing PHP in a separate process, the OS networking layer, etc. Whether it is negligible or not really depends on the type of your application, traffic, infrastructure, response-time requirements, and so on.
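
For comparison, here is a minimal sketch of the two approaches side by side; NotesService and its getNote() method are invented stand-ins for whatever shared data-access code you extract, not part of the original application:

// Option A: internal HTTP call - every call pays for the full request
// cycle (HTTP server, a second PHP process, the OS networking layer).
$data = \Requests::get('https://localhost/api/notes/6');

// Option B: in-process call - a plain method invocation, no network.
// NotesService and getNote() are hypothetical names for illustration.
require_once __DIR__ . '/api/NotesService.php';
$service = new NotesService();
$note = $service->getNote(6);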

In order to provide you with a better solution, you need to present your reasoning for considering this approach in the first place. Other factors to consider include the current application architecture, requirements, frameworks used, etc.

If security is your primary concern, this is not necessarily a good way to go in the first place, as you will now need to store some session-related data in yet another layer.

Also, despite the additional overhead, the final application could potentially perform faster given the right caching mechanisms. It really depends on your final solution.

D.K.
  • Don't use sessions in REST services. A well-designed REST API is stateless. – Evert Jul 20 '15 at 22:32
  • True that. That being said... I have a feeling the original application is not. – D.K. Jul 20 '15 at 22:34
  • I've updated my question to include why I am changing the architecture of the application. As for sessions, I do plan to remain stateless for the API (using OAuth to authenticate each request and such); however, the web application will still have a session to keep the user logged in. – rink.attendant.6 Jul 21 '15 at 00:16

I am building a similar application framework and had the same problem, so I settled on the following design:

  1. For processes that are located remotely (on a different machine), I use cURL or similar calls to the remote resource. For example, if user data is stored on a different server, getting a user's status looks like API->Execute('https://remote.com/user/currentStatus/getid/6'), which returns the status.
  2. For local calls, say Events requires Alerts (these are two separate packages with their own data models, but on the same machine), I make a local API-like call, something like this: API->Execute(array('Alerts', Param1, Param2)).

API->Execute then knows that it is a local object: it resolves the object's local file path, initializes it, passes the data in, and returns the results into the calling context. No remote execution, no protocol overhead.
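
A rough sketch of such a dispatcher might look like the following; the package map, the run() entry point, and the file paths are all invented for illustration, since the original design only specifies the shape of the Execute() call:

<?php
// Hypothetical sketch of the API->Execute() dispatcher described above.
// The package map and the run() entry point are invented for illustration.

class API
{
    // Maps local package names to the files that define them.
    private $localPackages = array(
        'Alerts' => '/srv/app/packages/Alerts.php',
    );

    public function Execute($target)
    {
        if (is_array($target)) {
            // Local call: array('Alerts', $param1, $param2, ...)
            $package = array_shift($target);
            require_once $this->localPackages[$package];
            $object = new $package();
            // Direct in-process invocation; no protocol overhead.
            return call_user_func_array(array($object, 'run'), $target);
        }

        // Remote call: a URL string, fetched with cURL.
        $ch = curl_init($target);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);
        return $response;
    }
}

// Usage, matching the two cases above:
//   $api = new API();
//   $alerts = $api->Execute(array('Alerts', $param1, $param2));
//   $status = $api->Execute('https://remote.com/user/currentStatus/getid/6');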

For example, if you want to keep an encryption service (with its keys and whatnot) away from the rest of the applications, you can send it data securely and get back the encrypted value; that service is then always called over the remote API (https://encryptionservice.com/encrypt/this/value).