
I have developed a PHP application for my company, and I have just discovered that the application must work offline too. The application works like this: some data is loaded from a MySQL database, then the user fills in some checklists, new data is inserted into the database, and, in the end, a JSON file is generated.

The application will be used by many people in our company, so we thought about installing a web server (Apache) on each of their computers and making the application run on their machines. The problem is that, if we decide to go this way, we have to:

  • Download all the data from MySQL BEFORE starting the application (while the user still has internet access) and save this data into a JSON file
  • Change all the queries in the project in order to take the data from the JSON instead of the database
  • Rework the many functions which insert data into the database in real time; we would have to use SQLite and then transfer the data to the MySQL database
  • Accept that, this way, the people who use this program would have access to ALL the PHP files, and they could modify them at any time

We don't have the time to think about a real Desktop Java application because this app will be used starting from January, so we don't have the time to develop it.

Have you got any suggestions? Is there something I'm not thinking about, or a technology which could help me? Thank you!

PS. I have considered programs like Nightrain or PHP Desktop, but they only avoid the installation of Apache, nothing more...

Michele Stieven
  • How about one central mysql database on a local server? – AgeDeO Dec 04 '15 at 11:08
  • I'm not entirely sure whether this type of question is on-topic for stackoverflow, but I like the question so I upvoted it. – Ultimater Dec 04 '15 at 11:09
  • Can it operate on an Intranet as opposed to offline, or by offline, do you mean completely disconnected? – Flosculus Dec 04 '15 at 11:13
  • Best and fastest solution would be to host the app on a local server and make it visible through your company's LAN. Anyway, how can the app work offline if you have to insert data in the db? – Phate01 Dec 04 '15 at 11:14
  • Well a database where several people write data into requires some sort of data exchange, otherwise everyone would only see the data he/she entered him/herself. I think this is logical so far. I would suggest you to use a central mysql database on a local server (like AgeDeO already suggested) OR if people should be able to update the data from home office too some sort of VPN (Virtual Private Network). – Marcel Wasilewski Dec 04 '15 at 11:14
  • @AgeDeo this application won't be used in an office, it will be used with laptops OUTSIDE (fields, gardens, woods...) so there is no connection to the local server – Michele Stieven Dec 04 '15 at 11:14
  • @Michele Stieven: 4g+VPN; there's your connection. – Piskvor left the building Dec 04 '15 at 11:16
  • I want to specify that the user has NO CONNECTION at all. No internet, no intranet, nothing. When the JSON and the data "to be written in MySQL" are ready, the user will upload them (at the time he will be able to connect to internet) and then there will be some validations for the files to be correct! – Michele Stieven Dec 04 '15 at 11:16
  • So you could anyway use a VPN. With a VPN everyone can update and commit data from everywhere. So for example he worked on some data and then wants to commit it. If he wants to commit, it means he has a connection. He will update data from VPN first and after commit the changes he made... With this method everyone can first get the newest data before he commits his own changes to the database. After this he is up to date again. I see no other way for this without a constant internet connection. – Marcel Wasilewski Dec 04 '15 at 11:19
  • Well, throw it away and start from scratch then. Making an offline-capable application is not a matter of setting a configuration option, or papering over a few AJAX calls: the assumptions and implications will force you to rewrite most of the code anyway. (This is, as I see it, currently a project management problem, not a programming one.) – Piskvor left the building Dec 04 '15 at 11:19
  • i would suggest you talk with your client and explain the situation, what you are looking for cannot be done, even thought you do it it would be hacky. – astroanu Dec 04 '15 at 11:20
  • ... I'm trying to think of a option involving "sharding". It might be possible to write new data to a separate shard (locally), and synchronize when online by patching the central database with the new data. – Flosculus Dec 04 '15 at 11:20
  • @Flosculus: One word: conflicts. (Now you have 2^n problems) – Piskvor left the building Dec 04 '15 at 11:21
  • @Piskvor That's going to be an issue anyway. GUIDs in place of auto incremental IDs are likely to be needed, and nearly all foreign key constraints will need to be abandoned. If the system could be optimized to only ever write to an intermediate database (with a different schema), then this wouldn't be a problem. – Flosculus Dec 04 '15 at 11:23
  • Nice change of the requirements when the application is already developed =). The fundamental problem here is sync of local databases and resolving conflicts. – Alex Blex Dec 04 '15 at 11:24
  • @MarcelWasilewski you don't see the point, I make you an example: I'm using the app. I am compiling the checklist, then I think "Hummm, I might save the data so nothing will be lost" -> Click "Save" -> THERE IS NO CONNECTION, so the browser will return "page not found" and all will be lost. I CANNOT use a VPN because the app has to work WITHOUT connection. Ever. The person has to compile the checklist, write all the new data and THEN, upload a JSON containing all. The Json is not a problem, the problem is the data that I send in real time to the DB. And also, people could modify PHP files. – Michele Stieven Dec 04 '15 at 11:26
  • @Flosculus: ...if it's an append-only database. User 1: "Hey, I have changed X for client A to Z" User 2: "Hey, I have changed X for client A to W". Both synchronize. Now what; do we throw away the former change? The latter? Do we merge? How? Is that even possible? – Piskvor left the building Dec 04 '15 at 11:26
  • I have to specify that the users never work on the same MySQL Rows. They can only insert new rows, and delete the ones they have just created. – Michele Stieven Dec 04 '15 at 11:28
  • If they never work on the same rows I would choose the @Flosculus's suggestion... but this does not solve the problem of modifying the php files... – jolmos Dec 04 '15 at 11:31
  • In that case, it's within the realm of *theoretical* possibility, as @Flosculus writes with sharding - see e.g. this: http://stackoverflow.com/questions/7722659/online-offline-database-synchronization-mysql-php . You're not going to ship that feature in January, though; at least not in January 2016. Sorry to break it to you; but offline and resynchronization is *hard*, even without submarine requirements popping up. – Piskvor left the building Dec 04 '15 at 11:36
  • Your company may have a local network; why don't you let people access it through the LAN using the IP address of the PC where your app is installed? – Abhijit Borade Dec 04 '15 at 11:37
  • @MicheleStieven "never" is a very long time, especially in the environment where web applications suddenly should be able to work offline. If access to the source code is the only problem you see, this answer may help http://stackoverflow.com/questions/9826672/how-to-protect-my-source-code-when-deployed – Alex Blex Dec 04 '15 at 11:37
  • For THAT, I need an immediate "solution". I don't really mind that people could modify the files, that is the minor problem. Let's say that, if they modify something, it's at their own risk. – Michele Stieven Dec 04 '15 at 11:38
  • If you are using MySQL 5.7, would the use of local MySQL servers to store the data when collected & then use multi-source replication to update the central database when connected be a possibility : https://dev.mysql.com/doc/refman/5.7/en/replication-multi-source-tutorials.html – PaulF Dec 04 '15 at 11:41
  • Ok, I see a lot of ideas that I could use, thank you, I would consider them in "porting" the app to desktop. But aren't there technologies who could help me without rewriting all the code? – Michele Stieven Dec 04 '15 at 11:48
  • install the servers on the clients – dhpratik Dec 04 '15 at 12:18

1 Answer


Introduction

Since you obviously need a fast solution, I'll give you one, based on the pieces of information we know. Warning: this solution is not elegant, and you WILL need to replace it when you get the chance.

Solution

  1. Clear all of your primary and foreign keys.
  2. Replace them with BINARY(16) with an index.

Every record will need its pseudo-primary key to be randomly generated with a CSPRNG; BINARY(16) is just a convenient size for following the UUID standard. This will ensure each new record remains uniquely indexed despite having no knowledge of the other distributed copies.

Your tables won't have primary-key indexes, because those enforce uniqueness, and since the database will be distributed, it won't be possible to check the uniqueness of the keys anyway, so there is no point in using them.
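Generating such a key could look like the following sketch (a random version-4 UUID stored as 16 raw bytes; this is an illustration of the idea, not code from the application):

```php
<?php
// Sketch: generate a random UUIDv4 as 16 raw bytes, suitable for a
// BINARY(16) column used as a pseudo-primary key.
function uuidV4Binary(): string
{
    $bytes = random_bytes(16); // CSPRNG
    // Set the version (4) and RFC 4122 variant bits.
    $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40);
    $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80);
    return $bytes;
}

// Canonical text form, handy for logging and debugging.
function uuidToString(string $bytes): string
{
    $hex = bin2hex($bytes);
    return sprintf('%s-%s-%s-%s-%s',
        substr($hex, 0, 8), substr($hex, 8, 4),
        substr($hex, 12, 4), substr($hex, 16, 4), substr($hex, 20));
}
```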

  3. Each laptop will need a copy of the entire database.
  4. Each laptop will only be allowed to add new data, never delete or modify base data.

In fact, as a rule, all data on the central database will be write-once/read-only from now on. It doesn't matter how erroneous the newly merged data is, it must never be deleted or modified.

  5. New data should be regarded as "updates" based on their timestamps.

So every table will need a timestamp.

  6. Finally, a record of when each copy was distributed should be kept, to retain knowledge of which data to merge back to the central database.

What you are left with is a central database that takes on all data, where changes to data are represented by the presence of newer records.
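The merge in step 6 could be sketched like this: push every local row created since the last sync marker to the central database, then advance the marker. The `$local`/`$central` connections and the `created_at` column name are illustrative assumptions:

```php
<?php
// Sketch: push rows created locally since the last sync to the central
// database (append-only, per steps 4-6). Returns the new sync marker.
function pushNewRows(PDO $local, PDO $central, string $table, string $lastSync): string
{
    $stmt = $local->prepare("SELECT * FROM `$table` WHERE created_at > ?");
    $stmt->execute([$lastSync]);
    $stmt->setFetchMode(PDO::FETCH_ASSOC);

    $newest = $lastSync;
    foreach ($stmt as $row) {
        // Build an INSERT matching the row's columns (schemas are identical).
        $cols = array_keys($row);
        $colList = '`' . implode('`,`', $cols) . '`';
        $placeholders = implode(',', array_fill(0, count($cols), '?'));
        $central->prepare("INSERT INTO `$table` ($colList) VALUES ($placeholders)")
                ->execute(array_values($row));
        if ($row['created_at'] > $newest) {
            $newest = $row['created_at'];
        }
    }
    return $newest; // persist this as the laptop's new sync marker
}
```

Because rows are only ever inserted (never updated or deleted), replaying them in any order is safe; the timestamps decide which record is "current".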

Conclusion

I'd only use this solution if I really had to. In fact, I'd estimate only an 80% chance of it even working, and at sub-standard quality. It also assumes that you can devote all remaining development time to the refactoring of the data-insertion methods.

You are going to have to deal with the fact that a LOT of administration work will be needed on the central database to manage the integrity of the data, and you will need to work with the assumption that you can't change the format of the input being merged from the laptops.

Every new feature will need to be backwards compatible with old data.

Flosculus
  • Thank you for the solution, but at this point, I find it easier to store the data in other JSON files instead of creating 2 databases and then merging them. I could just store data in a JSON and, in the end, when the user has a connection, run all the queries against the main database. – Michele Stieven Dec 04 '15 at 11:55
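  The approach in the comment above (queue each pending insert as JSON while offline, replay against MySQL once connected) could be sketched as follows; the file path and statement shapes are illustrative assumptions:

```php
<?php
// Sketch: append each offline insert to a newline-delimited JSON queue file.
function queueInsert(string $queueFile, string $sql, array $params): void
{
    file_put_contents(
        $queueFile,
        json_encode(['sql' => $sql, 'params' => $params]) . "\n",
        FILE_APPEND
    );
}

// Sketch: replay the queued statements against the main database once a
// connection is available. Returns how many statements were executed.
function replayQueue(string $queueFile, PDO $pdo): int
{
    $count = 0;
    foreach (file($queueFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $entry = json_decode($line, true);
        $pdo->prepare($entry['sql'])->execute($entry['params']);
        $count++;
    }
    return $count;
}
```

  A real version would also need the server-side validation the question mentions before executing anything from the uploaded file.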
  • @MicheleStieven If the system is small enough to do that quickly then do that. – Flosculus Dec 04 '15 at 11:58
  • Now I don't have the time to think about something else, but later I'd like to write just ONE app which works both offline and online. Do you have suggestions? I know quite well php, javascript/jquery, html5 (for web) and C# (and a little bit of Java but not that much) – Michele Stieven Dec 04 '15 at 12:03
  • @MicheleStieven A few games I play use DRM, despite being single player. An example in Anno 2070 by Ubisoft was when online you had access to the Ark upgrades. These features were switched off when the connection was lost, but you could still play most of the game. – Flosculus Dec 04 '15 at 12:09
  • If you have the luxury of switching from SQL to JSON, consider couchdb. It was written for your usecase. This question http://stackoverflow.com/questions/24234320/couchdb-online-offline-synchronisation looks similar to what you need. – Alex Blex Dec 04 '15 at 12:10
  • Maybe it would be dirtier, but you could have a local database (replicated from the remote one), then log all your local 'INSERTS/UPDATES' statements into a file (.sql) and, when you have internet connection, upload that file and execute it on the remote database ... but, as @PaulF suggest in another comment, you could take a look at Multi-Source Replication mysql. Here another link: https://www.percona.com/blog/2013/10/02/mysql-5-7-multi-source-replication/ – jolmos Dec 04 '15 at 12:11