I'm running into an architectural issue regarding how to organize a project.
The project is to create a package of models and their related database connections, to be shared across multiple web applications.
I wish to create a model hierarchy with the following models:
User
Application
Log
There are more models but these three exemplify the basic underlying problem.
The User model has a many-to-many relationship with Application,
and an Application has a one-to-many relationship with Log:
User * - * Application
Application 1 - * Log
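To make the relational half concrete, here is a minimal sketch of the two Postgres-backed models in SQLAlchemy's declarative style (table, column, and attribute names are my own illustrative choices, not anything fixed by the question):

```python
# Sketch of the Postgres side: User * - * Application via an
# association table. Names here are illustrative assumptions.
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Association table backing the many-to-many relationship.
user_application = Table(
    "user_application",
    Base.metadata,
    Column("user_id", Integer, ForeignKey("users.id"), primary_key=True),
    Column("application_id", Integer, ForeignKey("applications.id"), primary_key=True),
)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    applications = relationship(
        "Application", secondary=user_application, back_populates="users"
    )

class Application(Base):
    __tablename__ = "applications"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    users = relationship(
        "User", secondary=user_application, back_populates="applications"
    )
```

The Application → Log side deliberately has no `relationship()` here, since Log lives in Mongo rather than Postgres.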
The problem I am running into is that I want User and Application to live in a Postgres database and the logs to live in a Mongo datastore.
The main reason for putting Logs into a different datastore is that I only need inserts for that model, never updates, and I don't need immediate consistency on those inserts. I also want to use the aggregation framework and run map-reduce jobs on a weekly basis. Finally, the number of log documents will dwarf the number of users/applications, on the order of a million to one. The server that will use this model most will be communicating over websockets, so the volume of traffic and inserts will be high, and I don't really need Postgres for it. (But please feel free to (nicely) steer me in a different direction if you feel so obliged.)
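Given that insert-only, aggregation-heavy workload, the Mongo side could be as simple as documents that carry the relational primary key by convention. A sketch of what a log document might look like (all field names are assumptions on my part):

```python
import datetime

# Hypothetical shape of a log document. `application_id` carries the
# Postgres primary key so weekly aggregations can group per application;
# there is no foreign key enforcement across the two stores.
def make_log_document(application_id, message, level="info"):
    return {
        "application_id": application_id,  # FK-by-convention to Postgres
        "level": level,
        "message": message,
        "created": datetime.datetime.utcnow(),
    }
```

Inserting these via an unacknowledged or fire-and-forget write path fits the "no immediate consistency" requirement, and `application_id` plus `created` are natural keys for the weekly aggregation runs.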
I am using Python with SQLAlchemy for the models, engine, and migrations.
While I haven't started on it yet, I plan on using MongoKit for the Mongo model(s).
So far I have the SQLAlchemy engine set up with the models and migrations for the User and Application models.
How do I set up the entire package so that, when I include this repo as a dependency in a Flask application, I can utilize both engines and connection pools?
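One common shape for this is a small holder module in the package that each consuming Flask app configures once at startup, so the package owns both pools but neither connection is created at import time. A sketch, assuming SQLAlchemy for the Postgres pool and an injected client for Mongo (the class and method names are mine, not from any library):

```python
# Sketch of a shared "databases" module for the package. Each Flask
# app configures it once in its app factory; nothing connects at import.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

class Databases:
    """Holds both connection pools for the package's models."""

    def __init__(self):
        self.engine = None        # SQLAlchemy engine (Postgres pool)
        self.Session = None       # session factory bound to the engine
        self.mongo_client = None  # Mongo client (pymongo / MongoKit)

    def init_sql(self, url, **engine_kwargs):
        # e.g. url = "postgresql://user:pass@host/dbname"
        self.engine = create_engine(url, **engine_kwargs)
        self.Session = sessionmaker(bind=self.engine)

    def init_mongo(self, client):
        # Accept an already-constructed client (a MongoKit Connection or
        # a pymongo MongoClient), keeping the package driver-agnostic.
        self.mongo_client = client

# Module-level singleton imported by the consuming Flask apps.
databases = Databases()
```

In each Flask app factory you would then call something like `databases.init_sql(app.config["SQL_URL"])` and `databases.init_mongo(...)` before handling requests; both pools are process-wide, which is what you want for websocket-heavy traffic.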
Because I can already see that having a property on Application for retrieving the latest logs may become an issue.
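One way to keep that property from coupling the SQLAlchemy model to the Mongo driver is to inject the log query as a callable, so the ORM model never imports Mongo code and still works when only Postgres is configured. A sketch of that idea (the mixin, the fetcher signature, and the query it stands in for are all assumptions):

```python
# Sketch: bridging the Application ORM model to logs stored in Mongo.
# The injected fetcher stands in for a real Mongo query such as
#   db.logs.find({"application_id": app_id}).sort("created", -1).limit(n)
# so the SQLAlchemy model stays decoupled from the Mongo driver.

class ApplicationLogsMixin:
    # Set once by the package's setup code, after the Mongo client exists.
    _log_fetcher = None  # callable: (application_id, limit) -> list of dicts

    @classmethod
    def use_log_fetcher(cls, fetcher):
        # staticmethod() prevents the callable from being bound as a method.
        cls._log_fetcher = staticmethod(fetcher)

    def latest_logs(self, limit=10):
        if self._log_fetcher is None:
            raise RuntimeError("Mongo log store is not configured")
        return self._log_fetcher(self.id, limit)
```

The Application model would mix this in; apps that never configure Mongo simply get a clear error instead of a broken import, and tests can substitute an in-memory fetcher.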