
I need to export data from one DB instance/deployment to another (e.g. testing to production, same vendor and schema). Can I achieve this using SQLAlchemy? It's for a simple import/export feature for the end user.

The end user selects all the values (let's say Employees) and clicks Export. All the Employee records, along with their Department details (since Department is a foreign key), should be loaded into Python and then written to a suitable XML format. Once the user has "XML-ized" the database rows, he can go to another machine, say Production, upload the XML, and I write it to the DB, overwriting Employees with the same name or inserting new Employees that don't exist. First, though, I have to load the Departments, since they are foreign keys, using the same UPSERT conditions I specify.
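A minimal sketch of the import side, assuming SQLAlchemy Core and the `name` column used as the natural key; the `upsert_by_name` helper and the table layout are illustrative, not an existing API:

```python
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select)

engine = create_engine('sqlite:///:memory:')
meta = MetaData()

# Illustrative table; in practice this would be the production schema.
employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False),
)
meta.create_all(engine)

def upsert_by_name(conn, table, name_col, row):
    """Overwrite the row with the same name, or insert it if absent."""
    existing = conn.execute(
        select(table).where(name_col == row[name_col.name])
    ).first()
    if existing is None:
        conn.execute(table.insert().values(**row))
    else:
        conn.execute(
            table.update()
                 .where(name_col == row[name_col.name])
                 .values(**row)
        )

with engine.begin() as conn:
    upsert_by_name(conn, employees, employees.c.employee_name,
                   {'employee_name': 'Alice'})
    # Second call with the same name updates instead of duplicating.
    upsert_by_name(conn, employees, employees.c.employee_name,
                   {'employee_name': 'Alice'})
```

The same helper would be run against the Department rows first, so the foreign-key targets exist before the Employee rows are written.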

In the example below I would like to supply the Employee table and expect the related Department table to be loaded and saved to XML as well. This is the tricky part I am trying to achieve.

SQLAlchemy can make my code database-unaware. I have already scripted this using plain SQL and Python, but I need a higher-level paradigm where foreign keys get loaded automatically. Further, I might need to load some tables I specify that have a one-to-many relation with my original table. E.g. an Employee's documents might be stored in a table called Documents, with one Employee having more than one Doc.
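One way to discover which tables a given table depends on (so Departments get handled before Employees) is to walk `Table.foreign_keys` in the Core metadata; a sketch, with the `dependency_order` helper being illustrative:

```python
from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey

meta = MetaData()

departments = Table('departments', meta,
    Column('department_id', Integer, primary_key=True),
    Column('department_name', String(60)),
)
employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60)),
    Column('employee_dept', Integer, ForeignKey('departments.department_id')),
)

def dependency_order(table, seen=None):
    """Return tables in the order they must be loaded: FK targets first."""
    if seen is None:
        seen = []
    for fk in table.foreign_keys:
        parent = fk.column.table          # the table the FK points at
        if parent not in seen and parent is not table:
            dependency_order(parent, seen)
    if table not in seen:
        seen.append(table)
    return seen

order = [t.name for t in dependency_order(employees)]
# departments comes before employees
```

SQLAlchemy's `MetaData.sorted_tables` gives a similar dependency-sorted list for all tables in the metadata at once.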

from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, ForeignKey)

engine = create_engine('sqlite:///:memory:')

meta = MetaData()

departments = Table('departments', meta,
    Column('department_id', Integer, primary_key=True),
    Column('department_name', String(60), nullable=False, key='name')
)

employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False, key='name'),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)

meta.create_all(engine)
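Once the schema exists, the export side can serialize rows to XML with the standard library; a sketch using `xml.etree.ElementTree`, where the table layout follows the question and the `table_to_xml` helper is illustrative:

```python
import xml.etree.ElementTree as ET
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, select)

engine = create_engine('sqlite:///:memory:')
meta = MetaData()
employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60)),
)
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(employees.insert().values(employee_name='Alice'))
    conn.execute(employees.insert().values(employee_name='Bob'))

def table_to_xml(conn, table):
    """Serialize every row of a table into a simple XML element tree."""
    root = ET.Element(table.name)
    for row in conn.execute(select(table)).mappings():
        elem = ET.SubElement(root, 'row')
        for col, value in row.items():
            child = ET.SubElement(elem, col)
            child.text = '' if value is None else str(value)
    return root

with engine.connect() as conn:
    xml_bytes = ET.tostring(table_to_xml(conn, employees))
```

The import side would parse the same document back with `ET.fromstring` and feed each `<row>` into the UPSERT logic.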
Nishant
    SQLAlchemy will load into memory exactly what you ask it to. If you query all of the records, it will try to load all of the records. As explained in the answer to [this question](http://stackoverflow.com/questions/7389759/memory-efficient-built-in-sqlalchemy-iterator-generator) SQLAlchemy typically does not have control over this (that happens at the DBAPI level). I'm not sure what your question is, exactly. Have you tried creating a query in SQLAlchemy and seeing what the results are? Are you trying to limit the rows or columns that are returned in that query? – Mark Hildreth Dec 23 '13 at 19:11
  • Thanks Mark. Looks like "reflect" is not what I thought it to be. I thought it would reflect the entire table to memory (the verb "reflect" can mean that too). But it seems "reflect" is just about mapping the columns available for a particular table: either you can specify them or you can just "reflect" them, which is pretty lightweight? So with my understanding of reflect I thought you don't even have to query anything once you reflect a table. Can you please clarify? – Nishant Dec 25 '13 at 13:48
  • You are correct about "reflection": it doesn't reflect the database data, it reflects the database schema. It tries to determine the column names, their types, the relationships, etc. This basically can be used so that you don't need to define your schema twice (once in the database, once again in Python). It doesn't query for the actual data, however. However, I still don't understand your question. The actual text of your question is not specific enough to have an answer. It seems is basically "how can I use SQLAlchemy". Can you try to make it more specific? – Mark Hildreth Dec 26 '13 at 16:35
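The distinction drawn in the comments can be demonstrated in a few lines: `MetaData.reflect` reads the schema (column names and types) back from the database, but issues no query for row data. The table layout here is illustrative:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

# Create a table directly so there is something to reflect.
engine = create_engine('sqlite:///:memory:')
setup = MetaData()
Table('employees', setup,
      Column('employee_id', Integer, primary_key=True),
      Column('employee_name', String(60)))
setup.create_all(engine)

# Reflection populates a fresh MetaData from the live database:
# column names and types become available, but no rows are loaded.
meta = MetaData()
meta.reflect(bind=engine)
reflected = meta.tables['employees']
cols = reflected.columns.keys()
```

Fetching actual rows still requires an explicit query, e.g. `conn.execute(select(reflected))`.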

0 Answers