
I am using the sqlalchemy module to open a .sqlite3 file: I create an engine for the database and then read a table from it into pandas.

The sql file has far more rows than I require - how do I limit the range of rows that are read into the DataFrame? For example, I want to import only the first 10000 rows of a 100000 row file.

Here is what I have so far:

import pandas as pd
import sqlalchemy
import os

# Build the path to the SQLite file and create an engine for it
db_dir = os.path.join("file_path_found_here", 'sqlfile.sqlite3')
database_url = f'sqlite:///{db_dir}'
engine = sqlalchemy.create_engine(database_url)

# Read the whole FSC_Data table, indexed on the time column
if os.path.isfile(db_dir):
    df = pd.read_sql('FSC_Data', engine, index_col=['time'])
    print(df)
  • Possible duplicate of [How to create a large pandas dataframe from an sql query without running out of memory?](https://stackoverflow.com/questions/18107953/how-to-create-a-large-pandas-dataframe-from-an-sql-query-without-running-out-of) – mad_ Sep 21 '18 at 18:36

1 Answer


Try using the pandas.read_sql_query(...) method instead of read_sql(...).

You could try something like:

# LIMIT restricts the query result to the first 10000 rows of the table
myQuery = 'SELECT * FROM myTable LIMIT 10000'
if os.path.isfile(db_dir):
    df = pd.read_sql_query(myQuery, engine, index_col=['time'])
    print(df)
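
Note that without an ORDER BY clause, SQL makes no guarantee about which 10000 rows LIMIT returns, so add e.g. ORDER BY time if you need the first rows in time order. If memory is the underlying concern (as the linked duplicate discusses), read_sql_query also accepts a chunksize argument, which makes it return an iterator of DataFrames rather than one large frame. A minimal sketch, reusing the engine and the FSC_Data table name from the question:

import pandas as pd

myQuery = 'SELECT * FROM FSC_Data ORDER BY time LIMIT 10000'

# With chunksize set, read_sql_query yields DataFrames of at most
# 1000 rows each, so the full result never sits in memory at once.
for chunk in pd.read_sql_query(myQuery, engine, index_col=['time'], chunksize=1000):
    print(chunk.shape)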