
I am running a SQL query from a Python API and want to collect the data in structured (column-wise, under headers) .CSV format.

This is the code I have so far:

import csv

sql = "SELECT id, author FROM researches WHERE id < 20"
cursor.execute(sql)
data = cursor.fetchall()
print(data)

with open('metadata.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    header = ['id', 'author']
    writer.writerow(header)
    for row in data:
        writer.writerow(row)

Now the data is printed on the console but does not end up in the .CSV file. This is what I am getting as output:

What is that I am missing?

    What problem are you having? Python has `csv.writer` to write CSV to a file, so all you have to do is read the SQL results into a list. – Barmar Nov 04 '17 at 05:27
  • What have you tried so far? – Mahesh Karia Nov 04 '17 at 05:30
  • The problem is I don't know how to do it. Can you help me with sample code based on this query? – Hayat Nov 04 '17 at 05:30
  • I have edited the code. Now I am able to get the data, but writing it to a .csv file is where I am facing difficulty. – Hayat Nov 04 '17 at 05:37
  • Check out the examples in [the docs](https://docs.python.org/3/library/csv.html#examples). I'm not sure, but you might be able to write rows from the cursor directly into a `csv.writer()`. – Nov 04 '17 at 05:39
  • I went through some. I saw the delimiter examples but they didn't help. – Hayat Nov 04 '17 at 05:41

4 Answers


Here is a simple example of what you are trying to do:

import sqlite3 as db
import csv

# Run your query, the result is stored as `data`
with db.connect('vehicles.db') as conn:
    cur = conn.cursor()
    sql = "SELECT make, style, color, plate FROM vehicle_vehicle"
    cur.execute(sql)
    data = cur.fetchall()

# Create the csv file
with open('vehicle.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    # Add the header/column names
    header = ['make', 'style', 'color', 'plate']
    writer.writerow(header)
    # Iterate over `data`  and  write to the csv file
    for row in data:
        writer.writerow(row)
diek
  • Thanks, @diek. Just one question: after getting everything inside `data`, we open 'vehicle.csv'. Where does this file come from and what part is it playing? I mean, we haven't written data to 'vehicle.csv', so...? – Hayat Nov 05 '17 at 06:07
  • @Hayat Python will create the file. The role that it is playing is the answer to your question, you want to create a csv file. – diek Nov 05 '17 at 15:49
  • @Hayat I added comments to the code, hopefully this will help you understand. Reading this will help you better understand https://docs.python.org/3/tutorial/inputoutput.html – diek Nov 05 '17 at 15:59
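A variation on this answer: instead of hardcoding the header list, the column names can be taken from `cursor.description`, whose first element per column is the column name. This is a sketch using an in-memory sqlite3 database; the table contents here are illustrative:

```python
import sqlite3
import csv

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE vehicle_vehicle (make TEXT, style TEXT, color TEXT, plate TEXT)")
cur.execute("INSERT INTO vehicle_vehicle VALUES ('Ford', 'sedan', 'blue', 'ABC123')")

cur.execute("SELECT make, style, color, plate FROM vehicle_vehicle")
# cursor.description holds one tuple per result column; item [0] is the name
header = [col[0] for col in cur.description]
data = cur.fetchall()

with open('vehicle.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    writer.writerow(header)
    writer.writerows(data)
```

This way the header stays in sync with whatever columns the SELECT actually returns.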
import pandas as pd
from sqlalchemy import create_engine
from urllib.parse import quote_plus

params = quote_plus(r'Driver={SQL Server};Server=server_name;Database=DB_name;Trusted_Connection=yes;')
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

sql_string = '''SELECT id, author FROM researches WHERE id < 20'''

final_data_fetch = pd.read_sql_query(sql_string, engine)
final_data_fetch.to_csv('file_name.csv')

Hope this helps!

t.Chovatia
  • Using Pandas makes it quick and easier to work on data with python, hopefully this code helps, I use it for connecting with DB everyday at work – t.Chovatia Sep 21 '18 at 17:19
  • This method is slower and causes more memory errors if you're pulling in a lot of data (at least for me it does), but is a nicer method for smaller data – Emi OB Oct 07 '21 at 14:20

With MySQL: export a CSV with the mysqlclient library (UTF-8).

import csv
import sys

import MySQLdb as mariadb

tablelue = "extracted_table"

try:
    conn = mariadb.connect(
        host="127.0.0.1",
        port=3306,
        user="me",
        password="mypasswd",
        database="mydb")

    cur = conn.cursor()

    # Get the column names for the header row
    cur.execute("SHOW COLUMNS FROM " + tablelue)
    work = [x[0] for x in cur.fetchall()]

    # Fetch all rows of the table
    cur.execute("SELECT * FROM " + tablelue)
    wdata = cur.fetchall()

    # Create the csv file
    fichecrit = tablelue + ".csv"
    with open(fichecrit, 'w', newline='', encoding="utf8") as f_handle:
        writer = csv.writer(f_handle, delimiter=";")
        # Add the header/column names, then the data rows
        writer.writerow(work)
        writer.writerows(wdata)

    conn.close()

except Exception as e:
    print(f"Error: {e}")

sys.exit(0)
gerard093

You can dump all results to the csv file without looping:

data = cursor.fetchall()
...
writer.writerows(data)
Milovan Tomašević
  • note for large datasets, this will take a HUGE amount of memory and is not feasible – Gabe May 09 '23 at 13:43
  • That was not the topic (processing a large amount of data). But if that were a problem, it could be solved with a `while` loop: read from the database with `fetchmany()` and, as long as there is data, write it out through the loop. An example of an [optimization](https://stackoverflow.com/a/66665212/13155046). – Milovan Tomašević May 09 '23 at 16:51
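A minimal sketch of that `fetchmany()` pattern, using an in-memory sqlite3 database for illustration (the table contents and batch size of 100 are assumptions):

```python
import sqlite3
import csv

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE researches (id INTEGER, author TEXT)")
cur.executemany("INSERT INTO researches VALUES (?, ?)",
                [(i, f"author_{i}") for i in range(1000)])

cur.execute("SELECT id, author FROM researches")

with open('metadata.csv', 'w', newline='') as f_handle:
    writer = csv.writer(f_handle)
    writer.writerow(['id', 'author'])
    while True:
        batch = cur.fetchmany(100)   # at most 100 rows held in memory
        if not batch:
            break
        writer.writerows(batch)
```

The file ends up identical to a single `writerows(cursor.fetchall())`, but peak memory is bounded by the batch size rather than the result-set size.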