I am looking to import around 100 CSV files that all share the same layout. I am using Oracle SQL Developer.
- Please add details of what you have tried and at what point you're encountering a problem. – devlin carnate Feb 06 '20 at 16:13
- Just to make sure: your database is MS SQL Server, and the tool you use is Oracle SQL Developer. Is that correct? – Littlefoot Feb 06 '20 at 16:13
- I have tried to search for methods to do this and can't seem to find anything that does it in one bulk import. Any tips? – Callum Jones Feb 06 '20 at 16:14
- https://stackoverflow.com/questions/6198863/oracle-import-csv-file – Monofuse Feb 06 '20 at 16:15
- @Littlefoot Yes, I believe so. – Callum Jones Feb 06 '20 at 16:15
- So, this didn't work? https://stackoverflow.com/questions/16076309/import-multiple-csv-files-to-sql-server-from-a-folder (I think you need to do a bit more research and tell us what exactly you're attempting and what exactly isn't working... you are not the first person to attempt this task, and there is a lot out there in terms of tutorials, previous questions with answers, etc.) – devlin carnate Feb 06 '20 at 16:19
- That is not the type of SQL I am using, so it does not apply. I am not very caught up with the technologies, apologies. – Callum Jones Feb 06 '20 at 16:31
- Are they all the same definition, going to the same table? If so, then you want external tables or SQL*Loader (sqlldr); a minimal sketch of that route follows these comments. – thatjeffsmith Feb 06 '20 at 16:34
- You need to fix the tags on your question if it's not MS SQL Server. – devlin carnate Feb 06 '20 at 16:38
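Regarding the external tables / SQL*Loader suggestion above: a minimal sketch of the sqlldr route, assuming the utility is installed and on your PATH. The connect string, control file name, folder, and table are placeholders you would replace; the script simply invokes SQL*Loader once per file, reusing one control file because every CSV shares the same column layout.

import glob
import subprocess

# Hypothetical values -- replace with your own connect string, control file, and folder.
CONNECT = "your_user/your_password@yourdb"   # placeholder credentials/DSN
CONTROL_FILE = "load_csv.ctl"                # control file describing the shared CSV layout
CSV_FOLDER = "C:\\your_path\\test\\*.csv"

# Run SQL*Loader once per file; all 100 CSVs can reuse the same control file.
for csv_path in glob.glob(CSV_FOLDER):
    subprocess.run(
        ["sqlldr", f"userid={CONNECT}", f"control={CONTROL_FILE}", f"data={csv_path}"],
        check=True,
    )

The control file (not shown) would declare something like APPEND INTO TABLE your_table with FIELDS TERMINATED BY ',' and the shared column list; see the SQL*Loader documentation for the exact syntax for your data.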
1 Answer
Is this SQL Server or Oracle? Either way, if I were you, I would merge all 100 files into one single file and load that into whichever database you are working with. Python will easily do the merge for you; then load the consolidated file into your DB.
import glob
import os

import pandas as pd

# Optionally switch to the folder that holds the CSV files
# os.chdir("C:\\your_path\\")

# Collect the paths of all CSV files in the folder
filelist = glob.glob("C:\\your_path\\test\\*.csv")

# Read each file and gather the results
frames = []
for filename in filelist:
    print(filename)
    frames.append(pd.read_csv(filename, skiprows=0, index_col=0))

# Concatenate everything into one DataFrame and write it out as a single CSV
results = pd.concat(frames)
results.to_csv('C:\\your_path\\CombinedFile.csv')
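If you then want to load the combined file into Oracle from Python rather than through SQL Developer, a minimal sketch with cx_Oracle might look like the following; the connection details, table name, and column names are placeholders for your own schema.

import cx_Oracle
import pandas as pd

# Hypothetical connection details and table/column names -- adjust to your schema.
conn = cx_Oracle.connect("your_user", "your_password", "dbhost:1521/yourservice")
cur = conn.cursor()

df = pd.read_csv("C:\\your_path\\CombinedFile.csv")

# Insert all rows in one batch; the :1, :2, :3 binds must match the number of CSV columns.
rows = list(df.itertuples(index=False, name=None))
cur.executemany("INSERT INTO your_table (col1, col2, col3) VALUES (:1, :2, :3)", rows)

conn.commit()
cur.close()
conn.close()

Alternatively, once you have the single combined file, you can right-click the target table in SQL Developer and use its Import Data wizard to load it without any extra code.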

ASH