
I have an Azure SQL Server database and a Linux box. I have a CSV file on the Linux machine that I want to import into SQL Server. I have already created the table into which I am going to import this file. I have the following questions:

1) Why does this command return "Unknown argument: -S"?

bcp table in ~/test.csv -S databaseServerName -d dbName -U myUsername -q -c -t

2) How do I import only part of the CSV file? It has 20 columns, but I only want to import 2.

3) My table has these two columns: State, Province. My CSV file has these two columns that I want to import: State, Region. How do I get Province to map to Region?

Aaron
  • Yikes... answered too soon... answer deleted. You cannot load Excel files via BCP. BCP works with either flat text files or with native SQL data files. Didn't see your Excel reference till the end. If you can, if it's not too late, abandon any and all efforts to allow Excel into your data flow :) – jamie Sep 10 '19 at 23:21
  • Is it an Excel file or a text file? Many people open a CSV in Excel and assume it's an Excel file when really it's a text file. – Nick.Mc Sep 11 '19 at 01:35
  • @jamie Sorry about the confusion. It is a csv file, not an excel file. – Aaron Sep 11 '19 at 14:42
  • @Nick.McDermaid Sorry about the confusion. It is a csv file, not an excel file. – Aaron Sep 11 '19 at 14:42
  • For question #1, I am not familiar with unix/linux environments, but can you try using the full path to the file? I assume the "~" notation is some form of "current directory" but it's not used in the Microsoft world. Maybe that is giving bcp fits? Just try your command with the full path of the file you want to BCP out to. Last, you don't want to share your servername, and shouldn't, but just confirm your servername and tablename do not break any naming rules. Any spaces in names? That kind of thing. – jamie Sep 11 '19 at 15:49
  • It certainly matches the sample here https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-migrate-bcp?view=sql-server-2017#import-data-from-the-source-data-file except you haven't specified a terminator – Nick.Mc Sep 11 '19 at 23:56
  • BCP can't be used to transform - only import. The typical pattern is to import everything into a staging table with BCP, then clean it up and put it into the final table with T-SQL (a rough sketch of that pattern follows these comments). – Nick.Mc Sep 12 '19 at 00:04
  • If you consider leaving selected columns behind or rearranging the order of columns a form of transformation, then yes, BCP does transform. In a strict sense, I agree, BCP is not a transformation tool. But to address what the OP wanted to do, yes, BCP can do that, and BCP could be a perfectly acceptable place to do that. – jamie Sep 13 '19 at 20:01
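
A rough, untested sketch of that staging pattern from the Linux command line, assuming a staging table dbo.StagingImport with one column per CSV field and a final table dbo.MyTable (State, Province); the table names and the -P password switch here are placeholders:

# load every CSV field into the staging table; -c is character mode, -t ',' sets the field terminator, -F 2 skips the header row
bcp dbo.StagingImport in ~/test.csv -S databaseServerName -d dbName -U myUsername -P myPassword -c -t ',' -F 2

# then move just the two needed columns into the final table, mapping Region to Province along the way
sqlcmd -S databaseServerName -d dbName -U myUsername -P myPassword -Q "INSERT INTO dbo.MyTable (State, Province) SELECT State, Region FROM dbo.StagingImport;"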

2 Answers


For #2 and #3, you need to use a BCP format file. This gives you column-level control over which fields from the file go to which columns in the destination and which are left behind (not given a destination).

Use the -f option of BCP and specify the location and name of the format file you want to use. Sorry, no help yet with #1. I have a few questions/suggestions, but I'm not that familiar with Linux environments.
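
A rough sketch of what such a format file might look like, assuming (purely for illustration) a 4-field CSV where State is the 2nd field and Region the 3rd, and a target table dbo.MyTable (State, Province); your real file would need one line per CSV field, i.e. 20 of them. In a non-XML format file, a server column order of 0 (the sixth column) means the field is read from the file but not loaded, the mapping is done by that order number (the server column name is mostly for readability), the last field's terminator doubles as the row terminator ("\n" for a Linux-generated file, "\r\n" for a Windows one), and the 14.0 on the first line is the format-file version, which should match your bcp version:

14.0
4
1   SQLCHAR   0   100   ","     0   Skip1      ""
2   SQLCHAR   0   100   ","     1   State      SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   100   ","     2   Province   SQL_Latin1_General_CP1_CI_AS
4   SQLCHAR   0   100   "\n"    0   Skip2      ""

Then point bcp at it (server, database, login, and -P are placeholders; -F 2 skips the header row):

bcp dbo.MyTable in ~/test.csv -f state.fmt -S databaseServerName -d dbName -U myUsername -P myPassword -F 2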

jamie

For part 2 of your question, you can use the Linux cut command to extract just the columns you want. A short awk script can do the same thing (see this SO answer). For both of these, you'll have to identify the "State" and "Region" columns by number. A non-native solution is querycsv.py, which can also rename the "Region" column (disclaimer: I wrote querycsv.py).
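
For example, if State and Region happened to be the 5th and 12th of the 20 fields (adjust the numbers to your file), something like this would write a two-column file for bcp to load; note that cut splits on the literal delimiter, so this only works if no field contains an embedded, quoted comma:

cut -d ',' -f 5,12 ~/test.csv > ~/test_2cols.csv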

For part 3 of your question, you can use the Linux sed command to change the column name on the first line of the CSV file, e.g., sed -e "1s/Region/Province/" file.csv >file2.csv.

rd_nielsen