86

I'm trying to insert a large CSV file (several gigs) into SQL Server, but once I go through the Import Wizard and finally try to import the file I get the following error report:

  • Executing (Error)

Messages

Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Title" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".

(SQL Server Import and Export Wizard)

Error 0xc020902a: Data Flow Task 1: The "Source - Train_csv.Outputs[Flat File Source Output].Columns["Title"]" failed because truncation occurred, and the truncation row disposition on "Source - Train_csv.Outputs[Flat File Source Output].Columns["Title"]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

(SQL Server Import and Export Wizard)

Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "C:\Train.csv" on data row 2.

(SQL Server Import and Export Wizard)

Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Train_csv returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

(SQL Server Import and Export Wizard)

I created the table to insert the file into first, and I set each column to hold varchar(MAX), so I don't understand how I can still have this truncation issue. What am I doing wrong?
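For reference, the destination table looks something like this (`Title` is the real column from the error; the other column names here are just placeholders):

```sql
-- Sketch of the destination table: every column is varchar(MAX),
-- which is why the truncation error is so confusing.
CREATE TABLE dbo.Train
(
    Id    varchar(MAX),  -- placeholder column name
    Title varchar(MAX),  -- the column named in the error
    Body  varchar(MAX)   -- placeholder column name
);
```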

Ajeet Verma
GMS
  • Take a look at the second row of data. Two likely causes are empty fields and/or commas in the field. – Dan Bracuk Sep 03 '13 at 19:43
  • I checked the second row, and the field in each column looks fine. No empty, no NULL, no commas. – GMS Sep 03 '13 at 19:49

6 Answers

201

In the SQL Server Import and Export Wizard you can adjust the source data types in the Advanced tab (these become the data types of the output if creating a new table, but otherwise are just used for handling the source data).

The data types are annoyingly different from those in MS SQL: instead of VARCHAR(255) it's DT_STR with an output column width of 255, and for VARCHAR(MAX) it's DT_TEXT.

So, on the Data Source selection, in the Advanced tab, change the data type of any offending columns from DT_STR to DT_TEXT (you can select multiple columns and change them all at once).

[Screenshot: Import and Export Wizard - Data Source - Advanced]
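As an aside, if you'd rather sidestep the wizard's type handling entirely, a plain BULK INSERT into the pre-created table uses the destination column types directly, so varchar(MAX) columns won't truncate. A minimal sketch, assuming a simple comma-separated file with a header row and no quoted fields containing commas (path and table name taken from the question; adjust to suit):

```sql
-- Rough T-SQL equivalent of the wizard import.
-- Caveat: plain BULK INSERT does not handle quoted fields containing commas.
BULK INSERT dbo.Train
FROM 'C:\Train.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2   -- skip the header row
);
```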

Hart CO
  • 1
    It looks like this did the trick after I maxed out the column lengths! Thanks a lot – GMS Sep 03 '13 at 20:09
  • Incidentally, this trick also appears to work in SSIS 2012. thank you – David Barrows Jan 25 '14 at 13:36
  • 2
    I had to change columns to `DT_DATE`, `DT_NUMERIC`, etc. That was the most tedious and annoying thing I've done in a while. But it's better than creating a new table with all varchar(50) columns. That's really not helpful. – Jess Aug 13 '15 at 02:32
  • 1
    This worked for me as well, even though none of the data was longer than 55 characters once it was in SQL Server. Strange... – John Pasquet Dec 16 '15 at 20:44
  • FYI - you can SHIFT-click on all columns in the Advanced tab to apply datatype changes en masse. Yes, very annoying that the datatype names differ from the standard. Even knowing all this, I still routinely run into unexpected conversion problems when importing, and the solution is usually a process of blind trial and error. – HamishKL Apr 12 '16 at 21:41
  • I have no `advanced` tab ;( – pookie Apr 30 '16 at 20:33
  • Edit: Right... because there is no such tab when importing from Excel, but it is there for `Flat file format`. Working! – pookie Apr 30 '16 at 21:10
  • Awesome, @HamishKL, that's amazing. I had been trying to sort this out for a whole day. Thanks much – Jeya Suriya Muthumari Sep 22 '16 at 15:06
  • I am importing from a flat file, and I still don't see an "Advanced" tab – Casey Crookston Feb 06 '17 at 15:53
  • @CaseyCrookston Odd that, which version of SQL Server are you using? – Hart CO Feb 06 '17 at 16:12
  • 1
    I found it! I was looking for a "tab" across the top of the window – Casey Crookston Feb 06 '17 at 16:30
  • 2
    Curiously, if I use `Suggest Types...` and have it scan all 6000 rows, it changes the length value of all of the columns, presumably to the highest it encountered, yet the error remains. The only fix for me was changing all of them to `DT_TEXT`. Finding the offending columns one at a time was taking ages. – Sinjai Jun 29 '17 at 19:51
  • @Sinjai That is curious, good to know that `Suggest Types` is flawed. – Hart CO Jun 29 '17 at 19:54
  • Also worth noting that I did not need to use [David's answer](https://stackoverflow.com/a/34930080/5043056), but I don't believe any of my cells actually exceeded 50 characters, so I'm willing to bet some people may need to do both. – Sinjai Jun 29 '17 at 20:13
  • Thank you very much for this! I just wanted to add, that I had to modify my csv table and remove the 0x byte annotation from the varbinary column. – FrankKrumnow Sep 07 '17 at 10:06
  • Where can I find what csv data types map to what sql server data types? – BamBam22 Jan 16 '19 at 20:49
  • @BamBam22 CSVs are just strings; do you mean the SSIS types compared to the SQL Server types? Separately, SSIS/SQL Server import can 'suggest types', but it's not always great at that. – Hart CO Jan 16 '19 at 20:51
  • @HartCO I think so. This answer says that VARCHAR(255) maps to DT_STR. Where do I find that myself? – BamBam22 Jan 16 '19 at 21:00
  • 1
    @BamBam22 Good question, this is probably worth starting a new question for, but I did find this list which seems comprehensive: http://www.sqlservercentral.com/blogs/dknight/2010/12/22/ssis-to-sql-server-data-type-translations/ – Hart CO Jan 16 '19 at 21:05
  • 1
    just ran into this in sql server 2016 importing a bunch of json data. – DForck42 Oct 10 '19 at 18:11
  • I ran into this error: `Conversion from "DT_TEXT" with code page 65001 to "DT_STR" with code page 1252 is not supported. (SQL Server Import and Export Wizard)`. **EDIT**: I can choose the Code Page at the beginning of the SQL Server Import dialog, [as seen here](https://stackoverflow.com/a/54571082/1175496) – Nate Anderson Jan 23 '21 at 02:52
  • I also ran into an error: `An unexpected disk I/O error occurred while reading the file. (SQL Server Import and Export Wizard) Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED`. I'm trying to work around this by moving the [file to another disk](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/e792aadd-2ef8-4c5f-ba78-4a5914fe6a66/an-disk-io-time-out-occurred-while-reading-the-file-in-ssis-error-0xc020209e?forum=sqlintegrationservices) – Nate Anderson Jan 23 '21 at 04:28
2

This answer may not apply universally, but it fixed the error I was encountering when importing a small text file. The flat file provider was importing based on fixed 50-character text columns in the source, which was incorrect. No amount of remapping the destination columns affected the issue.

To solve the issue, in the "Choose a Data Source" step for the flat-file provider, after selecting the file, a "Suggest Types.." button appears beneath the input column list. After hitting this button, even with no changes made in the ensuing dialog, the Flat File provider re-queried the source .csv file and correctly determined the lengths of the fields in the source file.

Once this was done, the import proceeded with no further issues.

David W
0

I think it's a bug; please apply the workaround and then try again: http://support.microsoft.com/kb/281517.

Also, go into the Advanced tab and confirm that the target columns' lengths are set to VARCHAR(MAX).

Sonam
  • They are definitely varchar(MAX). I also went into Advanced and made each column width 8000 characters. Now I am only getting this error for the last column. – GMS Sep 03 '13 at 20:01
0

The Advanced Editor did not resolve my issue; instead I was forced to edit the dtsx file through Notepad (or your favorite text/XML editor) and manually replace the attribute values with:

`length="0" dataType="nText"` (I'm using Unicode)

Always make a backup of the dtsx file before you edit it in text/XML mode.

Running SQL Server 2008 R2

dbd
0

Go to the Advanced tab, then to the column's data type, and change the data type from DT_STR to DT_TEXT with a column width of 255. It should then work perfectly.

Lokesh
0

Issue: The Jet OLE DB provider reads a registry key to determine how many rows it should read to guess the type of each source column. By default, the value of this key is 8, so the provider scans the first 8 rows of the source data to determine the data types of the columns. If a field looks like text and its data is longer than 255 characters, the column is typed as a memo field; so if no value in the first 8 rows is longer than 255 characters, Jet cannot accurately determine the nature of the data type. Because the data in the first 8 rows of the exported sheet was shorter than 255 characters, Jet treated the source column as VARCHAR(255) and was unable to read the longer values further down.

Fix: The solution is simply to sort the comment column in descending order, so that the longest values land among the first rows Jet scans. From SQL Server 2012 onwards, you can instead update the values in the Advanced tab of the Import Wizard, or raise Jet's row-scan limit in the registry as sketched below.
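If sorting isn't practical, the usual registry workaround (the mechanism behind the KB article linked in another answer) is to raise the number of rows Jet samples via the TypeGuessRows value. A sketch as a .reg file; the exact key path depends on the Jet/ACE version and bitness on your machine, so verify it before importing:

```
Windows Registry Editor Version 5.00

; 0 tells Jet to scan a much larger sample (commonly reported as up to
; 16,384 rows) before guessing column types, instead of the default 8.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel]
"TypeGuessRows"=dword:00000000
```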

Tapas