
I have the following table in redshift:

 Column | Type
 id       integer
 value    varchar(255)

I'm trying to copy data in (using Data Pipeline's RedshiftCopyActivity), and the file contains the line `1,maybe` as the entry being added, but I get back error 1214: Delimiter not found, and the raw_field_data value is `maybe`. Is there something I'm missing in the copy parameters?

The entire CSV is three lines:

1,maybe
2,no
3,yes
  • can you segregate the values in two (or n, depending upon the number of attributes in your csv) different columns, remove the `,` and then give it a try? Also, does the destination support two attributes? – Yusuf Hassan Jan 30 '18 at 05:48
  • try changing your end of line byte to just what is expected in linux? was this prepared in windows? I stopped using AWS datapipline a long time ago because of so many issues like this. it is very hard to debug! – Jon Scott Jan 30 '18 at 08:43

1 Answer


You may want to take a look at the similar question Redshift COPY command delimiter not found.

Make sure your RedshiftCopyActivity configuration includes FORMAT AS CSV (see https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-format.html#copy-csv).

Be sure your input data has your configured delimiter between every field, even in the case of nulls.

Be sure you do not have any trailing blank lines.
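For reference, this is roughly the COPY statement that the activity should end up issuing for a two-column CSV like yours. It is a sketch: the bucket path and IAM role ARN below are placeholders you would replace with your own values.

```sql
-- Hypothetical S3 path and IAM role; substitute your own.
-- FORMAT AS CSV makes Redshift parse quoted fields and commas correctly.
COPY my_table (id, value)
FROM 's3://my-bucket/path/input.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV;
```

Note that the S3 path acts as a prefix: every object starting with it is loaded, so make sure no other files with a different schema share that prefix.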

You can run the following SQL (from the linked question) to see more specific details about which row is causing the problem.

SELECT le.starttime, 
  d.query, 
  d.line_number, 
  d.colname, 
  d.value,
  le.raw_line,
  le.err_reason 
FROM stl_loaderror_detail d
JOIN stl_load_errors le
ON d.query = le.query
ORDER BY le.starttime DESC;
  • So it turned out that I had multiple files in my S3 bucket that started with the same series of characters, and the RedshiftCopyActivity in Data Pipeline was trying to copy those files (which had a different schema) into the target table. – CBredlow Feb 01 '18 at 05:30