
I have a table in PostgreSQL with many records and a unique key:

CREATE TABLE parcels
(
    Id SERIAL PRIMARY KEY NOT NULL,
    Number CITEXT NOT NULL,
    UserId INT REFERENCES Stations(Id) NOT NULL,
    TimeStampUtc TIMESTAMP WITHOUT TIME ZONE NOT NULL
);
CREATE UNIQUE INDEX ON parcels (Number, UserId, (TimeStampUtc::date));

Then I import data from Excel and map it to a list. At the end I have something like:

 await _dbContext.Parcels.AddRangeAsync(parcels);
 await _dbContext.SaveChangesAsync();

parcels contains about 20,000 records and can contain records which violate the unique constraint. In that case I need to skip those records and continue inserting.

Currently I get the expected error:

Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while updating the entries. See the inner exception for details. ---> Npgsql.PostgresException: 23505: duplicate key value violates unique constraint "parcels_number_userid_timestamputc_idx"

How can I ignore it and continue inserting?

I found some similar questions like this and this, but I don't want to load the table into memory, because there are too many records to compare.

TChebur

1 Answer


You should be able to do the following:

  1. create a stored procedure that receives the parcels and performs an insert combined with an anti-join on the columns of the unique key constraint (i.e. in the WHERE condition, filter out all rows that already exist in the table)
  2. call the stored procedure from your code and pass the parcels
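
The steps above can be sketched roughly as follows (a hypothetical, untested sketch: the procedure name, the use of the table's row type as the parameter, and PostgreSQL 11+ for `CREATE PROCEDURE` are all assumptions):

```sql
-- Hypothetical: receives the parcels as an array of the table's row type
-- and inserts only rows that do not collide with the unique index.
CREATE PROCEDURE insert_parcels(_parcels parcels[])
LANGUAGE sql
AS $$
    INSERT INTO parcels (Number, UserId, TimeStampUtc)
    SELECT p.Number, p.UserId, p.TimeStampUtc
    FROM unnest(_parcels) AS p
    -- anti-join: skip rows that already exist for this Number/UserId/date
    WHERE NOT EXISTS (
        SELECT 1
        FROM parcels e
        WHERE e.Number = p.Number
          AND e.UserId = p.UserId
          AND e.TimeStampUtc::date = p.TimeStampUtc::date
    )
    -- also guards against duplicates inside the batch itself
    -- and against concurrent inserts
    ON CONFLICT DO NOTHING;
$$;
```

From EF Core you would then call this with something like `_dbContext.Database.ExecuteSqlRawAsync(...)` instead of `AddRangeAsync` + `SaveChangesAsync`.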

In case you need the artificial id back, there are two ways to get it:

  • return the id from the insert statement (RETURNING id) that the database generated automatically for you
  • generate the id on your client (this usually only works well with GUID columns)
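
For the first option, the filtered insert can be wrapped in a function instead of a procedure so that the generated ids come back to the caller (again a hypothetical sketch; the function name and signature are assumptions):

```sql
-- Hypothetical: the same filtered insert, but as a function that
-- returns the ids the database generated for the inserted rows.
CREATE FUNCTION insert_parcels_returning(_parcels parcels[])
RETURNS SETOF int
LANGUAGE sql
AS $$
    INSERT INTO parcels (Number, UserId, TimeStampUtc)
    SELECT p.Number, p.UserId, p.TimeStampUtc
    FROM unnest(_parcels) AS p
    ON CONFLICT DO NOTHING  -- skipped duplicates yield no id
    RETURNING Id;
$$;
```

Note that rows skipped by `ON CONFLICT DO NOTHING` do not appear in the `RETURNING` set, so the caller only receives ids for the rows that were actually inserted.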
CitrusO2