I've populated an array with hashes of data from a JSON API that I want to load into a SQLite table. The (composite) primary key of this table is generated such that a later JSON pull won't contain any new information associated with an old primary key, but /will/ contain new information with new primary keys. (Specifically, it's keyed by the combination of date and an ID uniquely identifying each object I need information on.)
Rather than fiddling awkwardly with the API parameters, I'd prefer to have the program simply discard information that's already in the table, given the above, but I can't get the loop to skip properly.
use DBI;
use 5.010;

# Downloading, processing, sanitising, etc. omitted

foreach my $hash (@{$data{$key}}) {
    my %data = %{$hash->{'row'}};
    my $date = $data{'date'};
    my $id   = $data{'id'};

    my @pkey = [$date, $id];    # Generate a Perl array corresponding to the primary key, e.g. ['2013-09-10','0001']
    next if @pkey ~~ @allkeys;  # 5.010 smart match: skip this entry if the primary key has already been processed
    push @allkeys, [@pkey];

    # Other stuff omitted -- nothing to do with the pkey issue

    my $sql = sprintf "INSERT INTO %s (date,id,low,high,avg) VALUES ('%s', %s, %s, %s, %s)",
        $table, $date, $id, $low, $high, $avg;    # Values initialised earlier in the script
    $dbh->do($sql);
}
That's what I'm trying at the moment, but it still dies on the first INSERT. Simpler, perhaps, would be to have the loop skip on the particular error 'DBD::SQLite::db do failed: columns date, typeID are not unique' rather than dying, but I haven't the foggiest idea where to begin with that.
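From a skim of the DBI docs, I'm guessing something like the following untested sketch might do it, assuming the handle was created with RaiseError => 1 so a failed insert throws rather than just returning undef:

eval {
    $dbh->do($sql);             # dies on a duplicate (date,id) if RaiseError is on
};
if (my $err = $@) {
    next if $err =~ /not unique/;   # duplicate primary key: just skip this row
    die $err;                       # any other error should still be fatal
}

Matching on the error text feels fragile, though, and I've also seen mention of SQLite's INSERT OR IGNORE, which apparently drops conflicting rows silently, if that counts as "skipping".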
For completeness, the database schema:
CREATE TABLE table (
    date TEXT,
    id   INTEGER,
    low  REAL,
    high REAL,
    avg  REAL,
    PRIMARY KEY (date, id)
);
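The other idea I had was to pre-load the keys that are already in the table before the loop, since (as far as the snippet shows) @allkeys only ever contains keys seen during the current run. Something like this untested sketch, using a hash of joined "date|id" strings instead of smart match (the '|' separator is arbitrary):

# Untested: build a lookup of (date,id) pairs already in the table,
# then skip any incoming row whose pair is already known.
my %seen;
my $rows = $dbh->selectall_arrayref("SELECT date, id FROM $table");
$seen{ join '|', @$_ } = 1 for @$rows;

foreach my $hash (@{$data{$key}}) {
    my %data = %{$hash->{'row'}};
    my $pk = join '|', @data{'date','id'};
    next if $seen{$pk};     # already in the table, or already seen this run
    $seen{$pk} = 1;
    # ... insert as before ...
}

Is that a sensible direction, or is there a cleaner way to do this?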