I'm writing a script that executes the same MERGE statement against several different databases. The table names and columns are identical in each schema I'm connecting to, but the schema names themselves differ. Rather than define two separate queries, what I would prefer to do is define one MERGE statement as a string with the schema name set to '?', and then pass in the needed schema name at execution time:
my $hostgroup_merge = "MERGE INTO ?.HOSTGROUPS .... ";
And then later:
my $dev_schema = 'DEV';
my $prod_schema = 'PROD';
#do the merge in the dev schema
my $dbh = DBI->connect($dev_server,$user,$pass);
my $sth = $dbh->prepare($hostgroup_merge);
$sth->execute($dev_schema);
$sth->finish();
$dbh->disconnect;
#now do the same merge for the prod schema
$dbh = DBI->connect($prod_server,$user,$pass);
$sth = $dbh->prepare($hostgroup_merge);
$sth->execute($prod_schema);
$sth->finish();
$dbh->disconnect;
This fails with an invalid table name error:
DBD::Oracle::st execute failed: ORA-00903: invalid table name (DBD ERROR: error possibly near <*> indicator at char 19 in '
MERGE INTO :p1<*>.HOSTGROUPS ...
However, it seems to be picking up the bind variable just fine, since it spits this out after the query:
" with ParamValues: :p1='DEV']
I know it's the bind variable that's causing the problem, because when I define separate merge statements with the schema names hard-coded, they work just fine.
Since the only thing that changes is the schema name at the very beginning, I could probably hack the query together by putting the unchanging part of the statement in a string and concatenating the rest onto the MERGE INTO clause:
my $dev_merge = 'MERGE INTO ' . $dev_schema . $merge_statement;
And then just run the prepare on that. But is there a legitimate way to pass the schema name in via the query? If so, can anybody point out what's going wrong here?
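For reference, here's a minimal sketch of the concatenation fallback I have in mind. It assumes the rest of the MERGE body lives in a `$merge_statement` string (the `...` part above, stubbed out here), and it guards the interpolated schema name with a whitelist pattern, since the name goes into the SQL text rather than through a bind variable:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stub for the real MERGE body ("...." in the statement above).
my $merge_statement = ' USING DUAL ON (1 = 0) WHEN NOT MATCHED THEN INSERT (ID) VALUES (1)';

# Build the full statement for a given schema. Because the schema name
# is interpolated into the SQL text (not bound), restrict it to a
# simple identifier pattern so unexpected input can't alter the query.
sub build_merge {
    my ($schema) = @_;
    die "unexpected schema name: $schema"
        unless $schema =~ /\A[A-Z][A-Z0-9_]*\z/;
    return "MERGE INTO $schema.HOSTGROUPS" . $merge_statement;
}

print build_merge('DEV'), "\n";
```

The prepare/execute sequence would then run on `build_merge('DEV')` or `build_merge('PROD')` instead of a statement with a `?` in the identifier position.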