You have several options to deal with this problem.
The obvious one is to normalize your database and merge all the data into a single table that contains a "month" column identifying the month the data belongs to.
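A minimal sketch of such a normalized table; as the actual structure of your summary tables isn't shown, the columns other than `report_month` are placeholders:

```sql
-- One table replaces all the monthly summary tables.
-- item_id and total are hypothetical columns.
create table summary
(
  report_month date    not null,  -- e.g. date '2017-11-01' for November 2017
  item_id      integer not null,
  total        numeric
);

-- Migrate each monthly table into it, tagging the rows with their month:
insert into summary (report_month, item_id, total)
select date '2017-11-01', item_id, total
from summary201711;
```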
This looks like a manually created "partitioning" scheme, so another choice you have is to convert that into a table that uses Postgres' built-in declarative partitioning.
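With declarative partitioning (available since Postgres 10) that could look roughly like this; the column names are again assumptions:

```sql
-- The parent table is partitioned by the month column:
create table summary
(
  report_month date    not null,
  item_id      integer not null,
  total        numeric
) partition by range (report_month);

-- One partition per month, created as needed:
create table summary_2017_11
  partition of summary
  for values from (date '2017-11-01') to (date '2017-12-01');

create table summary_2017_12
  partition of summary
  for values from (date '2017-12-01') to (date '2018-01-01');
```

Queries then simply go against `summary`, and a `where report_month = ...` condition lets the planner skip the irrelevant partitions.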
As a quick fix, you could simply create a view that does a UNION ALL over all the tables. That way you have that huge union
only once in your database and you can use a simple select * from summary_table
in your report.
The report job then doesn't need to be changed. You only need to change the view when you create a new summary table (something you wouldn't need to do at all with a normalized model or with partitioning).
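Such a view could be defined like this, assuming three monthly tables exist; `union all` is preferable to `union` here because it avoids an unnecessary duplicate-elimination step:

```sql
create or replace view summary_table as
select * from summary201711
union all
select * from summary201712
union all
select * from summary201801;  -- extend this list when a new table appears
```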
Alternatively, you can create a function that loops through all matching tables and returns the contents of each one:
create function get_summary()
  returns setof summary201711
as
$$
declare
  l_sql text;
  l_rec record;
begin
  for l_rec in select schemaname, tablename
               from pg_tables
               where schemaname = 'public'
                 and tablename like 'summary%' --<< adjust this to match your pattern
  loop
    l_sql := format('select * from %I.%I', l_rec.schemaname, l_rec.tablename);
    return query execute l_sql;
  end loop;
end;
$$
language plpgsql;
It doesn't really matter which of the tables you use for the returns setof summary201711, as they all have to have an identical structure anyway.
Once you have that function you can use it like this:
select *
from get_summary();
Not sure how fast that is going to be, though. My guess is that a view (or any form of partitioning) will be faster.