
This query successfully returns all the rows (around 850) from the table:

select * from my_db_log
 where date_trunc('day',creation_date) >= to_date('2014-03-05'::text,'yyyy-mm-dd');

But when I use count(*) with the same query, like below:

select count(*) from my_db_log
 where date_trunc('day',creation_date) >= to_date('2014-03-05'::text,'yyyy-mm-dd');

then it returns:

********** Error **********

ERROR: attribute number 10 exceeds number of columns 0
SQL state: XX000

FYI: creation_date is the 10th column of my table.

Interestingly, when I replace count(*) with count(id), it returns 0, even though the table contains records that meet the condition.

EDIT: I tried the vacuumdb command on the whole database, but it still doesn't work. Here is the verbose output of vacuumdb for this particular table.

>vacuumdb --full --analyze -h 192.168.1.10 -p 8888 -U root -W --verbose --table my_db_log my_db
Password:

Output:

INFO:  vacuuming "public.my_db_log"
INFO:  "my_db_log": found 0 removable, 0 nonremovable row versions in 0 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "public.my_db_log"
INFO:  "my_db_log": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
Sabuj Hassan
  • Could be a corrupt index. Does rebuilding the indexes on that table help? (a sketch follows after these comments) – a_horse_with_no_name Mar 10 '14 at 08:26
  • @a_horse_with_no_name re-indexing didn't work, but the problem is gone after doing a dump/restore. Maybe I had some filesystem corruption on my db server! – Sabuj Hassan Mar 10 '14 at 12:31
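
For reference, rebuilding all indexes on the affected table, as suggested in the first comment, would look roughly like this (a minimal sketch; run it while connected to the database that holds my_db_log):

REINDEX TABLE my_db_log;

The reindexdb client can do the same from the shell, e.g. reindexdb --table my_db_log my_db.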

1 Answer


Something is broken in your database. Try

VACUUM FULL ANALYZE my_db_log;

Or, more radically, run from the shell on your db server:

vacuumdb --full --analyze my_database

Details in the manual.

The error message indicates breakage in the system catalog pg_attribute or one of its associated indexes. Before you do anything else, read about corruption in the Postgres Wiki. Be very careful not to lose valuable data.
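
To get an idea whether the catalog entries for the table are still intact, you could inspect pg_attribute directly (a minimal sketch; it assumes my_db_log is visible in your search_path):

SELECT attnum, attname, atttypid::regtype
FROM   pg_attribute
WHERE  attrelid = 'my_db_log'::regclass
AND    attnum > 0
ORDER  BY attnum;

With a healthy catalog this lists every column of the table, including creation_date at position 10.
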
Then one other thing to try:

reindexdb --system my_database
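
The rough equivalent from inside psql, while connected to that database, would be (REINDEX SYSTEM only accepts the name of the database you are connected to):

REINDEX SYSTEM my_database;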

If nothing helps and you need to repair your obviously broken DB, you could try to pg_dumpall the whole cluster, drop the cluster, create a new one and restore the backup. Also make sure you find out what broke your DB in the first place. That does not usually happen (it has never happened to me, yet). Chances are you are facing serious hardware problems, in which case you need to act soon ...
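
A rough outline of that last resort, assuming a data directory at /var/lib/postgresql/data (paths, port and service commands vary by installation):

pg_dumpall > cluster_backup.sql                        # dump all databases plus roles
pg_ctl stop  -D /var/lib/postgresql/data               # stop the old cluster
mv /var/lib/postgresql/data /var/lib/postgresql/data.broken
initdb -D /var/lib/postgresql/data                     # initialize a fresh cluster
pg_ctl start -D /var/lib/postgresql/data
psql -f cluster_backup.sql postgres                    # restore into the new cluster

Keep the old data directory around until you have verified the restored data.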

Erwin Brandstetter
  • the `vacuumdb` didn't work for me. I have updated the question with the verbose output. However, dumping the table into another db without changes does work. – Sabuj Hassan Mar 10 '14 at 08:01
  • I don't see any mention of 850 *columns* in the question. *"rows (around 850+)"* indicates 850 *rows* to me. But I don't understand how the first query can return 850 rows if vacuum claims the table is empty. – a_horse_with_no_name Mar 10 '14 at 08:05
  • @a_horse_with_no_name: Ah right, thanks. I had misread that. Removed the misguided remark. – Erwin Brandstetter Mar 10 '14 at 08:14
  • Why is the shell command "more radical"? The doc says: "There is no effective difference between vacuuming and analyzing databases via this utility and via other methods for accessing the server." – exhuma Mar 10 '14 at 08:22
  • @exhuma: Because the second option vacuums the whole db, not just the one table. – Erwin Brandstetter Mar 10 '14 at 08:23
  • `vacuum` without specifying a table will also do it for the whole DB –  Mar 10 '14 at 08:25
  • @ErwinBrandstetter Ah, I see. I missed that difference in your commands :) – exhuma Mar 10 '14 at 08:30
  • @ErwinBrandstetter The problem vanished after doing a dump and restore. Also, I had only `2GB` of space left on my db server; I have freed up some space there as well. So I'm inclined to believe the guess (corruption or some other filesystem error!) that you mentioned earlier. – Sabuj Hassan Mar 10 '14 at 12:29