From: | "Paul B(dot) Anderson" <paul(dot)a(at)pnlassociates(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "[ADMIN]" <pgsql-admin(at)postgresql(dot)org> |
Subject: | Re: Vacuum error on database postgres |
Date: | 2006-09-01 15:04:36 |
Message-ID: | 44F84C04.1030805@pnlassociates.com |
Lists: | pgsql-admin pgsql-hackers |
I removed the duplicates and then immediately reindexed. All is well.
The vacuum analyze on the postgres database works now too. Thanks.
It is good to know the pg_statistic table can be emptied in case this
ever happens again.
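For the archives, the recovery sequence described in this thread can be sketched as SQL. This is a worst-case reset, not the targeted ctid-based duplicate removal actually used above; run it as a superuser, and note it is an untested sketch:

```sql
-- pg_statistic holds only derived data, so per Tom's advice it is
-- safe to empty and rebuild it in the worst case.
DELETE FROM pg_statistic;

-- Rebuild the catalog's indexes in case they were corrupted.
REINDEX TABLE pg_statistic;

-- Database-wide ANALYZE repopulates pg_statistic from scratch.
ANALYZE;
```

The targeted variant (deleting only the duplicate rows by ctid before reindexing) avoids losing statistics for unaffected tables, at the cost of first identifying the duplicates.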
Paul
Tom Lane wrote:
> "Paul B. Anderson" <paul(dot)a(at)pnlassociates(dot)com> writes:
>
>> I did delete exactly one of each of these using ctid and the query then
>> shows no duplicates. But, the problem comes right back in the next
>> database-wide vacuum.
>>
>
> That's pretty odd --- I'm inclined to suspect index corruption.
>
>
>> I also tried reindexing the table.
>>
>
> Get rid of the duplicates (actually, I'd just blow away all the
> pg_statistic entries for each of these tables) and *then* reindex.
> Then re-analyze and see what happens.
>
> Worst case you could just delete everything in pg_statistic, reindex it,
> do a database-wide ANALYZE to repopulate it. By definition there's not
> any original data in that table...
>
> regards, tom lane
>