Re: cache lookup failed for index

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Willy-Bas Loos <willybas(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: cache lookup failed for index
Date: 2016-06-29 14:26:51
Message-ID: 6841.1467210411@sss.pgh.pa.us
Lists: pgsql-general

Willy-Bas Loos <willybas(at)gmail(dot)com> writes:
> So what I don't get is, if the above is the case: if pg_dump expects to
> find an index, it already knows about its existence. Then why does it
> need to look for it again?

Because what it does is:

BEGIN ISOLATION LEVEL REPEATABLE READ; -- run in a single transaction
SELECT ... FROM pg_class; -- find out what all the tables are
LOCK TABLE foo IN ACCESS SHARE MODE; -- repeat for each table to be dumped

after which it runs around and collects subsidiary data such as what
indexes exist for each table. But the transaction's view of the catalogs
was frozen at the start of the first SELECT. So it can see entries for
an index in pg_class and pg_index even if that index got dropped between
transaction start and where pg_dump was able to lock the index's table.
pg_dump itself can't tell the index is no longer there, but some of the
backend functions it calls (pg_get_indexdef(), for example) look the index
up in the system caches, which reflect the latest committed catalog state
rather than the transaction's snapshot; those lookups find nothing and
throw "cache lookup failed" errors.
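
To see the same race in isolation, here is a minimal two-session
reproduction sketch (the table and index names are made up, and
pg_get_indexdef() stands in for the catalog-reading functions pg_dump
invokes); run each session's statements in its own psql connection, in
the order shown:

-- Setup, done once beforehand (names are hypothetical):
--   CREATE TABLE t (x int);
--   CREATE INDEX t_idx ON t (x);

-- Session 1: open a dump-style transaction; the first query freezes
-- the snapshot, just like pg_dump's first SELECT FROM pg_class.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT c.oid FROM pg_class c WHERE c.relname = 't_idx';

-- Session 2: concurrently drop the index (and commit).
DROP INDEX t_idx;

-- Session 1: the frozen MVCC snapshot still shows the index's pg_class
-- row, so this query finds it and hands its OID to pg_get_indexdef();
-- that function consults the system caches, which do see the DROP:
SELECT pg_get_indexdef(c.oid) FROM pg_class c WHERE c.relname = 't_idx';
-- ERROR:  cache lookup failed for index ...
ROLLBACK;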

There are various ways this might be rejiggered, but none of them
entirely remove all risk of failure in the presence of concurrent DDL.
Personally I'd recommend just retrying the pg_dump until it succeeds.

regards, tom lane
