Re: pg_dump and thousands of schemas

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tatsuo Ishii <ishii(at)postgresql(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_dump and thousands of schemas
Date: 2012-05-29 15:52:22
Message-ID: 12147.1338306742@sss.pgh.pa.us
Lists: pgsql-hackers pgsql-performance

Tatsuo Ishii <ishii(at)postgresql(dot)org> writes:
> So I did a quick test with old PostgreSQL 9.0.2 and current (as of
> commit 2755abf386e6572bad15cb6a032e504ad32308cc). In a fresh initdb-ed
> database I created 100,000 tables, each with two integer
> attributes, one of them a primary key. Creating the tables was
> reasonably fast, as expected (18-20 minutes). This created a 1.4GB
> database cluster.
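
The setup in the quoted test is easy to script. Below is a minimal sketch
that generates DDL for such a schema; the table and column names (t0, id,
val) are illustrative assumptions, not taken from the original post:

```python
# Generate CREATE TABLE statements for n tables, each with two integer
# columns, one of them the primary key -- mirroring the quoted test setup.
# Table/column names here are made up for illustration.
def make_ddl(n_tables):
    stmts = [
        f"CREATE TABLE t{i} (id integer PRIMARY KEY, val integer);"
        for i in range(n_tables)
    ]
    return "\n".join(stmts)

if __name__ == "__main__":
    # e.g.:  python gen_ddl.py | psql dbname
    print(make_ddl(100000))
```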

> pg_dump dbname >/dev/null took 188 minutes on 9.0.2, which was a pretty
> long time, as the customer complained. And on current? Well, it
> took 125 minutes. ps showed that most of the time was spent in the backend.

Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
management in the server. What I fixed so far on the pg_dump side
should be enough to let partial dumps run at reasonable speed even if
the whole database contains many tables. But if pg_dump is taking
AccessShareLock on lots of tables, there's still a problem.
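
For scale: a full-database dump holds one AccessShareLock per table
simultaneously, and PostgreSQL sizes the shared lock table as
max_locks_per_transaction * (max_connections + max_prepared_transactions).
A rough back-of-the-envelope sketch (assuming the 9.x default settings
shown below) illustrates why 100,000 tables strains that sizing:

```python
# Rough sketch: default shared lock table capacity versus the number of
# AccessShareLocks a full-database pg_dump must hold at once.
# The parameter names are real GUCs; the defaults below are the 9.x
# defaults -- an assumption worth checking against your own postgresql.conf.
def lock_table_slots(max_locks_per_transaction=64,
                     max_connections=100,
                     max_prepared_transactions=0):
    return max_locks_per_transaction * (max_connections +
                                        max_prepared_transactions)

n_tables = 100000  # the test case from the quoted message
print(lock_table_slots())             # 6400 slots with defaults
print(n_tables > lock_table_slots())  # True: far more locks than slots
```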

regards, tom lane
