From: Bruce Momjian <bruce(at)momjian(dot)us>
To: PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>
Subject: Pg_upgrade speed for many tables
Date: 2012-11-05 20:08:17
Message-ID: 20121105200817.GA16323@momjian.us
Lists: pgsql-hackers
Magnus reported that a customer with a million tables was finding
pg_upgrade slow. I had never considered many tables to be a problem, but
decided to test it. I created a database with 2k tables like this:
CREATE TABLE test1990 (x SERIAL);
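A schema like that can be generated with a simple loop; this is just a sketch of the test setup described above (the 2000-table count and testN naming follow the example; the output filename is my own):

```shell
# Emit CREATE TABLE statements for test1 .. test2000, each with a
# single SERIAL column, into a file that can be fed to psql.
for i in $(seq 1 2000); do
  echo "CREATE TABLE test${i} (x SERIAL);"
done > create_tables.sql
```

Each SERIAL column also creates a sequence, so the catalog ends up with roughly twice as many objects as tables, which is part of what makes the dump/restore step expensive.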
Running the git version of pg_upgrade on that took 203 seconds. Using
synchronous_commit=off dropped the time to 78 seconds. This was tested
on magnetic disks with a write-through cache. (No change on an SSD with
a super-capacitor.)
I don't see anything unsafe about having pg_upgrade use
synchronous_commit=off. I could set it just for the pg_dump reload, but
it seems safe to just use it always. We don't write to the old cluster,
and if pg_upgrade fails, you have to re-initdb the new cluster anyway.
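For context, the same per-session effect can be had for any libpq client via the PGOPTIONS environment variable; this is only an illustration of the setting's scope, not the attached patch (the database name and dump file are placeholders):

```shell
# synchronous_commit=off applies only to sessions started with this
# environment, so a schema reload skips the per-commit WAL flush wait.
export PGOPTIONS="-c synchronous_commit=off"
psql --no-psqlrc -d newdb -f schema_dump.sql
```

Because synchronous_commit=off only risks losing the most recent commits on a crash (not corrupting the cluster), and a failed pg_upgrade requires a fresh initdb regardless, nothing durable is at stake during the reload.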
Patch attached. I think it should be applied to 9.2 as well.
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +
Attachment: sync_off.diff (text/x-diff, 1.4 KB)