
Pg_upgrade speed for many tables

From: Bruce Momjian <bruce(at)momjian(dot)us>
To: PostgreSQL-development <pgsql-hackers(at)postgreSQL(dot)org>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>
Subject: Pg_upgrade speed for many tables
Date: 2012-11-05 20:08:17
Message-ID:
Lists: pgsql-hackers
Magnus reported that a customer with a million tables was finding
pg_upgrade slow.  I had never considered many tables to be a problem, but
decided to test it.  I created a database with 2k tables like this:

	CREATE TABLE test1990 (x SERIAL);
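For reference, one quick way to build such a test schema is to generate the statements in a loop and feed them to psql. This is a sketch of a plausible setup, not the exact script used; the table-name pattern and count are taken from the example above, and the database name is an assumption:

```shell
# Generate 2000 one-column tables matching the pattern above
# (test1 .. test2000) into a script file.
for i in $(seq 1 2000); do
  echo "CREATE TABLE test$i (x SERIAL);"
done > create_tables.sql

# Load them into the old cluster (database name "testdb" is hypothetical):
# psql -d testdb -f create_tables.sql
```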

Running the git version of pg_upgrade on that took 203 seconds.  Using
synchronous_commit=off dropped the time to 78 seconds.  This was tested
on magnetic disks with a write-through cache.  (No change on an SSD with
a super-capacitor.)

I don't see anything unsafe about having pg_upgrade use
synchronous_commit=off.  I could set it just for the pg_dump reload, but
it seems safe to just use it always.  We don't write to the old cluster,
and if pg_upgrade fails, you have to re-initdb the new cluster anyway.
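For anyone wanting to reproduce the effect by hand before the patch lands, the setting can be applied per-session through libpq's PGOPTIONS environment variable rather than by editing postgresql.conf. This is a sketch; the dump filename and database name are placeholders, not what pg_upgrade itself uses internally:

```shell
# Reload a schema dump with synchronous_commit disabled for just this
# session; no WAL-flush wait per committed CREATE TABLE.
PGOPTIONS='-c synchronous_commit=off' \
    psql -d newcluster -f schema_dump.sql
```

Since the reload consists of thousands of tiny transactions, skipping the WAL flush on each commit is where the 203s-to-78s win on magnetic disks comes from; an SSD with a capacitor-backed cache flushes cheaply enough that it sees no change.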

Patch attached.  I think it should be applied to 9.2 as well.

  Bruce Momjian  <bruce(at)momjian(dot)us>

  + It's impossible for everything to be true. +

Attachment: sync_off.diff
Description: text/x-diff (1.4 KB)


pgsql-hackers by date

Next: From: Tom Lane, Date: 2012-11-05 20:14:40
Subject: Re: Pg_upgrade speed for many tables
Previous: From: Tom Lane, Date: 2012-11-05 20:07:02
Subject: Re: Limiting the number of parameterized indexpaths created
