Re: pg_upgrade and statistics

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Bruce Momjian" <bruce(at)momjian(dot)us>
Cc: "Daniel Farina" <daniel(at)heroku(dot)com>,"Greg Stark" <stark(at)mit(dot)edu>, "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade and statistics
Date: 2012-03-13 19:07:14
Message-ID:
Lists: pgsql-hackers
Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:
>> cir=# analyze "CaseHist";
>> Time: 143450.467 ms
>> cir=# select relpages, reltuples from pg_class where relname =
>> 'CaseHist';
>>  relpages |  reltuples  
>> ----------+-------------
>>   3588659 | 2.12391e+08
>> (1 row)
>> Either way, there are about 500 tables in the database.
> That is 2.5 minutes.  How large is that database?
cir=# select pg_size_pretty(pg_database_size('cir'));
 pg_size_pretty
----------------
 2563 GB
(1 row)
In case you meant "How large is that table that took 2.5 minutes to
analyze?":
cir=# select pg_size_pretty(pg_total_relation_size('"CaseHist"'));
 pg_size_pretty
----------------
 44 GB
(1 row)
I've started a database-wide ANALYZE, to see how long that takes.
Even if each table took only 1/4 second (like on the small database),
with over 500 user tables, plus the system tables, it'd be over two
minutes.  I'm guessing it'll run over an hour, but I haven't timed it
lately, so we'll see.
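(A rough way to size up such an estimate -- this query is a sketch of mine, not from the thread -- is to count the user tables and sum their on-disk size from the standard pg_class and pg_namespace catalogs:)

```sql
-- Sketch: count user tables and their total on-disk size, to get a
-- feel for how long a database-wide ANALYZE might run.
SELECT count(*)                                          AS user_tables,
       pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');

-- Timing a single table, as in the session above:
-- \timing on
-- ANALYZE "CaseHist";
```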

