Re: pg_upgrade and statistics

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Daniel Farina" <daniel(at)heroku(dot)com>,"Greg Stark" <stark(at)mit(dot)edu>
Cc: "pgsql-hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade and statistics
Date: 2012-03-13 18:18:58
Message-ID: 4F5F4942020000250004622E@gw.wicourts.gov
Lists: pgsql-hackers

Greg Stark <stark(at)mit(dot)edu> wrote:
> Daniel Farina <daniel(at)heroku(dot)com> wrote:
>> You probably are going to ask: "why not just run ANALYZE and be
>> done with it?"
>
> Uhm yes. If analyze takes a long time then something is broken.
> It's only reading a sample which should be pretty much a fixed
> number of pages per table. It shouldn't take much longer on your
> large database than on your smaller databases.

On a small database:

cc=# analyze "CaseHist";
ANALYZE
Time: 255.107 ms
cc=# select relpages, reltuples from pg_class where relname =
'CaseHist';
 relpages | reltuples
----------+-----------
     1264 |     94426
(1 row)

Same table on a much larger database (and much more powerful
hardware):

cir=# analyze "CaseHist";
ANALYZE
Time: 143450.467 ms
cir=# select relpages, reltuples from pg_class where relname =
'CaseHist';
 relpages |  reltuples
----------+-------------
  3588659 | 2.12391e+08
(1 row)

Either way, there are about 500 tables in the database.
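For what it's worth, the sample ANALYZE takes is bounded: with the default statistics target of 100 it aims at about 300 * 100 = 30000 sample rows, and so reads at most on that order of pages per table, scattered across the heap. A rough sketch of what that implies here (this assumes default_statistics_target = 100; approx_pages_sampled is just that assumption spelled out, not anything ANALYZE reports):

-- rough sketch, assuming default_statistics_target = 100, so ANALYZE
-- targets about 300 * 100 = 30000 sample rows and reads at most
-- roughly that many pages per table
select relname, relpages, reltuples,
       least(relpages, 30000) as approx_pages_sampled
from pg_class
where relname = 'CaseHist';

On the larger table that works out to something like 30000 scattered page reads rather than 1264 mostly sequential ones, which may account for a fair part of the difference.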

-Kevin
