Re: ANALYZE sampling is too good

From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <stark(at)mit(dot)edu>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Peter Geoghegan <pg(at)heroku(dot)com>, Jim Nasby <jim(at)nasby(dot)net>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-11 19:31:44
Message-ID: 1386790304.24146.YahooMailNeo@web162906.mail.bf1.yahoo.com
Lists: pgsql-hackers

Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz> wrote:

> For example, assume 1000 rows of 200 bytes and 1000 rows of 20 bytes,
> using 400 byte pages.  In the pathologically worst case, assuming
> maximum packing density and no page has both types: the large rows would
> occupy 500 pages and the smaller rows 50 pages. So if one selected 11
> pages at random, you would get about 10 pages of large rows and about
> one page of small rows!

With 10 pages * 2 rows = 20 large rows, and 1 page * 20 rows = 20 small rows.
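The expected counts above can be checked with a small sketch (not from the thread; the page/row figures are taken from Gavin's worst-case example):

```python
# Gavin's worst case: 400-byte pages, no page mixes row sizes.
large_pages, large_rows_per_page = 500, 2   # 1000 rows of 200 bytes
small_pages, small_rows_per_page = 50, 20   # 1000 rows of 20 bytes
total_pages = large_pages + small_pages     # 550 pages
sample_pages = 11

# Expected number of pages of each kind in a uniform sample of 11 pages.
exp_large_pages = sample_pages * large_pages / total_pages  # 10.0
exp_small_pages = sample_pages * small_pages / total_pages  # 1.0

# Expected rows of each kind: 10 * 2 and 1 * 20 -- an equal number of each.
exp_large_rows = exp_large_pages * large_rows_per_page
exp_small_rows = exp_small_pages * small_rows_per_page
print(exp_large_rows, exp_small_rows)  # 20.0 20.0
```

So although the page sample is heavily skewed toward large-row pages, the expected row counts come out equal, because the small-row pages each contribute ten times as many rows.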

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
