| From: | Heikki Linnakangas <hlinnakangas(at)vmware(dot)com> |
|---|---|
| To: | Greg Stark <stark(at)mit(dot)edu> |
| Cc: | Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>, Josh Berkus <josh(at)agliodbs(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: ANALYZE sampling is too good |
| Date: | 2013-12-08 19:49:43 |
| Message-ID: | 52A4CD57.1000308@vmware.com |
| Lists: | pgsql-hackers |
On 12/08/2013 08:14 PM, Greg Stark wrote:
> The whole accounts table is 1.2GB and contains 10 million rows. As
> expected with rows_per_block set to 1 it reads 240MB of that
> containing nearly 2 million rows (and takes nearly 20s -- doing a full
> table scan for select count(*) only takes about 5s):
One simple thing we could do, instead of or in addition to changing the
algorithm, is to issue posix_fadvise() calls for the blocks we're going
to read. That should at least let us match the speed of a plain
sequential scan.
- Heikki