Re: Maximum statistics target

From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Martijn van Oosterhout <kleptog(at)svana(dot)org>
Subject: Re: Maximum statistics target
Date: 2008-03-10 10:36:03
Message-ID: 200803101136.06204.peter_e@gmx.net
Lists: pgsql-hackers

On Friday, 7 March 2008, Tom Lane wrote:
> I'm not wedded to the number 1000 in particular --- obviously that's
> just a round number. But it would be good to see some performance tests
> with larger settings before deciding that we don't need a limit.

Well, I'm not saying we should raise the default statistics target. But
setting an arbitrary limit on the grounds that larger values might slow the
system is like limiting the size of tables because larger tables will cause
slower queries. Users should have the option of finding out the best balance
for themselves. If there are concerns with larger statistics targets, we
should document them. I find nothing about this in the documentation at the
moment.
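
(For reference, the knobs users would tune are the per-column setting and the
installation-wide default; the table and column names below are just an
example.)

-- raise the target for one column and re-collect its statistics
ALTER TABLE test1 ALTER COLUMN a SET STATISTICS 1000;
ANALYZE test1;

-- or raise the default, in postgresql.conf or per session
SET default_statistics_target = 1000;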

> IIRC, egjoinsel is one of the weak spots, so tests involving planning of
> joins between two tables with large MCV lists would be a good place to
> start.

I have run tests with joining two and three tables with 10 million rows each,
and the planning times seem to be virtually unaffected by the statistics
target, for values between 10 and 800000. They all look more or less like
this:

test=# explain select * from test1, test2 where test1.a = test2.b;
                                  QUERY PLAN
-----------------------------------------------------------------------------
 Hash Join  (cost=308311.00..819748.00 rows=10000000 width=16)
   Hash Cond: (test1.a = test2.b)
   ->  Seq Scan on test1  (cost=0.00..144248.00 rows=10000000 width=8)
   ->  Hash  (cost=144248.00..144248.00 rows=10000000 width=8)
         ->  Seq Scan on test2  (cost=0.00..144248.00 rows=10000000 width=8)
(5 rows)

Time: 132,350 ms

and with indexes

test=# explain select * from test1, test2 where test1.a = test2.b;
                                         QUERY PLAN
--------------------------------------------------------------------------------------------
 Merge Join  (cost=210416.65..714072.26 rows=10000000 width=16)
   Merge Cond: (test1.a = test2.b)
   ->  Index Scan using test1_index1 on test1  (cost=0.00..282036.13 rows=10000000 width=8)
   ->  Index Scan using test2_index1 on test2  (cost=0.00..282036.13 rows=10000000 width=8)
(4 rows)

Time: 168,455 ms
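
For anyone who wants to repeat this, a setup along the following lines gives
tables and plans like the ones above. It is only a sketch, not my literal
script: unique join keys reproduce the row estimates shown, a distribution
with repeated values would be needed to get really large MCV lists, and
targets above 1000 of course require the limit to be lifted first.

CREATE TABLE test1 (a int, b int);
CREATE TABLE test2 (a int, b int);
INSERT INTO test1 SELECT i, i FROM generate_series(1, 10000000) AS s(i);
INSERT INTO test2 SELECT i, i FROM generate_series(1, 10000000) AS s(i);

-- a statistics target well beyond the current cap of 1000
ALTER TABLE test1 ALTER COLUMN a SET STATISTICS 100000;
ALTER TABLE test2 ALTER COLUMN b SET STATISTICS 100000;
ANALYZE test1;
ANALYZE test2;

\timing
EXPLAIN SELECT * FROM test1, test2 WHERE test1.a = test2.b;

-- indexes for the merge join variant
CREATE INDEX test1_index1 ON test1 (a);
CREATE INDEX test2_index1 ON test2 (b);
EXPLAIN SELECT * FROM test1, test2 WHERE test1.a = test2.b;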

The time to run ANALYZE is also quite constant, right up until you run out of
memory. :) MaxAllocSize is the limiting factor in all of this: in my example,
statistics targets larger than about 800000 produced pg_statistic rows that
would have exceeded 1GB, so they could not be stored.
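
A quick way to see how close a given target comes to that ceiling is to look
at the stored statistics arrays directly, e.g. (pg_column_size reports the
on-disk, possibly compressed, size):

SELECT staattnum,
       pg_column_size(stanumbers1) AS stanumbers1_bytes,
       pg_column_size(stavalues1)  AS stavalues1_bytes
  FROM pg_statistic
 WHERE starelid = 'test1'::regclass;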

I suggest that we get rid of the limit of 1000, adequately document whatever
issues might exist with large values (possibly not many, see above), and add
an error message more user-friendly than "invalid memory alloc request size"
for the cases where the value is too large to be storable.
