On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson <Reid(dot)Thompson(at)ateb(dot)com> wrote:
> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
>> set it to 500 and restarted postgres.
> did you re-analyze?
Not recently. I tried that, initially, and there was no improvement.
I'll try it again now that I've set the stats to 500.
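For reference, a minimal sketch of raising the stats target and re-analyzing (assuming the 500 above was the global default_statistics_target; the per-column form below uses the rowB column from later in this mail):

```sql
-- Per-column alternative to changing default_statistics_target globally:
ALTER TABLE foo ALTER COLUMN rowB SET STATISTICS 500;
ANALYZE foo;  -- re-collect statistics at the new target
```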
The most recent experiment shows me that, unless I create whatever
indexes I would like to see used *before* the large (first) update,
then they just don't get used. At all. Why would I need to ANALYZE the
table immediately following index creation? Isn't that part of the
index creation process?
Currently executing is a test where I place an "ANALYZE foo" after the
COPY, the first UPDATE, and the first index, but before the other (much
smaller) updates.
Nope. The ANALYZE made no difference. This is what I just ran:
CREATE TEMPORARY TABLE foo
UPDATE ... -- 1/3 of table, approx
CREATE INDEX foo_rowB_idx ON foo (rowB);
-- queries from here to 'killed' use WHERE rowB = 'someval'
UPDATE ... -- 7 rows. seq scan!
UPDATE ... -- 242 rows, seq scan!
UPDATE ... -- 3700 rows, seq scan!
UPDATE ... -- 3100 rows, seq scan!
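To see what the planner actually picks right after index creation, the sequence above can be checked with EXPLAIN. This is a sketch only; colX and 'someval' are placeholders, not names from my schema:

```sql
CREATE INDEX foo_rowB_idx ON foo (rowB);
ANALYZE foo;                       -- refresh stats so the new index is costed
EXPLAIN UPDATE foo SET colX = 1    -- colX is a placeholder column
    WHERE rowB = 'someval';        -- plan should show whether foo_rowB_idx is used
```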