Re: using a lot of maintenance_work_mem

From: Frederik Ramm <frederik(at)remote(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: using a lot of maintenance_work_mem
Date: 2011-02-18 19:31:29
Message-ID: 4D5EC911.2030404@remote.org
Lists: pgsql-hackers

Tom & Kevin,

thank you for your replies. Kevin, I had already employed all the
tricks you mention except temporary tables, which would be hard to use
given the structure of my application (though I could try something
like pgbouncer) - thanks a lot for sharing the ideas anyway.

Tom Lane wrote:
>> If I were to either (a) increase MaxAllocSize to, say, 48 GB instead of
>> 1 GB, or (b) hack tuplesort.c to ignore MaxAllocSize, just for my local
>> setup - would that likely be viable in my situation, or would I break
>> countless things?
>
> You would break countless things.

Indeed I did. I tried to raise MaxAllocSize from 1 GB to a much larger
value, but immediately got strange memory allocation errors during the
regression tests (allocation requests whose sizes looked like wrapped
integers).
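
For context, the allocator's sanity check is where such a wrapped size
surfaces; abridged sketch from src/backend/utils/mmgr/mcxt.c as of the
8.3/9.0-era sources (comments mine):

    void *
    MemoryContextAlloc(MemoryContext context, Size size)
    {
        /*
         * Requests beyond MaxAllocSize are rejected here; a size that
         * wrapped in 32-bit arithmetic shows up as the huge bogus
         * number in "invalid memory alloc request size".
         */
        if (!AllocSizeIsValid(size))
            elog(ERROR, "invalid memory alloc request size %lu",
                 (unsigned long) size);

        return (*context->methods->alloc) (context, size);
    }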

I reduced the number in steps, and found I could compile and run
PostgreSQL 8.3 with a MaxAllocSize of 4 GB, and PostgreSQL 9.0 with 2 GB
without breakage.
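
For reference, the knob itself lives in src/include/utils/memutils.h;
the local hack simply replaces the stock constant. The raised value
shown below is my experiment, nothing official:

    /* Stock definition: no single palloc() request may exceed
     * 1 GB - 1, which also matches the largest possible varlena
     * datum. */
    #define MaxAllocSize    ((Size) 0x3fffffff)   /* 1 gigabyte - 1 */

    #define AllocSizeIsValid(size)  ((Size) (size) <= MaxAllocSize)

    /* What I tested, e.g. 2 GB - 1 for the 9.0 build:
     *
     *     #define MaxAllocSize ((Size) 0x7fffffff)
     *
     * Much beyond that, sizes no longer fit the signed 32-bit ints
     * assumed in various places, which fits the wrapped-integer
     * errors described above. */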

In a completely un-scientific test run, comprising 42 individual SQL
statements aimed at importing and indexing a large volume of data, I got
the following results (percentages relative to the first line):

pg8.3 with normal MaxAllocSize .................. 15284s
pg8.3 with MaxAllocSize increased to 4 GB ....... 14609s (-4.5%)
pg9.0 with normal MaxAllocSize .................. 12969s (-15.2%)
pg9.0 with MaxAllocSize increased to 2 GB ....... 13211s (-13.5%)

> I'd want to see some evidence that it's actually
> helpful for production situations. I'm a bit dubious that you're going
> to gain much here.

So, on the whole it seems you were right: with the small memory
increase I managed to build in without breaking things, performance
improves only a little on 8.3, and not at all on PostgreSQL 9.0.

The single query that gained the most from the extra memory was an
ALTER TABLE statement adding a BIGINT primary key to a table with about
50 million records - this was 75% faster on both 8.3 and 9.0, but since
it took only 120 seconds to begin with, it didn't change the overall
result much.

The single query where pg9.0 beat pg8.3 by a country mile was a CREATE
INDEX statement on a BIGINT column of a table with about 500 million
records - this cost 2679 seconds on normal 8.3, 2443 seconds on
large-memory 8.3, and around 1650 seconds on 9.0, large memory or not.
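
For anyone wondering where the cap bites in an index build like this,
here is a simplified sketch - not verbatim source - of how tuplesort.c
grows its in-memory tuple array. The repalloc() is subject to the
MaxAllocSize check, so a stock build never uses more than 1 GB for this
array, however high maintenance_work_mem is set:

    static bool
    grow_memtuples(Tuplesortstate *state)
    {
        /* stop growing once the work_mem budget is exhausted */
        if (LACKMEM(state))
            return false;

        /* double the SortTuple array; this repalloc() is subject to
         * the MaxAllocSize check in the allocator */
        state->memtupsize *= 2;
        state->memtuples = (SortTuple *)
            repalloc(state->memtuples,
                     state->memtupsize * sizeof(SortTuple));
        return true;
    }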

The query that, on both 8.3 and 9.0, took about 10% longer with more
memory was a CREATE INDEX statement on a TEXT column.

All this is, as I said, completely un-scientific - I did take care to
flush caches and not run anything in parallel, but that was about all,
so repeated runs might well come out differently.

My conclusion from all of this? Switch to 9.0, of course ;)

Bye
Frederik

--
Frederik Ramm ## eMail frederik(at)remote(dot)org ## N49°00'09" E008°23'33"
