Re: plsql gets "out of memory"

From: Rural Hunter <ruralhunter(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Julio Leyva <jcleyva(at)hotmail(dot)com>, pgsql-admin(at)postgresql(dot)org
Subject: Re: plsql gets "out of memory"
Date: 2011-08-29 15:45:39
Message-ID: 4E5BB423.6010303@gmail.com
Lists: pgsql-admin

Hi Kevin,

Thank you very much for the quick and detailed answers/suggestions. I
will check and try them.

On 2011/8/29 23:18, Kevin Grittner wrote:
> Good (but don't forget to change that once the bulk load is done). You
> should probably also turn off full_page_writes and synchronous_commit.
> I've seen benchmarks which show that this helps, even with the other
> settings you mention.
I forgot to mention that synchronous_commit is already off; I will also test
with full_page_writes off.
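For reference, a minimal postgresql.conf sketch of the two durability
settings being discussed; the comments are my own assumptions about when
they are safe, and both should be reverted once the bulk load is done:

    # bulk-load-only settings in postgresql.conf
    synchronous_commit = off   # already off here; commits do not wait for WAL flush
    full_page_writes = off     # only safe if the load can simply be re-run after a crash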
>> Increase checkpoint_segments
>
> There can be paradoxical results with that. For reasons yet to be
> determined, bulk conversion (unlike most workloads) sometimes runs
> faster with a small setting like the default. You have to test to
> see how it works in your environment.
Yes, I will check that.
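To make that test concrete, here is a sketch of the two settings I would
compare on a 9.x-era server (checkpoint_segments was later replaced by
max_wal_size; the value 64 is just the conventional bulk-load rule of thumb,
not something measured here):

    # time the same load under each setting and compare
    checkpoint_segments = 3     # the default Kevin refers to
    #checkpoint_segments = 64   # the usual bulk-load advice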
> Yeah, you definitely want to build those only after the data for a
> table is loaded. I also recommend a VACUUM FREEZE ANALYZE on the
> database unless most of these rows will be deleted or updated before
> you run a billion database transactions. Otherwise you will get a
> painful "anti-wraparound" autovacuum on everything, probably at a time
> of heavy usage.
Hmm... I will try turning autovacuum off, though I haven't seen any heavy
resource usage caused by it.
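For completeness, the post-load step Kevin recommends can be run either from
psql or with vacuumdb; this is only a sketch, with 'mydb' as a placeholder
database name, to be run once the bulk load has finished:

    -- inside psql, connected to the freshly loaded database
    VACUUM FREEZE ANALYZE;

    # or equivalently from the shell
    vacuumdb --freeze --analyze mydb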
>> I have about 5G memory free on the server and have these memory
>> settings:
>> shared_buffers = 1GB
>> work_mem = 8MB
>> maintenance_work_mem = 1GB
>> effective_cache_size = 4GB
>
> Reasonable for that amount of RAM, but that seems like
> underpowered hardware for the size of the database. If the
> production conversion is going to be run against different hardware,
> these tests may not be giving you very good numbers for what to
> expect.
I have MySQL running on the same server; the memory above is just what is
allocated to PostgreSQL for the import. I will shut down MySQL and give all
of the memory (32 GB in total) to PostgreSQL when I do the performance tests.
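As a rough sketch of how the earlier settings might scale up once the whole
32 GB is available, the usual starting points would look something like the
following; these numbers are my assumptions, to be validated by testing, not
values anyone has benchmarked here:

    shared_buffers = 8GB          # ~25% of RAM is the common starting point
    work_mem = 32MB               # per sort/hash node per backend, so keep it modest
    maintenance_work_mem = 1GB    # speeds up index builds after the load
    effective_cache_size = 24GB   # roughly RAM minus shared_buffers and the OS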
> Network latency? Avoid encrypted connections for the load, and do
> whatever you can to minimize latency, like attaching both machines to
> the same switch. You can't improve performance much if you're working
> on the things which are only using a small part of the time.
> Identifying the source of your delays is the most important thing at
> this point.
No, I run the import locally on the DB server, so network latency can be
ruled out.
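For anyone following the thread: loading through the local Unix-domain socket
also sidesteps the encrypted-connection concern Kevin raised. Assuming the
data is a plain SQL dump named dump.sql and the target database is mydb (both
placeholders), a minimal local invocation would be:

    # run on the DB server itself; psql connects via the local socket by default
    psql --single-transaction -d mydb -f dump.sql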
