
Re: exceptionally large UPDATE

From: Vick Khera <vivek(at)khera(dot)org>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: exceptionally large UPDATE
Date: 2010-10-29 14:21:14
Message-ID: AANLkTikQRvni6ZmeHFJ7KBi-5aYsHYk8N-c1BZ-tJVZ7@mail.gmail.com
Lists: pgsql-general
On Thu, Oct 28, 2010 at 1:06 PM, Ivan Sergio Borgonovo
<mail(at)webthatworks(dot)it> wrote:
> What I'm planning to do is:
> max_connections = 5
> shared_buffers = 240M
> work_mem = 90MB
> maintenance_work_mem = 1GB
> max_fsm_pages = 437616
> max_fsm_relations = 1200
> checkpoint_segments = 70
> default_statistics_target = 30
> #log_min_duration_statement = 1000
>

default_statistics_target = 100 is the default in newer Postgres
releases (8.4 and later), and with good reason... try that instead
of 30.
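If you'd rather not raise it globally, you can also bump the
statistics target for just the columns that matter to your queries
(table and column names below are made up for illustration):

  ALTER TABLE big_table ALTER COLUMN some_col SET STATISTICS 100;
  ANALYZE big_table;

The per-column setting overrides default_statistics_target, and the
ANALYZE rebuilds the stats so the planner sees the change.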

if you boost your checkpoint_segments, also increase
checkpoint_timeout and set checkpoint_completion_target (something
like 0.8 would be good, depending on how fast your disks are) to
smooth out your I/O (i.e., keep it from bursting every time a
checkpoint fires).  Is 5 connections really enough for you?
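A rough sketch of how those checkpoint settings could look together
in postgresql.conf (the numbers are illustrative, not tuned for any
particular hardware):

  checkpoint_segments = 70             # more WAL segments between checkpoints
  checkpoint_timeout = 15min           # default is 5min; longer spacing spreads I/O out
  checkpoint_completion_target = 0.8   # spread checkpoint writes over 80% of the interval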

And like I said before, you can set the work_mem and/or
maintenance_work_mem on a per-connection basis as needed, so for your
big update you can increase those values just during that work without
affecting the rest of the system.
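For example, a per-transaction version of that (the table name is
made up; SET LOCAL reverts automatically at COMMIT or ROLLBACK):

  BEGIN;
  SET LOCAL work_mem = '256MB';
  SET LOCAL maintenance_work_mem = '1GB';
  UPDATE big_table SET ...;  -- the large update goes here
  COMMIT;

A plain SET (without LOCAL) would instead keep the values for the
rest of the session.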
