On Tue, Mar 21, 2006 at 05:04:16PM -0800, Craig A. James wrote:
> Tom Lane wrote:
> >"Craig A. James" <cjames(at)modgraph-usa(dot)com> writes:
> >>It looks to me like the problem is the use of nested loops when a hash
> >>join should be used, but I'm no expert at query planning.
> >Given the sizes of the tables involved, you'd likely have to boost up
> >work_mem before the planner would consider a hash join. What nondefault
> >configuration settings do you have, anyway?
> shared_buffers = 20000
> work_mem = 32768
> effective_cache_size = 300000
> This is on a 4GB machine. Is there a guideline for work_mem that's related
> to table size? Something like, "allow 2 MB per million rows"?
No. The general guideline is "set it as large as possible without making the
machine start swapping." Bear in mind that work_mem is a per-sort (and
per-hash) limit, so a complex query, or several queries running concurrently,
can each allocate that much memory more than once. In some cases you'll want
to bump it up much higher for specific queries, especially if you know those
queries will only run one at a time.
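
(A sketch of the per-query approach: work_mem can be raised for just the
current session or transaction, so one big reporting query gets a larger
sort/hash budget without raising the server-wide default. The value below is
purely illustrative, not a tuning recommendation.)

```sql
-- Raise work_mem only for this transaction; SET LOCAL reverts it
-- automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL work_mem = 262144;  -- value in kB (256MB); newer releases also accept '256MB'
-- ... run the one large query that needs the hash join here ...
COMMIT;
```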
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461