"Campbell, Lance" <lance(at)uiuc(dot)edu> wrote:
> I have been researching how to improve my overall performance of
> postgres. I am a little confused on the reasoning for how work_mem is
> used in the postgresql.conf file. The way I understand the
> documentation is you define with work_mem how much memory you want to
> allocate per search. Couldn't you run out of memory? This approach
> seems kind of odd to me. How do you tell the system not to allocate too
> much memory if you all of the sudden got hit with a heavier number of
> queries?
work_mem tells PostgreSQL how much memory it may use for each sort/join
operation. If a sort/join exceeds that amount, PostgreSQL spills to temp
files on disk instead of doing the work in memory.
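You can watch this happen with EXPLAIN ANALYZE. A sketch (the table and
column names here are made up; any table big enough to overflow the
setting will do):

```sql
-- Deliberately tiny work_mem: the sort won't fit, so it spills to disk.
SET work_mem = '64kB';
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY order_date;
-- Plan shows something like: Sort Method: external merge  Disk: ...

-- Raise work_mem and the same sort runs in memory.
SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY order_date;
-- Plan shows something like: Sort Method: quicksort  Memory: ...
```

Note that SET only changes the value for the current session, which is a
safe way to experiment before touching postgresql.conf.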
If you want a query to complete, you've got to allow Postgres to finish
it. The work_mem setting just tells Postgres the best way to go about
doing that.
If you want to guarantee that individual processes can't suck up tons of
memory, use your operating system's ulimit or equivalent functionality.
That's one of the advantages of the forking model: it lets the operating
system enforce policy on a per-connection basis.
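For example, on Linux you could set the limit in whatever script starts
the postmaster, since forked backends inherit it. A sketch (the 1 GB
figure and the data directory path are arbitrary assumptions):

```shell
# In the PostgreSQL start script, before launching the postmaster:
# cap per-process virtual memory at ~1 GB (ulimit -v takes kilobytes).
# Every backend forked from the postmaster inherits this limit, so the
# OS -- not Postgres -- refuses any allocation beyond the ceiling.
ulimit -v 1048576
pg_ctl start -D /var/lib/pgsql/data
```

The trade-off is that a query which hits the limit fails with an
out-of-memory error instead of spilling gracefully to disk the way
work_mem does.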
In response to: work-mem at 2007-01-28 20:21:39 from Campbell, Lance