From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: 高健 <luckyjackgao(at)gmail(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: My Experiment of PG crash when dealing with huge amount of data
Date: 2013-09-02 02:37:07
Message-ID: 16209.1378089427@sss.pgh.pa.us
Lists: pgsql-general

高健 <luckyjackgao(at)gmail(dot)com> writes:
> If data grows rapidly, maybe our customer will use too much memory.  Is
> the ulimit command a good idea for PG?
There's no received wisdom saying that it is. There's a fairly widespread
consensus that disabling OOM kill can be a good idea, but I don't recall
that many people have tried setting specific ulimits on server processes.
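
For reference, the usual Linux mechanism for exempting a process from the
OOM killer is the per-process oom_score_adj knob; a toy C sketch along
those lines (my illustration, not the server's actual code) would be:

    /* Opt this process out of OOM kill on Linux.  Writing -1000 to
     * /proc/self/oom_score_adj tells the kernel never to pick this
     * process; negative values need root or CAP_SYS_RESOURCE. */
    #include <stdio.h>

    int main(void)
    {
        FILE   *f = fopen("/proc/self/oom_score_adj", "w");

        if (f == NULL)
        {
            perror("fopen");
            return 1;
        }
        fprintf(f, "-1000\n");
        fclose(f);
        return 0;
    }

In practice you'd arrange something like this for the postmaster in its
startup script; the exact knob (oom_adj vs oom_score_adj) depends on your
kernel version, so check before relying on it.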
Keep in mind that exceeding a ulimit would cause queries to fail outright
(whether the server was under much load or not), versus just getting
slower if the server starts to swap under too much load. I can imagine
situations where that would be considered a good tradeoff, but it's hardly
right for everyone.
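
To make that failure mode concrete, here is a toy C sketch (again mine, not
anything the server does) of a process running under an address-space
ulimit: once the cap is hit, malloc() just returns NULL, which in a backend
would surface as an immediate out-of-memory error rather than swapping.

    /* Cap the address space at 256 MB with setrlimit(), then try to
     * allocate 512 MB.  Under the cap the allocation is refused
     * outright instead of pushing the machine into swap. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        void   *p;

        rl.rlim_cur = rl.rlim_max = 256UL * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &rl) != 0)
        {
            perror("setrlimit");
            return 1;
        }

        p = malloc((size_t) 512 * 1024 * 1024);
        if (p == NULL)
            printf("malloc failed: a query would error out here\n");
        else
            free(p);
        return 0;
    }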
regards, tom lane