Re: Selecting large tables gets killed

From: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>
To: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
Cc: amul sul <sul_amul(at)yahoo(dot)co(dot)in>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Selecting large tables gets killed
Date: 2014-02-20 09:19:28
Message-ID: CAFjFpRfCeXB03HmgG7HQsrJUsr=zjSG=7kpF0bJwoNghp9jsNQ@mail.gmail.com
Lists: pgsql-hackers

Ian, Pavan,

You are correct, the OS is killing the process:

Feb 20 14:30:14 ubuntu kernel: [23820.175868] Out of memory: Kill process 34080 (psql) score 756 or sacrifice child
Feb 20 14:30:14 ubuntu kernel: [23820.175871] Killed process 34080 (psql) total-vm:1644712kB, anon-rss:820336kB, file-rss:0kB

At first I thought it was a memory leak, but it turns out this is "implicitly"
documented behaviour.

The psql documentation describes a special variable, FETCH_COUNT:
--
FETCH_COUNT

If this variable is set to an integer value > 0, the results of SELECT
queries are fetched and displayed in groups of that many rows, rather than
the default behavior of collecting the entire result set before display.
Therefore only a limited amount of memory is used, regardless of the size
of the result set. Settings of 100 to 1000 are commonly used when enabling
this feature. Keep in mind that when using this feature, a query might fail
after having already displayed some rows.

--

If I set this variable to a positive value, psql runs smoothly regardless of the
size of the result set. Unset it, and psql gets killed. But nowhere is it stated
explicitly that psql can run out of memory while collecting the result set.
Either the documentation or the behaviour should be changed.
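
For reference, this is all it takes to switch between the two behaviours (the
table name below is just a placeholder; any result set large enough to exhaust
client memory will do):

-- fetch and display in batches of 100 rows; client memory stays bounded
\set FETCH_COUNT 100
SELECT * FROM some_big_table;

-- back to the default: psql collects the entire result set in memory first,
-- and the OOM killer may terminate it
\unset FETCH_COUNT
SELECT * FROM some_big_table;

The variable can also be set at startup, e.g. psql -v FETCH_COUNT=100.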

On Thu, Feb 20, 2014 at 2:35 PM, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com> wrote:

>
>
>
> On Thu, Feb 20, 2014 at 2:32 PM, Ashutosh Bapat <
> ashutosh(dot)bapat(at)enterprisedb(dot)com> wrote:
>
>>
>>
>> Maybe each setup has its own breaking point. So trying with a larger
>> number might reproduce the issue.
>>
>> I tried to debug it with gdb, but all it showed me was that psql received
>> a SIGKILL signal. I am not sure why.
>>
>>
> Is the psql process running out of memory ? AFAIK OOM killer sends
> SIGKILL.
>
> Thanks,
> Pavan
>
> --
> Pavan Deolasee
> http://www.linkedin.com/in/pavandeolasee
>
>

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
