Re: My Experiment of PG crash when dealing with huge amount of data

From: 高健 <luckyjackgao(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: My Experiment of PG crash when dealing with huge amount of data
Date: 2013-09-06 08:58:00
Message-ID: CAL454F1JjxP=AqbYyQ5dYWAD8v+SpE8FzM4QpSZ48ZVRDoGmCQ@mail.gmail.com
Lists: pgsql-general

Hello:

Sorry for disturbing again.
Some of my friends told me about cgroups, so I tried that first.
I found that cgroups can limit a task such as wget,
but it doesn't seem to work for my postgres processes.
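
A task only inherits a group's limit if it is started inside that group or
moved into it afterwards. For wget that can be done explicitly with cgexec
from libcgroup -- a sketch (test1 is the group defined in the config below;
the URL is just a placeholder):

[root(at)cent6 Desktop]# cgexec -g memory:test1 wget http://example.com/bigfile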

[root(at)cent6 Desktop]# cat /etc/cgconfig.conf
#
# Copyright IBM Corporation. 2007
#
# Authors: Balbir Singh <balbir(at)linux(dot)vnet(dot)ibm(dot)com>
# This program is free software; you can redistribute it and/or modify it
# under the terms of version 2.1 of the GNU Lesser General Public License
# as published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See man cgconfig.conf for further details.
#
# By default, mount all controllers to /cgroup/<controller>

mount {
    cpuset  = /cgroup/cpuset;
    cpu     = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
    devices = /cgroup/devices;
    freezer = /cgroup/freezer;
    net_cls = /cgroup/net_cls;
    blkio   = /cgroup/blkio;
}

group test1 {
    perm {
        task {
            uid = postgres;
            gid = postgres;
        }
        admin {
            uid = root;
            gid = root;
        }
    }
    memory {
        memory.limit_in_bytes = 500M;
    }
}

[root(at)cent6 Desktop]#
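
(As far as I understand, the perm/task section above only says who is
*allowed* to put tasks into the group; it does not move the postgres
processes into the group by itself. Maybe that needs a rule in
/etc/cgrules.conf, applied by the cgred service -- a sketch, assuming the
standard libcgroup rule syntax:)

# /etc/cgrules.conf
# <user>        <controllers>   <destination>
postgres        memory          test1/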

[root(at)cent6 Desktop]# service cgconfig status
Running
[root(at)cent6 Desktop]#

When I start postgres and run the SQL statement from my earlier mail, it
still consumes too much memory, as if cgroups were not working.
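
One way to check this -- a sketch, assuming a default PGDATA of
/var/lib/pgsql/data: if the postmaster's PID is not listed in the group's
tasks file, the memory limit was never applied to it:

[root(at)cent6 Desktop]# head -1 /var/lib/pgsql/data/postmaster.pid
[root(at)cent6 Desktop]# cat /cgroup/memory/test1/tasks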

Best Regards

2013/9/3 高健 <luckyjackgao(at)gmail(dot)com>

> Thanks, I'll consider it carefully.
>
> Best Regards
>
> 2013/9/3 Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
>
>> On Sun, Sep 1, 2013 at 6:25 PM, 高健 <luckyjackgao(at)gmail(dot)com> wrote:
>> >>To spare memory, you would want to use something like:
>> >
>> >>insert into test01 select generate_series,
>> >>repeat(chr(int4(random()*26)+65),1024) from
>> >>generate_series(1,2457600);
>> >
>> > Thanks a lot!
>> >
>> > What I am worried about is this:
>> > if data grows rapidly, maybe our customers will use too much memory,
>>
>>
>> The size of the data has little to do with it. Take your example as
>> an example. The database could have been nearly empty before you
>> started running that query. A hostile or adventurous user can craft
>> queries that will exhaust the server's memory without ever needing any
>> particular amount of data in data_directory, except maybe in the temp
>> tablespace.
>>
>> So it is a matter of what kind of users you have, not how much data
>> you anticipate having on disk.
>>
>> The parts of PostgreSQL that might blow up memory based on ordinary
>> disk-based tables are pretty well protected by shared_buffers,
>> temp_buffers, work_mem, maintenance_work_mem, etc. already. It is the
>> things that don't directly map to data already on disk which are
>> probably more vulnerable.
>>
>> > Is the
>> > ulimit command a good idea for PG?
>>
>> I've used ulimit -v on a test server (which was intentionally used to
>> test things to limits of destruction), and was happy with the results.
>> It seemed like it would error out the offending process, or just the
>> offending statement, in a graceful way; rather than having random
>> processes other than the culprit be brutally killed by OOM, or having
>> the machine just swap itself into uselessness. I'd be reluctant to
>> use it on production just on spec that something bad *might* happen
>> without it, but if I started experiencing problems caused by a single
>> rogue process using outrageous amounts of memory, that would be one of
>> my first stops.
>>
>> Experimentally, shared memory does count against the -v limit, and the
>> limit has to be set rather higher than shared_buffers, or else your
>> database won't even start.
>>
>> Cheers,
>>
>> Jeff
>>
>
>
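
P.S. If I try Jeff's ulimit -v idea, I understand it would look roughly
like this -- a sketch, assuming postgres is started from a shell where the
limit is set first (the 4000000 KB value is only an example, and per the
advice above it must be set higher than shared_buffers):

[postgres(at)cent6 ~]$ ulimit -v 4000000
[postgres(at)cent6 ~]$ pg_ctl -D /var/lib/pgsql/data start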
