From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Łukasz Dejneka <l(dot)dejneka(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Certain query eating up all free memory (out of memory error)
Date: 2010-06-02 21:26:35
Message-ID: AANLkTinLTTAXRam1jB064onlHZW4JHlxZ1RMy-PJjAze@mail.gmail.com
Lists: pgsql-performance
On Mon, May 24, 2010 at 12:50 PM, Łukasz Dejneka <l(dot)dejneka(at)gmail(dot)com> wrote:
> Hi group,
>
> I could really use your help with this one. I don't have all the
> details right now (I can provide more descriptions tomorrow and logs
> if needed), but maybe this will be enough:
>
> I have written a PG (8.3.8) module, which uses Flex Lexical Analyser.
> It takes text from database field and finds matches for defined rules.
> It returns a set of two text fields (value found and value type).
>
> When I run query like this:
> SELECT * FROM flex_me((SELECT some_text FROM some_table WHERE id = 1));
> It works perfectly fine. Memory never reaches more than 1% (usually
> its below 0.5% of system mem).
>
> But when I run query like this:
> SELECT flex_me(some_text_field) FROM some_table WHERE id = 1;
> Memory usage goes through the roof, and if the result is over about
> 10k matches (rows) it eats up all memory and I get "out of memory"
> error.
I'm not sure exactly what's happening in your particular case, but
there is some known suckage in this area:
http://archives.postgresql.org/pgsql-hackers/2010-05/msg00230.php
http://archives.postgresql.org/pgsql-hackers/2010-05/msg00395.php
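[Editorially, the two call forms from the original report can be put side by side. This is a minimal sketch only; the function name flex_me comes from the poster's description, and the output column names (match_value, match_type) are assumptions standing in for the "value found and value type" fields he mentions. The mechanism notes reflect the poster's own observations on 8.3, not a guaranteed behavior.]

```sql
-- Form that blew up for the poster on 8.3: a set-returning function
-- in the SELECT target list. Results accumulate in memory as they
-- are produced, which matches the reported out-of-memory failures
-- once the match count grows large.
SELECT flex_me(some_text_field) FROM some_table WHERE id = 1;

-- Form the poster reported as working: pass the text in via a scalar
-- subquery (note the double parentheses) and call the function in the
-- FROM clause, with an explicit column definition list since the
-- function returns SETOF record.
SELECT *
FROM flex_me((SELECT some_text_field FROM some_table WHERE id = 1))
     AS t(match_value text, match_type text);
```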
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company