Re: [HACKERS] out of memory

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tatsuo Ishii <ishii(at)postgresql(dot)org>
Cc: mahavir(dot)trivedi(at)gmail(dot)com, pgsql-performance(at)postgresql(dot)org, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] out of memory
Date: 2012-11-05 17:27:17
Message-ID: CA+TgmoYKfSY5-PcFqNJyYbe3sAxTc8mN=CL1MoXW+PDYdd3SVw@mail.gmail.com
Lists: pgsql-hackers, pgsql-performance
On Tue, Oct 30, 2012 at 6:08 AM, Tatsuo Ishii <ishii(at)postgresql(dot)org> wrote:
>> I have an SQL file (its size is 1GB).
>> When I execute it, a "String is 987098801 bytes too long for encoding
>> conversion" error occurs.
>> Please give me a solution.
>
> You hit the upper limit of PostgreSQL's internal memory allocation.
> IMO, there's no way to avoid the error unless you use a client
> encoding identical to the backend's.

We recently had a customer who suffered a failure in pg_dump because
the quadruple allocation required by COPY OUT for an encoding
conversion exceeded allocatable memory.  I wonder whether it would be
possible to rearrange things so that we can do a "streaming" encoding
conversion.  That is, if we have a large datum that we're trying to
send back to the client, could we perhaps chop off the first 50MB or
so, do the encoding on that amount of data, send the data to the
client, lather, rinse, repeat?
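As a rough illustration of what I have in mind (this is just a sketch
in Python, not the actual backend code; the function name, chunk size,
and encodings are all made up for the example), an incremental decoder
lets you convert a large datum piece by piece, so peak memory stays
near the chunk size instead of a multiple of the datum size, while
still handling multi-byte characters split across chunk boundaries:

```python
import codecs

CHUNK = 50 * 1024 * 1024  # 50MB, mirroring the size floated above


def streaming_convert(data: bytes, src: str, dst: str, chunk=CHUNK):
    """Yield `data` re-encoded from `src` to `dst`, one chunk at a time.

    The incremental decoder buffers any multi-byte sequence that is cut
    off at a chunk boundary and finishes it with the next chunk, so the
    conversion never needs the whole datum in memory at once.
    """
    decoder = codecs.getincrementaldecoder(src)()
    for off in range(0, len(data), chunk):
        final = off + chunk >= len(data)
        text = decoder.decode(data[off:off + chunk], final)
        yield text.encode(dst)  # in the backend, this piece would go to the client


# Example: re-encode EUC-JP bytes as UTF-8 without a full-size buffer:
# for piece in streaming_convert(big_euc_jp_bytes, "euc_jp", "utf-8"):
#     send_to_client(piece)
```

Doing the equivalent inside COPY OUT would of course mean keeping
conversion state across calls rather than converting each datum in one
shot, which is the rearrangement I'm wondering about.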

Your recent work to increase the maximum possible size of large
objects (for which I thank you) seems like it could make these sorts
of issues more common.  As objects get larger, I don't think we can go
on assuming that it's OK for peak memory utilization to keep hitting
5x or more.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


