From: John R Pierce <pierce(at)hogranch(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [PERFORM] out of memory
Date: 2012-11-05 17:30:30
Message-ID: 5097F7B6.4040300@hogranch.com
Lists: pgsql-hackers, pgsql-performance
On 11/05/12 9:27 AM, Robert Haas wrote:
> That is, if we have a large datum that we're trying to
> send back to the client, could we perhaps chop off the first 50MB or
> so, do the encoding on that amount of data, send the data to the
> client, lather, rinse, repeat?
I'd suggest work_mem-sized chunks for this?
--
john r pierce N 37, W 122
santa cruz ca mid-left coast