Re: very large record sizes and resource usage

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: jtkells(at)verizon(dot)net
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: very large record sizes and resource usage
Date: 2011-07-29 00:25:20
Message-ID: CA+TgmoYhnBxFX8nW4RphXDMo=_jFddMWgJXSvtVTHtCBLO2dpQ@mail.gmail.com
Lists: pgsql-performance

On Thu, Jul 7, 2011 at 10:33 AM, <jtkells(at)verizon(dot)net> wrote:
> Are there any guidelines for sizing work_mem, shared_buffers, and other
> configuration parameters with regard to very large records?  I have a
> table with a bytea column, and I am told that some of these values
> contain over 400MB of data.  On several servers I am having a problem
> reading and, more specifically, dumping these records (the whole table)
> with pg_dump.

work_mem shouldn't make any difference to how well that performs;
shared_buffers might, but there's no special advice for tuning it for
large records vs. anything else. Large values just get broken up into
small chunks (TOAST) under the hood. At any rate, your email is a
little vague about exactly what the problem is. If you provide some
more detail, you might get more help.
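
For what it's worth, here's a rough sketch of how you might see how much
of that table's data lives out-of-line in TOAST ("mytable" is just a
placeholder for your table name; pg_table_size() needs 9.0 or later):

    -- size of the whole table vs. just its main heap; the difference is
    -- mostly the TOAST storage holding the large bytea values
    SELECT pg_size_pretty(pg_table_size('mytable'))      AS table_total,
           pg_size_pretty(pg_relation_size('mytable'))   AS heap_only,
           pg_size_pretty(pg_table_size('mytable')
                          - pg_relation_size('mytable')) AS toast_and_other;

pg_total_relation_size('mytable') would add indexes on top of that, if
you want the full on-disk footprint.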

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
