very large record sizes and resource usage

From: jtkells(at)verizon(dot)net
To: pgsql-performance(at)postgresql(dot)org
Subject: very large record sizes and resource usage
Date: 2011-07-07 14:33:05
Message-ID: pqgb17p8h6nqrdfrpgfgcrit9h3nfvvqnf@4ax.com
Lists: pgsql-performance

Are there any guidelines for sizing work_mem, shared_buffers, and other
configuration parameters with regard to very large records? I have a
table with a bytea column, and I am told that some of these columns
contain over 400 MB of data. On several servers I am having problems
reading, and more specifically dumping, these records (the whole table)
with pg_dump.
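(One reason such rows are expensive to dump: in a plain-text dump, bytea values are emitted in PostgreSQL's hex escape format, so every raw byte becomes two hex characters and the output is roughly double the stored size. A minimal illustration of that size blow-up, using Python's built-in hex encoding as a stand-in:)

```python
# Sketch: a text-format bytea literal looks like \x0123ab..., where each
# raw byte is written as two hex digits. For a 400 MB value, the dumped
# representation is therefore on the order of 800 MB of text.
payload = bytes(range(256)) * 4          # 1 KiB of sample binary data
hex_form = r"\x" + payload.hex()          # hex escape form, as in a dump file
print(len(payload), len(hex_form))        # encoded form is about 2x the raw size
```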

Thanks
