
Re: [HACKERS] Avoiding io penalty when updating large objects

From: Mark Dilger <pgsql(at)markdilger(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)surnet(dot)cl>, pgsql-hackers(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, Jan Wieck <JanWieck(at)Yahoo(dot)com>
Subject: Re: [HACKERS] Avoiding io penalty when updating large objects
Date: 2005-06-29 05:30:49
Message-ID: 42C23209.7020607@markdilger.com
Lists: pgsql-general, pgsql-hackers
Tom Lane wrote:
> Alvaro Herrera <alvherre(at)surnet(dot)cl> writes:
> 
>>On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
>>
>>>If, for a given row, the value of c is, say, approximately 2^30 bytes 
>>>large, then I would expect it to be divided up into 8K chunks in an 
>>>external table, and I should be able to fetch individual chunks of that 
>>>object (by offset) rather than having to detoast the whole thing.
> 
> 
>>I don't think you can do this with the TOAST mechanism.  The problem is
>>that there's no API which allows you to operate on only certain chunks
>>of data.
> 
> 
> There is the ability to fetch chunks of a toasted value (if it was
> stored out-of-line but not compressed).  There is no ability at the
> moment to update it by chunks.  If Mark needs the latter then large
> objects are probably the best bet.
> 
> I'm not sure what it'd take to support chunkwise update of toasted
> fields.  Jan, any thoughts?
> 
> 			regards, tom lane

Ok,

If there appears to be a sane path to implementing this, I may be able to 
contribute engineering effort to it.  (I manage a group of engineers and could 
spare perhaps half a man-year for this.)  But I would like direction on how 
you all think this should be done, or whether it is just a bad idea.
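To make the scale concrete: if an out-of-line value is stored as fixed-size numbered chunks, a byte-range read or update only needs to touch the chunks that range overlaps. A rough Python sketch of that arithmetic (the 8K chunk size is the nominal figure from upthread; the real TOAST chunk size is an implementation detail and is smaller than the 8K page size):

```python
# Sketch of how a byte-range access maps onto chunk sequence numbers.
# CHUNK_SIZE is a placeholder, not the actual TOAST chunk size.
CHUNK_SIZE = 8192

def chunks_for_range(offset, length):
    """Return (first_chunk_seq, last_chunk_seq) covering the byte range."""
    if length <= 0:
        raise ValueError("length must be positive")
    first = offset // CHUNK_SIZE
    last = (offset + length - 1) // CHUNK_SIZE
    return first, last

# Touching 100 bytes near the end of a 2^30-byte value involves only
# two chunks, not the whole gigabyte.
print(chunks_for_range(2**30 - 50, 100))  # → (131071, 131072)
```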

I can also go with the large object approach.  I'll look into that.
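For reference, the large object facility already supports the seek-and-write pattern (lo_open, lo_lseek, lo_write on the server, with analogous libpq client calls). A small stand-in sketch in Python, using an in-memory buffer in place of an actual large object purely to show the access pattern; this is not the PostgreSQL API itself:

```python
import io

# Stand-in for a large object; the real thing lives server-side and is
# accessed via lo_open/lo_lseek/lo_write.  1 MiB keeps the demo small.
blob = io.BytesIO(b"\x00" * (1 << 20))

def write_at(buf, offset, data):
    """Overwrite `data` at `offset` without rewriting the rest of the
    value, mirroring an lo_lseek followed by lo_write."""
    buf.seek(offset)
    buf.write(data)

write_at(blob, 4096, b"patched")

blob.seek(4096)
print(blob.read(7))  # → b'patched'
```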

Mark Dilger
