From: The Hermit Hacker <scrappy(at)hub(dot)org>
To: Vadim Mikheev <vadim(at)krs(dot)ru>
Cc: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>, t-ishii(at)sra(dot)co(dot)jp, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Arbitrary tuple size
Date: 1999-07-28 12:04:54
Message-ID: Pine.BSF.4.05.9907280902192.78452-100000@thelab.hub.org
Lists: pgsql-hackers
On Fri, 9 Jul 1999, Vadim Mikheev wrote:
>
> Bruce Momjian wrote:
> >
> > > Bruce Momjian wrote:
> > > >
> > > > If we get wide tuples, we could just throw all large objects into one
> > > > table, and have an index on it. We can then vacuum it to compact space, etc.
> > >
> > > Storing 2Gb LO in table is not good thing.
> > >
> > > Vadim
> > >
> >
> > Ah, but we have segmented tables now. It will auto-split at 1 gig.
>
> Well, now consider update of 2Gb row!
> I worry not about non-overwriting, but about writing
> a 2Gb log record to WAL - we won't be able to do that, for sure.
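[A toy sketch of the chunked layout under discussion: if each large object is stored as fixed-size chunk rows keyed by (loid, pageno), an update touches - and would log - only the chunk rows that changed, never one 2Gb record. This is a model in Python over SQLite, not anyone's actual proposal; the table layout, chunk size, and names are illustrative assumptions.]

```python
import sqlite3

CHUNK = 8192  # hypothetical chunk size, analogous to one page

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE lo_chunk ("
    " loid INTEGER,"    # which large object
    " pageno INTEGER,"  # chunk number within the object
    " data BLOB,"
    " PRIMARY KEY (loid, pageno))"
)

def lo_write(loid, payload):
    # Split the object into CHUNK-sized rows; rewriting one region
    # of the object replaces only the rows that region overlaps.
    conn.executemany(
        "INSERT OR REPLACE INTO lo_chunk VALUES (?, ?, ?)",
        [(loid, n, payload[off:off + CHUNK])
         for n, off in enumerate(range(0, len(payload), CHUNK))],
    )

def lo_read(loid):
    # Reassemble the object by scanning its chunks in order.
    rows = conn.execute(
        "SELECT data FROM lo_chunk WHERE loid = ? ORDER BY pageno", (loid,))
    return b"".join(r[0] for r in rows)

blob = bytes(range(256)) * 1024  # a 256 kB test object
lo_write(1, blob)
assert lo_read(1) == blob
```

[Under that layout, a write that changes one 8 kB region of a 2Gb object dirties a single chunk row, so the log record stays chunk-sized.]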
What I'm kinda curious about is *why* you would want to store an LO in the
table in the first place? And, consequently, as Bruce had
suggested...index it? Unless something has changed recently that I
totally missed, the only time the index would be used is if a query was
based on a) the start of the string (i.e. ^<string>) or b) the complete
string (i.e. ^<string>$) ...
So what benefit would an index on an LO be?
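[Marc's point can be modeled with a sorted key list standing in for a b-tree: an anchored prefix (^string) maps to one contiguous key range, so the index answers it with two binary searches, while an interior substring match cannot be expressed as a range and degenerates to a full scan. A toy illustration; the function names and data are made up.]

```python
from bisect import bisect_left

def prefix_scan(sorted_keys, prefix):
    # A b-tree can serve "^prefix" as a range scan: every key in
    # [prefix, prefix + <max codepoint>) starts with the prefix.
    lo = bisect_left(sorted_keys, prefix)
    hi = bisect_left(sorted_keys, prefix + chr(0x10FFFF))
    return sorted_keys[lo:hi]

def substring_scan(keys, needle):
    # "%needle%" maps to no single key range, so the index is
    # useless and every key must be examined.
    return [k for k in keys if needle in k]

keys = sorted(["apple", "applet", "apply", "banana", "grape"])
assert prefix_scan(keys, "appl") == ["apple", "applet", "apply"]
assert substring_scan(keys, "ana") == ["banana"]
```

[Which is the asymmetry above: a query anchored at the start of the value can use the index, anything matching the middle of a multi-megabyte LO cannot.]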
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy(at)hub(dot)org secondary: scrappy(at){freebsd|postgresql}.org