Large Object problems (was Re: JDBC int8 hack)

From: Peter Mount <peter(at)retep(dot)org(dot)uk>
To: Kyle VanderBeek <kylev(at)yaga(dot)com>, pgsql-patches(at)postgresql(dot)org
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Large Object problems (was Re: JDBC int8 hack)
Date: 2001-04-10 13:24:24
Message-ID: 5.0.2.1.0.20010410141636.022faaa0@mail.retep.org.uk
Lists: pgsql-hackers pgsql-patches

At 18:30 09/04/01 -0700, Kyle VanderBeek wrote:
>On Thu, Apr 05, 2001 at 04:08:48AM -0400, Peter T Mount wrote:
> > Quoting Kyle VanderBeek <kylev(at)yaga(dot)com>:
> >
> >
> > > Please consider applying my patch to the 7.0 codebase as a stop-gap
> > > measure until such time as the optimizer can be improved to notice
> > > indices on INT8 columns and cast INT arguments up.
> >
> > This will have to wait until after 7.1 is released. As this is a "new"
> feature,
> > this can not be included into 7.1 as it's now in the final Release
> Candidate
> > phase.
>
>This is a new feature? Using indices is "new"? I guess I really beg to
>differ. Seems like a bugfix to me (in the "workaround" category).

Yes, it is. INT8 is not a type yet supported by the driver, hence
it's "new".

In fact, the JDBC driver supports no arrays at this time (as PostgreSQL and
SQL3 arrays are different beasts).

If it's worked in the past, then that was sheer luck.
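(For reference, the cast workaround the patch automates can also be applied by hand in the SQL itself. A minimal sketch follows; the table and column names are invented for illustration, and the behaviour described applies to the 7.0-era planner:)

```java
// Sketch of the int8-index workaround for 7.0-era PostgreSQL.
// A bare integer literal is typed as int4, so the planner will not match
// it against an index on an INT8 column; explicitly casting the argument
// lets the index be used.  Table/column names here are hypothetical.
public class Int8CastDemo {
    // Build a query whose argument is cast to int8 so the planner
    // can use an index on the int8 column.
    public static String castQuery(String table, String col, long value) {
        return "SELECT * FROM " + table
             + " WHERE " + col + " = " + value + "::int8";
    }

    public static void main(String[] args) {
        // Without the cast, the 7.0 planner falls back to a sequential scan:
        //   SELECT * FROM events WHERE id = 42
        System.out.println(castQuery("events", "id", 42L));
        // prints: SELECT * FROM events WHERE id = 42::int8
    }
}
```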

>I'm going to start digging around in the optimizer code so such hacks as
>mine aren't needed. It's really heinous to find out your production
>server is freaking out and doing sequential scans for EVERYTHING.

Are you talking about the optimiser in the backend? There isn't one in
the JDBC driver.

>Another hack I need to work on (or someone else can) is to squish in a
>layer of filesystem hashing for large objects. We tried to use large
>objects and got destroyed. 40,000 rows and the server barely functioned.
>I think this is because of 2 things:
>
>1) Filehandles not being closed. This was an oversight I've seen covered
>in the list archives somewhere.

OK, ensure you are closing the large objects within JDBC. If you are, then
this is a backend problem.

One thing to try is to commit the transaction a bit more often (if you are
running within a single transaction for all 40k objects). Committing the
transaction will force the backend to close all open large objects on that
connection.
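(A sketch of what that looks like in JDBC follows. The connection details, batch size, and loop are illustrative, and the driver-specific large-object calls are indicated only by comments; the point is closing each object explicitly and committing in batches rather than once at the end:)

```java
// Sketch: write many large objects, committing every BATCH objects so
// the backend can release its open large-object descriptors.  The
// connection string and batch size are made up for illustration.
import java.sql.Connection;
import java.sql.DriverManager;

public class LoImport {
    static final int BATCH = 500;

    // In-loop commits a run of `objects` inserts issues -- pure
    // arithmetic, separated out so the batching is easy to check.
    static int commitsNeeded(int objects, int batch) {
        return (objects + batch - 1) / batch;
    }

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql:mydb", "user", "pass");
        conn.setAutoCommit(false);   // large objects need a transaction

        for (int i = 1; i <= 40000; i++) {
            // ... create/open/write the large object here via the
            // driver's large-object API, then close it explicitly ...

            if (i % BATCH == 0) {
                conn.commit();   // forces the backend to close any large
            }                    // objects still open on this connection
        }
        conn.commit();           // flush the final partial batch
        conn.close();
    }
}
```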

>2) The fact that all objects are stored in the single data directory.
>Once you get up to a good number of objects, directory scans really take a
>long, long time. This slows down any subsequent openings of large
>objects. Is someone working on this problem? Or have a patch already?

Again, not JDBC. Forwarding to the hackers list on this one. The naming
conventions were changed a lot in 7.1, and that was for more flexibility.

Peter
