From: Barry Lind <barry(at)xythos(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Speed of locating tables?
Date: 2000-05-26 16:47:17
Message-ID: 392EAA95.3BBC5EE1@xythos.com
Lists: pgsql-general
Does this also mean that if you are using large objects, you really
won't be able to store large numbers of them in a database?
(If I understand correctly, each large object creates two files: one for
the large object itself and one for its index.) If this is true for
large objects, is there any workaround? The application I am porting
from Oracle will be storing on the order of 1,000,000 large objects.
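One possible workaround (a sketch only; nothing in this thread confirms it, and the table and column names here are illustrative) is to skip the large-object interface and store the binary data in an ordinary table with a bytea column, so all objects share one set of table files instead of creating two files apiece. Note that depending on the PostgreSQL version, a single row may be limited by the block size, so this only fits objects small enough to store in-line:

```sql
-- Hypothetical alternative to per-object file storage: keep every
-- binary payload as a row in a single table.
CREATE TABLE stored_object (
    object_id  serial PRIMARY KEY,  -- surrogate key for lookups
    name       text NOT NULL,       -- application-level object name
    payload    bytea                -- binary content stored in-line
);

-- Fetching one object touches only this table's files, no matter
-- how many objects are stored.
SELECT payload FROM stored_object WHERE object_id = 42;
```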
thanks,
--Barry
Tom Lane wrote:
>
> Steve Wampler <swampler(at)noao(dot)edu> writes:
> > To me, the most natural way to encode the sets is to
> > create a separate table for each set, since the attributes
> > can then be indexed and referenced quickly once the table
> > is accessed. But I don't know how fast PG is at locating
> > a table, given its name.
>
> > So, to refine the question - given a DB with (say) 100,000
> > tables, how quickly can PG access a table given its name?
>
> Don't even think about 100000 separate tables in a database :-(.
> It's not so much that PG's own datastructures wouldn't cope,
> as that very few Unix filesystems can cope with 100000 files
> in a directory. You'd be killed on directory search times.
>
> I don't see a good reason to be using more than one table for
> your attributes --- add one more column to what you were going
> to use, to contain an ID for each attribute set, and you'll be
> a lot better off. You'll want to make sure there's an index
> on the ID column, of course, or on whichever columns you plan
> to search by.
>
> regards, tom lane
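Tom's single-table suggestion might look like the following sketch (the table and column names are illustrative, not taken from the thread):

```sql
-- One table holds every attribute set; set_id distinguishes the sets
-- that would otherwise have been 100,000 separate tables.
CREATE TABLE attributes (
    set_id  integer NOT NULL,  -- identifies which attribute set a row belongs to
    name    text    NOT NULL,  -- attribute name
    value   text               -- attribute value
);

-- Index on the ID column, as Tom recommends, so locating one set's
-- rows does not require scanning the whole table.
CREATE INDEX attributes_set_id_idx ON attributes (set_id);

-- All attributes of one set:
SELECT name, value FROM attributes WHERE set_id = 1234;
```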