From: Guillaume Lelarge <guillaume(at)lelarge(dot)info>
To: Tzvi R <sefer(at)hotmail(dot)com>
Cc: pgadmin-support(at)postgresql(dot)org
Subject: Re: Slow connect due to some queries
Date: 2010-01-18 15:56:39
Message-ID: 4B5484B7.5020405@lelarge.info
Lists: pgadmin-support
On 15/01/2010 00:30, Tzvi R wrote:
> [...]
> A quick overview of our database server:
> * Four databases.
> * Each database has about 20 schemas.
>
>
> The largest database contains:
> * select count(*) from pg_class where relkind = 'v'
> 101
> * select count(*) from pg_class where relkind = 'r'
> 11911 (about 500 tables in each schema — I know it's a lot, but I'd bet it's not uncommon)
> * About 10 sequences.
> * About 150 functions.
>
>
> select count(*) from pg_class
> 36444
>
>
> All these tables are large and have some TOASTed rows (you can see this in pg_type).
>
>
> Those queries are rather fast; it's just that operating over a (relatively) slow network exposes us to the latency of shipping that much traffic.
> I was wondering whether access to that table could be deferred, so that queries would join against it instead of prefetching it. Or perhaps it could be cached locally on disk, fetching only rows with higher OID values? (I'm guessing here, possibly incorrectly, that rows are only added, never updated.) That would allow one full fetch followed by incremental updates.
>
>
That would require quite a lot of work. I know there is much we could do to behave better with a database containing a lot of objects, but I'm not sure we'll have time to address this before the next release.
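For reference, the incremental fetch sketched in the quoted message might look like the query below, assuming the client remembers the highest OID it has already cached (the `:last_seen_oid` parameter is hypothetical, a client-side placeholder). One caveat worth noting: the guess that catalog rows are only ever added is not quite right — pg_class rows are updated in place (VACUUM and ANALYZE refresh columns such as relpages and reltuples), and dropped objects simply disappear, so an OID watermark alone could not fully replace the prefetch.

```sql
-- Hypothetical incremental catalog refresh: fetch only objects created
-- since the last cached OID (:last_seen_oid is supplied by the client).
-- Caveat: this misses in-place updates and dropped objects, so it would
-- have to be combined with some invalidation strategy to be correct.
SELECT c.oid, n.nspname, c.relname, c.relkind
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid > :last_seen_oid
ORDER BY c.oid;
```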
--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com