Re: Table partitioning for maximum speed?

From: "David Busby" <busby(at)pnts(dot)com>
To: "Jean-Luc Lachance" <jllachan(at)nsd(dot)ca>, "Vivek Khera" <khera(at)kcilink(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Table partitioning for maximum speed?
Date: 2003-10-10 21:10:30
Message-ID: 007a01c38f72$ef8a7c60$1100000a@busbydev
Lists: pgsql-general

Is this partitioning like the schemas mentioned here:
http://www.postgresql.org/docs/current/static/ddl-schemas.html? Would those
help increase performance?
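
Something like this is what I was picturing -- the schema and table names
are just made up for illustration:

    -- one schema per month, each holding its own copy of the table
    CREATE SCHEMA log_2003_09;
    CREATE SCHEMA log_2003_10;

    CREATE TABLE log_2003_09.hits (hit_time timestamp, url text);
    CREATE TABLE log_2003_10.hits (hit_time timestamp, url text);

    -- queries then name the schema (or rely on search_path)
    SELECT count(*) FROM log_2003_10.hits WHERE url LIKE '/foo%';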

/B

----- Original Message -----
From: "Jean-Luc Lachance" <jllachan(at)nsd(dot)ca>
To: "Vivek Khera" <khera(at)kcilink(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Sent: Friday, October 10, 2003 14:23
Subject: Re: [GENERAL] Table partitioning for maximum speed?

> BULL.
>
> How many times does PG have to scan the whole table because of MVCC?
> At least with partitioning there is a fighting chance that that won't be
> necessary.
> Queries that involve the field on which the table is partitioned execute
> faster by an order of magnitude.
> It also helps with vacuuming, as PG can vacuum one partition at a
> time.
> I have a 17M-row table whose records all get updated frequently over
> the course of a year.
> I would do my own partitioning with inheritance if it were not broken.
> Partitioning would be a BIG plus in my book. So would visibility of
> records, but that is another fight.
>
> JLL
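
(For anyone else following along: below is roughly what I understand the
do-it-yourself inheritance partitioning to look like -- the table and
column names are invented, and as Jean-Luc says the planner support for
this is still lacking.)

    -- parent table holds no rows itself; the children carry the data
    CREATE TABLE orders (
        order_id   integer,
        order_date date,
        amount     numeric
    );

    CREATE TABLE orders_2003q3 (
        CHECK (order_date >= '2003-07-01' AND order_date < '2003-10-01')
    ) INHERITS (orders);

    CREATE TABLE orders_2003q4 (
        CHECK (order_date >= '2003-10-01' AND order_date < '2004-01-01')
    ) INHERITS (orders);

    -- SELECT ... FROM orders scans the parent plus every child; the
    -- application has to aim each insert at the right child itself
    INSERT INTO orders_2003q4 VALUES (1, '2003-10-10', 19.95);

    -- vacuuming can then be done one child at a time
    VACUUM ANALYZE orders_2003q3;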
>
> Vivek Khera wrote:
> >
> > >>>>> "JB" == Jeff Boes <jboes(at)nexcerpt(dot)com> writes:
> >
> > JB> Will a query against a table of 0.5 million rows beat a query
> > JB> against a table of 7 million rows by a margin that makes it worth
> > JB> the hassle of supporting 15 "extra" tables?
> >
> > I think you'll be better off with a single table, as you won't have
> > contention for the index pages in the cache.
> >
> > One thing to do is to reindex reasonably often (for PG < 7.4) to avoid
> > index bloat, which can leave your indexes too large to fit in cache.
> > Just check the size of your index in the pg_class table, and when it
> > gets big, reindex (assuming you do lots of updates/inserts to the
> > table).
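
(Something like this is what I assume is meant by checking index size in
pg_class -- the index name is invented, and relpages is counted in 8k disk
blocks and only refreshed by VACUUM/ANALYZE:)

    SELECT relname, relpages, relpages * 8 AS approx_kb
    FROM pg_class
    WHERE relname = 'my_big_table_pkey';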
> >
> > Your table splitting solution sounds like something I'd do if I were
> > forced to use mysql ;-)
> >
> > --
> > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> > Vivek Khera, Ph.D. Khera Communications, Inc.
> > Internet: khera(at)kciLink(dot)com Rockville, MD +1-240-453-8497
> > AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/
> >
> > ---------------------------(end of broadcast)---------------------------
> > TIP 9: the planner will ignore your desire to choose an index scan if
> > your joining column's datatypes do not match
>
> ---------------------------(end of broadcast)---------------------------
> TIP 8: explain analyze is your friend
