Interesting suggestion on the partial index!
I'll find out if we can extract our code that did the work. It was simple but scattered in a few routines.
In concept it worked like this:
1 - Ignore hint bits if they are unset; use them if they are set. This affects heapam and vacuum, I think.
2 - Implement a cache for clog lookups based on the optimistic assumption that the data was inserted in bulk. Put the cache one call away from heapgetnext().
I forget the details of (2). As I recall, if we fall off of the assumption, the penalty for long scans gets large-ish (maybe 2X), but since when do people full-table scan when their updates/inserts are so scattered across TIDs? It's an obvious big win for DW work.
We also have a GUC to turn it off if needed, in which case a vacuum will write the hint bits.
----- Original Message -----
From: Hannu Krosing <hannu(at)krosing(dot)net>
To: Luke Lonergan
Cc: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>; Greg Smith <gsmith(at)gregsmith(dot)com>; Alvaro Herrera <alvherre(at)commandprompt(dot)com>; pgsql-performance(at)postgresql(dot)org <pgsql-performance(at)postgresql(dot)org>
Sent: Thu May 22 12:10:02 2008
Subject: Re: [PERFORM] I/O on select count(*)
On Thu, 2008-05-15 at 10:52 +0800, Luke Lonergan wrote:
> BTW – we’ve removed HINT bit checking in Greenplum DB and improved the
> visibility caching which was enough to provide performance at the same
> level as with the HINT bit optimization, but avoids this whole “write
> the data, write it to the log also, then write it again just for good
> measure” behavior.
> For people doing data warehousing work like the poster, this Postgres
> behavior is miserable. It should be fixed for 8.4 for sure
I might try it. I think I have told you about my ideas ;)
I plan to first do "caching" (for being able to do index-only scans
among other things) and then, if the cache works reliably, use the
"caching" code as the main visibility / MVCC mechanism.
Is Greenplum's code available, or should I roll my own?
> BTW – for the poster’s benefit, you should implement partitioning by
> date, then load each partition and VACUUM ANALYZE after each load.
> You probably won’t need the date index anymore – so your load times
> will vastly improve (no indexes), you’ll store less data (no indexes)
> and you’ll be able to do simpler data management with the partitions.
> You may also want to partition AND index if you do a lot of short
> range selective date predicates. Example would be: partition by day,
> index on date field, queries selective on date ranges by hour will
> then select out only the day needed, then index scan to get the
> hourly values.
If your queries allow it, you may try indexing on
(extract('HOUR' from date)::int2)
so the index may be smaller.
Storing the date as type abstime is another way to reduce index size.
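A sketch of such an expression index (table and column names hypothetical; note the query must use the same expression as the index for the planner to consider it):

```sql
-- Functional index on just the hour-of-day, cast down to int2
-- in the hope of keeping the index entries small.
CREATE INDEX measurements_hour
    ON measurements ((extract('HOUR' from logged_at)::int2));

-- A query written with the identical expression can use the index:
SELECT * FROM measurements
 WHERE extract('HOUR' from logged_at)::int2 = 14;
```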
> Typically time-oriented data is nearly time sorted anyway, so you’ll
> also get the benefit of a clustered index.