
Re: Random Page Cost and Planner

From: David Jarvis <thangalin(at)gmail(dot)com>
To: Alexey Klyukin <alexk(at)commandprompt(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Random Page Cost and Planner
Date: 2010-05-26 16:30:19
Lists: pgsql-performance
Hi, Alexey.

> Is it necessary to get the data as far as 1900 all the time? Maybe there is
> a possibility to aggregate results from the past years if they are constant.

This I have done. I created another table (station_category) that associates
stations with when they started to take measurements and when they stopped
(based on the data in the measurement table). For example:

station_id; category_id; taken_start; taken_end

This means that station 1 has data for categories 4 through 7. The
measurement table returns 3865 rows for station 1 and category 7 (this uses
an index and took 7 seconds cold):

station_id; taken; amount

The station_category table is basically another index.
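To make that concrete, here is a hedged sketch of how such a lookup table could prune the scan before touching the 273M-row measurement table. Only the column names (station_id, category_id, taken_start, taken_end, taken, amount) appear in the message above; the join shape and literal values are my assumptions:

```sql
-- Hypothetical query: use station_category to bound the date range,
-- so the measurement scan stays inside the span the station reported.
-- Table/column names follow the message; the query shape is assumed.
SELECT m.station_id, m.taken, m.amount
FROM   station_category sc
JOIN   measurement m
  ON   m.station_id = sc.station_id
 AND   m.taken BETWEEN sc.taken_start AND sc.taken_end
WHERE  sc.station_id  = 1
  AND  sc.category_id = 7;
```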

Would explicitly sorting the measurement table (273M rows) by station then
by date help?
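For reference, PostgreSQL's CLUSTER command is one way to perform that explicit sort: it rewrites the heap in the order of a chosen index. A sketch, assuming a composite index on (station_id, taken); the index name is made up for illustration:

```sql
-- Hypothetical index name; the (station_id, taken) ordering matches
-- the "station then date" sort asked about above.
CREATE INDEX measurement_station_taken_idx
    ON measurement (station_id, taken);

-- CLUSTER physically rewrites the table in index order. Note this is
-- a one-time operation: rows inserted afterwards are not kept sorted.
CLUSTER measurement USING measurement_station_taken_idx;
ANALYZE measurement;
```

Clustering a table this large takes an exclusive lock for the duration of the rewrite, so it would need a maintenance window.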


