Re: Random Page Cost and Planner

From: David Jarvis <thangalin(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Random Page Cost and Planner
Date: 2010-05-25 23:26:52
Message-ID: AANLkTin-j4SACaMeJCH4r5ekEWrtuiFqU56gVM1_lIyV@mail.gmail.com
Lists: pgsql-performance

Hi, Kevin.

Thanks for the response.

> It sounds as though the active portion of your database is pretty
> much cached in RAM. True?

I would not have thought so; there are seven tables, each with 39 to 43
million rows, defined as:

CREATE TABLE climate.measurement (
  id bigserial NOT NULL,
  taken date NOT NULL,
  station_id integer NOT NULL,
  amount numeric(8,2) NOT NULL,
  flag character varying(1) NOT NULL DEFAULT ' '::character varying,
  category_id smallint NOT NULL
);

The machine has 4GB of RAM, donated to PG as follows:

shared_buffers = 1GB
temp_buffers = 32MB
work_mem = 32MB
maintenance_work_mem = 64MB
effective_cache_size = 256MB

Everything else is at its default value. The kernel's shared memory limit:

$ cat /proc/sys/kernel/shmmax
2147483648

Two postgres processes are enjoying the (virtual) space:

2619 postgres 20 0 1126m 524m 520m S 0 13.2 0:09.41 postgres
2668 postgres 20 0 1124m 302m 298m S 0 7.6 0:04.35 postgres

> You can make such plans look more attractive by cutting both
> random_page_cost and seq_page_cost. Some highly cached loads
> perform well with these set to equal values on the order of 0.1 to
> 0.001.

I tried this: no improvement.
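
(For completeness, this is only a sketch of how those settings can be tried
per session, without touching postgresql.conf; the values are the ones you
suggested:)

-- Sketch only: session-level overrides, so nothing persists past the connection.
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
-- Confirm the setting took effect before re-running the slow query:
SHOW random_page_cost;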

> It would tend to be better than random access to 43 million rows, at
> least if you need to go to disk for many of them.

I thought the index would take care of this? The index is on the unique key:

station_id, taken, and category_id (the filter for child tables).
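
As a sketch only (the real index name in the schema may differ), that unique
key looks roughly like:

-- Sketch of the unique key described above; the index name is illustrative.
CREATE UNIQUE INDEX measurement_station_taken_category_idx
  ON climate.measurement (station_id, taken, category_id);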

Each time I scan for data, I always provide the station identifier and its
date range. The date range is obtained from another table (given the same
station_id).
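
So the access pattern is roughly this shape (a sketch with placeholder
station, dates, and columns, not the exact query I posted):

-- Sketch only: the station number, date range, and selected columns are
-- placeholders, not the query posted earlier in the thread.
SELECT m.taken, m.amount
FROM climate.measurement m
WHERE m.station_id = 42
  AND m.taken BETWEEN DATE '1900-01-01' AND DATE '1950-12-31'
  AND m.category_id = 1;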

I will be trying various other indexes. I've noticed that the results are
sometimes very quick and sometimes very slow. For the query I posted, it
would be great to know which indexes would work best. I suspect that's going
to require trial and many errors.

Dave
