On Tue, Jul 1, 2008 at 1:29 PM, samantha mahindrakar wrote:
> I have a select statement that runs on a partition having, say, a couple
> million rows.
> The table has indexes on two columns. However, the query also uses
> non-indexed columns in its where clause.
> For example:
> SELECT lane_id, measurement_start
> FROM tss.lane_data_06_08
> WHERE lane_id IN (SELECT lane_id FROM lane_info WHERE inactive IS NULL)
> AND date_part('hour', measurement_start) BETWEEN 5 AND 23
> AND date_part('day', measurement_start) = 30
> GROUP BY lane_id, measurement_start, measurement_end, speed, volume, occupancy, quality, effective_date
> ORDER BY lane_id, measurement_start
> Out of these, only lane_id and measurement_start are indexed. This query
> returns around 10,000 rows, but it seems to be taking a long time
> to execute, which doesn't make sense for a select statement. It doesn't
> make sense to create an index for every field we use in the
> where clause.
> Isn't there any way we can improve the performance?
I'm guessing that adding an expression index on
date_part('hour', measurement_start), on date_part('day', measurement_start),
or on both would help.
What does EXPLAIN ANALYZE SELECT ... (rest of query here) say?
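As a minimal sketch of the suggestion above (index names are illustrative, and this assumes the default btree index type): the planner can only use an expression index when the query's WHERE clause uses exactly the same expression, which this query does.

```sql
-- Hypothetical expression indexes on the date_part() expressions
-- used in the WHERE clause; names are made up for illustration.
CREATE INDEX lane_data_06_08_hour_idx
    ON tss.lane_data_06_08 (date_part('hour', measurement_start));
CREATE INDEX lane_data_06_08_day_idx
    ON tss.lane_data_06_08 (date_part('day', measurement_start));

-- Refresh statistics, then compare the plan before and after:
ANALYZE tss.lane_data_06_08;
EXPLAIN ANALYZE
SELECT lane_id, measurement_start
FROM tss.lane_data_06_08
WHERE lane_id IN (SELECT lane_id FROM lane_info WHERE inactive IS NULL)
  AND date_part('hour', measurement_start) BETWEEN 5 AND 23
  AND date_part('day', measurement_start) = 30
GROUP BY lane_id, measurement_start, measurement_end, speed,
         volume, occupancy, quality, effective_date
ORDER BY lane_id, measurement_start;
```

Since the query selects nearly all hours (5 through 23), the day index is likely the more selective of the two.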