Re: optimization join on random value

From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Anton Bushmelev <djeday84(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: optimization join on random value
Date: 2015-05-03 21:28:46
Message-ID: 5546930E.5080201@iki.fi
Lists: pgsql-performance

On 05/04/2015 12:23 AM, Anton Bushmelev wrote:
> Hello gurus of postgres, is it possible to tune a query with a join on a
> random string?
> I know it is not a real-life example, but I need it for tests.
>
> soe=# explain
> soe-# SELECT ADDRESS_ID,
> soe-# CUSTOMER_ID,
> soe-# DATE_CREATED,
> soe-# HOUSE_NO_OR_NAME,
> soe-# STREET_NAME,
> soe-# TOWN,
> soe-# COUNTY,
> soe-# COUNTRY,
> soe-# POST_CODE,
> soe-# ZIP_CODE
> soe-# FROM ADDRESSES
> soe-# WHERE customer_id = trunc( random()*45000) ;
>                                  QUERY PLAN
> -------------------------------------------------------------------------------------------
>  Seq Scan on addresses  (cost=0.00..165714.00 rows=22500 width=84)
>    Filter: ((customer_id)::double precision = trunc((random() * 45000::double precision)))
> (2 rows)
>
>

There are two problems here that prevent the index from being used:

1. random() is volatile, so it is re-evaluated for each row (see the
catalog check below).
2. For the comparison, customer_id is cast to double precision, while the
index is on the bigint value.

To work around the first problem, put the random() call inside a scalar
subquery, so it is evaluated only once. For the second problem, cast the
result to bigint.

SELECT ... FROM addresses
WHERE customer_id = (SELECT random()*45000)::bigint
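
Putting it together, a fuller version of the rewrite (keeping the trunc()
from the original query so the id distribution is unchanged; this assumes a
btree index on addresses.customer_id, as in the original test):

-- Assumes: CREATE INDEX ON addresses (customer_id);
-- random() runs once in the scalar subquery; the ::bigint cast lets the
-- planner match the index instead of casting customer_id to double precision.
EXPLAIN
SELECT address_id, customer_id, date_created, house_no_or_name,
       street_name, town, county, country, post_code, zip_code
FROM addresses
WHERE customer_id = (SELECT trunc(random() * 45000))::bigint;

With that change, the plan should show an index scan on customer_id rather
than the sequential scan above.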

- Heikki
