From: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
To: Mohan Radhakrishnan <radhakrishnan(dot)mohan(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: BRIN index on timestamptz
Date: 2021-04-24 00:06:17
Message-ID: abf9aacc-cd77-b08d-311e-7bc932f4ddd8@enterprisedb.com
Lists: pgsql-general
On 4/23/21 10:31 AM, Mohan Radhakrishnan wrote:
> Hi,
> I am planning to use a BRIN index as I search based on timestamptz fields.
> There are millions of records. I refer to
> https://www.percona.com/blog/2019/07/16/brin-index-for-postgresql-dont-forget-the-benefits
>
> I execute this on an AWS RDS instance. Is there something in the plan I
> should pay attention to? I notice the execution time.
>
> Thanks,
> Mohan
>
> INSERT INTO testtab (id, date, level, msg)
> SELECT g, CURRENT_TIMESTAMP + (g || ' minute')::interval,
>        random() * 6, md5(g::text)
> FROM generate_series(1, 8000000) AS g;
Two things:

a) You need to run ANALYZE first, otherwise there are no statistics the
optimizer could use (which is why the row estimates in the plans are
entirely bogus).
b) BRIN indexes don't work on random data, because the whole idea is
eliminating large blocks of data (e.g. 1MB page ranges). With random
data that's not going to happen, because each such range will match
almost any query, which is why the seqscan ends up a bit faster than
using the BRIN index.
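To illustrate (a sketch, again assuming the testtab table and index
names of my choosing): the date column is inserted in increasing order,
so its values correlate with physical row order and BRIN can eliminate
ranges; the level column is random() * 6, so it cannot.

```sql
-- date is physically correlated (generate_series inserts it in order),
-- so a BRIN index on it lets range queries skip most 1MB block ranges:
CREATE INDEX testtab_date_brin ON testtab USING brin (date);

-- By contrast, a BRIN index on the random level column is useless:
-- every block range contains the full 0..6 spread, so no range can be
-- skipped and a sequential scan wins.
-- CREATE INDEX testtab_level_brin ON testtab USING brin (level);
```

You can check the physical correlation the planner sees in
pg_stats.correlation after ANALYZE; values near 1.0 are good BRIN
candidates.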
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company