Re: Fast insertion indexes: why no developments

From: Leonardo Francalanci <m_lists(at)yahoo(dot)it>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Fast insertion indexes: why no developments
Date: 2013-11-05 15:22:15
Message-ID: 1383664935793-5777020.post@n5.nabble.com
Lists: pgsql-hackers

Claudio Freire wrote
> real data isn't truly random

Well, let's try normal_rand then:

-- normal_rand comes from the tablefunc extension
-- (create extension if not exists tablefunc;)
create table t1 as
select trunc(normal_rand(1000000, 500000, 30000)) as n,
       generate_series(1, 1000000) as i;

-- Simulate a minmax index with 100-row block ranges: compute min/max of n
-- per range, then for 100 normally distributed probe values count how many
-- ranges each probe falls into (i.e. how many ranges would have to be read).
with cte as
  (select min(n) as minn, max(n) as maxn, i/100 as blk from t1 group by i/100),
inp as
  (select generate_series(1, 100) as iinp,
          trunc(normal_rand(100, 500000, 30000)) as s)
select count(*), iinp
from inp
left outer join cte on inp.s between minn and maxn
group by iinp;

Not that much different in my run...
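
To put the per-probe counts in context, one could compare them with the total
number of simulated block ranges (a quick sanity check along these lines, not
part of the run above):

-- Hypothetical follow-up, not from the run above: total number of
-- simulated 100-row block ranges; if each probe matches close to this,
-- the simulated minmax index prunes essentially nothing.
select count(distinct i/100) as total_ranges from t1;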

Claudio Freire wrote
> you haven't really
> analyzed update cost, which is what we were talking about in that last
> post.

I don't care about a better update cost if the cost of querying is a table
scan. Otherwise, I could just claim that no index at all is even better than
minmax: zero update cost, and pretty much the same query time.
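
(As a baseline, something along these lines is what I mean by "table scan
cost" on the test table above; just a sketch, not a number I'm quoting from
anywhere:)

-- Hypothetical baseline, not a measurement from this thread:
-- the full-scan query time a minmax index on random data would have to beat.
explain analyze select count(*) from t1 where n = 500000;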

Maybe there's value in minmax indexes for sequential data, but not for
random data, which is the topic of this thread.

BTW, I would like to see some performance tests of minmax indexes vs. btree
for sequential inputs... is the gain worth it? I couldn't find any mention of
performance tests in the minmax threads.
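
(Roughly the kind of test I have in mind, just a sketch with made-up table
and index names; the minmax side would use the patch under discussion, so
only the btree half is shown here:)

-- Hypothetical benchmark sketch, sequential input, run via psql:
create table seq_test (n bigint);
create index seq_test_btree on seq_test (n);  -- vs. a minmax index from the patch
\timing on
-- insertion cost with the index in place:
insert into seq_test select generate_series(1, 10000000);
-- query cost for a narrow range predicate:
select count(*) from seq_test where n between 5000000 and 5001000;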

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Fast-insertion-indexes-why-no-developments-tp5776227p5777020.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.
