BUG #14253: b-tree no index range scan?

From: digoal(at)126(dot)com
To: pgsql-bugs(at)postgresql(dot)org
Subject: BUG #14253: b-tree no index range scan?
Date: 2016-07-17 03:03:16
Message-ID: 20160717030316.1425.90322@wrigleys.postgresql.org
Lists: pgsql-bugs

The following bug has been logged on the website:

Bug reference: 14253
Logged by: Zhou Digoal
Email address: digoal(at)126(dot)com
PostgreSQL version: 9.5.3
Operating system: CentOS 6.x x64
Description:

Hi,
When I use a b-tree index scan to read many tuples (spread across the index's leaf pages), the number of index page reads is very large, larger than the index's actual size. Why?
Is this a bug in EXPLAIN, or does PostgreSQL not perform an index range scan?

postgres=# create unlogged table tbl(id int primary key, info text, crt_time timestamp);
postgres=# insert into tbl select trunc(random()*100000000),'test',now() from generate_series(1,50000000) on conflict on constraint tbl_pkey do nothing;
postgres=# vacuum analyze tbl;
Page usage of the index and the heap:

postgres=# select relpages from pg_class where relname='tbl';
relpages
----------
250600
(1 row)

postgres=# select relpages from pg_class where relname='tbl_pkey';
relpages
----------
107881
(1 row)
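For comparison, the combined on-disk size of heap plus index (this query is only an illustration I am adding here, it was not part of the original session):

postgres=# select sum(relpages) from pg_class where relname in ('tbl', 'tbl_pkey');
-- 250600 + 107881 = 358481 pages for the heap and the index combined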

I see that PostgreSQL keeps bidirectional links between index pages; why aren't they used to reduce the number of index pages scanned?
src/include/access/nbtree.h

typedef struct BTPageOpaqueData
{
	BlockNumber btpo_prev;      /* left sibling, or P_NONE if leftmost */
	BlockNumber btpo_next;      /* right sibling, or P_NONE if rightmost */
	union
	{
		uint32        level;    /* tree level --- zero for leaf pages */
		TransactionId xact;     /* next transaction ID, if deleted */
	}           btpo;
	uint16      btpo_flags;     /* flag bits, see below */
	BTCycleId   btpo_cycleid;   /* vacuum cycle ID of latest split */
} BTPageOpaqueData;
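Those sibling pointers can be inspected directly with the pageinspect extension; a minimal sketch, assuming pageinspect is installed (the block number is only illustrative):

postgres=# create extension if not exists pageinspect;
postgres=# select btpo_prev, btpo_next, btpo_flags
           from bt_page_stats('tbl_pkey', 1);   -- block 0 is the metapage, block 1 chosen arbitrarily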

postgres=# explain (analyze,verbose,timing,costs,buffers) select id from tbl offset 1000000 limit 10;
                                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=25968.49..25968.75 rows=10 width=4) (actual time=528.914..528.921 rows=10 loops=1)
   Output: id
   Buffers: shared hit=750554
   ->  Index Only Scan using tbl_pkey on public.tbl  (cost=0.56..1021687.32 rows=39344184 width=4) (actual time=0.030..347.409 rows=1000010 loops=1)
         Output: id
         Heap Fetches: 0
         Buffers: shared hit=750554
 Planning time: 0.083 ms
 Execution time: 528.948 ms
(9 rows)

postgres=# explain (analyze,verbose,timing,costs,buffers) select id from tbl limit 1000010;
                                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.56..25968.75 rows=1000010 width=4) (actual time=0.032..736.929 rows=1000010 loops=1)
   Output: id
   Buffers: shared hit=750554
   ->  Index Only Scan using tbl_pkey on public.tbl  (cost=0.56..1021687.32 rows=39344184 width=4) (actual time=0.031..362.791 rows=1000010 loops=1)
         Output: id
         Heap Fetches: 0
         Buffers: shared hit=750554
 Planning time: 0.097 ms
 Execution time: 916.256 ms
(9 rows)
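To put those buffer counts in perspective, a rough back-of-the-envelope estimate I am adding here (it assumes index entries are spread evenly across leaf pages and ignores the small number of internal pages):

postgres=# select 39344184 / 107881;   -- ~364 index tuples per index page
postgres=# select 1000010 / 364;       -- ~2747 leaf pages should hold 1000010 ids
-- yet both plans report Buffers: shared hit=750554, roughly twice the size of
-- the whole heap plus index (250600 + 107881 = 358481 pages).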

thanks,
best regards,
digoal
