
Re: Query seems slow if table has more than 200 million rows

From: "Qingqing Zhou" <zhouqq(at)cs(dot)toronto(dot)edu>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Query seems slow if table has more than 200 million rows
Date: 2005-09-27 01:43:14
Message-ID: dh9ti0$ovs$
Lists: pgsql-performance
""Ahmad Fajar"" <gendowo(at)konphalindo(dot)or(dot)id> wrote
> Select ids, keywords from dict where keywords='blabla' ('blabla' is a 
> single
> word);
> The table have 200 million rows, I have index the keywords field. On the
> first time my query seem to slow to get the result, about 15-60 sec to get
> the result. But if I repeat the query I will get fast result. My question 
> is
> why on the first time the query seem very slow.
> Table structure is quite simple:
> Ids bigint, keywords varchar(150), weight varchar(1), dpos int.
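
For reference, the schema described above corresponds to something like the
following (the table name comes from the quoted query; the index name is an
assumption, since the quote only says the keywords field is indexed):

CREATE TABLE dict (
    ids      bigint,
    keywords varchar(150),
    weight   varchar(1),
    dpos     int
);

CREATE INDEX dict_keywords_idx ON dict (keywords);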

The slowness on the first run is caused by disk I/O; the second run is faster 
because all the data pages the query needs are already in the buffer pool. 
200 million rows is not a problem for a btree index: even if your client tool 
appends some spaces to your keywords at insertion time, the tree is at most 5 
to 6 levels deep (with on the order of a hundred keys per index page, that is 
enough to address 200 million entries). Can you show the I/O statistics for 
the index from your statistics view?
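
For example, a query like this (a sketch; pg_statio_user_indexes is the
standard block-level I/O statistics view, and the WHERE clause assumes the
table name from the quoted query):

SELECT indexrelname,
       idx_blks_read,   -- index blocks read from disk
       idx_blks_hit     -- index blocks found in the buffer pool
FROM pg_statio_user_indexes
WHERE relname = 'dict';

A large idx_blks_read relative to idx_blks_hit on the first run would confirm
that the time is going into disk reads.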

