Re: Are 50 million rows a problem for postgres ?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Donald Fraser" <demolish(at)cwgsy(dot)net>
Cc: "[ADMIN]" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Are 50 million rows a problem for postgres ?
Date: 2003-09-08 15:14:00
Message-ID: 13996.1063034040@sss.pgh.pa.us
Lists: pgsql-admin

"Donald Fraser" <demolish(at)cwgsy(dot)net> writes:
> My analysis at the time was that to access random records, performance
> deteriorated the further away the records that you were accessing were
> from the beginning of the index. For example using a query that had
> say OFFSET 250000 would cause large delays.

Well, yeah. OFFSET implies generating and discarding that number of
records. AFAICS there isn't any shortcut for this, even in a query
that's just an indexscan, since the index alone can't tell us whether
any given record would actually be returned.
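[Editor's note: the point above, that OFFSET forces the executor to generate and discard every skipped row, is why the usual workaround is keyset pagination, which remembers the last key seen and seeks past it through the index. A minimal sketch, using SQLite via Python's sqlite3 purely for illustration (the table and column names are made up, and SQLite stands in for Postgres here):]

```python
import sqlite3

# Why large OFFSETs are slow: the executor must produce and throw
# away every skipped row before returning the page you asked for.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 1001)])

# OFFSET-based page: generates and discards the first 500 rows.
offset_page = conn.execute(
    "SELECT id, val FROM t ORDER BY id LIMIT 10 OFFSET 500").fetchall()

# Keyset pagination: remember the last id seen on the previous page
# and seek past it, so the index jumps straight to the page start.
last_id = 500
keyset_page = conn.execute(
    "SELECT id, val FROM t WHERE id > ? ORDER BY id LIMIT 10",
    (last_id,)).fetchall()

assert offset_page == keyset_page  # same page, very different work
```

[The trade-off: keyset pagination requires a stable ordering key and cannot jump to an arbitrary page number, but its cost does not grow with how deep into the result set you are.]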

regards, tom lane
