Re: 110,000,000 rows

From: david(at)gardnerit(dot)net
To: pgsql-general(at)postgresql(dot)org
Subject: Re: 110,000,000 rows
Date: 2010-05-26 22:18:17
Message-ID: 20100526221817.GF6007@monster.gardnerit.net
Lists: pgsql-general

At work I have one table with 32 million rows, not quite the size you
are talking about, but to give you an idea of the performance, the
following query returns 14,659 rows in 405ms:

SELECT * FROM farm.frame
WHERE process_start > '2010-05-26';

process_start is a timestamp without time zone column, and is covered by
an index. Rows are relatively evenly distributed over time, so the index
performs quite well.
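
The index itself is nothing special; a plain btree on the timestamp
column is all the planner needs for both the open-ended (>) and the
range queries. Roughly like this (the index name is just illustrative,
not copied from my actual schema):

-- ordinary btree index on the timestamp column
CREATE INDEX frame_process_start_idx
    ON farm.frame (process_start);

-- confirm the planner actually uses it
EXPLAIN ANALYZE
SELECT * FROM farm.frame
WHERE process_start > '2010-05-26';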

A BETWEEN query also performs well:
SELECT * FROM farm.frame
WHERE process_start
BETWEEN '2010-05-26 08:00:00'
AND '2010-05-26 09:00:00';

fetches 1,350 rows in 25ms.

I also have a summary table that is maintained by triggers, which is a
bit of denormalization, but speeds up common reporting queries.
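
As a rough sketch of the idea (the summary table, column names, and
trigger below are made up for illustration rather than taken from my
real schema, and the sketch ignores the race two concurrent inserts can
hit on a brand-new day):

-- per-day rollup kept current by a trigger on the detail table
CREATE TABLE farm.frame_daily_summary (
    day         date PRIMARY KEY,
    frame_count bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION farm.frame_summary_trg() RETURNS trigger AS $$
BEGIN
    -- bump the counter for the day this row falls on
    UPDATE farm.frame_daily_summary
       SET frame_count = frame_count + 1
     WHERE day = NEW.process_start::date;
    IF NOT FOUND THEN
        -- first row seen for that day
        INSERT INTO farm.frame_daily_summary (day, frame_count)
        VALUES (NEW.process_start::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER frame_summary
    AFTER INSERT ON farm.frame
    FOR EACH ROW EXECUTE PROCEDURE farm.frame_summary_trg();

Reporting queries that only need daily counts then hit the small
summary table instead of scanning the 32-million-row detail table.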

On 22:29 Wed 26 May , John Gage wrote:
> Please forgive this intrusion, and please ignore it, but how many
> applications out there have 110,000,000 row tables? I recently
> multiplied 85,000 by 1,400 and said no way Jose.
>
> Thanks,
>
> John Gage
