Re: 110,000,000 rows

From: Thom Brown <thombrown(at)gmail(dot)com>
To: Nikolas Everett <nik9000(at)gmail(dot)com>
Cc: "Massa, Harald Armin" <chef(at)ghum(dot)de>, Dann Corbit <DCorbit(at)connx(dot)com>, John Gage <jsmgage(at)numericable(dot)fr>, PostgreSQL - General <pgsql-general(at)postgresql(dot)org>
Subject: Re: 110,000,000 rows
Date: 2010-05-27 13:59:36
Message-ID: AANLkTikWhvUgwac_gIZgNzoI0eJ4zeighffg-u4foqnt@mail.gmail.com
Lists: pgsql-general

On 27 May 2010 14:48, Nikolas Everett <nik9000(at)gmail(dot)com> wrote:
> I've had a reporting database with just about a billion rows.  Each row
> was horribly large because the legacy schema had problems.  We partitioned
> it out by month and it ran about 30 million rows a month.  With a reasonably
> large box you can get that kind of data into memory and indexes are
> almost unnecessary.  So long as you have constraint exclusion and a good
> partition scheme you should be fine.  Throw in a well designed schema and
> you'll be cooking well into the tens of billions of rows.
> We ran self joins of that table reasonably consistently by the way:
> SELECT lhs.id, rhs.id
> FROM bigtable lhs, bigtable rhs
> WHERE lhs.id > rhs.id
>      AND '' > lhs.timestamp AND lhs.timestamp >= ''
>      AND '' > rhs.timestamp AND rhs.timestamp >= ''
>      AND lhs.timestamp = rhs.timestamp
>      AND lhs.foo = rhs.foo
>      AND lhs.bar = rhs.bar
> This really liked the timestamp index and we had to be careful to only do it
> for a few days at a time.  It took a few minutes each go but it was
> definitely doable.
> Once you get this large you do have to be careful with a few things though:
> *It's somewhat easy to write super long queries or updates.  This can create lots
> of dead rows in your tables.  Limit your longest running queries to a day or
> so.  Note that queries are unlikely to take that long but updates with
> massive date ranges could.  SELECT COUNT(*) FROM bigtable took about 30
> minutes when the server wasn't under heavy load.
> *You sometimes get bad plans because:
> **You don't or can't get enough statistics about a column.
> **PostgreSQL doesn't capture statistics about two columns together.
>  PostgreSQL has no way of knowing that columnA = 'foo' implies columnB =
> 'bar' about 30% of the time.
> Nik
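
The monthly partitioning Nik describes would, on the PostgreSQL releases of the time, be built from table inheritance plus CHECK constraints, with constraint exclusion pruning the months a query cannot touch. A minimal sketch, assuming a schema roughly like the one in the query above (the table, column, and partition names here are illustrative, not Nik's actual schema):

  -- Parent table; the children hold the actual rows, one per month.
  CREATE TABLE bigtable (
      id          bigint      NOT NULL,
      "timestamp" timestamptz NOT NULL,
      foo         text,
      bar         text
  );

  -- Each child carries a CHECK constraint describing its month; that
  -- constraint is what lets the planner exclude the partition.
  CREATE TABLE bigtable_2010_05 (
      CHECK ("timestamp" >= '2010-05-01' AND "timestamp" < '2010-06-01')
  ) INHERITS (bigtable);

  CREATE TABLE bigtable_2010_06 (
      CHECK ("timestamp" >= '2010-06-01' AND "timestamp" < '2010-07-01')
  ) INHERITS (bigtable);

  -- 'partition' is the default from 8.4 on: exclusion is attempted for
  -- inheritance children and UNION ALL subqueries.
  SET constraint_exclusion = partition;

  -- A query that pins down the partitioning column only scans the
  -- children whose CHECK constraints can possibly match.
  SELECT count(*)
  FROM bigtable
  WHERE "timestamp" >= '2010-05-01' AND "timestamp" < '2010-05-08';

Rows are loaded straight into the matching child (or routed there by an insert trigger), and constraint exclusion keeps a timestamp-bounded query from touching the other months, which is what lets a month or two of data stay cached in memory.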

What's that middle bit about?

> AND '' > lhs.timestamp AND lhs.timestamp >= ''
> AND '' > rhs.timestamp AND rhs.timestamp >= ''

If blank is greater than the timestamp? Out of curiosity, what is that doing?

Thom
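
The empty literals in the quoted query appear to be placeholders where concrete timestamp bounds were stripped out before posting. Filled in with made-up values, and with the timestamp index the join reportedly "really liked", the pattern would presumably look something like this (all names and bounds below are hypothetical):

  -- The index the join leaned on (hypothetical definition; with
  -- inheritance partitioning it would be created on each child table).
  CREATE INDEX bigtable_timestamp_idx ON bigtable ("timestamp");

  -- Self-join limited to a two-day window on both sides; the bounds
  -- below are invented stand-ins for the blanked-out literals.
  SELECT lhs.id, rhs.id
  FROM bigtable lhs, bigtable rhs
  WHERE lhs.id > rhs.id
    AND lhs."timestamp" >= '2010-05-01' AND lhs."timestamp" < '2010-05-03'
    AND rhs."timestamp" >= '2010-05-01' AND rhs."timestamp" < '2010-05-03'
    AND lhs."timestamp" = rhs."timestamp"
    AND lhs.foo = rhs.foo
    AND lhs.bar = rhs.bar;

Keeping both timestamp ranges to a couple of days at a time is presumably what kept each run within the few-minute range Nik mentions.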
