Re: Scaling 10 million records in PostgreSQL table

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Navaneethan R <nava(at)gridlex(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Scaling 10 million records in PostgreSQL table
Date: 2012-10-08 20:27:45
Message-ID: 50733741.30609@ringerc.id.au
Lists: pgsql-performance

On 10/08/2012 11:26 PM, Navaneethan R wrote:
> Hi all,
>
> I have 10 million records in my postgres table.I am running the database in amazon ec2 medium instance.

EC2 usually means "my I/O performance is terrible", and "medium instance"
means "I don't have enough RAM for caching to make up for my terrible
I/O" at the database sizes you're talking about.

Anything that hits most of the database is likely to perform pretty
poorly on something like EC2. It might be worth considering one of the
high memory or high I/O instances, but unfortunately they only come in
"really big and really expensive".

If you already have appropriate indexes and have used `explain analyze`
to verify that the query isn't doing anything slow and expensive, it's
possible the easiest way to improve performance is to set up async
replication or log shipping to a local hot standby on real physical
hardware, then do the query there.
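As a starting point for that verification, something like the following
shows whether the planner is using your indexes or falling back to a
sequential scan (table and column names here are hypothetical, not from
your schema):

```sql
-- Timed plan with actual row counts and buffer usage.
-- Look for "Seq Scan" on the big table, large "rows" estimates that
-- differ wildly from actual rows, and high "Buffers: read" counts
-- (which on EC2 translate directly into slow I/O).
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   big_table
WHERE  created_at >= now() - interval '1 day';

-- If that shows a Seq Scan where you expected an index, check that a
-- matching index actually exists:
CREATE INDEX IF NOT EXISTS big_table_created_at_idx
    ON big_table (created_at);
```

If the plan is already index-driven and the time is dominated by buffer
reads, that points back at the I/O limits above rather than the query
itself.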

--
Craig Ringer
