Re: Planning for Scalability

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Roberto Grandi <roberto(dot)grandi(at)trovaprezzi(dot)it>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Planning for Scalability
Date: 2014-10-03 16:33:18
Message-ID: CAGTBQpZKD2mAjNKBohUpKb2CZpv1216ay3=hjEMBGZL7xeqHhQ@mail.gmail.com
Lists: pgsql-performance

On Fri, Oct 3, 2014 at 5:55 AM, Roberto Grandi
<roberto(dot)grandi(at)trovaprezzi(dot)it> wrote:
> Dear Pg people,
>
> I would ask for your help considering this scaling issue. We are planning to move from 3Millions of events/day instance of postgres (8 CPU, 65 gb ram) to 5 millions of items/day.

The most important hardware component there is your I/O subsystem,
which you didn't describe. Let's assume you'll put in whatever works.
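For context, the stated volumes translate into fairly modest per-second
rates. A quick back-of-the-envelope sketch (the 3x peak-to-average ratio
is my assumption, not a number from your message):

```python
# Convert daily event volumes into average and peak per-second rates.
# The peak_ratio of 3.0 is an illustrative assumption; measure your own
# traffic profile to get a real number.
SECONDS_PER_DAY = 24 * 60 * 60

def events_per_second(events_per_day, peak_ratio=3.0):
    avg = events_per_day / SECONDS_PER_DAY
    return avg, avg * peak_ratio

avg_now, peak_now = events_per_second(3_000_000)
avg_new, peak_new = events_per_second(5_000_000)
print(f"current: ~{avg_now:.0f}/s avg, ~{peak_now:.0f}/s peak")
print(f"target:  ~{avg_new:.0f}/s avg, ~{peak_new:.0f}/s peak")
```

So even at 5 million/day you're looking at roughly 58 events/s on
average, which is well within what decent hardware handles.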

> What do you suggest in order to plan this switch? Add separate server? Increase RAM? Use SSD?

With that kind of hardware, and a RAID10 of 4 SSDs, we're handling
about 6000 peak (1300 sustained) read transactions per second. They're
not trivial reads; each one processes quite a lot of data. Our write
load is not huge, steady at 15 writes per second, but we've got lots
of bulk inserts/updates as well. Peak write throughput is about 30
qps, but each query bulk-loads, so it's probably equivalent to 3000
or so individual writes.

In essence, unless your I/O subsystem sucks, I think you'll be fine.
