
Re: How to achieve sustained disk performance of 1.25 GB write for 5 mins

From: "Eric Comeau" <Eric(dot)Comeau(at)signiant(dot)com>
To: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: How to achieve sustained disk performance of 1.25 GB write for 5 mins
Date: 2010-11-17 21:11:26
Lists: pgsql-performance

On 10-11-17 12:28 PM, Merlin Moncure wrote: 

	On Wed, Nov 17, 2010 at 9:26 AM, Eric Comeau <ecomeau(at)signiant(dot)com> <mailto:ecomeau(at)signiant(dot)com>  wrote:
	> This is not directly a PostgreSQL performance question but I'm hoping some
	> of the chaps that build high IO PostgreSQL servers on here can help.
	> We build file transfer acceleration s/w (and use PostgreSQL as our database)
	> but we need to build a test server that can handle a sustained write
	> throughput of 1.25 GB/s for 5 mins.
	> Why this number? Because we want to push a 10 Gbps network link for 5-8
	> mins; 10 Gbps = 1.25 GB/s of writes, and we would like to drive it for
	> 5-8 mins, which would be 400-500 GB.
	> Note this is just a "test" server therefore it does not need fault
	> tolerance.
	I really doubt you will see 1.25 GB/sec over a 10 GigE link.  Even if you
	do though, you will hit a number of bottlenecks if you want to see
	anything close to those numbers.  Even with really fast storage you
	will probably become cpu bound, or bottlenecked in the WAL, or some
	other place.
	*) what kind of data do you expect to be writing out at this speed?

Large Video files ... our s/w is used to displace FTP.
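The link-rate arithmetic in the quoted question can be sanity-checked in a few lines; the 10 Gbps figure and the 5-8 minute window are from the thread, the rest is plain unit conversion:

```python
# Convert a 10 Gbps link rate into sustained write throughput and total volume.
LINK_GBPS = 10                       # network link speed, gigabits per second
bytes_per_sec = LINK_GBPS * 1e9 / 8  # 10 Gbps = 1.25e9 bytes/s = 1.25 GB/s

for minutes in (5, 8):
    total_gb = bytes_per_sec * minutes * 60 / 1e9
    print(f"{minutes} min at {LINK_GBPS} Gbps -> {total_gb:.0f} GB")
# -> 375 GB at 5 min and 600 GB at 8 min; the 400-500 GB in the
#    thread is a round figure inside that range.
```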

	*) how many transactions per second will you expect to have?

Ideally 1 large file, but it may have to be multiple. We find that if we send multiple files it just causes the disk to thrash more, so we get better throughput by sending one large file.

	*) what is the architecture of the client? how many connections will
	be open to postgres writing?

Our s/w can do multiple streams, but I believe we get better performance with one stream handling one large file; you could have 4 streams with 4 files in flight, but the disk thrashes more. Postgres is not writing the file data itself - our agent reports stats back to Postgres on the transfer rate being achieved - so Postgres transactions are not the issue. The client and server are written in C and use UDP (with our own error correction) rather than TCP to achieve high network throughput.

	*) how many cores are in this box? what kind?

Well, obviously that's part of the equation as well, but it's not pinned down yet. Our s/w is multi-threaded and can make use of multiple cores, so for now I'll say a minimum of 4.
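For sizing a test box like this, a minimal sketch of a sustained-write micro-benchmark follows. The path, chunk size, and duration are arbitrary illustrative choices, not from the thread, and a real test would use O_DIRECT or a tool like fio to take the page cache out of the measurement:

```python
import os
import time

def measure_write_throughput(path, chunk_mb=64, seconds=10):
    """Write chunk_mb-sized blocks for `seconds`, fsync, and report MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    written = 0
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    try:
        while time.time() - start < seconds:
            written += os.write(fd, chunk)
        os.fsync(fd)  # include the flush to disk in the timing
        elapsed = time.time() - start
    finally:
        os.close(fd)
        os.remove(path)
    return written / (1024 * 1024) / elapsed

# Hypothetical usage; /data/bench.tmp should live on the array under test:
# print(measure_write_throughput("/data/bench.tmp"))
```

To sustain 1.25 GB/s this would need to report roughly 1280 MB/s, which in practice points at a wide RAID-0 stripe of fast disks or SSDs, since a single spindle falls far short.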


