Re: PostgreSQL Write Performance

From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Yan Cheng Cheok <yccheok(at)yahoo(dot)com>
Cc: Dann Corbit <DCorbit(at)connx(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: PostgreSQL Write Performance
Date: 2010-01-07 04:49:35
Lists: pgsql-general
Yan Cheng Cheok wrote:
> The time taken to perform a measurement is ~30 milliseconds per unit, and we need to record the measurement result for every single unit. The time taken to record each result therefore has to be well under a millisecond, so that it has nearly zero impact on the machine's speed (otherwise the machine has to wait for the database to finish writing before measuring the next unit).

Saving a piece of data to a hard disk permanently takes a few 
milliseconds.  As pointed out already, exactly how many depends on the 
drive, but it's probably going to be 8ms or longer on your system.  
There are a few options here:

1) Add a battery-backed write caching controller to your system.  The 
battery will help make sure the data doesn't get lost even if the power 
goes out before the drive writes it out.  This will cost you around $300.

2) Use some other type of faster storage, such as an SSD that has a 
battery on it to cache any unfinished writes.  That will probably also 
cost around the same price; the cheaper SSDs (and some of the expensive 
ones) don't take data integrity very seriously.

3) Write the data to a flat file.  Periodically import the results into 
the database in a batch.
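The first half of option (3) can be sketched as follows. This is only an illustration, not code from the thread; the file name and record fields are hypothetical. The point is that a plain buffered append returns in microseconds, because the OS merely copies the data into its cache rather than forcing it to disk:

```python
import csv
import time

# Hypothetical measurement log; one CSV row per unit tested.
LOG_PATH = "measurements.csv"

def record_measurement(unit_id, value, path=LOG_PATH):
    """Append one result with a buffered write.  There is no fsync
    here, so the OS decides when the data actually reaches the disk."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([unit_id, f"{value:.4f}", time.time()])

# Log 1000 units and time the average per-record cost.
start = time.perf_counter()
for i in range(1000):
    record_measurement(i, 1.23)
per_record_ms = (time.perf_counter() - start) * 1000 / 1000
print(f"~{per_record_ms:.3f} ms per record (buffered, not durable)")
```

On typical hardware this comes out far below the ~30ms measurement cycle, which is exactly why the flat-file approach keeps the machine moving; the durability trade-off is discussed below.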

The thing you should realize is that using (3) puts you in a position 
where it's possible you've told the machine the measurement was saved, 
but if the system crashes it won't actually be in the database.  If 
you're saving to a flat file now, you're already in this position--you 
can't write to a flat file and make sure the result is on disk in less 
than around 8ms either; you just probably haven't tested that yet.  Just 
because the write has returned successfully doesn't mean the data is 
really stored permanently.  Power the system off in the window between 
that write and when the memory cache goes out to disk, and you'll 
discover the data missing from the file after the system comes back up.
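The gap between "the write returned" and "the data is on disk" is easy to demonstrate. The sketch below (not from the thread) times a buffered append against one followed by fsync(), which is what forces the OS cache out to the platters. Absolute numbers depend entirely on your drive, and a disk with a volatile write cache can still under-report the fsync cost:

```python
import os
import time

def timed_append(path, data, durable):
    """Append data and return the elapsed time in ms.  When durable
    is True, flush and fsync so the bytes are on disk before we return."""
    start = time.perf_counter()
    with open(path, "a") as f:
        f.write(data)
        if durable:
            f.flush()
            os.fsync(f.fileno())  # force the OS cache out to the disk
    return (time.perf_counter() - start) * 1000

buffered_ms = timed_append("cached.log", "x" * 64 + "\n", durable=False)
synced_ms = timed_append("synced.log", "x" * 64 + "\n", durable=True)
print(f"buffered: {buffered_ms:.3f} ms, fsync'd: {synced_ms:.3f} ms")
```

The fsync'd case is the honest cost of durability that the database is paying on every commit; the buffered case is what a naive flat-file writer measures.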

If you're OK with the possibility of losing a measurement in the case of 
a system crash, then you should just write measurements to a series of 
flat files, and have another process altogether (one that isn't holding 
up the machine) load those files into the database.  The fact that it 
takes a few ms to write durably to disk is a physical limitation you 
can't get around without more expensive hardware.  If you haven't been 
seeing that in your app already, I assure you it's just because you 
haven't looked for the issue before--this limitation on disk write speed 
has been there all along; the database is just forcing you to address it.
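The loader side of that scheme might look like the sketch below. Everything here is an assumption for illustration: it presumes psycopg2 and a hypothetical table measurements(unit_id int, value numeric, recorded_at double precision). The writer keeps appending to the active file; the loader atomically renames it aside and bulk-loads the rotated copy with COPY, so one commit covers a whole batch instead of one per unit:

```python
import os

def rotate(active_path):
    """Atomically move the active flat file aside so the measurement
    process can keep appending to a fresh file while we load this one."""
    if not os.path.exists(active_path):
        return None
    batch_path = active_path + ".loading"
    os.rename(active_path, batch_path)  # rename is atomic on POSIX
    return batch_path

def load_batch(conn, batch_path):
    """Bulk-load one rotated CSV file via COPY, then delete it.
    `conn` is assumed to be an open psycopg2 connection."""
    with conn.cursor() as cur, open(batch_path) as f:
        cur.copy_expert(
            "COPY measurements (unit_id, value, recorded_at) "
            "FROM STDIN WITH (FORMAT csv)",
            f,
        )
    conn.commit()
    os.remove(batch_path)
```

A cron job or a simple loop calling rotate() then load_batch() every few seconds keeps the database a batch behind the machine, which is the trade-off being proposed.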

Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
