"Gnanakumar" <gnanam(at)zoniac(dot)com> wrote:
> We're running a web-based application powered by PostgreSQL.
> Recently, we've developed a "new" separate Java-based standalone
> (daemon process) threaded program that performs both read and
> write operations heavily on 2 "huge" tables. One table has got
> 5.4 million records and the other has 1.3 million records. Moreover,
> more than one read and/or write operation may be executing concurrently.
We're running a web application using PostgreSQL and Java which has
80 tables with over 1 million records each, the largest of which has
212 million rows. It is updated by replication from 3000 directly
attached users at 72 sites, using a multi-threaded Java application.
We have one connection pool for the read-only web application,
which allows about 30 concurrent requests, and a connection pool for
the replication which allows 6.
If you want a peek at our performance, you can access the site here:
http://wcca.wicourts.gov/ -- if you view a case and click on the
"Court Record Events" button, you'll be viewing records in the table
with 212 million rows.
My point is that you're not asking PostgreSQL to do anything it
*can't* handle well.
> The issue that we're facing currently in our Production server is,
> whenever this "newly" developed Java program is started/run, then
> immediately the entire web application becomes very slow in
> response. At this time, I could also see from the output of
> "iostat -tx" that "%util" is even crossing more than 80%. So, what
> I could infer here based on my knowledge is, this is creating
> heavy IO traffic because of write operation. Since it was
> entirely slowing down web application, we've temporarily stopped
> running this standalone application.
How are you handling concurrency? (Are you using FOR SHARE on your
SELECT statements? Are you explicitly acquiring table locks before
modifying data? Etc.) You might be introducing blocking somehow.
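For illustration, here is a minimal sketch of the difference between
row-level and table-level locking (the table and column names are
invented):

```sql
-- Row-level locks block far less than table locks.
BEGIN;

-- FOR SHARE blocks concurrent writers of these rows only; other
-- readers and writers of other rows proceed normally:
SELECT balance FROM accounts WHERE id = 42 FOR SHARE;

-- By contrast, an explicit table lock blocks all other writers on the
-- whole table (and, with stronger modes, readers too):
-- LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE;

COMMIT;
```

If the new daemon is taking table locks, or holding row locks longer
than necessary, that alone could explain the web application stalling.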
When things are slow, try running some lock-monitoring queries against
the pg_locks system view to get more clues.
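For example, a minimal query against the pg_locks system view (which
exists in 8.2) shows which backends hold locks and which are waiting:

```sql
-- Sketch: ungranted locks sort first, so waiters are easy to spot.
SELECT locktype, relation::regclass, mode, granted, pid
FROM pg_locks
ORDER BY granted, pid;
```

If you see rows with granted = false while the daemon runs, blocking
rather than raw I/O may be your real problem.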
In particular, I recommend that you *never* leave transactions open
or hold locks while waiting for user response or input. They *will*
answer phone calls or go to lunch with things pending, potentially
blocking other users for extended periods.
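Sketched as SQL, the anti-pattern looks like this (the orders table is
hypothetical):

```sql
-- Locks acquired here are held until COMMIT, so if the application
-- waits for user input in between, everyone else who needs these
-- rows waits too.
BEGIN;
SELECT * FROM orders WHERE id = 7 FOR UPDATE;  -- row now locked
-- ... application displays an edit form and waits for the user ...
UPDATE orders SET status = 'shipped' WHERE id = 7;
COMMIT;  -- lock released only here
```

The fix is to gather user input first, then open the transaction, do
the read-modify-write, and commit immediately.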
> Meantime, I also read about "checkpoint spikes" could be a reason
> for slow down in "write workload" database.
When you hit that issue, there is not a continual slowdown --
queries which normally run very fast (a small fraction of a second)
may periodically all take tens of seconds. Is that the pattern you're
seeing?
> We're running PostgreSQL v8.2.22 on CentOS 5.2 with 35 GB RAM.
> "log_checkpoints" is not available in PostgreSQL v8.2.22.
> We want to optimize our Production database to handle both reads
> and writes; any suggestions/advice/guidelines on this are highly
> appreciated.
Support for 8.2 was dropped last year, five years after it was
released. PostgreSQL has had a new major release every year since
8.2 was released, many of which have provided dramatic performance
improvements. If you want good performance my first suggestion
would be to upgrade your version of PostgreSQL to at least 9.0, and
preferably 9.1. Because of stricter typing in 8.3 and later,
upgrading from 8.2 takes a bit more work than most PostgreSQL
major-release upgrades do. Be sure to test well.
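For example, one well-known 8.3 change is the removal of implicit
casts to text, which breaks queries like this:

```sql
-- Ran on 8.2 (date implicitly cast to text); fails on 8.3+ with a
-- "function substr(date, integer, integer) does not exist" error:
SELECT substr(current_date, 1, 4);

-- The fix is an explicit cast:
SELECT substr(current_date::text, 1, 4);
```

Grepping your application's SQL for this sort of implicit cast before
the upgrade will save a lot of surprises in testing.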
> # - Background writer -
> bgwriter_delay = 200ms
> bgwriter_lru_percent = 1.0
> bgwriter_lru_maxpages = 5
> bgwriter_all_percent = 0.333
> bgwriter_all_maxpages = 5
These settings result in a very slow dribble of dirty buffers out to
the OS cache. *If* you're hitting the "checkpoint spikes" issue
(see above), you might want to boost the aggressiveness of the
background writer. I couldn't recommend settings without knowing a
lot more about your storage system and its capabilities. In
supported releases of PostgreSQL, the checkpoint system and
background writer are much improved, so again -- upgrading would be
the most effective way to solve the problem.
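If you do upgrade, 8.3 and later also expose checkpoint and
background-writer activity in the pg_stat_bgwriter view, which makes
this sort of tuning much less of a guessing game:

```sql
-- Not available on 8.2; shows who is actually writing dirty buffers.
SELECT checkpoints_timed,    -- checkpoints triggered by checkpoint_timeout
       checkpoints_req,      -- checkpoints forced by WAL consumption
       buffers_checkpoint,   -- buffers written at checkpoint time
       buffers_clean,        -- buffers written by the background writer
       buffers_backend       -- buffers backends had to write themselves
FROM pg_stat_bgwriter;
```

A high buffers_backend relative to buffers_clean suggests the
background writer isn't keeping up; a high checkpoints_req suggests
your WAL settings are forcing checkpoints too often.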
Besides the outdated PostgreSQL release and possible blocking, I
would be concerned if you are using any sort of ORM for the update
application. You want to watch that very closely because the
default behavior of many of them does not scale well. There's
usually a way to get better performance through configuration and/or
bypassing automatic query generation for complex data requests.