On 2/25/2011 7:26 AM, Vick Khera wrote:
> On Thu, Feb 24, 2011 at 6:38 PM, Aleksey Tsalolikhin
> <atsaloli(dot)tech(at)gmail(dot)com> wrote:
>> In practice, if I pg_dump our 100 GB database, our application, which
>> is half Web front end and half OLTP, at a certain point, slows to a
>> crawl and the Web interface becomes unresponsive. I start getting
>> check_postgres complaints about number of locks and query lengths. I
>> see locks around for over 5 minutes.
> I'd venture to say your system does not have enough memory and/or disk
> bandwidth, or your Pg is not tuned to make use of enough of your
> memory. The most likely thing is that you're saturating your disk I/O.
> Check the various system statistics from iostat and vmstat to see what
> your baseline load is, then compare that when pg_dump is running. Are
> you dumping over the network or to the local disk as well?
Agreed... additionally, how much of that 100 GB is actually changing?
You are probably backing up the same data over and over. Maybe some
replication or a differential backup scheme would make your backups
smaller and easier on your I/O.
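
For the iostat/vmstat comparison Vick suggests, something along these
lines usually does it (the database name and dump path below are just
examples, adjust for your setup):

    # baseline while the system is under its normal load
    vmstat 5
    iostat -x 5

    # then watch the same numbers in another terminal while the dump runs
    pg_dump -Fc mydb > /backups/mydb.dump

If %util on the data disk heads toward 100% and await climbs while the
dump is running, you are disk bound rather than CPU or memory bound.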
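
For the differential-backup angle, one common approach is continuous WAL
archiving plus periodic base backups; a rough sketch for 9.0, with
placeholder paths, in postgresql.conf:

    wal_level = archive                      # on 8.3/8.4 archive_mode alone is enough
    archive_mode = on
    archive_command = 'cp %p /archive/%f'    # placeholder; use something more robust in production

Then take an occasional base backup (pg_start_backup()/pg_stop_backup()
around an rsync of the data directory), and the daily I/O cost is roughly
the WAL generated by what actually changed, instead of dumping the full
100 GB every time.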