From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
Cc: pgsql-hackers(at)postgresql(dot)org, brianb-pggeneral(at)edsamail(dot)com
Subject: Re: pg_dump & performance degradation
Date: 2000-07-28 16:22:37
Message-ID: 258.964801357@sss.pgh.pa.us
Lists: pgsql-general pgsql-hackers
Philip Warner <pjw(at)rhyme(dot)com(dot)au> writes:
> Brian Baquiran in the [GENERAL] list recently asked if it was possible to
> 'throttle-down' pg_dump so that it did not cause an IO bottleneck when
> copying large tables.
> Can anyone see a reason not to pause periodically?
Because it'd slow things down?
As long as the default behavior is "no pauses", I have no strong
objection.
> Finally, can anyone point me to the most portable subsecond timer routines?
You do not want a timer routine, you want a delay. I think using a
dummy select() with a timeout parameter might be the most portable way.
Anyway we've used it for a long time --- see the spinlock backoff code
in s_lock.c.
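The dummy-select() technique Tom refers to can be sketched roughly as below. This is an illustrative reconstruction, not the actual s_lock.c code; the helper name `delay_usec` is made up for the example, and it assumes a Unix system where select() with zero file descriptors simply waits out the timeout:

```c
#include <stddef.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Sleep for roughly "usec" microseconds by calling select() with no
 * file descriptors and a timeout.  Hypothetical helper illustrating
 * the portable sub-second delay technique described in the mail; the
 * real spinlock backoff lives in src/backend/storage/lmgr/s_lock.c.
 */
static void
delay_usec(long usec)
{
	struct timeval delay;

	delay.tv_sec = usec / 1000000L;
	delay.tv_usec = usec % 1000000L;
	/* nfds = 0, no read/write/except sets: select() just waits */
	(void) select(0, NULL, NULL, NULL, &delay);
}
```

A throttled pg_dump loop could then call something like delay_usec() between COPY chunks, with the delay defaulting to zero so the no-pause behavior stays the default.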
			regards, tom lane