Re: [HACKERS] CVS target for docs

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Michael Meskes <meskes@postgreSQL.org>
Cc: PostgreSQL Hacker <pgsql-hackers@postgreSQL.org>
Subject: Re: [HACKERS] CVS target for docs
Date: 1999-03-21 15:40:28
Message-ID: 7326.922030828@sss.pgh.pa.us
Lists: pgsql-hackers

> I'm currently thinking about moving to cvs completely but wonder how much
> more network traffic this will cause.

FWIW, I've been using remote cvs from my home machine and it seems to
work very well, and reasonably speedily. I ran a "cvs update" on the
Postgres tree just now, while watching hub's CPU load via "top" in
another window. Elapsed time was 2m 45s, and the server's CPU usage
on hub never got above 3%. This run only had to pull a couple of files,
since I'd just updated yesterday --- a typical run probably takes more
like 4m or so. Network bandwidth doesn't seem to be the limiting factor
in an update (to judge from das blinkenlights on my router), though it
is the bottleneck in a full checkout.

If what you're currently doing is cvs or cvsup into a local directory
at hub, then transferring the files to home via tar and ftp, I've got
to think that remote cvs is a vastly more efficient and less error-prone
solution.
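
For anyone who hasn't tried it, a minimal remote-cvs recipe looks about
like this (assuming ssh access; the hostname and repository path below are
placeholders, substitute the real ones):

    $ export CVS_RSH=ssh
    $ export CVSROOT=:ext:you@hub.example.org:/path/to/cvsroot
    $ cvs checkout pgsql          # full checkout, done once
    $ cd pgsql
    $ cvs update                  # incremental updates from then on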

BTW, I recommend putting

    cvs -z3
    update -d -P
    checkout -P

in your ~/.cvsrc. The first of these invokes gzip -3 compression for
all cvs network transfers; that should take care of bandwidth problems.
The other two make the default handling of subdirectories more
reasonable.
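
With those entries in place, a bare "cvs update" behaves as if you had
typed

    $ cvs -z3 update -d -P

that is, -d picks up directories newly added to the repository and -P
prunes away empty ones, without your having to remember the switches each
time.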

regards, tom lane
