Re: Git out of sync vs. CVS

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Magnus Hagander" <magnus(at)hagander(dot)net>, <pgsql-hackers(at)postgresql(dot)org>
Cc: "peter_e" <peter_e(at)gmx(dot)net>,"Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Git out of sync vs. CVS
Date: 2010-01-19 20:07:53
Message-ID: 4B55BCB9020000250002E79C@gw.wicourts.gov
Lists: pgsql-hackers

I wrote:

> Perhaps it is as simple, though, as using the client's time
> instead of the CVS server's time -- that's one of the things I've
> seen cause problems for this sort of thing using CVS before.

I got a brief consult with a Ruby programmer here under the "if it's
less than ten minutes you don't have to schedule it through a
manager" rule. From what we can see, fromcvs scans for all entries
*after* a "previous run" time, but it isn't setting an upper bound
on time during the scan. I haven't found where it saves the time
for the lower limit of the next run, but I rather suspect that it
grabs the current time near the end of the scan. If this assessment
is accurate, then to close the window for lost commits we'd have to
fix a timestamp *before* starting the scan, use it as the upper
bound on which CVS commits to handle, and save that same timestamp
as the next run's "previous run" time.

There's still the possible issue of *whose* clock we're using for
this.

Reality check: does the frequency of lost CVS commits within git
seem consistent with this theory?

-Kevin
