
Re: Cygwin PostgreSQL Regression Test Problems (Revisited)

From: Jason Tishler <Jason(dot)Tishler(at)dothill(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-ports(at)postgresql(dot)org
Subject: Re: Cygwin PostgreSQL Regression Test Problems (Revisited)
Date: 2001-03-28 21:29:28
Message-ID:
Lists: pgsql-ports

On Wed, Mar 28, 2001 at 01:57:33PM -0500, Tom Lane wrote:
> Jason Tishler <Jason(dot)Tishler(at)dothill(dot)com> writes:
> > I previously reported the above problem with the parallel version of
> > the regression test (i.e., make check) on a machine with limited memory.
> > Unfortunately, I am seeing similar problems on a machine with 192 MB of
> > physical memory and about 208 MB of swap space.  So, now I feel that my
> > initial conclusion that limited memory was the root cause is erroneous.
> Not necessarily.  18 parallel tests imply 54 concurrent processes
> (a shell, a psql, and a backend per test).  Depending on whether Windoze
> is any good about sharing sharable pages across processes, it's not hard
> at all to believe that each process might chew up a few meg of memory
> and/or swap.  You don't have a whole lot of headroom there if so.

I just increased the swap space (i.e., pagefile.sys) to 384 MB and I
still get hangs.  Watching memory usage via the NT Task Manager, Windows
tells me that the memory usage during the regression test is <= 80 MB,
which is significantly less than my physical memory.

I wonder if I'm bumping up against some Cygwin limitation.  On the
cygwin-developers list, there was a recent discussion indicating that a
Cygwin process can have at most 64 children.  Maybe a limit like that is
causing backends to abort?
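To check whether such a child-process cap is in play, a crude probe along
these lines might help (a sketch only; N is kept small here so it is
harmless to run, but raising it past the suspected limit, e.g. to 70, is
the interesting case):

```shell
#!/bin/sh
# Crude probe for a per-process child limit: fork N background children
# and count how many we managed to record.  On a platform capped at 64
# children, raising N past the cap should make the two counts diverge.
N=10    # kept small here; try 70 to probe the reported 64-child limit
pids=""
i=0
while [ "$i" -lt "$N" ]; do
    sleep 2 &            # each background sleep is one child process
    pids="$pids $!"
    i=$((i + 1))
done
set -- $pids             # count the recorded child PIDs
started=$#
echo "requested $N children, recorded $started"
wait                     # reap the children before exiting
```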

> Try modifying the parallel_schedule file to break the largest set of
> concurrent tests down into two sets of nine tests.

I'm sure that will work (at least most of the time), since I only get one
or two psql processes hanging in any given run.  But "fixing" the
problem this way just doesn't feel right to me.
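For the record, the change Tom suggests amounts to turning the one long
"test:" line in src/test/regress/parallel_schedule into two shorter ones,
which the schedule runner executes one group after the other.  The test
names below are placeholders, not the actual 18-test group:

```
# Before: all 18 tests in one parallel group (names are placeholders):
#   test: t01 t02 t03 t04 t05 t06 t07 t08 t09 t10 t11 t12 t13 t14 t15 t16 t17 t18
# After: two groups of nine, so at most 9 x 3 = 27 processes run at once:
test: t01 t02 t03 t04 t05 t06 t07 t08 t09
test: t10 t11 t12 t13 t14 t15 t16 t17 t18
```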

> Considering that we've seen people run into maxuprc limits on some Unix
> versions, I wonder whether we ought to just do that across-the-board.

Of course, this solution is much better. :-)

> > What is the best way to "catch" this problem?  What are the best set of
> > options to pass to postmaster that will be in turn passed to the back-end
> > postgres processes to hopefully shed some light on this situation?
> I'd use -d1 which should be enough to see backends starting and exiting.
> Any more will clutter the log with individual queries, which probably
> would be more detail than you really want...

I've done the above and it seems to indicate that all backends exited
with a status of 0.  So, I still don't know why some backends "aborted."
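For anyone trying to reproduce this, the sort of invocation meant here
looks roughly like the following (the data directory and log file name
are placeholders):

```
postmaster -d 1 -D /usr/local/pgsql/data > server.log 2>&1 &
```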

Any other suggestions?  For example, is there some way to specify an
individual log file for each backend?
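One stopgap I may try in the meantime: split the combined postmaster log
into per-backend files after the fact.  A sketch, assuming (hypothetically)
that each debug line starts with a bracketed pid like "[1234]" -- the real
-d1 output may need a different pattern:

```shell
# Split a combined postmaster log into one file per backend pid.
# ASSUMPTION: each line begins with "[pid]"; that prefix is hypothetical,
# so adapt the pattern to whatever the real -d1 output looks like.

# Stand-in for a real postmaster.log:
printf '[111] backend start\n[222] backend start\n[111] proc exit 0\n' > postmaster.log

awk 'match($0, /^\[[0-9]+\]/) {
         pid = substr($0, 2, RLENGTH - 2)    # pid between the brackets
         print > ("backend." pid ".log")     # one output file per pid
     }' postmaster.log
```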


Jason Tishler
Director, Software Engineering       Phone: +1 (732) 264-8770 x235
Dot Hill Systems Corp.               Fax:   +1 (732) 264-8798
82 Bethany Road, Suite 7             Email: Jason(dot)Tishler(at)dothill(dot)com
Hazlet, NJ 07730 USA                 WWW:
