Re: Spinlock performance improvement proposal

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Vadim Mikheev" <vmikheev(at)sectorbase(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Spinlock performance improvement proposal
Date: 2001-09-29 14:25:18
Message-ID: 23098.1001773518@sss.pgh.pa.us
Lists: pgsql-hackers

"Vadim Mikheev" <vmikheev(at)sectorbase(dot)com> writes:
>> I have committed changes to implement this proposal. I'm not seeing
>> any significant performance difference on pgbench on my single-CPU
>> system ... but pgbench is I/O bound anyway on this hardware, so that's
>> not very surprising. I'll be interested to see what other people
>> observe. (Tatsuo, care to rerun that 1000-client test?)

> What is your system? CPU, memory, IDE/SCSI, OS?
> Scaling factor and # of clients?

HP C180, SCSI-2 disks, HPUX 10.20. I used scale factor 10 and between
1 and 10 clients. Now that I think about it, I was running with the
default NBuffers (64), which probably constrained performance too.

> BTW1 - shouldn't we rewrite pgbench to use threads instead of
> "libpq async queries"? At least as an option. I'd say that with 1000
> clients the current pgbench implementation is very poor.

Well, it uses select() to wait for activity, so as long as all query
responses arrive as single packets I don't see the problem. Certainly
rewriting pgbench without making libpq thread-friendly won't help a bit.

> BTW2 - shouldn't we find out whether there really are
> portability/performance issues in using POSIX mutexes (and condition
> variables) in place of TAS (and SysV semaphores)?

Sure, that'd be worth looking into on a long-term basis.
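
For the curious, the kind of replacement being suggested would look
something like this minimal sketch of a process-shared POSIX mutex
(whether PTHREAD_PROCESS_SHARED is available, and fast enough, on all
our platforms is exactly the open question; this is illustrative only,
not a committed patch):

    #include <pthread.h>

    typedef struct
    {
        pthread_mutex_t mutex;      /* placed in a shared memory segment */
    } SharedLock;

    int
    shared_lock_init(SharedLock *lock)
    {
        pthread_mutexattr_t attr;

        if (pthread_mutexattr_init(&attr) != 0)
            return -1;
        /* mark the mutex usable across processes, not just threads */
        if (pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED) != 0)
            return -1;
        if (pthread_mutex_init(&lock->mutex, &attr) != 0)
            return -1;
        pthread_mutexattr_destroy(&attr);
        return 0;
    }

    void
    shared_lock_acquire(SharedLock *lock)
    {
        pthread_mutex_lock(&lock->mutex);
    }

    void
    shared_lock_release(SharedLock *lock)
    {
        pthread_mutex_unlock(&lock->mutex);
    }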

regards, tom lane
