From: "Trevor Talbot" <quension(at)gmail(dot)com>
To: "Magnus Hagander" <magnus(at)hagander(dot)net>
Cc: "Scott Ribe" <scott_ribe(at)killerbytes(dot)com>, "pgsql general" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Linux v.s. Mac OS-X Performance
Date: 2007-11-28 17:53:34
Message-ID: 90bce5730711280953h47a7d978m211469bfe7a88ec1@mail.gmail.com
Lists: pgsql-general

On 11/28/07, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
> On Wed, 2007-11-28 at 07:29 -0700, Scott Ribe wrote:
> > > Yes, very much so. Windows lacks the fork() concept, which is what makes
> > > PostgreSQL much slower there.
> >
> > So grossly slower process creation would kill postgres connection times. But
> > what about the cases where persistent connections are used? Is it the case
> > also that Windows has a performance bottleneck for interprocess
> > communication?
>
> There is at least one other bottleneck, probably more than one. Context
> switching between processes is a lot more expensive than on Unix (given
> that win32 is optimized towards context switching between threads). NTFS
> isn't optimized for having 100+ processes reading and writing to the
> same file. Probably others..
I'd be interested to know what this info is based on. The only
fundamental difference between a process context switch and a thread
context switch is the VM mapping (an extra TLB flush, possibly some
pagetable tweaks). And why would NTFS care about anything other than
handles?
I mean, I can understand NT having bottlenecks in various areas
compared to Unix, but this "threads are specially optimized" claim
seems a bit overblown. Just how often do you see threads from a
single process get contiguous access to the CPU?