Re: SAN, clustering, MPI, Backplane Re: Postgresql on SAN

From: Gaetano Mendola <mendola(at)bigfoot(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: SAN, clustering, MPI, Backplane Re: Postgresql on SAN
Date: 2004-07-12 23:09:06
Message-ID: 40F31A12.4010107@bigfoot.com
Lists: pgsql-hackers

Tom Lane wrote:

> Andrew Piskorski <atp(at)piskorski(dot)com> writes:
>
>>Another thing I've been wondering about, but haven't been able to find
>>any discussion of:
>>Just how closely tied is PostgreSQL to its use of shared memory?
>
> Pretty damn closely. You would not be happy with the performance of
> anything that tried to insert a network communication layer into access
> to what we think of as shared memory.
>
> For a datapoint, check the list archives for discussions a few months
> ago about performance with multiple Xeons. We were seeing significant
> performance degradation simply because the communications architecture
> for multiple Xeon chips on one motherboard is badly designed :-(
> The particular issue we were able to document was cache-line swapping
> for spinlock variables, but AFAICS the issue would not go away even
> if we had a magic zero-overhead locking mechanism: the Xeons would
> still suck, because of contention for access to the shared variables
> that the spinlocks are protecting.
>
> OpenMosix is in the category of "does not work, and would be unusably
> slow if it did work" ... AFAIK any similar design would have the same
> problem.
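
To make the contention Tom describes concrete, here is a toy sketch (plain
pthreads, not PostgreSQL's actual spinlock code) of several threads hammering
a lock and the variable it protects, which share a cache line:

/* toy contention demo: compile with  gcc -O2 demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000L

/* the lock and the variable it protects sit together, so every
   acquisition and every update drags the same cache line around */
static pthread_spinlock_t lock;
static long shared_counter = 0;

static void *worker(void *arg)
{
    long i;

    (void) arg;
    for (i = 0; i < ITERS; i++)
    {
        pthread_spin_lock(&lock);   /* cache line ping-pongs here */
        shared_counter++;           /* ... and here               */
        pthread_spin_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    int i;

    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("counter = %ld\n", shared_counter);
    return 0;
}

Even with a hypothetical zero-cost lock, the stores to shared_counter alone
would keep the line bouncing between CPUs, which is exactly the point about
the Xeons.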

However, it would be nice if the postmaster were not as possessive as it is
now (two postmasters cannot work on the same shared memory segment). With
that restriction lifted, projects like Cashmere
( www.cs.rochester.edu/research/cashmere/ ) or the SCI hardware described at
www.tu-chemnitz.de/informatik/HomePages/RA/projects/VIA_SCI/via_sci_hardware.html
would be able to run a single database, managed by one postmaster per node,
across a distributed architecture.
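
As a minimal illustration of what "working on the same segment" means (a
plain System V example, not postmaster code), two independent processes can
already cooperate on one segment when neither insists on exclusive ownership:

/* run this program twice: the first invocation creates and writes the
 * segment, the second finds the first one's data (the segment stays
 * around until removed with ipcrm) */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SHM_KEY  0x5432   /* arbitrary well-known key for the demo */
#define SHM_SIZE 4096

int main(void)
{
    /* create the segment if absent, otherwise attach the existing one */
    int shmid = shmget(SHM_KEY, SHM_SIZE, IPC_CREAT | 0600);
    char *mem;

    if (shmid < 0) { perror("shmget"); return 1; }
    mem = shmat(shmid, NULL, 0);
    if (mem == (char *) -1) { perror("shmat"); return 1; }

    if (mem[0] == '\0')                          /* fresh segments are
                                                    zero-filled */
        strcpy(mem, "first process was here");
    else
        printf("found: %s\n", mem);

    shmdt(mem);
    return 0;
}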

I saw this hardware working at CeBIT some years ago; it can be set up in any
topology: linear, triangular, cube, hypercube. Basically, each node shares
part of its local RAM to create one big shared memory segment, and that
shared memory is managed "without kernel intervention".
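
I don't recall the exact SCI programming interface, but the "no kernel
intervention" property has a rough single-machine analogue (mmap of a file
standing in for the hardware-shared window; this is an assumption for
illustration, not the SCI API): once the region is mapped, access is plain
loads and stores, with no syscall per access.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

int main(void)
{
    long *region;
    int fd = open("/tmp/dsm_demo", O_RDWR | O_CREAT, 0600);

    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) < 0) { perror("ftruncate"); return 1; }

    /* every process mapping this file sees the same bytes */
    region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    region[0]++;                      /* a plain store, no syscall */
    printf("value = %ld\n", region[0]);

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}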

Regards
Gaetano Mendola
