mstone+postgres(at)mathom(dot)us (Michael Stone) writes:
> On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
>>A naive read on this is that you might start with one backend process,
>>which then spawns 16 more. Each of those backends is scanning through
>>one of those 16 files; they then throw relevant tuples into shared
>>memory to be aggregated/joined by the central one.
> Of course, table scanning is going to be IO limited in most cases, and
> having every query spawn 16 independent IO threads is likely to slow
> things down in more cases than it speeds them up. It could work if you
> have a bunch of storage devices, but at that point it's probably
> easier and more direct to implement a clustered approach.
All stipulated, yes. It obviously wouldn't be terribly useful to scan
more aggressively than I/O bandwidth can support. The point is that
this is one of the kinds of places where concurrent processing could
do some good...
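
The scheme described above (N worker backends scanning partitions and handing relevant tuples to a central aggregator) can be sketched outside PostgreSQL with ordinary OS processes and a shared queue. This is only an illustrative model, not PostgreSQL code; the names (`scan_partition`, `PARTITIONS`, the "relevant" predicate) are hypothetical.

```python
# Illustrative sketch of the discussed design: 16 worker processes each
# scan one partition and push matching tuples into a shared queue; a
# central process aggregates them. Purely a model, not PostgreSQL internals.
from multiprocessing import Process, Queue

# Stand-in for 16 on-disk partition files: 16 ranges of integer "rows".
PARTITIONS = [range(i * 100, (i + 1) * 100) for i in range(16)]

def scan_partition(partition, out):
    # Emit only "relevant" tuples (here, arbitrarily: multiples of 7),
    # then a sentinel so the aggregator knows this worker finished.
    for row in partition:
        if row % 7 == 0:
            out.put(row)
    out.put(None)

def parallel_scan():
    q = Queue()
    workers = [Process(target=scan_partition, args=(p, q)) for p in PARTITIONS]
    for w in workers:
        w.start()
    # Central process: aggregate (count, sum) until every worker is done.
    done = count = total = 0
    while done < len(workers):
        item = q.get()
        if item is None:
            done += 1
        else:
            count += 1
            total += item
    for w in workers:
        w.join()
    return count, total

if __name__ == "__main__":
    print(parallel_scan())
```

Note that in this toy the workers are CPU-bound, so the parallelism actually pays off; as observed above, with a single spindle the real bottleneck would be I/O, and 16 concurrent scanners could easily make things worse.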
let name="cbbrowne" and tld="acm.org" in name ^ "@" ^ tld;;
Save the whales. Collect the whole set.