Re: compact flash disks?

From: "James Mansion" <james(at)mansionfamily(dot)plus(dot)com>
To: "Magnus Hagander" <magnus(at)hagander(dot)net>
Cc: "Ron" <rjpeace(at)earthlink(dot)net>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: compact flash disks?
Date: 2007-03-09 06:24:11
Message-ID: HCEPKPMCAJLDGJIBCLGHGEEIHEAA.james@mansionfamily.plus.com
Lists: pgsql-performance

Isn't it likely that a single stream (or perhaps one that can be partitioned
across spindles) will tend to be fastest, since it produces nicely localised
writes that (a) allow compression of reasonably sized blocks and (b) fit well
with commit aggregation?

RAM capacity on servers is going up and up, but the size of a customer
address or a row on an invoice isn't. I'd like to see an emphasis on speed of
update, with the assumption that most hot data is cached most of the time.

My understanding also is that storing data columnwise is handy when it's
persisted, because linear scans are much faster. I saw it once with a system
modelled after APL; it blew me away, even on a SPARC 10, once the data was
organised and could be mapped.
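The column-wise point above can be sketched quickly. This is a toy
illustration, not any particular system's layout: summing one field over a
row-wise list of records touches every whole record, while the same scan over
a contiguous per-column array touches only the bytes it needs (all names here
are invented).

```python
import array

N = 100_000
# Row-wise: each record is a tuple (id, amount, flag).
rows = [(i, i % 100, i % 2) for i in range(N)]
# Column-wise: one contiguous array holding just the "amount" field.
amounts = array.array("l", (i % 100 for i in range(N)))

row_total = sum(r[1] for r in rows)  # scan drags every record through cache
col_total = sum(amounts)             # scan reads only the needed column

assert row_total == col_total
```

The cache behaviour, not the arithmetic, is the point: the columnar scan
reads a dense array, which is what makes the linear scans so fast.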

Still, for the moment anything that helps with the existing system would be
good. Would it help to allow triggers to be deferred to commit, as well as to
end of statement (and per row)? It seems to me it should be, at least for
triggers that raise 'something changed' events. And/or allow a specification
that such events can fold and should be very cheap. (I don't know if this is
the case now; how this works isn't as well documented as I'd like.)

James
--
