From: Zeugswetter Andreas SB <ZeugswetterA(at)wien(dot)spardat(dot)at>
To: "'Michael A(dot) Olson'" <mao(at)sleepycat(dot)com>, "'pgsql-hackers(at)postgreSQL(dot)org'" <pgsql-hackers(at)postgreSQL(dot)org>
Subject: AW: Berkeley DB...
Date: 2000-05-25 10:59:44
Message-ID: 219F68D65015D011A8E000006F8590C604AF7DA3@sdexcsrv1.f000.d0188.sd.spardat.at
Lists: pgsql-hackers
> Frankly, based on my experience with Berkeley DB, I'd bet on mine.
> I can do 2300 tuple fetches per CPU per second, with linear scale-
> up to at least four processors (that's what we had on the box we
> used). That's 9200 fetches a second. Performance isn't going
> to be the deciding issue.
Wow, that sounds darn slow. A seq scan on one CPU and one disk
should give you more like 19000 rows/s with a small record size.
Of course, you are probably talking about random fetch order here,
but we need fast seq scans too.
(10 MB/s disk, 111 bytes/row, no CPU bottleneck, nothing cached,
Informix DB, select count(*) ... where notindexedfield != 'notpresentvalue';
table pages interleaved with index pages, table size 337 MB
(a table with lots of insert + update + delete history))
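For comparison, the back-of-envelope arithmetic behind these figures can be sketched as follows. This is a rough sketch, not from the original mail: the dense-packing upper bound and the 1 MB = 10^6 bytes convention are my assumptions.

```python
# Back-of-envelope seq-scan throughput using the figures quoted above:
# 10 MB/s disk, 111-byte rows, ~19000 rows/s observed on Informix.
DISK_BANDWIDTH = 10_000_000   # bytes/s (assuming 1 MB = 10**6 bytes)
ROW_SIZE = 111                # bytes per row
OBSERVED_RATE = 19_000        # rows/s, as reported

# Upper bound if the table were densely packed and read at full disk speed:
raw_bound = DISK_BANDWIDTH / ROW_SIZE        # rows/s

# Disk bandwidth the observed rate actually implies; the gap versus the
# raw bound reflects index pages interleaved with data pages and the
# insert/update/delete history mentioned above:
effective_bw = OBSERVED_RATE * ROW_SIZE      # bytes/s

print(f"dense-packed bound:  {raw_bound:,.0f} rows/s")
print(f"effective bandwidth: {effective_bw / 1e6:.1f} MB/s")
```

Even the 19000 rows/s figure uses only about a fifth of the raw disk bandwidth, which is why it is still well above the 2300 random fetches per CPU per second quoted above.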
Andreas