Re: Any risk in increasing BLCKSZ to get larger tuples?

From: "Steve Wolfe" <steve(at)iboats(dot)com>
To: "PostgreSQL General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Any risk in increasing BLCKSZ to get larger tuples?
Date: 2000-10-19 22:25:50
Message-ID: 003401c03a1b$91da3a60$50824e40@iboats.com
Lists: pgsql-general


> In some cases yes, in some no. Simple text should compress/decompress
> quickly and the cpu time wasted is made up for by less hardware access
> time and smaller db files. If you have a huge database the smaller db
> files could be critical.

Hmm... that doesn't seem quite right to me. Compressed or not, the same
amount of final data has to move across the system bus to the CPU for
processing. The difference is between (A) moving a large amount of data to
the CPU and processing it, or (B) moving a small amount of data to the CPU,
spending CPU cycles to expand it back into the large set (as large as in
(A)), and then processing it. I could be wrong, though.
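    (For what it's worth, here's a minimal sketch of the trade-off using
Python's standard-library zlib, not PostgreSQL's own compressor, so the
exact numbers are only illustrative: repetitive text shrinks a lot on
disk, the CPU pays a decompression cost, and the same final data reaches
the processor either way.)

```python
import time
import zlib

# Illustrative sketch: simple, repetitive text compresses well, so fewer
# bytes need to come off the disk, while the CPU spends cycles expanding
# the data back to its full size before it can be processed.
text = b"PostgreSQL stores tuples in fixed-size blocks. " * 20000

compressed = zlib.compress(text)

start = time.perf_counter()
restored = zlib.decompress(compressed)
cpu_cost = time.perf_counter() - start

assert restored == text  # same final data reaches the CPU either way
print(f"original: {len(text)} bytes, compressed: {len(compressed)} bytes")
print(f"decompression time: {cpu_cost * 1000:.2f} ms")
```

    Whether case (B) wins in practice comes down to whether the cycles
spent in decompress() cost less than the disk accesses saved by the
smaller files.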

steve

