Re: Simple (hopefully) throughput question?

From: "Pierre C" <lists(at)peufeu(dot)com>
To: pgsql-performance(at)postgresql(dot)org, "Nick Matheson" <Nick(dot)D(dot)Matheson(at)noaa(dot)gov>
Subject: Re: Simple (hopefully) throughput question?
Date: 2010-11-04 09:12:11
Message-ID: op.vlm2ilzpeorkce@apollo13
Lists: pgsql-performance


> Is there any way using stored procedures (maybe C code that calls
> SPI directly) or some other approach to get close to the expected 35
> MB/s doing these bulk reads? Or is this the price we have to pay for
> using SQL instead of some NoSQL solution? (We actually tried Tokyo
> Cabinet and found it to perform quite well. However, it does not measure
> up to Postgres in terms of replication, data interrogation, community
> support, acceptance, etc.)

Reading from the tables is very fast; what bites you is that Postgres has
to convert the data to wire format and send it to the client, and the client
then has to decode it and convert it into a format usable by your application.
Writing a custom aggregate in C should be a lot faster, since it has direct
access to the data itself: the code path from actual table data to an
aggregate is much shorter than from table data to the client...
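As a sketch of what such an aggregate could look like (the names here are illustrative, not something from this thread), the C side is just a state-transition function written against the standard V1 fmgr conventions, compiled as an extension module against the server headers:

```c
/* Hypothetical custom aggregate transition function.
 * Build as a normal PostgreSQL extension module (needs the server headers,
 * e.g. via PGXS); it cannot run standalone.
 */
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(bulk_sum_step);

/* Accumulate an int4 column into an int8 state.  This runs inside the
 * backend, so each value is touched exactly once, with no wire-format
 * encoding and no per-row transfer to the client.
 */
Datum
bulk_sum_step(PG_FUNCTION_ARGS)
{
    int64 state = PG_GETARG_INT64(0);
    int32 value = PG_GETARG_INT32(1);

    PG_RETURN_INT64(state + (int64) value);
}
```

On the SQL side you would register it with something like `CREATE FUNCTION bulk_sum_step(int8, int4) RETURNS int8 AS 'MODULE_PATHNAME' LANGUAGE C STRICT;` followed by `CREATE AGGREGATE bulk_sum(int4) (sfunc = bulk_sum_step, stype = int8, initcond = '0');`, after which `SELECT bulk_sum(col) FROM tbl;` does the whole scan server-side and ships only the final value over the wire.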
