From: Nick Matheson <Nick(dot)D(dot)Matheson(at)noaa(dot)gov>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Simple (hopefully) throughput question?
Date: 2010-11-04 14:34:55
Message-ID: 4CD2C48F.1040208@noaa.gov
Lists: pgsql-performance
Marti-
> Just some ideas that went through my mind when reading your post:
> PostgreSQL 8.3 and later have 22 bytes of overhead per row, plus
> page-level overhead and internal fragmentation. You can't do anything
> about row overheads, but you can recompile the server with larger
> pages to reduce page overhead.
>
>
>> Is there any way using stored procedures (maybe C code that calls
>> SPI directly) or some other approach to get close to the expected 35
>> MB/s doing these bulk reads?
>>
>
> Perhaps a simpler alternative would be writing your own aggregate
> function with four arguments.
>
> If you write this aggregate function in C, it should have similar
> performance as the sum() query.
>
Your comments seem to confirm some of our foggy understanding of the
storage 'overhead' and nudge us in the direction of C stored procedures.
Do you have any results or personal experience from moving calculations
into the database in this way? We are trying to get a sense of how much
we might stand to gain from the added investment.
Thanks,
Nick