On 03.11.2010 17:52, Nick Matheson wrote:
> We have an application that needs to do bulk reads of ENTIRE
> Postgres tables very quickly (i.e. select * from table). We have
> observed that such sequential scans run roughly 20 times slower
> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this
> gap is due to the storage overhead we have observed in Postgres: in
> the example below, it takes 1 GB to store 350 MB of nominal data.
> Even accounting for that overhead, we would still expect bulk read
> rates of about 35 MB/s.
> Observations using iostat and top during these bulk reads suggest
> that the queries are CPU bound, not I/O bound. In fact, repeating the
> queries yields similar response times. Presumably if it were an I/O
> issue the response times would be much shorter the second time through
> with the benefit of caching.
> We have tried these simple queries using psql, JDBC, pl/java stored
> procedures, and libpq. In all cases the client code ran on the same
> box as the server. We have experimented with Postgres 8.1, 8.3 and 9.0.
Try COPY, i.e. "COPY bulk_performance.counts TO STDOUT BINARY".
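For concreteness, a sketch of what that looks like (the table name is taken from the thread; it assumes a live server and must be run from a client such as psql, which captures STDOUT on the client side):

```sql
-- Stream the whole table to the client in binary format,
-- bypassing per-row SELECT output formatting:
COPY bulk_performance.counts TO STDOUT BINARY;
```

From the shell this could be captured with something like `psql -c "COPY bulk_performance.counts TO STDOUT BINARY" dbname > counts.bin`. COPY avoids much of the per-row formatting and protocol overhead of `SELECT *`, which is why it is the usual answer for bulk table export.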