From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
Cc: "Krishna Kumar" <kumar(dot)ramanathan(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Benchmarking PGSQL?
Date: 2007-02-14 16:20:53
Message-ID: b42b73150702140820k7fa0afabn43ec61c3d1d881b7@mail.gmail.com
Lists: pgsql-performance
On 2/14/07, Luke Lonergan <llonergan(at)greenplum(dot)com> wrote:
>
> Here's one:
>
> Insert performance is limited to about 10-12 MB/s no matter how fast the
> underlying I/O hardware. Bypassing the WAL (write ahead log) only boosts
> this to perhaps 20 MB/s. We've found that the biggest time consumer in the
> profile is the collection of routines that "convert to datum".
>
> You can perform the test using any dataset, you might consider using the
> TPC-H benchmark kit with a data generator available at www.tpc.org. Just
> generate some data, load the schema, then perform some COPY statements,
> INSERT INTO SELECT FROM and CREATE TABLE AS SELECT.
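For reference, the measurement under discussion (bulk-load throughput in MB/s) can be sketched as below. This is an illustration only, not Luke's test kit: the row format and function names are made up, and a local in-memory buffer stands in for the COPY destination so the arithmetic is self-contained. A real run would generate TPC-H data with dbgen and load it into PostgreSQL with COPY (or psql's \copy), timing the load and dividing bytes loaded by elapsed seconds.

```python
# Hypothetical sketch: time a bulk write of generated rows and report MB/s.
# An io.BytesIO buffer stands in for the real COPY target table.
import io
import time

def generate_rows(n):
    # Fake pipe-delimited rows; the column layout is illustrative only.
    for i in range(n):
        yield f"{i}|{i % 100}|{i * 3.14:.2f}|some filler payload text\n"

def measure_load_mb_per_sec(n_rows):
    sink = io.BytesIO()               # stand-in for the COPY destination
    start = time.perf_counter()
    total_bytes = 0
    for row in generate_rows(n_rows):
        data = row.encode()
        sink.write(data)
        total_bytes += len(data)
    elapsed = time.perf_counter() - start
    return total_bytes / (1024 * 1024) / elapsed

if __name__ == "__main__":
    print(f"{measure_load_mb_per_sec(100_000):.1f} MB/s")
```

The same bytes-over-elapsed-time calculation applies whichever load path is used (COPY, INSERT INTO ... SELECT, or CREATE TABLE AS SELECT).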
I am curious what your take is on the maximum insert performance, in
MB/s, for large bytea columns (TOASTed), and how much, if at all, Greenplum
was able to advance this over the baseline. I am asking on behalf of
another interested party, who is interested in numbers broken down per core
on an 8-core quad system, and also in aggregate.
merlin
Previous Message: Luke Lonergan, 2007-02-14 15:35:43, "Re: Benchmarking PGSQL?"
Next Message: Mark Stosberg, 2007-02-14 16:28:38, "reindex vs 'analyze' (was: Re: cube operations slower than geo_distance() on production server)"