| From: | Greg Smith <greg(at)2ndquadrant(dot)com> |
|---|---|
| To: | "jgardner(at)jonathangardner(dot)net" <jgardner(at)jonathangardner(dot)net> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: PostgreSQL as a local in-memory cache |
| Date: | 2010-06-16 08:27:00 |
| Message-ID: | 4C188AD4.20209@2ndquadrant.com |
| Lists: | pgsql-performance |
jgardner(at)jonathangardner(dot)net wrote:
> NOTE: If I do one giant commit instead of lots of littler ones, I get
> much better speeds for the slower cases, but I never exceed 5,500
> which appears to be some kind of wall I can't break through.
>
That's usually about where I run into the upper limit on how many
statements per second Python can execute against the database. Between
that and the GIL preventing better multi-core use, once you take the
disk out of the equation and become CPU-bound, it's hard to load test
small statements from Python without the bottleneck ending up in Python
itself.
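To see the kind of ceiling being described, here is a minimal sketch of a one-statement-at-a-time loop. It uses the stdlib `sqlite3` module as a stand-in so it runs self-contained; with a real driver like psycopg2 against a PostgreSQL server, the same loop shape is where the per-statement Python overhead shows up (the table name and row count are made up for illustration):

```python
import sqlite3
import time

# Stand-in connection: sqlite3 in memory, so the sketch needs no server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")

n = 20000
start = time.perf_counter()
for i in range(n):
    # One tiny statement per iteration: per-call Python overhead dominates,
    # not the database engine itself.
    conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x"))
conn.commit()
elapsed = time.perf_counter() - start

rate = n / elapsed
print(f"{n} statements in {elapsed:.2f}s = {rate:.0f} statements/sec")
```

The point of the sketch is that the loop's throughput is capped by the interpreter's per-statement dispatch cost, which a second CPU core can't help with because of the GIL.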
I normally just write little performance test cases in the pgbench
scripting language, then I get multiple clients and (in 9.0) multiple
driver threads all for free.
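As an illustration of that approach, a custom pgbench script from the 9.0 era might look like the following (the filename and the 1..100000 range are assumptions; `\setrandom` was the syntax of the day, later replaced by `\set aid random(1, 100000)` in modern releases):

```
\setrandom aid 1 100000
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```

Saved as, say, `select_only.sql`, it could be run with something like `pgbench -n -f select_only.sql -c 8 -j 4 -T 30 dbname`, where `-c` sets the number of client connections and `-j` (new in 9.0) sets the number of driver threads, so the load generator itself doesn't become the bottleneck.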
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com www.2ndQuadrant.us
| From | Date | Subject | |
|---|---|---|---|
| Next Message | David Jarvis | 2010-06-16 08:48:27 | Re: Analysis Function |
| Previous Message | Pierre C | 2010-06-16 07:53:58 | Re: PostgreSQL as a local in-memory cache |