
Re: LWLock Queue Jumping

From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: LWLock Queue Jumping
Date: 2009-08-31 08:08:49
Message-ID:
Lists: pgsql-hackers
Jeff Janes wrote:
> On Sun, Aug 30, 2009 at 11:01 AM, Stefan Kaltenbrunner 
> <stefan(at)kaltenbrunner(dot)cc> wrote:
>     Jeff Janes wrote:
>            ---------- Forwarded message ----------
>            From: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
>            To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
>            Date: Sun, 30 Aug 2009 11:48:47 +0200
>            Subject: Re: LWLock Queue Jumping
>            Heikki Linnakangas wrote:
>                I don't have any pointers right now, but WALInsertLock does
>                often show
>                up as a bottleneck in write-intensive benchmarks.
>            yeah I recently ran across that issue with testing
>         concurrent COPY
>            performance:
>            discussed here:
>         It looks like this is the bulk loading of data into unindexed
>         tables.  How good is that as a target for optimization?  I can
>         see several (quite difficult to code and maintain) ways to make
>         bulk loading into unindexed tables faster, but they would not
>         speed up the more general cases.
>     well bulk loading into unindexed tables is quite a common workload -
>     apart from dump/restore cycles (which we can now do in parallel) a
>     lot of analytic workloads are that way.
>     Import tons of data from various sources every night/week/month,
>     index, analyze & aggregate, drop again.
> In those cases where you end by dropping the tables, we should be 
> willing to bypass WAL altogether, right?  Is the problem that we can 
> bypass WAL (by doing the COPY in the same transaction that created or 
> truncated the table), or COPY in parallel, but we can't do both 
> simultaneously?

well yes, that is part of the problem - if you bulk load into one or a 
few tables concurrently, you can only sometimes make use of the WAL 
bypass optimization. This is especially interesting if you consider that 
COPY alone is more or less CPU-bottlenecked these days, so using 
multiple cores makes sense to get higher load rates.
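
For reference, the WAL-bypass case under discussion looks roughly like 
this (a sketch only; the table and file names are made up, and it 
assumes WAL archiving is disabled, since the optimization does not 
apply when archiving is on):

```sql
-- Illustrative sketch; assumes WAL archiving is disabled.
-- Because the table is created in the same transaction as the COPY,
-- PostgreSQL can skip WAL-logging the loaded data and instead sync
-- the relation file directly at commit.
BEGIN;
CREATE TABLE staging_import (id integer, payload text);
COPY staging_import FROM '/tmp/import_data.csv' CSV;
COMMIT;

-- The same applies to reloading an existing table via TRUNCATE
-- in the loading transaction:
BEGIN;
TRUNCATE staging_import;
COPY staging_import FROM '/tmp/import_data.csv' CSV;
COMMIT;
```

The limitation being discussed is that only one such session can load a 
given table at a time, so the bypass and parallel COPY into the same 
table don't combine.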



