No, this was just triggered by a benchmark script; anyone doing this for real
would presumably use an intbig index instead. A better error message would be
nice, though; I was pretty confused when this happened.
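
For reference, a minimal sketch of the kind of setup the benchmark used — table and column names here are made up, and the contrib intarray module (which provides gist__int_ops and gist__intbig_ops) is assumed to be installed:

```sql
-- Hypothetical reproduction of the benchmark setup (names invented).
-- Requires the contrib intarray module (PostgreSQL 9.1+):
CREATE EXTENSION intarray;

CREATE TABLE bench (vals int[]);

-- 10,000 rows, each an array of 50 random integers in [0, 1000]
-- (uniqueness of the 50 values isn't enforced in this sketch).
INSERT INTO bench (vals)
SELECT ARRAY(SELECT (random() * 1000)::int
             FROM generate_series(1, 50))
FROM generate_series(1, 10000);

-- The index that fails with "too many LWLocks taken":
CREATE INDEX bench_vals_idx ON bench USING gist (vals gist__int_ops);

-- The signature-based opclass that avoids the pathological tree depth:
CREATE INDEX bench_vals_bigidx ON bench USING gist (vals gist__intbig_ops);
```

gist__intbig_ops compresses each array into a fixed-size signature, which keeps keys small and the tree shallow at the cost of lossier searches.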
This can also bring down PostgreSQL: occasionally the error causes the server
to terminate. Someone in #postgresql said this happens when the failure to
acquire the lock occurs inside a "critical section"? That might be cause for
concern.
On Mon, May 7, 2012 at 4:37 PM, Heikki Linnakangas wrote:
> On 05.05.2012 22:49, tom(at)tomforb(dot)es wrote:
>> The following bug has been logged on the website:
>> Bug reference: 6629
>> Logged by: Tom Forbes
>> Email address: tom(at)tomforb(dot)es
>> PostgreSQL version: 9.1.3
>> Operating system: Windows 7 64bit
>> On a test database with 10,000 rows, each containing an array of 50 unique
>> random integers from 0 to 1000, creating a gist index on the column with
>> gist__int_ops as an option fails with the following error:
>> "too many LWLocks taken".
> I modified the way GiST page splitting works in 9.1, this seems to be
> caused by those changes. When a page is split and the downlink for the new
> page is inserted to the parent, we keep a lock on the child and the parent.
> But inserting the downlink to the parent can cause the parent to split too,
> and so forth, all the way to the root. There's a hard-coded limit that a
> backend can hold at most 100 lwlocks simultaneously, and what happens is
> that when the tree is very tall, about 50 levels tall in this case, you run
> into that limit when you have to do a page split at every level.
> We could rearrange the page splitting algorithm to release locks earlier,
> before traversing to the next parent level. I didn't do that because I
> thought no-one would create an index that tall and the code was a bit
> easier to follow when locks are released in the same function where they're
> acquired, but looks like I was wrong. I'm not sure how useful such an index
> is in practice, but at least it's apparently easy to create one.
> I wrote a quick patch to do that, and with the patch the index build
> finished - but it took hours. And the index was 10GB in size, where the
> heap is just 12 MB, and searches using the index take ages. Do you have a
> real-life scenario where you run into this limit? I'm a bit reluctant to
> change the code unless there's an actual use case for a gist index more
> than 50 levels deep.
> Heikki Linnakangas
> EnterpriseDB http://www.enterprisedb.com
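
A back-of-the-envelope check of Heikki's numbers — a toy model, not PostgreSQL code. Assuming a cascading split keeps roughly two pages locked per level (the child being split plus its parent) against the hard-coded 100-lwlock per-backend limit (MAX_SIMUL_LWLOCKS in the server source), a tree about 50 levels tall is exactly where a split at every level overflows:

```python
# Toy model of the lock accumulation described above (not PostgreSQL code).
MAX_SIMUL_LWLOCKS = 100  # hard-coded per-backend limit in the server
LOCKS_PER_LEVEL = 2      # assumed: child + parent stay locked per level

def deepest_safe_cascade(max_locks=MAX_SIMUL_LWLOCKS,
                         per_level=LOCKS_PER_LEVEL):
    """Deepest split cascade that still fits in the lwlock budget."""
    return max_locks // per_level

# A split cascade deeper than this exceeds the limit:
print(deepest_safe_cascade())  # → 50
```

This is why the failure only shows up on pathologically tall trees: with normal GiST fan-out, even huge indexes stay far shallower than 50 levels.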