Re: Re: Connection hanging on INSERT apparently due to large batch size and 4 CPU cores

From: Kris Jurka <books(at)ejurka(dot)com>
To: John <jgassner(at)gmail(dot)com>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: Re: Connection hanging on INSERT apparently due to large batch size and 4 CPU cores
Date: 2008-10-29 17:54:23
Message-ID: Pine.BSO.4.64.0810291326250.14951@leary.csoft.net
Lists: pgsql-jdbc

On Sat, 25 Oct 2008, John wrote:

> Could it be related to the INSERT being done on a table that has 330
> columns?
>

Yes, that's it (or at least part of it). In the simple case of using the
same, known parameter types for each batch entry, the data returned from
the server has no dependence on parameter count. The issue arises when
binding a parameter with unspecified type. In the attached
SimpleDeadLockDemo, you can see the driver issuing repeated
Describe(statement=S_1) which results in a ParameterDescription message
whose size depends on the number of parameters. With a large parameter
count, the backend is going to fill its network buffer with this data and
we'll get the deadlock you're seeing.
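For a sense of scale: per the PostgreSQL frontend/backend protocol, a ParameterDescription ('t') message carries one type byte, an Int32 length, an Int16 parameter count, and one Int32 type OID per parameter, so its size grows linearly with the column count. A rough back-of-the-envelope sketch (the 64 KB buffer size here is an assumption; actual socket buffer sizes vary by platform):

```java
// Sketch: size of one ParameterDescription message and how quickly
// repeated Describes for a 330-column statement fill a network buffer.
public class ParamDescSize {

    // 1 type byte + Int32 length + Int16 count + 4 bytes per parameter OID
    static int messageSize(int paramCount) {
        return 1 + 4 + 2 + 4 * paramCount;
    }

    public static void main(String[] args) {
        int perEntry = messageSize(330);       // ~1.3 KB per batch entry
        int bufferBytes = 64 * 1024;           // assumed socket buffer size
        System.out.println(perEntry + " bytes per Describe response");
        System.out.println(bufferBytes / perEntry + " batch entries fill the buffer");
    }
}
```

With one Describe response per batch entry, even a modest batch is enough to fill the buffer before the driver starts reading, at which point neither side can make progress.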

In this case we've supplied an explicit parameter type at parse time, so
describing the statement isn't going to tell us anything new. Even if it
were going to provide new information, we would only need to issue the
Describe once, not for every batch entry. I'm not sure how complicated a
solution might be, but at least I understand what's going on now.

Also attached (BatchDeadLock) is a test case that reliably locks up as
you've described.
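For reference, the general shape of a batch that can trigger this is roughly the following (a hedged sketch, not the attached BatchDeadLock.java; the table name, column count, batch size, and the Types.OTHER binding used to leave the parameter type unspecified are all illustrative assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class WideBatchSketch {

    // Build "INSERT INTO <table> VALUES (?,?,...)" with the given column count.
    static String buildInsert(String table, int cols) {
        StringBuilder sb = new StringBuilder("INSERT INTO " + table + " VALUES (");
        for (int i = 0; i < cols; i++) sb.append(i == 0 ? "?" : ",?");
        return sb.append(")").toString();
    }

    public static void main(String[] args) throws SQLException {
        int cols = 330, entries = 1000;  // wide table, large batch
        try (Connection con = DriverManager.getConnection(args[0]);
             PreparedStatement ps =
                     con.prepareStatement(buildInsert("wide_table", cols))) {
            for (int e = 0; e < entries; e++) {
                for (int i = 1; i <= cols; i++) {
                    ps.setNull(i, Types.OTHER);  // parameter type left unspecified
                }
                ps.addBatch();
            }
            ps.executeBatch();  // can hang as described in this thread
        }
    }
}
```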

Kris Jurka

Attachment Content-Type Size
SimpleDeadLockDemo.java text/plain 693 bytes
simple-output.txt text/plain 4.0 KB
BatchDeadLock.java text/plain 1.2 KB
