
Deadlock condition in driver

From: Daniel Migowski <dmigowski(at)ikoffice(dot)de>
To: pgsql-bugs(at)postgresql(dot)org
Subject: Deadlock condition in driver
Date: 2008-05-21 20:10:47
Message-ID: 483481C7.6030301@ikoffice.de
Lists: pgsql-bugs
Hello dear developers,

I came across a deadlock condition in the JDBC driver that arises when 
very large queries, containing thousands of statements, are sent to the 
server with statement.execute(). I need this functionality, which works 
well with fewer than 6445 statements, to send update scripts along with 
my application to the server.

The problem is the result of filled (and unemptied) TCP buffers. The 
driver takes all statements and sends them to the server with the 
extended query protocol. See this log snippet, which contains the last 
10 lines before the eternal halt of the test application below (it 
repeats every 4 lines):

21:12:22.919 (1)  FE=> Parse(stmt=null,query="select 0",oids={})
21:12:22.919 (1)  FE=> Bind(stmt=null,portal=null)
21:12:22.919 (1)  FE=> Describe(portal=null)
21:12:22.919 (1)  FE=> Execute(portal=null,limit=0)
21:12:22.919 (1)  FE=> Parse(stmt=null,query="select 0",oids={})
21:12:22.919 (1)  FE=> Bind(stmt=null,portal=null)
21:12:22.919 (1)  FE=> Describe(portal=null)
21:12:22.919 (1)  FE=> Execute(portal=null,limit=0)
21:12:22.919 (1)  FE=> Parse(stmt=null,query="select 0",oids={})
21:12:22.919 (1)  FE=> Bind(stmt=null,portal=null)

Each statement sent to the server results in the following (yet still 
unreceived) answers sent from the server back to the client:

21:27:50.169 (1)  <=BE CommandStatus(SELECT)
21:27:50.169 (1)  <=BE ParseComplete [null]
21:27:50.169 (1)  <=BE BindComplete [null]
21:27:50.169 (1)  <=BE RowDescription(1)
21:27:50.169 (1)  <=BE DataRow

Since the driver happily keeps sending and the server happily keeps 
answering, after a while the client's TCP receive buffer fills up, some 
milliseconds later the server's TCP send buffer, then the server's TCP 
receive buffer, and finally the client's TCP send buffer. At that point 
both sides block on write and neither side reads: a deadlock. Increasing 
any of these buffers only delays the problem until a larger number of 
statements.
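The four buffers involved are ordinary socket buffers and can be inspected from Java. A minimal sketch (the loopback pair below stands in for the client/server connection; the printed sizes are whatever the platform defaults to, not the values from this report):

```java
import java.net.ServerSocket;
import java.net.Socket;

public class BufferSizes {
    public static void main(String[] args) throws Exception {
        // A loopback socket pair stands in for the JDBC client and the
        // PostgreSQL server.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            // Each endpoint has its own send and receive buffer; the
            // deadlock needs all four of them to fill up.
            System.out.println("client send buffer:    " + client.getSendBufferSize());
            System.out.println("client receive buffer: " + client.getReceiveBufferSize());
            System.out.println("server send buffer:    " + accepted.getSendBufferSize());
            System.out.println("server receive buffer: " + accepted.getReceiveBufferSize());
        }
    }
}
```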

When piping my script to psql, or sending it by PGAdmin there is no problem.

I suggest the following solution:

After sending 32 kB (or the current send buffer size, 
Socket.getSendBufferSize()), the client checks the amount of data in 
the receive buffer before every following statement. If it is full 
(which means the server might already be blocked), there are two 
possibilities. The first is my favourite; the second is the option to 
choose if the first is not supported on the platform.

1. The receive buffer is increased in 32 kB steps (or by the current send 
buffer size, or even 64 kB, taking the server's expected receive buffer 
into account, too). This would unblock the server and give enough space 
for more (small) responses from the server. Afterwards the receive 
buffer should be trimmed back to its original size so we don't get 
trapped by the decreased performance of large buffers. This method might 
be a bit slower than the currently implemented one in cases where the 
answers to 32 kB of statements would fit into the existing buffers, but 
you don't expect lightning speed in such situations anyway, and 
everything is better than being deadlocked in a production system.
2. We take all data already available in the receive buffer and place it 
into a local byte[] to make space for more data, releasing the blocking 
on the server side this way. This option might be a bit slower than the first.
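Option 2 can be sketched with plain sockets. The drainAvailable helper and the loopback setup below are illustrations of the idea, not the driver's actual PGStream code:

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class DrainDemo {
    /**
     * Reads whatever is already buffered on the connection into a local
     * byte array, freeing the TCP receive buffer so the server can keep
     * writing. The drained bytes are kept for later processing.
     */
    static byte[] drainAvailable(Socket socket, ByteArrayOutputStream pending)
            throws Exception {
        InputStream in = socket.getInputStream();
        byte[] chunk = new byte[8192];
        // available() reports how many bytes can be read without blocking.
        while (in.available() > 0) {
            int n = in.read(chunk, 0, Math.min(chunk.length, in.available()));
            if (n < 0) break;
            pending.write(chunk, 0, n);
        }
        return pending.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            // The "server" writes a response the client has not read yet.
            accepted.getOutputStream().write("DataRow".getBytes(StandardCharsets.US_ASCII));
            accepted.getOutputStream().flush();
            Thread.sleep(100); // let the bytes reach the client's receive buffer
            ByteArrayOutputStream pending = new ByteArrayOutputStream();
            byte[] drained = drainAvailable(client, pending);
            System.out.println("drained " + drained.length + " bytes");
        }
    }
}
```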

A deadlock condition can still be artificially constructed: if one sends 
a SELECT that returns large amounts of data from the server to the 
client, the server may not yet be ready to process the next commands 
while we continue to bombard it with statements, so a blocked condition 
is still possible. But I cannot think of any use case where you have 
large quantities of statements like mine AND expect to receive large 
amounts of data that is discarded anyway because further statements 
follow. So we can ignore this case, and happily send gigabyte-sized 
update scripts to the server in one call to the driver :).

Currently my workaround is to increase the TCP buffer size in a copy of 
the PGStream class, which shadows the original class in the driver jar 
(bad style...).
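For reference, the socket call such a workaround relies on is just the following (a sketch on a bare socket, not the PGStream internals; the 1 MiB value is an arbitrary example):

```java
import java.net.Socket;

public class BufferWorkaround {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // Request a larger receive buffer *before* connecting, so the
        // setting can influence the underlying TCP window. The OS may
        // clamp the value, so read it back rather than trusting it.
        socket.setReceiveBufferSize(1 << 20); // ask for 1 MiB
        System.out.println("granted: " + socket.getReceiveBufferSize());
        socket.close();
    }
}
```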

With best regards,
Daniel Migowski

------------- TEST APP --------------------

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

/**
 * This class demonstrates the deadlock condition in the JDBC driver.
 *
 * Author: dmigowski
 */
public class DeadLockDemonstration {

    public static void main(String[] args)
            throws IOException, SQLException {
        DriverManager.registerDriver(new org.postgresql.Driver());
        Properties props = new Properties();
        props.setProperty("user","ik");
        props.setProperty("password","ik0000");
        props.setProperty("loglevel","2");
        Connection c = DriverManager.getConnection(
            "jdbc:postgresql://localhost:5434/postgres",props);
        StringBuilder b = new StringBuilder();
        for(int i=1; i<100000; i++ ) {
            b.append("select 0;");
        }
        Statement s = c.createStatement();
        ResultSet r = s.executeQuery(b.toString());
    }
   
}
