
Re: TCP Overhead on Local Loopback

From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Claudio Freire <klaussfreire(at)gmail(dot)com>, Andy <angelflow(at)yahoo(dot)com>, Ofer Israeli <oferi(at)checkpoint(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: TCP Overhead on Local Loopback
Date: 2012-04-02 08:25:09
Message-ID: CAEV0TzDW=LuZ1V0VqX1Mub0O7yK1dTtq6HH_Q3Lo3X_LXqE0+g@mail.gmail.com
Lists: pgsql-performance
On Sun, Apr 1, 2012 at 6:11 PM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:

>
>
> On 04/01/2012 08:29 PM, Claudio Freire wrote:
>
>> On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan <andrew(at)dunslane(dot)net>
>> wrote:
>>
>>>> You could try using Unix domain socket and see if the performance
>>>> improves. A relevant link:
>>>
>>> He said Windows. There are no Unix domain sockets on Windows. (And please
>>> don't top-post)
>>>
>> Windows supports named pipes, which are functionally similar, but I
>> don't think pg supports them.
>>
>>
> Correct, so telling the OP to have a look at them isn't at all helpful.
> And they are not supported on all Windows platforms we support either
> (specifically not on XP, AIUI).
>

But suggesting a move away from TCP/IP with no actual evidence that network
overhead is the problem is a little premature, regardless.  What, exactly, is
the set of operations that each update performs, and is there any way to
batch them into fewer statements within the transaction?  For example, could
you insert all 60,000 records into a temporary table via COPY, then run just
a couple of queries to do bulk inserts and bulk updates into the destination
table via joins to the temp table?  60,000 rows updated with 25 columns, 1
indexed, in 3ms is not exactly slow.  That is not an insignificant quantity
of data which must be transferred from client to server, parsed, and then
written to disk, regardless of TCP overhead.  And it is happening via at
least 60,000 individual SQL statements that are not even prepared statements.
I don't imagine that TCP overhead is really the problem here.  Regardless,
you can reduce both statement parse time and TCP overhead by doing a bulk
insert (COPY) followed by multi-row selects/updates into the final table.  I
don't know how much below 3ms you are going to get, but that's going to be as
fast as you can possibly do it on your hardware, assuming the rest of your
configuration is as efficient as possible.
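In rough outline, the COPY-then-merge pattern I'm describing could look
something like this (just a sketch; the "target" and "staging" table and
column names are made up, and your real table has 25 columns, not 3):

```sql
BEGIN;

-- Staging table with the same shape as the destination; dropped at commit.
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS) ON COMMIT DROP;

-- One COPY replaces tens of thousands of individual INSERT/UPDATE
-- statements, so it is parsed once and streamed over a single round trip.
COPY staging (id, val1, val2) FROM STDIN;

-- Bulk update of rows that already exist in the destination.
UPDATE target t
   SET val1 = s.val1,
       val2 = s.val2
  FROM staging s
 WHERE t.id = s.id;

-- Bulk insert of the rows that don't exist yet.
INSERT INTO target (id, val1, val2)
SELECT s.id, s.val1, s.val2
  FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

COMMIT;
```

The point is that the server parses and plans three statements instead of
60,000, and the per-statement network round trips collapse into one COPY
stream, whichever transport you use.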

