From: Aryeh Leib Taurog <python(at)aryehleib(dot)com>
To: psycopg(at)postgresql(dot)org
Subject: Re: speed concerns with executemany()
Date: 2017-01-19 12:23:15
Message-ID: 20170119122315.GA2605@deb76.aryehleib.com
Lists: psycopg
On Mon, Jan 2, 2017 at 3:35 PM, Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com> wrote:
>>> Same code across network, client in Bellingham WA, server in Fremont CA:
>>>
>>> Without autocommit:
>>>
>>> In [51]: %timeit -n 10 cur.executemany(sql, l)
>>> 10 loops, best of 3: 8.22 s per loop
>>>
>>>
>>> With autocommit:
>>>
>>> In [56]: %timeit -n 10 cur.executemany(sql, l)
>>> 10 loops, best of 3: 8.38 s per loop
>>
>> Adrian, have you got a benchmark "classic vs. joined" on remote
>> network? Thank you.
>
> With NRECS=10000 and page size=100:
>
> aklaver(at)tito:~> python psycopg_executemany.py -p 100
> classic: 427.618795156 sec
> joined: 7.55754685402 sec
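The "joined" numbers above come from folding many rows into a single multi-row INSERT rather than issuing one statement per row, which is what psycopg2's `execute_values` helper in `psycopg2.extras` does with proper parameter adaptation. A minimal sketch of the statement-building step, with a hypothetical `join_pages` helper and naive `repr()` quoting used purely for illustration (real code must use the driver's adaptation, never string formatting):

```python
# Sketch of the "joined" technique: batch rows into multi-row
# INSERT ... VALUES statements, page_size rows per statement, so the
# client pays one round trip per page instead of one per row.
# NOTE: repr() is NOT safe SQL quoting; psycopg2.extras.execute_values
# performs the equivalent with correct adaptation.

def join_pages(table, columns, rows, page_size=100):
    """Yield multi-row INSERT statements, page_size rows at a time."""
    cols = ", ".join(columns)
    for start in range(0, len(rows), page_size):
        page = rows[start:start + page_size]
        values = ", ".join(
            "(" + ", ".join(repr(v) for v in row) + ")" for row in page
        )
        yield "INSERT INTO {} ({}) VALUES {}".format(table, cols, values)

stmts = list(join_pages("test", ["a", "b"], [(1, 2), (3, 4), (5, 6)], page_size=2))
# two statements: the first carries two rows, the second carries one
```

With page size 100, 10000 rows collapse into 100 statements, which is consistent with the roughly 50x speedup reported above on a high-latency link.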
This is really interesting. I have long been using a utility I put
together to insert using BINARY COPY. In fact I just brushed it up a
bit and put it on PyPI: <https://pypi.python.org/pypi/pgcopy>

I'm curious to run a benchmark against the improved executemany. I'd
hoped that pgcopy would be generally useful, but it may no longer be
necessary. A fast executemany() certainly suits more use cases.
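For comparison, the COPY approach streams all rows to the server in a single command. pgcopy uses the binary COPY format; a simpler text-mode analogue can be built on psycopg2's `cursor.copy_from()`. The buffer-building step is sketched below with a hypothetical `make_copy_buffer` helper (the table and column names in the usage comment are assumptions, and real data would need escaping of tabs, newlines, and NULLs):

```python
import io

# Sketch of the COPY-based approach pgcopy builds on, using text-mode
# COPY for simplicity (pgcopy itself writes the binary format).
# Rows are serialized into a tab-separated buffer that psycopg2's
# cursor.copy_from() can stream to the server in one command.

def make_copy_buffer(rows):
    """Serialize rows as tab-separated text for COPY FROM STDIN."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(v) for v in row) + "\n")
    buf.seek(0)
    return buf

buf = make_copy_buffer([(1, "foo"), (2, "bar")])
# with a live connection, one would then run:
# cur.copy_from(buf, "test", columns=("id", "label"))
```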
Best,
Aryeh Leib Taurog