At 10:45 PM +0200 7/15/04, Oliver Fromme wrote:
>M. Bastin wrote:
> > Yes and no. I'm looking at the subject from the frontend-backend
> > protocol 3.0 point of view, which is split into the Extended Query
> > language and the Simple Query language.
> > With the PREPARE and EXECUTE statements you're accessing the Extended
> > Query language through the Simple Query language, which requires more
> > CPU time since your EXECUTE query itself is going to be parsed as a
> > Simple Query first before PostgreSQL realizes it must execute a
> > prepared statement, while when you send the commands directly through
> > the Extended Query language you skip that parsing step.
>Thanks for the explanation. I've only been using Postgres
>via the psql monitor and client applications (most of them
>written in Python, some in Perl), so I'm not familiar with
>the underlying client-server protocol.
>You are right that the EXECUTE statement still has to be
>parsed. However, I think the parsing overhead is small,
It's 16,000 vs 1,500 microseconds on my system, plus the time for the
client to receive and parse the data which is the same in both cases.
>because the EXECUTE command has a very simple structure
>("EXECUTE <plan> <arguments>"), as opposed to, say, the
>very complicated synopsis of a SELECT command. Apart
>from that, the overhead of the query planner is probably
>much bigger, so using PREPARE + EXECUTE is probably still
>a great win, I think.
Certainly for complex queries, but not for simple ones like "SELECT *
FROM mytable WHERE numcolumn > $1;"
Parsing a simple SELECT like this seems to take about the same time
as parsing an EXECUTE query.
I don't think you'd win anything by replacing INSERT with EXECUTE either.
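For reference, the PREPARE/EXECUTE pair being discussed looks like this in plain SQL (plan, table, and column names follow the example query above and are illustrative):

```sql
-- Parse and plan the query once; $1 is a typed parameter placeholder.
PREPARE myplan (integer) AS
    SELECT * FROM mytable WHERE numcolumn > $1;

-- Each EXECUTE is itself parsed as a Simple Query first,
-- which is the overhead being measured in this thread.
EXECUTE myplan (42);

-- Free the plan explicitly (it otherwise lives until session end).
DEALLOCATE myplan;
```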
>I just wonder ... I'm currently writing a client app (in
>Python) which has to insert quite a lot of things. This
>is for a network traffic accounting system. In particular,
>the program reads accumulated accounting data from a file,
>pre-processes it and creates appropriate INSERT statements
>(up to several hundreds or even thousands per session).
>I wonder if it will be worthwhile to PREPARE those inserts and
>then EXECUTE them. But I guess it won't make much of a
>difference, because the INSERT statements are very simple.
I'm not familiar with Python but if it allows you to do a "COPY
mytable FROM STDIN;" then I would strongly recommend you use that
instead of INSERT. You build your data in a file and then you COPY
that file into your table in one step. You can import millions of
records in a couple of minutes like that. (You need to drop your
indexes on that table first and recreate them afterwards.)
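In Python, one way to do this is to render the rows yourself in COPY's text format (tab-delimited fields, `\N` for NULL, backslash escapes for the delimiter and control characters) and then stream that block to the server. A minimal sketch; the helper names are mine, and the `copy_from` call mentioned in the comment is psycopg2's API, not something from this thread:

```python
def copy_escape(value):
    """Render one field in COPY text format: \\N for NULL,
    backslash-escape backslashes, tabs, and newlines."""
    if value is None:
        return r"\N"
    s = str(value)
    return (s.replace("\\", "\\\\")
             .replace("\t", "\\t")
             .replace("\n", "\\n")
             .replace("\r", "\\r"))

def copy_rows(rows):
    """Turn an iterable of row tuples into one tab-separated,
    newline-terminated block suitable for COPY ... FROM STDIN."""
    return "".join("\t".join(copy_escape(v) for v in row) + "\n"
                   for row in rows)

# With a driver that exposes COPY (e.g. psycopg2) you would then do
# something like:
#     import io
#     cur.copy_from(io.StringIO(copy_rows(rows)), "mytable")
# (copy_from is psycopg2-specific; other drivers expose COPY differently.)
```

As the post says, for bulk loads it also pays to drop the table's indexes before the COPY and recreate them afterwards.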