Re: PREPARE in bash scripts

From: "A(dot)j(dot) Langereis" <a(dot)j(dot)langereis(at)inter(dot)nl(dot)net>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: PREPARE in bash scripts
Date: 2005-11-23 11:07:01
Message-ID: 003c01c5f01e$07bc8e50$3e01a8c0@aarjan2
Lists: pgsql-general

Dear Martijn,

The problem with your solution is that the script is meant to process a
log file in real time. Therefore each insert should be done immediately;
however, it is the same statement over and over again, just with different
parameters, i.e. an ideal case for PREPARE.
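
For illustration, a rough sketch of what I mean (the access_log table, the
log_ins statement name, the log path and the database name are all made up
here): the whole pipeline shares a single psql session, so the statement is
prepared once and every log line becomes an immediate EXECUTE:

{
    # prepare the statement once on the single, long-lived connection
    echo 'PREPARE log_ins (text, text) AS
          INSERT INTO access_log (host, url) VALUES ($1, $2);'

    # feed the log in real time; psql runs each EXECUTE as it arrives
    tail -F /var/log/app.log | while read host url _rest; do
        # naive quoting -- real input would need proper escaping
        echo "EXECUTE log_ins ('$host', '$url');"
    done
} | psql -q mydb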

Yours,

Aarjan Langereis

PS: I received your reply as an attachment to the email?

> I think your speed is being limited by backend startup time and
> transaction commit time more than anything else. I don't think prepared
> statements will help in your case.
>
> The way I usually do it is pipe the output of a whole loop to psql like
> so:
>
> for i in blah ; do
>     echo "insert into ..."
> done | psql -q
>
> Or more commonly, just have the script emit all the commands to stdout
> and then run it like so:
>
> ./myscript | psql -q
>
> An important way to increase speed would be to use explicit
> transactions (BEGIN/END). When executing a lot of statements this will
> speed up things considerably. Finally, if it's just INSERTs, consider
> using COPY, for even more efficiency.
>
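
For reference, a rough sketch of the batched approach described above, using
the same made-up access_log table: wrapping the INSERTs in BEGIN/COMMIT means
one commit for the whole batch instead of one per row,

{
    echo "BEGIN;"
    while read host url _rest; do
        echo "INSERT INTO access_log (host, url) VALUES ('$host', '$url');"
    done < access.log
    echo "COMMIT;"
} | psql -q mydb

and COPY replaces the per-row INSERTs entirely (psql reads the data lines
from the same pipe until the terminating \.):

{
    echo "COPY access_log (host, url) FROM STDIN;"
    awk '{ print $1 "\t" $2 }' access.log
    echo '\.'
} | psql -q mydb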
