From: wespvp(at)syntegra(dot)com
To: Simon Windsor <simon(dot)windsor(at)cornfield(dot)org(dot)uk>, Postgres List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Postgres tuning?
Date: 2004-06-29 23:01:51
Message-ID: BD075F0F.EC6A%wespvp@syntegra.com
Lists: pgsql-general
On 6/29/04 4:30 PM, "Simon Windsor" <simon(dot)windsor(at)cornfield(dot)org(dot)uk> wrote:
> I am in the process of converting a small multi-user application from
> MySQL, and most queries are performing better. The only noticeable
> exception is a batch load, which is half the speed of the MySQL version.
If you're talking about loading up an array and inserting the whole array
with a single INSERT, you can't do that. You have to insert one record at a
time. I wish it were possible - I could really use it.
The closest thing is COPY, which does perform a true bulk load. The down
side of COPY is that you have to match the file's column order (unless you
give COPY an explicit column list) - fine for initial loads, but riskier
for application usage.
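For what it's worth, COPY can take an explicit column list, which removes the column-order dependency. A sketch, assuming a hypothetical orders table and file path:

```sql
-- Load only the named columns, in the order given here, regardless of
-- how the columns are ordered in the table definition.
COPY orders (id, customer, amount)
    FROM '/tmp/orders.tsv';    -- tab-delimited by default, NULL as \N
```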
> begin;
> insert into ... (repeat a few thousand to million times)
> commit;
This does not accomplish the bulk load - it only makes all of the inserts
part of a single transaction for atomic commit or rollback.
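To get the actual bulk load, the per-row inserts can be replaced by a single COPY ... FROM STDIN, feeding the rows in COPY's default text format (tab-delimited, NULL written as \N). A minimal sketch of that formatting step - the table and rows are made up, but the escaping follows COPY's text format rules:

```python
# Sketch: render application rows as PostgreSQL COPY text format so they
# can be piped to "COPY sometable FROM STDIN". Hypothetical data.

def copy_escape(value):
    """Render one field in COPY text format."""
    if value is None:
        return "\\N"                 # COPY's default NULL marker
    s = str(value)
    # Escape backslash first, then the characters COPY treats specially.
    s = s.replace("\\", "\\\\")
    s = s.replace("\t", "\\t").replace("\n", "\\n").replace("\r", "\\r")
    return s

def rows_to_copy_text(rows):
    """One line per row, fields joined by tabs."""
    return "".join(
        "\t".join(copy_escape(v) for v in row) + "\n" for row in rows
    )

print(rows_to_copy_text([(1, "widget", None), (2, "gad\tget", 9.95)]))
```

The resulting text can be written to a file for COPY FROM 'file', or streamed to the server (e.g. through psql's \copy).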
Wes