From: Teemu Torma <teemu(at)torma(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: How import big amounts of data?
Date: 2005-12-29 17:05:45
Message-ID: 200512291805.45684.teemu@torma.org
Lists: pgsql-performance
On Thursday 29 December 2005 17:19, Arnau wrote:
> > - Use plpgsql function to do the actual insert (or update/insert if
> > needed).
> >
> > - Inside a transaction, execute SELECT statements with maximum
> > possible number of insert function calls in one go. This minimizes
> > the number of round trips between the client and the server.
>
> Thanks Teemu! Could you paste an example of one of those functions?
> ;-) An example of those SELECTs would also be great; I'm not sure I
> have completely understood what you mean.
An insert function like:

CREATE OR REPLACE FUNCTION
insert_values (the_value1 numeric, the_value2 numeric)
RETURNS void
LANGUAGE plpgsql VOLATILE AS $$
BEGIN
  -- "values" is a reserved word in SQL, so the table name must be quoted
  INSERT INTO "values" (value1, value2)
    VALUES (the_value1, the_value2);
  RETURN;
END;
$$;
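For the update-or-insert case mentioned earlier, a minimal sketch of the classic plpgsql pattern (table and column names are illustrative, matching the example above): try the UPDATE first and fall back to an INSERT when no row matched.

```sql
-- Hedged sketch: assumes the same "values" table, with value1 acting as a key.
-- The name is quoted because "values" is a reserved word.
CREATE OR REPLACE FUNCTION
upsert_values (the_value1 numeric, the_value2 numeric)
RETURNS void
LANGUAGE plpgsql VOLATILE AS $$
BEGIN
  UPDATE "values" SET value2 = the_value2 WHERE value1 = the_value1;
  IF NOT FOUND THEN
    -- no existing row matched, so insert a new one
    INSERT INTO "values" (value1, value2)
      VALUES (the_value1, the_value2);
  END IF;
  RETURN;
END;
$$;
```

Note that under concurrent load two sessions can both miss the UPDATE and then both attempt the INSERT, so this pattern needs a unique constraint plus retry (or locking) to be fully safe.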
Then execute queries like
SELECT insert_values(1,2), insert_values(2,3), insert_values(3,4);
with as many insert_values calls per SELECT as practical.
I think an explicit transaction (BEGIN/COMMIT) adds little further time
benefit once each SELECT already carries at least hundreds of calls.
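Put together, a batch then looks like the following (three calls per SELECT shown for brevity; a real batch would carry hundreds):

```sql
BEGIN;
SELECT insert_values(1,2), insert_values(2,3), insert_values(3,4);
SELECT insert_values(4,5), insert_values(5,6), insert_values(6,7);
COMMIT;
```

Each SELECT is one client-server round trip, which is where the time goes, so the fewer and fatter the statements, the better.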
Teemu