| From: | Matthew Kirkwood <matthew(at)hairy(dot)beasts(dot)org> |
|---|---|
| To: | Jules Bean <jules(at)jellybean(dot)co(dot)uk> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Performance on inserts |
| Date: | 2000-08-26 11:14:06 |
| Message-ID: | Pine.LNX.4.10.10008261206200.27577-100000@sphinx.mythic-beasts.com |
| Lists: | pgsql-hackers |
On Sat, 26 Aug 2000, Jules Bean wrote:
> Is there any simple way for Pg to combine inserts into one bulk
> operation? Specifically, to batch their effect on the index files.
> to me to be one of the (many) glaring flaws in SQL that the INSERT
> statement only takes one row at a time.
One of MySQL's little syntax abuses allows:
INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);
which is nice for avoiding database round trips. It's one
of the reasons that MySQL can do bulk imports so quickly.
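For illustration (table and column names here are made up), the
multi-row form packs several rows into a single statement:

    INSERT INTO tab (col1, col2) VALUES
        (1, 'one'),
        (2, 'two'),
        (3, 'three');

The client sends one statement instead of three, saving a round
trip per row.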
> But, using INSERT ... SELECT, I can imagine that it might be possible
> to do 'bulk' index updating, so that the scanning process is done once
> 'batch'.
Logic for these two cases would be excellent.
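For the second case, the idea (sketched here with a hypothetical
staging table) is that all the new rows arrive in one statement, so
the executor could in principle touch each index once per batch
rather than once per row:

    INSERT INTO tab (col1, col2)
    SELECT col1, col2 FROM staging_tab;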
Matthew.