| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | "Karen Hill" <karen_hill22(at)yahoo(dot)com> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: How long should it take to insert 200,000 records? |
| Date: | 2007-02-06 05:33:35 |
| Message-ID: | 29906.1170740015@sss.pgh.pa.us |
| Lists: | pgsql-performance |
"Karen Hill" <karen_hill22(at)yahoo(dot)com> writes:
> I have a pl/pgsql function that is inserting 200,000 records for
> testing purposes. What is the expected time frame for this operation
> on a pc with 1/2 a gig of ram and a 7200 RPM disk?
I think you have omitted a bunch of relevant facts. Bare INSERT is
reasonably quick:
```
regression=# create table foo (f1 int);
CREATE TABLE
regression=# \timing
Timing is on.
regression=# insert into foo select x from generate_series(1,200000) x;
INSERT 0 200000
Time: 5158.564 ms
regression=#
```
(This was on a not-very-fast machine.) But if you weigh it down with a ton
of index updates, foreign key checks, etc., it could get slow.
Also, you haven't mentioned what else that plpgsql function is doing.
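For illustration, here is a minimal sketch of the two kinds of overhead being described. All table, index, and function names below are invented for the example, and the actual slowdown will depend on the machine and schema:

```sql
-- Overhead source 1: per-row index maintenance and FK checks.
-- Compared with the bare "foo" insert above, each row inserted into
-- foo2 also pays for an index insertion and a referential-integrity
-- lookup against "parent".
create table parent (id int primary key);
insert into parent select x from generate_series(1,200000) x;

create table foo2 (
    f1 int references parent(id)       -- FK checked for every row
);
create index foo2_f1_idx on foo2 (f1); -- index updated for every row

insert into foo2 select x from generate_series(1,200000) x;

-- Overhead source 2: a plpgsql function that inserts rows one at a
-- time adds per-statement executor overhead on top of any index/FK
-- work, compared with a single set-based INSERT ... SELECT.
create or replace function slow_fill() returns void as $$
begin
    for i in 1..200000 loop
        insert into foo values (i);
    end loop;
end;
$$ language plpgsql;
```

Running both variants under psql's `\timing`, as in the session above, makes the difference visible directly.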
regards, tom lane
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Bruno Wolff III | 2007-02-06 05:40:11 | Re: optimizing a geo_distance() proximity query (example and benchmark) |
| Previous Message | Karen Hill | 2007-02-06 00:35:59 | How long should it take to insert 200,000 records? |