Re: performance modality in 7.1 for large text attributes?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Paul A Vixie <vixie(at)mfnx(dot)net>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: performance modality in 7.1 for large text attributes?
Date: 2000-12-20 18:17:30
Message-ID: 5859.977336250@sss.pgh.pa.us
Lists: pgsql-hackers

Paul A Vixie <vixie(at)mfnx(dot)net> writes:
> http://www.vix.com/~vixie/results-psql.png shows a gnuplot of the wall time
> of 70K executions of "pgcat" (shown below) using a CIDR key and TEXT value.

I get a 404 on that URL :-(

> anybody know what i could be doing wrong? (i'm also wondering why SELECT
> takes ~250ms whereas INSERT takes ~70ms... seems counterintuitive, unless
> TOAST is doing a LOT better than i think.)

Given your later post, the problem is evidently that the thing is
failing to use the index for the SELECT. I am not sure why, especially
since it clearly does know (after vacuuming) that the index would
retrieve just a single row. May we see the exact declaration of the
table --- preferably via "pg_dump -s -t TABLENAME DBNAME" ?

> furthermore, are there any plans to offer a better libpq interface to INSERT?

Consider using COPY if you don't want to quote the data.

COPY rss FROM stdin;
values here
more values here
\.

(If you don't like tab as column delimiter, you can specify another in
the copy command.) The libpq interface to this is relatively
straightforward IIRC.

regards, tom lane
