Re: improving write performance for logging application

From: Steve Eckmann <eckmann(at)computer(dot)org>
To: Kelly Burkhart <kelly(at)kkcsm(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: improving write performance for logging application
Date: 2006-01-05 00:19:08
Message-ID: 43BC65FC.3000903@computer.org
Lists: pgsql-performance

Kelly Burkhart wrote:

> On 1/4/06, Steve Eckmann <eckmann(at)computer(dot)org> wrote:
>
> Thanks, Steinar. I don't think we would really run with fsync off,
> but I need to document the performance tradeoffs. You're right
> that my explanation was confusing; probably because I'm confused
> about how to use COPY! I could batch multiple INSERTs using COPY
> statements, but I don't see how to do it without adding another
> process to read from STDIN, since the application that is
> currently the database client is constructing rows on the fly. I
> would need to get those rows into some process's STDIN stream or
> into a server-side file before COPY could be used, right?
>
>
> Steve,
>
> You can use COPY without resorting to another process. See the libpq
> documentation for "Functions Associated with the COPY Command". We do
> something like this:
>
> char *mbuf;
>
> // allocate space and fill mbuf with appropriately formatted data somehow
>
> // error checking and PQclear() of the PQexec results omitted for brevity
> PQexec( conn, "begin" );
> PQexec( conn, "copy mytable from stdin" );
> PQputCopyData( conn, mbuf, strlen(mbuf) );
> PQputCopyEnd( conn, NULL );
> PQclear( PQgetResult( conn ) );  // collect the COPY result before the next command
> PQexec( conn, "commit" );
>
> -K

Thanks for the concrete example, Kelly. I had read the relevant libpq
doc but didn't put the pieces together.
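For readers filling in the remaining piece: the data passed to PQputCopyData must follow the COPY text format, i.e. tab-separated columns with each row terminated by a newline (and \N for NULL). A minimal sketch of formatting one row into a buffer, assuming two hypothetical columns (a timestamp string and a message) that contain no tab, newline, or backslash characters needing escape:

```c
#include <stdio.h>
#include <string.h>

/* Format one row for COPY ... FROM STDIN in text mode: columns
 * separated by tabs, row terminated by a newline. This sketch
 * assumes the values contain no tab, newline, or backslash
 * characters, which would otherwise need escaping.
 * Returns the number of bytes written, or -1 on truncation. */
int format_copy_row(char *buf, size_t buflen, const char *ts, const char *msg)
{
    int n = snprintf(buf, buflen, "%s\t%s\n", ts, msg);
    if (n < 0 || (size_t) n >= buflen)
        return -1;
    return n;
}
```

Many such rows can be appended into one buffer and sent with a single PQputCopyData call, which is where the win over row-at-a-time INSERTs comes from.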

Regards, Steve
