On Tue, Oct 24, 2006 at 09:17:08AM -0400, Worky Workerson wrote:
> >http://stats.distributed.net used to use a perl script to do some
> >transformations before loading data into the database. IIRC, when we
> >switched to using C we saw 100x improvement in speed, so I suspect that
> >if you want performance perl isn't the way to go. I think you can
> >compile perl into C, so maybe that would help some.
> Like Craig mentioned, I have never seen those sorts of improvements
> going from perl->C, and developer efficiency is primo for me. I've
> profiled most of the stuff, and have used XS modules and Inline::C on
> the appropriate, often used functions, but I still think that it comes
> down to my using CSV and Text::CSV_XS. Even though it's XS, CSV is
> still a pain in the ass.
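One common way to sidestep that parser overhead: when the data is guaranteed to contain no quoted fields, embedded separators, or newlines, a plain split on the delimiter is both correct and considerably cheaper than a full CSV parser. A minimal sketch, in Python for illustration (the same idea applies to Perl's split /,/ versus Text::CSV_XS); the sample records are hypothetical:

```python
import csv
import io
import time

# Hypothetical sample: fields are known to contain no quotes,
# embedded commas, or newlines, so a plain split() is safe.
rows = "\n".join("host%d,2006-10-24,%d,OK" % (i, i * 7) for i in range(100_000))

t0 = time.perf_counter()
parsed_csv = list(csv.reader(io.StringIO(rows)))
t_csv = time.perf_counter() - t0

t0 = time.perf_counter()
parsed_split = [line.split(",") for line in rows.split("\n")]
t_split = time.perf_counter() - t0

# On this restricted input both approaches produce identical rows.
assert parsed_csv == parsed_split
print(f"csv.reader: {t_csv:.3f}s  split: {t_split:.3f}s")
```

The trade-off is that split silently mis-parses the moment a quoted or embedded comma shows up, so it only fits loaders that control their own input format.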
> >Ultimately, you might be best off using triggers instead of rules for the
> >partitioning since then you could use COPY. Or go to raw insert commands
> >that are wrapped in a transaction.
> Eh, I've put the partition loading logic in the loader, which seems to
> work out pretty well, especially since I keep things sorted and am the
> only one inserting into the DB and do so with bulk loads. But I'll
> keep this in mind for later use.
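For reference, the trigger-based routing suggested above can be sketched in SQL. The reason triggers pair with bulk loads is that a row-level trigger fires for rows arriving via COPY, while a rewrite rule does not. Table, partition, and function names here are hypothetical:

```sql
-- Hypothetical parent table with one monthly partition.
CREATE TABLE measurements (
    logdate date NOT NULL,
    value   int
);
CREATE TABLE measurements_2006_10 (
    CHECK (logdate >= '2006-10-01' AND logdate < '2006-11-01')
) INHERITS (measurements);

-- Row-level trigger routes each incoming row to its partition.
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= '2006-10-01' AND NEW.logdate < '2006-11-01' THEN
        INSERT INTO measurements_2006_10 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.logdate;
    END IF;
    RETURN NULL;  -- row already stored in the child; skip the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_route
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```

With this in place, a COPY into measurements lands each row in the right child table, at the cost of one trigger invocation per row.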
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or at splitting that perl work across multiple processes.
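A minimal sketch of that split, in Python for illustration (in Perl the equivalent would be fork or a module such as Parallel::ForkManager); the transform function and sample records are hypothetical stand-ins for the loader's per-record logic:

```python
from multiprocessing import Pool

def transform(line: str) -> str:
    """Stand-in for the per-record transformation done by the loader;
    here it just upper-cases the last field."""
    fields = line.split(",")
    fields[-1] = fields[-1].upper()
    return ",".join(fields)

def main() -> None:
    lines = [f"host{i},2006-10-24,ok" for i in range(8)]
    # Spread the CPU-bound transformation across worker processes so a
    # single core is no longer the bottleneck; each chunk is handled
    # independently and results come back in input order.
    with Pool(processes=4) as pool:
        out = pool.map(transform, lines, chunksize=2)
    print(out[0])  # host0,2006-10-24,OK

if __name__ == "__main__":
    main()
```

Since the poster keeps the input sorted and is the only writer, the chunks could also feed separate COPY streams, one per worker.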
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)