> http://stats.distributed.net used to use a perl script to do some
> transformations before loading data into the database. IIRC, when we
> switched to using C we saw 100x improvement in speed, so I suspect that
> if you want performance perl isn't the way to go. I think you can
> compile perl into C, so maybe that would help some.
Like Craig mentioned, I have never seen those sorts of improvements
going from perl->C, and developer efficiency is primo for me. I've
profiled most of the stuff, and have used XS modules and Inline::C on
the appropriate, often-used functions, but I still think that it comes
down to my using CSV and Text::CSV_XS. Even though it's XS, CSV is
still a pain in the ass.
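For what it's worth, most of the CSV pain is in the quoting/escaping rules, which is why a real parser (XS or not) can't be replaced by a plain split on the delimiter. A small illustration, in Python rather than Perl purely so it's self-contained; the point is language-independent:

```python
import csv
import io

# A line with a quoted field that contains the delimiter.
line = 'id,"last, first",score\n'

# Naive split: treats the comma inside the quotes as a separator.
naive = line.rstrip("\n").split(",")

# Real CSV parsing: honors RFC-4180-style quoting, so the quoted
# field survives intact -- this is the work Text::CSV_XS is doing.
parsed = next(csv.reader(io.StringIO(line)))

print(naive)   # ['id', '"last', ' first"', 'score']
print(parsed)  # ['id', 'last, first', 'score']
```

Quoting, embedded newlines, and escape handling are what make CSV parsing state-machine work instead of a one-liner, and that cost shows up no matter how fast the implementation is.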
> Ultimately, you might be best off using triggers instead of rules for the
> partitioning since then you could use COPY. Or go to raw insert commands
> that are wrapped in a transaction.
Eh, I've put the partition loading logic in the loader, which seems to
work out pretty well, especially since I keep things sorted and am the
only one inserting into the DB and do so with bulk loads. But I'll
keep this in mind for later use.
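For anyone curious what "partition loading logic in the loader" can look like: the idea is just to bucket rows by their target partition on the client side, then bulk-load each bucket separately with COPY. A rough sketch in Python (the actual loader in this thread is Perl, and the `route` function and `stats_YYYY_MM` partition names are made up for illustration):

```python
from collections import defaultdict

def route(row):
    """Pick a target partition name from the row's date column.
    Assumes hypothetical monthly partitions named like stats_2006_10."""
    date = row[0]                      # e.g. '2006-10-24'
    year, month, _ = date.split("-")
    return f"stats_{year}_{month}"

def bucket_rows(rows):
    """Group rows by partition so each group can go in via one COPY."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[route(row)].append(row)
    return buckets

rows = [
    ("2006-10-24", 42),
    ("2006-10-25", 7),
    ("2006-11-01", 13),
]
buckets = bucket_rows(rows)
# Each bucket would then be streamed to the server with
# COPY stats_YYYY_MM FROM STDIN, inside one transaction.
print(sorted(buckets))  # ['stats_2006_10', 'stats_2006_11']
```

Since the rows arrive pre-sorted, each bucket stays contiguous and the per-partition COPYs are sequential, which is why this avoids the per-row overhead of rules or triggers entirely.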