I've now got to custom datatypes that map from an int2 value on disk to
a string by way of a lookup table for each type. Currently, I load these
tables into a dynahash per function call, caching the hash in
fcinfo->flinfo->fn_extra. This works well in most situations. The
problem arises when many queries (often INSERTs) need to run in a short
amount of time: each query rebuilds the dynahash (or hashes), making
these columns orders of magnitude slower than they were as varchar.
What I'd like to do is store the hash somewhere longer-lived so that it
doesn't need to be rebuilt so often. I'm open to almost any solution.
One type rarely ever gets new values (maybe once every several months)
and the other gets new values once a night.