Do you have any data (e.g., a percentage) on roughly how much more CPU
work is needed with the bigserial type in queries?
I have a log database with 100 million records (the biggest table
contains 65 million records), and I currently use the bigserial data
type as the primary key. The primary key looks like this:
YYYYMMDD1xxxxxxx, where the first 8 digits are the date and the x's are
the record's sequence number within that day. This way the records are
in ascending order. Almost all of the queries contain date constraints
(PK like 'YYYYMMDD%'). I'd like to know whether I'm doing this in a
stupid way or not. I'm not a DBA expert, so all ideas are welcome. If
you need more information about the hardware/software environment or
the DB structure, I'll post it.
Thanks in advance for your help.
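For what it's worth, since the date is encoded in the leading digits of the numeric key, a per-day constraint can be expressed as a numeric BETWEEN range on the bigint column instead of a string LIKE, which an ordinary b-tree index can satisfy with a range scan. A minimal sketch of the range computation (the pk_range helper and the exact digit layout are my assumptions, inferred from the YYYYMMDD1xxxxxxx description above):

```python
def pk_range(yyyymmdd: str) -> tuple[int, int]:
    """Return the (low, high) primary-key bounds covering one day.

    Assumes the key layout described in the post: 8 date digits,
    a fixed '1', then a 7-digit per-day sequence number.
    """
    base = int(yyyymmdd) * 10**8       # shift date into the leading digits
    low = base + 1_0000000             # YYYYMMDD10000000: first possible key
    high = base + 1_9999999            # YYYYMMDD19999999: last possible key
    return low, high

# Example: build a range query for 2003-01-26 instead of
#   WHERE pk::text LIKE '20030126%'
low, high = pk_range("20030126")
query = f"SELECT * FROM log WHERE pk BETWEEN {low} AND {high}"
```

The table name `log` is a placeholder; the point is only that BETWEEN keeps the comparison numeric, while LIKE forces a text representation of the key.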
pgsql-performance by date