how to handle a big table for data log

From: kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw>
To: pgsql-performance(at)postgresql(dot)org
Subject: how to handle a big table for data log
Date: 2010-07-19 06:27:51
Message-ID: AANLkTilCP3sGTbHIvrM-ixG-P1Dz6ToqrhbgijY8m8V8@mail.gmail.com
Lists: pgsql-performance

Hi,

I have to handle a log table that accumulates a large amount of log
data. The table only sees insert and query operations. To limit the
table size, I first tried splitting it by date (roughly as sketched
below). However, the number of logs per partition is still large (46
million records per day). To reduce the size further, I also tried
splitting the log table by log type, but that did not improve
performance; it is actually much slower than the single big table. I
guess this is because I pay more autovacuum/analyze cost across all of
the split tables.
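
(For context, the per-date split I mention is along the lines of the
following sketch, using inheritance-based partitioning; the table and
column names are only placeholders, not my real schema.)

  CREATE TABLE activity_log (
      logged_at  timestamptz NOT NULL,
      log_type   integer     NOT NULL,
      message    text
  );

  -- One child table per day; the CHECK constraint lets
  -- constraint_exclusion skip irrelevant days when a query
  -- filters on logged_at.
  CREATE TABLE activity_log_2010_07_19 (
      CHECK (logged_at >= '2010-07-19' AND logged_at < '2010-07-20')
  ) INHERITS (activity_log);

  CREATE INDEX activity_log_2010_07_19_logged_at_idx
      ON activity_log_2010_07_19 (logged_at);

  -- Inserts go straight into the current day's child table;
  -- queries go against the parent:
  --   INSERT INTO activity_log_2010_07_19 VALUES (now(), 1, 'example');
  --   SELECT * FROM activity_log WHERE logged_at >= '2010-07-19';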

Can anyone comment on this situation? Thanks in advance.

kuopo.
