
Re: how to handle a big table for data log

From: "Jorge Montero" <jorge_montero(at)homedecorators(dot)com>
To: "kuopo" <spkuo(at)cs(dot)nctu(dot)edu(dot)tw>,<pgsql-performance(at)postgresql(dot)org>
Subject: Re: how to handle a big table for data log
Date: 2010-07-19 15:37:55
Lists: pgsql-performance
Large tables, by themselves, are not necessarily a problem. The problem is what you might be trying to do with them. Depending on the operations you are trying to do, partitioning the table might help performance or make it worse.
What kind of queries are you running? How many days of history are you keeping? Could you post an explain analyze output of a query that is being problematic?
Given the amount of data you hint at, your server configuration and any custom statistics targets for the big tables in question would also be useful.
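For reference, the diagnostics being asked for can be gathered along these lines (table and column names here are illustrative, not from the original post):

```sql
-- Show the plan and actual timings for a problematic query.
EXPLAIN ANALYZE
SELECT *
FROM activity_log
WHERE log_time >= '2010-07-18' AND log_time < '2010-07-19';

-- Raise the statistics target on the column used for range scans,
-- then re-analyze so the planner sees the finer-grained histogram.
ALTER TABLE activity_log ALTER COLUMN log_time SET STATISTICS 500;
ANALYZE activity_log;

-- Server settings commonly relevant to this kind of report:
SHOW shared_buffers;
SHOW work_mem;
SHOW default_statistics_target;
```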

>>> kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw> 7/19/2010 1:27 AM >>>

I have to handle a log table that accumulates a large amount of
log records. This table only sees insert and query operations. To
limit the table size, I tried to split this table by date. However,
the number of logs is still large (46 million records per day). To
further limit its size, I tried to also split the log table by log
type, but this did not improve performance; it is much slower than
the single big table. I guess this is because of the extra
auto-vacuum/analyze cost across all the split tables.

Can anyone comment on this situation? Thanks in advance.
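To frame the discussion: in PostgreSQL of this era, date-based partitioning is done with inheritance plus CHECK constraints, so constraint exclusion can skip children outside the queried range, and autovacuum can be tuned per child table. A minimal sketch, with all names and settings illustrative rather than taken from the poster's schema:

```sql
-- Parent table holds no rows itself; children carry the data.
CREATE TABLE activity_log (
    log_time  timestamptz NOT NULL,
    log_type  integer     NOT NULL,
    payload   text
);

-- One child per day; the CHECK constraint is what enables pruning.
CREATE TABLE activity_log_2010_07_19 (
    CHECK (log_time >= '2010-07-19' AND log_time < '2010-07-20')
) INHERITS (activity_log);

-- Let the planner use the CHECK constraints to skip children.
SET constraint_exclusion = on;

-- Inserts must target the right child, via the application
-- or a trigger on the parent.
INSERT INTO activity_log_2010_07_19
VALUES ('2010-07-19 10:00', 1, 'example row');

-- If autovacuum overhead is the concern, it can be tuned per child
-- (per-table storage parameters, available since 8.4):
ALTER TABLE activity_log_2010_07_19
    SET (autovacuum_vacuum_scale_factor = 0.02);
```

Whether this helps or hurts depends on the queries: range scans confined to a few days benefit from pruning, while queries that touch every child pay planning and vacuum overhead per table, which may explain the slowdown described above.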

