Re: performance while importing a very large data set in to database

From: Pierre Frédéric Caillaud <lists(at)peufeu(dot)com>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>, "Ashish Kumar Singh" <ashishkumar(dot)singh(at)altair(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: performance while importing a very large data set in to database
Date: 2009-12-06 14:14:04
Message-ID: op.u4ishqzucke6l8@soyouz
Lists: pgsql-performance


>> I have a very big database, around 15 million in size, and the dump file
>> is around 12 GB.
>>
>> While importing this dump into the database I have noticed that initially
>> query response time is very slow, but it does improve with time.
>>
>> Any suggestions to improve performance after the dump is imported into the
>> database would be highly appreciated!
>
> This is pretty normal. When the db first starts up, or right after a
> load, it has nothing in its buffers or the kernel cache. As you access
> more and more data, the db and OS learn what is most commonly
> accessed, start holding onto that data, and throw the less-used
> stuff away to make room for it. Our production dbs run at a load
> factor of about 4 to 6, but when first started and put in the loop
> they'll hit 25 or 30 and have slow queries for a minute or so.
>
> Having a fast IO subsystem will help offset some of this, and
> sometimes "select * from bigtable" might too.

Maybe it's the updating of the hint bits?...
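
If it is the hint bits, one common workaround (again a sketch, with a
placeholder table name) is to pay that cost once, right after the import,
instead of on the first user queries:

    -- VACUUM reads every tuple and sets its hint bits as it goes, so the
    -- write-heavy first pass happens during maintenance rather than during
    -- normal queries; ANALYZE refreshes planner statistics for the newly
    -- loaded data at the same time.
    VACUUM ANALYZE bigtable;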
