Thank you. I didn't fully understand what VACUUM FREEZE actually does, so
I will read up on the details to see whether it fits my situation. I will
also test the load over a TCP connection. Thanks again for all your
advice; it has been very helpful!
On 2011/8/30 0:06, Kevin Grittner wrote:
> Rural Hunter <ruralhunter(at)gmail(dot)com> wrote:
>> 2011/8/29 23:18, Kevin Grittner:
>>> I also recommend a VACUUM FREEZE ANALYZE on the database unless
>>> most of these rows will be deleted or updated before you run a
>>> billion database transactions. Otherwise you will get a painful
>>> "anti-wraparound" autovacuum on everything, probably at a time
>>> of heavy usage.
>> Hmm... I will try turning autovacuum off, though I haven't seen any
>> resource contention caused by it.
> Well, turning off autovacuum during a bulk load is probably a net
> gain if it's insert-only (i.e., no need to query just-loaded data to
> decide what to do with new rows); but that's not what I was getting
> at. Bulk loading 200 GB of data which is not going to be deleted or
> updated heavily is setting a performance time bomb without a VACUUM
> FREEZE. At some point, perhaps months later, it will be necessary
> to freeze the tuples to prevent data loss, and since this occurs
> based on a threshold of how many transaction IDs have been consumed,
> it is most likely to happen at peak OLTP loads, when it will be the
> biggest problem. A VACUUM FREEZE (and you might as well throw in
> ANALYZE while you're at it) will take care of that up front. As a
> side benefit it will keep SELECT statements from generating heavy
> *write* loads on the first access to tuples, and will perform other
> maintenance which will improve database performance.
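For anyone following the thread: the up-front freeze Kevin describes is a single command run after the load completes (the table name below is a placeholder):

```sql
-- Run once after the bulk load, before opening the database to users.
-- Freezes the loaded tuples and refreshes planner statistics in one pass.
VACUUM FREEZE ANALYZE;

-- Or limit it to the newly loaded table ("big_table" is a placeholder):
VACUUM FREEZE ANALYZE big_table;
```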
> I just count the time for VACUUM FREEZE ANALYZE as part of the bulk
> load time required before letting in users.
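If it helps, you can watch how close each database is to the anti-wraparound threshold (autovacuum_freeze_max_age, 200 million transactions by default) with a quick catalog query:

```sql
-- Age, in transactions, of the oldest unfrozen XID in each database.
-- When this approaches autovacuum_freeze_max_age, an anti-wraparound
-- autovacuum is forced on the affected tables.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```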
>>> Network latency?
>> No, I do the import locally on the db server, so network
>> latency can be excluded.
> Hmm... I don't remember the details, but there was a problem at
> some point where Linux pipe connections could introduce significant
> latency, and you could get much better performance on a TCP
> connection through localhost. It might be worth a try. (Maybe
> someone else will remember the details.)
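A simple way to test this is to time the same load script over both connection types; psql uses a Unix-domain socket when -h is omitted and TCP when given a host (database and script names below are placeholders):

```shell
# Default: connect via the local Unix-domain socket
time psql -d mydb -f bulk_load.sql

# Force a TCP connection through localhost for comparison
time psql -h localhost -p 5432 -d mydb -f bulk_load.sql
```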