Re: COPY TO and VACUUM

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Roberto Grandi <roberto(dot)grandi(at)trovaprezzi(dot)it>
Cc: Kevin Grittner <kgrittn(at)ymail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: COPY TO and VACUUM
Date: 2013-09-05 18:14:26
Message-ID: CAMkU=1ygDqcwQR8rn7PAymmRVhnXK8mWFQ_58_X+FCFP4Kqk3g@mail.gmail.com
Lists: pgsql-performance

On Thu, Sep 5, 2013 at 9:05 AM, Roberto Grandi
<roberto(dot)grandi(at)trovaprezzi(dot)it> wrote:
> Hi Jeff,
>
> the problem is that when continuously uploading vendor listings to our "big" table, autovacuum is not able to free space as quickly as we would like.

It might not be able to free it (to be reused) as fast as you need it
to, but it should be freeing it eventually.

> Secondly, if we launch a VACUUM after each "upload" we collide with other uploads that are running in parallel.

I wouldn't do a manual vacuum after *each* upload. Doing one after
every Nth upload, where N is estimated to make up about 1/5 of the
table, should be good. You are probably IO-limited, so you probably
don't gain much by running these uploads in parallel; I would try to
avoid that. But in any case, there shouldn't be a collision between a
manual vacuum and a concurrent upload. There would be one between two
manual vacuums, but you could code around that by explicitly locking
the table in the correct mode with NOWAIT or a timeout, and skipping
the vacuum if it can't get the lock.
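A minimal sketch of the "vacuum every Nth upload, skip if another maintenance run is active" idea. This uses a pg_try_advisory_lock rather than LOCK TABLE, since VACUUM cannot run inside the transaction block a LOCK TABLE would require; the table name `listings`, the lock key, and the cursor wiring are all assumptions, not anything from the thread:

```python
def uploads_between_vacuums(table_rows, rows_per_upload, fraction=0.2):
    # N uploads should cover roughly 1/5 of the table, per the advice above.
    return max(1, round(table_rows * fraction / rows_per_upload))

def maybe_vacuum(cursor, upload_count, n, lock_key=829144):
    # The connection must be in autocommit mode: VACUUM cannot run inside a
    # transaction block. pg_try_advisory_lock returns immediately instead of
    # waiting, so two maintenance runs never block each other.
    if upload_count % n != 0:
        return False
    cursor.execute("SELECT pg_try_advisory_lock(%s)", (lock_key,))
    if not cursor.fetchone()[0]:
        return False  # another session is already vacuuming; skip this round
    try:
        cursor.execute("VACUUM listings")  # hypothetical table name
    finally:
        cursor.execute("SELECT pg_advisory_unlock(%s)", (lock_key,))
    return True
```

With, say, a million-row table and 40,000-row uploads, `uploads_between_vacuums` suggests vacuuming every 5th upload; the lock key just has to be any integer the maintenance jobs agree on.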

>
> Is it possible, from your point of view, to work with isolation levels or table partitioning to minimize table-space growth?

Partitioning by vendor might work well for that purpose.
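A rough sketch of what vendor partitioning could look like. In PostgreSQL of this era that means inheritance-based partitioning (declarative partitioning only arrived in version 10); the table, column, and file names here are hypothetical:

```sql
-- Parent table plus one child per vendor, constrained by a CHECK.
CREATE TABLE listings (vendor_id int NOT NULL, sku text, price numeric);

CREATE TABLE listings_vendor_1 (
    CHECK (vendor_id = 1)
) INHERITS (listings);

-- Reloading one vendor then becomes TRUNCATE + COPY on its child table,
-- which returns the space immediately instead of waiting for vacuum:
TRUNCATE listings_vendor_1;
COPY listings_vendor_1 FROM '/tmp/vendor_1.csv' WITH (FORMAT csv);
```

The point of the design is that TRUNCATE reclaims disk space at once and touches only one vendor's child table, so uploads for different vendors no longer compete over dead rows in a single big table.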

Cheers,

Jeff
