Re: COPY fast parse patch

From: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
To: "Andrew Dunstan" <andrew(at)dunslane(dot)net>, neilc(at)samurai(dot)com
Cc: agoldshuv(at)greenplum(dot)com, pgsql-patches(at)postgresql(dot)org
Subject: Re: COPY fast parse patch
Date: 2005-06-02 03:56:54
Message-ID: BEC3D196.6C65%llonergan@greenplum.com
Lists: pgsql-patches

Andrew,

> I will be the first to admit that there are probably some very good
> possibilities for optimisation of this code. My impression though has been
> that in almost all cases it's fast enough anyway. I know that on some very
> modest hardware I have managed to load a 6m row TPC line-items table in just
> a few minutes. Before we start getting too hung up, I'd be interested to
> know just how much data people want to load and how fast they want it to be.
> If people have massive data loads that take hours, days or weeks then it's
> obviously worth improving if we can. I'm curious to know what size datasets
> people are really handling this way.

Files in the tens of gigabytes are common in data warehousing. The issue is
usually "can we load our data within the time allotted for the batch
window?", typically a matter of an hour or two.

Assuming the TPC lineitem table is 140 bytes/row, 6M rows in 3 minutes works
out to about 4.7 MB/s. Loading a 10GB file at that rate takes roughly
two-thirds of an hour, and restoring a 300GB database would take about 18
hours. Maintenance operations become impractical after a few hours; 18 is a
non-starter.
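For reference, here is the back-of-envelope arithmetic behind those numbers
as a small Python sketch; the 140 bytes/row width and the 3-minute load time
are the assumptions stated above:

    # Back-of-envelope COPY throughput estimate (assumed figures from above)
    row_bytes = 140        # assumed TPC lineitem row width in bytes
    rows = 6000000         # rows loaded
    seconds = 3 * 60       # assumed load time of 3 minutes

    mb_per_sec = rows * row_bytes / seconds / 1e6   # ~4.7 MB/s

    def load_hours(gigabytes):
        """Hours to load a file of the given size at the rate above."""
        return gigabytes * 1e3 / mb_per_sec / 3600

    print("%.1f MB/s" % mb_per_sec)       # ~4.7 MB/s
    print("%.2f h" % load_hours(10))      # ~0.6 h for a 10 GB file
    print("%.1f h" % load_hours(300))     # ~18 h for a 300 GB database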

In practice, we're usually replacing an Oracle system with PostgreSQL, and
the load speed difference between the two is currently embarrassing and
makes the work impractical.

- Luke
