Re: COPY FROM performance improvements

From: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
To: "Mark Wong" <markw(at)osdl(dot)org>, "Andrew Dunstan" <andrew(at)dunslane(dot)net>
Cc: "Alon Goldshuv" <agoldshuv(at)greenplum(dot)com>, pgsql-patches(at)postgresql(dot)org
Subject: Re: COPY FROM performance improvements
Date: 2005-07-20 00:39:33
Message-ID: BF02EB55.8E9A%llonergan@greenplum.com
Lists: pgsql-patches pgsql-performance

Good points all around. Another element in the performance expectations is the
ratio of CPU speed to I/O subsystem speed, as Alon hinted earlier.

This patch substantially (about 5x) improves the efficiency of parsing in the
COPY path. On a 3GHz P4 desktop with a commodity disk drive, parsing
represents 8 of a total of 30 seconds of processing time, so reducing the
parsing time from 8 seconds to 1.5 seconds cuts the overall COPY time from
30 seconds to 23.5 seconds, a speedup of about 20%.
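The arithmetic above is just Amdahl's law applied to the parsing share of the run. A minimal sketch (my own back-of-envelope check, not part of the patch; treating "500%" as a 5x parsing speedup, which rounds slightly differently than the 1.5s/23.5s quoted):

```python
def copy_time_after_speedup(total_s, parse_s, parse_speedup):
    """Amdahl-style estimate: only the parsing fraction gets faster."""
    return (total_s - parse_s) + parse_s / parse_speedup

old_total = 30.0   # seconds, the 3GHz P4 figure quoted above
parse = 8.0        # seconds of that spent parsing

new_total = copy_time_after_speedup(old_total, parse, 5.0)
reduction = (old_total - new_total) / old_total

# prints: new total: 23.6s, time saved: 21%
print(f"new total: {new_total:.1f}s, time saved: {reduction:.0%}")
```

The same function makes the hardware-dependence explicit: the smaller the parsing share of the total, the less the patch can help, which is the point of the paragraphs that follow.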

On a dual 2.2GHz Opteron machine with a 6-disk SCSI RAID subsystem capable
of 240MB/s sequential reads and writes, the ratios change and we see between
a 35% and 95% increase in COPY performance, with the CPU as the bottleneck.
The disk runs at only about 90MB/s during this period.

I'd expect that as your CPUs slow down relative to your I/O speed (and
Itanium/Itanium2 CPUs are quite slow), you should see an increased effect
from the parsing improvements.

One good way to validate the effect is to watch the I/O bandwidth using
"vmstat 1" (on Linux) while the load is progressing. If the bandwidth is the
same with the unpatched source and the patched source, you are I/O limited
and should see no benefit from the patch.

If you check your underlying sequential write speed, you will find COPY
bottlenecked at roughly half that rate, because the data is also written
through the WAL.
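A rough model of that halving (my own illustration, using the Opteron figures from above): COPY writes each loaded byte twice, once to the WAL and once to the table heap, so the effective table-fill rate is capped near half the raw sequential write speed.

```python
# Rough model of the WAL write-through effect described above (my own
# illustration, not a measurement): each byte is written twice, so the
# effective COPY rate is capped near half the raw sequential write speed.
seq_write_mb_s = 240.0   # raw sequential write speed of the Opteron box above
write_copies = 2         # one write to the WAL + one to the table heap

copy_ceiling = seq_write_mb_s / write_copies   # 120 MB/s

# The box above was observed at ~90 MB/s during COPY, i.e. below this
# ceiling -- consistent with the CPU, not the disk, being the bottleneck.
observed_mb_s = 90.0
headroom = copy_ceiling - observed_mb_s

print(f"COPY write ceiling: {copy_ceiling:.0f} MB/s, headroom: {headroom:.0f} MB/s")
```

This is only a ceiling estimate; checkpoint behavior and WAL tuning shift the real number, but it explains why the disk sits at 90MB/s while the CPU saturates.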

- Luke

On 7/19/05 3:51 PM, "Mark Wong" <markw(at)osdl(dot)org> wrote:

> Whoopsies, yeah good point about the PRIMARY KEY. I'll fix that.
>
> Mark
>
> On Tue, 19 Jul 2005 18:17:52 -0400
> Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
>
>> Mark,
>>
>> You should definitely not be doing this sort of thing, I believe:
>>
>> CREATE TABLE orders (
>> o_orderkey INTEGER,
>> o_custkey INTEGER,
>> o_orderstatus CHAR(1),
>> o_totalprice REAL,
>> o_orderDATE DATE,
>> o_orderpriority CHAR(15),
>> o_clerk CHAR(15),
>> o_shippriority INTEGER,
>> o_comment VARCHAR(79),
>> PRIMARY KEY (o_orderkey))
>>
>> Create the table with no constraints, load the data, then set up primary keys
>> and whatever other constraints you want using ALTER TABLE. The last time I did a
>> load like this (albeit 2 orders of magnitude smaller) I saw a 50% speedup
>> from deferring constraint creation.
>>
>>
>> cheers
>>
>> andrew
>>
>>
>>
>> Mark Wong wrote:
>>
>>> Hi Alon,
>>>
>>> Yeah, that helps. I just need to break up my scripts a little to just
>>> load the data and not build indexes.
>>>
>>> Is the following information good enough to give a guess about the data
>>> I'm loading, if you don't mind? ;) Here's a link to my script to create
>>> tables:
>>> http://developer.osdl.org/markw/mt/getfile.py?id=eaf16b7831588729780645b2bb44f7f23437e432&path=scripts/pgsql/create_tables.sh.in
>>>
>>> File sizes:
>>> -rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 customer.tbl
>>> -rw-r--r-- 1 markw 50 74G Jul 8 15:03 lineitem.tbl
>>> -rw-r--r-- 1 markw 50 2.1K Jul 8 15:03 nation.tbl
>>> -rw-r--r-- 1 markw 50 17G Jul 8 15:03 orders.tbl
>>> -rw-r--r-- 1 markw 50 2.3G Jul 8 15:03 part.tbl
>>> -rw-r--r-- 1 markw 50 12G Jul 8 15:03 partsupp.tbl
>>> -rw-r--r-- 1 markw 50 391 Jul 8 15:03 region.tbl
>>> -rw-r--r-- 1 markw 50 136M Jul 8 15:03 supplier.tbl
>>>
>>> Number of rows:
>>> # wc -l *.tbl
>>> 15000000 customer.tbl
>>> 600037902 lineitem.tbl
>>> 25 nation.tbl
>>> 150000000 orders.tbl
>>> 20000000 part.tbl
>>> 80000000 partsupp.tbl
>>> 5 region.tbl
>>> 1000000 supplier.tbl
>>>
>>> Thanks,
>>> Mark
>>>
>>> On Tue, 19 Jul 2005 14:05:56 -0700
>>> "Alon Goldshuv" <agoldshuv(at)greenplum(dot)com> wrote:
>>>
>>>
>>>
>>>> Hi Mark,
>>>>
>>>> I improved the data *parsing* capabilities of COPY, and didn't touch the
>>>> data conversion or data insertion parts of the code. The parsing
>>>> improvement will vary greatly depending on the ratio of parsing -to-
>>>> converting and inserting.
>>>>
>>>> Therefore, the speed increase really depends on the nature of your data:
>>>>
>>>> 100GB file with
>>>> long data rows (lots of parsing)
>>>> Small number of columns (small number of attr conversions per row)
>>>> fewer rows (fewer tuple insertions)
>>>>
>>>> Will show the best performance improvements.
>>>>
>>>> However, the same 100GB file with
>>>> Short data rows (minimal parsing)
>>>> large number of columns (large number of attr conversions per row)
>>>> AND/OR
>>>> more rows (more tuple insertions)
>>>>
>>>> Will show improvements but not as significant.
>>>> In general I'd estimate a 40%-95% improvement in load speed for the 1st case
>>>> and 10%-40% for the 2nd. But that also depends on the hardware, disk speed,
>>>> etc... This is for TEXT format. As for CSV, it may be faster, but not by as
>>>> much as I specified here. BINARY will stay the same as before.
>>>>
>>>> HTH
>>>> Alon.
>>>>
>>>>
>>>> On 7/19/05 12:54 PM, "Mark Wong" <markw(at)osdl(dot)org> wrote:
>>>>
>>>>
>>>>
>>>>> On Thu, 14 Jul 2005 17:22:18 -0700
>>>>> "Alon Goldshuv" <agoldshuv(at)greenplum(dot)com> wrote:
>>>>>
>>>>>
>>>>>
>>>>>> I revisited my patch and removed the code duplications that were there,
>>>>>> and
>>>>>> added support for CSV with buffered input, so CSV now runs faster too
>>>>>> (although it is not as optimized as the TEXT format parsing). So now
>>>>>> TEXT,CSV and BINARY are all parsed in CopyFrom(), like in the original
>>>>>> file.
>>>>>>
>>>>>>
>>>>> Hi Alon,
>>>>>
>>>>> I'm curious, what kind of system are you testing this on? I'm trying to
>>>>> load 100GB of data in our dbt3 workload on a 4-way itanium2. I'm
>>>>> interested in the results you would expect.
>>>>>
>>>>> Mark
>>>>>
>>>>>
>>>>>
>>>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: explain analyze is your friend
>
