Re: VLDB Features

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Neil Conway <neilc(at)samurai(dot)com>, Hannu Krosing <hannu(at)skype(dot)net>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: VLDB Features
Date: 2007-12-15 09:14:16
Message-ID: 1197710056.12912.32.camel@ebony.site
Lists: pgsql-hackers

On Fri, 2007-12-14 at 18:22 -0500, Tom Lane wrote:
> Neil Conway <neilc(at)samurai(dot)com> writes:
> > By modifying COPY: COPY IGNORE ERRORS or some such would instruct COPY
> > to drop (and log) rows that contain malformed data. That is, rows with
> > too many or too few columns, rows that result in constraint violations,
> > and rows containing columns where the data type's input function raises
> > an error. The last case is the only thing that would be a bit tricky to
> > implement, I think: you could use PG_TRY() around the InputFunctionCall,
> > but I guess you'd need a subtransaction to ensure that you reset your
> > state correctly after catching an error.
>
> Yeah. It's the subtransaction per row that's daunting --- not only the
> cycles spent for that, but the ensuing limitation to 4G rows imported
> per COPY.

I'd suggest doing everything at block level:
- wrap each new block of data in a subtransaction
- apply data to the table block by block (can still work with FSM)
- apply indexes in bulk for each block, unique ones first

That then gives you a limit of more than 500 trillion rows, which should
be enough for anyone.
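
For illustration, a rough sketch of that batching loop; read_next_row(), insert_row() and ROWS_PER_BLOCK are hypothetical placeholders, and resource-owner/memory-context bookkeeping is omitted. The point is that one subtransaction ID is consumed per block rather than per row, which is what raises the ceiling well beyond 4G rows:

/*
 * Sketch only: one subtransaction per block of rows rather than per row.
 */
#define ROWS_PER_BLOCK 100      /* hypothetical batch size */

static void
copy_in_blocks(void)
{
    bool    done = false;

    while (!done)
    {
        int     nrows = 0;

        BeginInternalSubTransaction(NULL);

        PG_TRY();
        {
            while (nrows < ROWS_PER_BLOCK && !done)
            {
                done = !read_next_row();        /* hypothetical */
                if (!done)
                {
                    insert_row();               /* hypothetical */
                    nrows++;
                }
            }
            ReleaseCurrentSubTransaction();     /* commit the block */
        }
        PG_CATCH();
        {
            FlushErrorState();
            RollbackAndReleaseCurrentSubTransaction();
            /* retry this block row by row to isolate the bad tuple (not shown) */
        }
        PG_END_TRY();
    }
}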

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
