
Re: BUG #1830: Non-super-user must be able to copy from a

From: Bernard <bht(at)actrix(dot)gen(dot)nz>
To: Oliver Jowett <oliver(at)opencloud(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org
Subject: Re: BUG #1830: Non-super-user must be able to copy from a
Date: 2005-08-19 00:57:04
Message-ID: veaag1h6abh3plbjiphssocn8a7k5jkadj@4ax.com
Lists: pgsql-bugs, pgsql-general
Oliver and interested list members:

I was referring to the majority of users who want to "bulk" load
tables, not to the majority of users in general, who may or may not
know or care about the performance difference between INSERT and COPY.

This performance difference is the main reason the COPY command
exists, and it is also why bulk loading through the JDBC interface
will never match the performance of COPY with files.

COPY with STDIN or STDOUT is a speciality that the majority of users
would not normally ask for, because they usually think in terms of
files, and rightly so.
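To make the distinction concrete, here is a sketch of the three variants under discussion (the table and file names are hypothetical):

```sql
-- Server-side COPY: the backend itself reads the file, so the file must
-- exist on the server and the command requires superuser privileges.
COPY mytable FROM '/var/lib/pgsql/data.csv';

-- COPY FROM STDIN: the data travels over the client connection, so no
-- server-side file access (and no superuser privilege) is needed.
COPY mytable FROM STDIN;

-- psql's \copy is a client-side convenience that reads a local file and
-- feeds it to COPY ... FROM STDIN behind the scenes.
\copy mytable FROM 'data.csv'
```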

A comparable workaround to STDIN/STDOUT would be to pipe input and
output to and from SQL stored procedures.

What I mean to say is that we want this to be strictly server-side for
best performance, and we don't want the client involved in the raw
processing, which would violate any three-tier client-server
architecture.

In addition, not only will the client and the network carry extra
processing load, but the server load will also increase, because the
server has to service the JDBC interface for I/O.

The whole architectural setup for such "bulk" loading is a mess.

Regards,

Bernard


On Fri, 19 Aug 2005 12:27:01 +1200, you wrote:

>Bernard wrote:
>
>> The majority of JDBC users trying to bulk load tables would not want
>> to send the data through their connection. This connection is designed
>> to send commands and to transfer only as much data as necessary and as
>> little as possible.
>
>I don't understand why this is true at all -- for example, our
>application currently does bulk INSERTs over a JDBC connection, and
>moving to COPY has been an option I looked at in the past. Importing
>lots of data from a remote machine is hardly an uncommon case.
>
>> The need is only created by the limitations of the Postgres COPY
>> command.
>> 
>> I can't see why a workaround should be developed instead of or before
>> fixing the COPY command.
>> 
>> It works in other DB engines.
>
>I guess that other DB engines don't care about unprivileged DB users
>reading any file that the backend can access.
>
>-O
>
>---------------------------(end of broadcast)---------------------------
>TIP 2: Don't 'kill -9' the postmaster


