Writing XML files to Operating System

From: <Atul(dot)Goel(at)globaldatapoint(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Writing XML files to Operating System
Date: 2010-07-13 12:54:54
Message-ID: 418EC9BD4D7125488CCAC5C7C559A24803C27B@MS10.lsc.net
Lists: pgsql-general

Atul Goel wrote:

> We are a data-based company and are migrating from Oracle to
> PostgreSQL. For this purpose I am doing a POC. We have a business
> requirement to send data in XML files to our clients. The XML files
> are around 700 MB each and growing.
>
> I have been able to generate sample XML files and store them (still
> to test with full-size files).
>
> However, I have the following problems:
>
> 1) It took 35 minutes to store 1 GB of data in a TEXT column:
>      create table test_text_nearly_1_GB as
>        select repeat('1', 1024*1024*1023)::text as nearly_1_GB;

That's a fairly inefficient way to generate the data to insert. You'll probably find that much of the time was spent in repeat(), not anywhere interesting to your query.

I'd be more interested in how long COPY ... FROM took, personally.
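For comparison, a quicker way to build a large test table is to spread the data across many rows with generate_series() instead of one giant repeat() call; a rough sketch (table name and sizes are illustrative, not from the thread):

```sql
-- roughly 1 GB spread over a million rows of ~1 kB each,
-- avoiding a single 1 GB repeat() allocation
CREATE TABLE test_many_rows AS
SELECT repeat('1', 1024)::text AS chunk
FROM generate_series(1, 1024 * 1024);
```

Timing a COPY ... FROM of a pre-built file would then isolate the load speed from the generation cost.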

> 2) It took 30 minutes to write the same to the OS using the COPY command:
>      COPY test_text_nearly_1_GB TO '$HOME/test_text_nearly_1_GB.xml';

That's a write rate of about 34 MB/min, or half a megabyte a second. Not pretty.

Where's the load during the COPY? Mostly CPU? Or mostly disk I/O?

Are you writing the output to the same disk the database is on? (Not that it should make this much difference).
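One way to separate database I/O from output I/O is psql's client-side \copy, which streams the data over the connection and writes the file wherever psql is running, rather than on the server's disk. A sketch (paths are illustrative):

```sql
-- server-side: the backend process writes the file on the server's disk
COPY test_text_nearly_1_GB TO '/tmp/test_text_nearly_1_GB.xml';

-- client-side: data streams over the connection and psql writes the file,
-- so the output can land on a different disk or a different machine
\copy test_text_nearly_1_GB to '/mnt/other_disk/test_text_nearly_1_GB.xml'
```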

> 3) Find a solution when the file size becomes > 1 GB.

That's going to be interesting.

Do you really need to store whole XML documents this size in the database, rather than broken up into structures that can be worked with usefully in the database? If so, PostgreSQL might not be your best choice.
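If the 700 MB document is really a sequence of records, one approach (table and column names here are illustrative, not from the thread) is to store one record per row and emit the XML row by row, so no single value ever approaches PostgreSQL's 1 GB field limit:

```sql
-- one XML fragment per row instead of one 700 MB document
CREATE TABLE client_records (
    id      integer PRIMARY KEY,
    payload xml
);

-- stream the fragments out row by row; COPY never has to build
-- the whole document as a single in-memory value
COPY (SELECT payload FROM client_records ORDER BY id)
TO '/tmp/records.xml';
```

Any wrapper elements (document header and footer) could then be concatenated outside the database.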

--

Craig Ringer

Hi All,

I am posting this request again with the correct subject.

We are a data-based company and are migrating from Oracle to PostgreSQL. For this purpose I am doing a POC. We have a business requirement to send data in XML files to our clients. The XML files are around 700 MB each and growing.

I have been able to generate sample XML files and store them (still to test with full-size files).

However, I have the following problems:

1) It took 35 minutes to store 1 GB of data in a TEXT column:
     create table test_text_nearly_1_GB as
       select repeat('1', 1024*1024*1023)::text as nearly_1_GB;

2) It took 30 minutes to write the same to the OS using the COPY command:
     COPY test_text_nearly_1_GB TO '$HOME/test_text_nearly_1_GB.xml';

However, when I tested writing a 3 GB table with 150,000 rows, it wrote to the OS in 20 minutes.

I am looking for a solution to:

1) generate and write a 1 GB file to the OS in under 10 minutes;

2) write the results of a query to a file on the OS;

3) handle files larger than 1 GB.
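Regarding request 2 above, COPY accepts an arbitrary query, so a query result can be written directly to a file; a minimal sketch (the query, names, and path are illustrative):

```sql
-- write each row of the query as one XML fragment per output line
COPY (SELECT xmlelement(name item, some_column)
      FROM some_table
      ORDER BY some_column)
TO '/tmp/query_result.xml';
```

From psql, the client-side equivalent is \copy (select ...) to '/path/to/file'.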

Please help.

Atul Goel
SENIOR DEVELOPER

Global DataPoint
Middlesex House, 34-42 Cleveland Street
London W1T 4LB, UK
T: +44 (0)20 7079 4827
M: +44 (0)7846765098
www.globaldatapoint.com<http://www.globaldatapoint.com/>

