
pg_dump for table with bytea takes a long time

From: "Sailer, Denis (YBUSA-CDR)" <Denis(dot)Sailer(at)Yellowbook(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: pg_dump for table with bytea takes a long time
Date: 2005-08-02 18:57:50
Lists: pgsql-performance
Dumping a database that contains a table with a bytea column takes
approximately 25 hours and 45 minutes.  The database has 26 tables; the
other 25 take less than 5 minutes to dump, so almost all of the time is
spent on the bytea table.


prd1=# \d ybnet.ebook_master;
      Table "ybnet.ebook_master"
    Column    |  Type   | Modifiers
--------------+---------+-----------
 region_key   | integer | not null
 book_key     | integer | not null
 pub_sequence | integer | not null
 section_code | integer | not null
 pagenbr      | integer | not null
 pdffile      | bytea   |
Indexes:
    "ebook_master_pkey" PRIMARY KEY, btree (book_key, pub_sequence,
section_code, pagenbr, region_key)
Foreign-key constraints:
    "FK1_book_year" FOREIGN KEY (book_key, pub_sequence, region_key)
REFERENCES ybnet.book_year(book_key, pub_sequence, region_key)
    "FK1_ebook_section" FOREIGN KEY (section_code) REFERENCES
Tablespace: "ebook"

The tablespace ebook is 65504295 KB (roughly 62 GB) in size, and the
ebook_master table has 61 one-gigabyte segment files associated with it.


The command to dump the database is:


pg_dump --file=$DUMP_FILE --format=c --data-only --verbose
--host=ybcdrdbp01 $DATABASE
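

One variation I have been meaning to time (if I read the pg_dump options
correctly; --compress/-Z and --table are my assumptions here, and 8.0's
--table may need the schema selected separately) is a dump of just that
table with compression turned off, since the PDF data in the bytea
column is presumably already compressed and the custom format compresses
everything by default:

pg_dump --table=ebook_master --format=c --compress=0 --data-only \
        --verbose --host=ybcdrdbp01 --file=ebook_master.dump $DATABASE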


I also perform a hot backup of this database using pg_start_backup(),
tar, and pg_stop_backup().  It takes only 20 minutes to create a tarball
of the entire 62 GB.  I like the speed of this method, but it does not
allow me to restore one table at a time.
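

In outline, that hot backup amounts to the following (the backup label
and paths here are just placeholders):

psql --host=ybcdrdbp01 -c "SELECT pg_start_backup('nightly');" $DATABASE
tar -cf /backup/pgdata.tar $PGDATA    # about 20 minutes for all 62 GB
psql --host=ybcdrdbp01 -c "SELECT pg_stop_backup();" $DATABASE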


The version of postgres is PostgreSQL 8.0.0 on i686-pc-linux-gnu,
compiled by GCC gcc (GCC) 3.2.2


The machine has four 3.00 GHz Xeon processors with hyper-threading
enabled and 4 GB of memory.  Postgres uses two file systems on an EMC
SAN disk array: a 2 GB one for the log files and a 500 GB one for the
data and indexes.  All output files for the backup are placed on the
500 GB volume group and then backed up to an external storage manager.


Portions of the config file are:


shared_buffers = 16384
work_mem = 8192
maintenance_work_mem = 16384

max_fsm_pages = 512000
max_fsm_relations = 1000
fsync = true

# - Checkpoints -
checkpoint_segments = 20

# - Planner Cost Constants -
effective_cache_size = 262144
random_page_cost = 3



I am looking for ideas for making the backup of the above table much
faster.
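
For reference, the single-table restore that makes the pg_dump route
worth keeping would, if I understand pg_restore correctly, look
something like:

pg_restore --table=ebook_master --data-only --verbose \
           --host=ybcdrdbp01 --dbname=$DATABASE $DUMP_FILE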
