
Re: filesystem performance with lots of files

From: David Roussel <pgsql-performance(at)diroussel(dot)xsmail(dot)com>
To: David Lang <dlang(at)invendra(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: filesystem performance with lots of files
Date: 2005-12-20 13:26:00
Lists: pgsql-performance
David Lang wrote:

>   ext3 has an option to make searching directories faster (htree), but 
> enabling it kills performance when you create files. And this doesn't 
> help with large files.
The ReiserFS white paper talks about the data structure he uses to store 
directories (some kind of tree), and he says it's quick to both read and 
write.  Don't forget, if you find ls slow, that could just be ls itself, 
since it's ls, not the fs, that sorts the files into alphabetical order.

 > how long would it take to do a tar-ftp-untar cycle with no smarts

Note that you can do the tarring, zipping, copying and untarring 
concurrently.  I can't remember the exact netcat command-line options, 
but it goes something like this:

# on myserver, start the listener first:
netcat -l -p 12345 | tar xzvf -

# then, on the sending machine:
tar czvf - myfiles/* | netcat myserver 12345

Not only do you gain from doing it all concurrently, but not writing a 
temp file means that disk seeks are reduced too, which matters if you 
have only one spindle.
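For what it's worth, you can try the same streaming idea locally without 
netcat, just piping one tar into another (the paths here are made up for 
illustration):

```shell
# Hypothetical demo paths; substitute your own directories.
mkdir -p /tmp/nc_demo/src /tmp/nc_demo/dst
echo "hello" > /tmp/nc_demo/src/a.txt

# The writer streams a gzipped tar to stdout; the reader unpacks from
# stdin.  No intermediate archive file ever touches the disk, so the
# single spindle isn't seeking between a temp file and the data.
tar czf - -C /tmp/nc_demo src | tar xzf - -C /tmp/nc_demo/dst

cat /tmp/nc_demo/dst/src/a.txt   # prints "hello"
```

Over a network you'd simply replace the second tar's stdin with the 
netcat listener shown above.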

Also consider just copying the files onto a network mount.  It may not 
be as fast as the above, but it will be faster than rsync, which has 
high CPU usage and is thus not a good choice on a LAN.

Hmm, sorry this is not directly postgres anymore...



pgsql-performance by date

Next: From: Tom Lane  Date: 2005-12-20 14:41:30
Subject: Re: High context switches occurring
Previous: From: Nicolas Barbier  Date: 2005-12-20 13:06:15
Subject: Re: Read only transactions - Commit or Rollback

Privacy Policy | About PostgreSQL
Copyright © 1996-2017 The PostgreSQL Global Development Group