Thanks for all the great responses on this (doing select * from large tables
and hanging psql).
Here is what I have:
--- psql uses libpq, which tries to load the entire result set into memory before
displaying anything.
--- use cursors to FETCH a selected number of rows at a time and then spool those
(see the sketch below).
--- use "select * from big_table limit 1000 offset 0;" for simple queries.
Sometimes you want to do a simple select * from mytable just to get a look
at the data, but you don't care which rows you get.
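For anyone else who hits this, here is a minimal sketch of the cursor approach
(the table and cursor names are just placeholders); note the cursor has to live
inside a transaction:

    begin;
    declare big_cur cursor for select * from big_table;
    -- pull back 1000 rows at a time instead of the whole table
    fetch 1000 from big_cur;
    -- repeat the fetch until it returns no rows
    fetch 1000 from big_cur;
    close big_cur;
    commit;

That way psql (and libpq) only ever holds one batch in memory at a time.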
I am about to go take my multiple broken-up tables and dump them back into
one table (and then shut off all those bash shell scripts I wrote which
checked the system date and created new monthly tables if needed... good
scripting practice but a waste of time).
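For the record, the consolidation itself is nothing fancy; roughly what I plan to
run, with made-up table names (assuming the monthly tables all share the master
table's schema):

    -- copy each monthly table back into the master table
    insert into mytable select * from mytable_2000_05;
    insert into mytable select * from mytable_2000_06;
    -- ...one insert per monthly table, then drop the old ones
    drop table mytable_2000_05;
    drop table mytable_2000_06;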
However, there is still something bugging me. Even though many people
related stories of 7.5 Gb+ DBs, I still can't make that little voice in me
quit saying "breaking things into smaller chunks means faster work."
There must exist a relationship between file sizes and DB performance.
This relationship can be broken into 3 parts:
1. How the hardware is arranged to pull in large files (fragmentation, etc.)
2. How the underlying OS deals with large files
3. How Postgres deals with(or is affected by) large files.
I imagine that the first two are the dominant factors in the relationship,
but does anyone have any experience with how small/removed a factor the
Postgres internals are? Are there any internal coding concerns that have
had to deal with this (like the one mentioned about files being split at a
certain size)?