Running Postgres 7.1.3 on RedHat 7.2, kernel 2.4.7 rel. 10.
My question is about table design strategies ...
I have imported a table, broken up in such a fashion that
there are now 8 tables in Postgres. For example: table_north,
table_south, table_east and table_west originally come from
another source on another database, where they were a single
table called 'table_direction'.
From there, the tables also span years, so I have tables called
'table_north_2000' and 'table_north_2001', 'table_south_2000'
and 'table_south_2001', and so on ...
Now ... I was thinking that, since I have all 8 parts, I'd like to:
* create indices on the similarly named columns in each table
* create a view that combines all 8 tables into ONE table again
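The two steps above could look something like this. This is only a sketch: `id` is a hypothetical column name, I'm assuming all eight tables share an identical layout, and you'd want to check that your Postgres version accepts UNION ALL in a view definition:

```sql
-- Index the column(s) queried most often, once per table
-- ('id' is a made-up example column):
CREATE INDEX table_north_2000_id_idx ON table_north_2000 (id);
CREATE INDEX table_north_2001_id_idx ON table_north_2001 (id);
-- ... repeat for the south/east/west tables ...

-- One view that makes the eight pieces look like the original table.
-- UNION ALL (not UNION) skips the duplicate-eliminating sort, which
-- matters on tables this size:
CREATE VIEW table_direction AS
    SELECT * FROM table_north_2000
    UNION ALL SELECT * FROM table_north_2001
    UNION ALL SELECT * FROM table_south_2000
    UNION ALL SELECT * FROM table_south_2001
    UNION ALL SELECT * FROM table_east_2000
    UNION ALL SELECT * FROM table_east_2001
    UNION ALL SELECT * FROM table_west_2000
    UNION ALL SELECT * FROM table_west_2001;
```

Queries against 'table_direction' then behave as if it were the original single table, while each underlying piece can still be reloaded on its own.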
I don't want to merge all of the tables back into ONE physical table,
because every month or so I'll have to update and import
a fresh copy of the 2001 year tables ... and because they are
pretty big, reloading everything at once is totally not an option. Of course,
I'll have to start pulling 2002 data soon ...
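Keeping the years in separate tables makes that monthly refresh cheap: only the current-year slices get reloaded, and the older tables (and any view over them) are untouched. A sketch, with a made-up dump-file path -- DELETE rather than TRUNCATE keeps the reload inside one transaction on older Postgres:

```sql
BEGIN;
DELETE FROM table_north_2001;                       -- drop the stale rows
COPY table_north_2001 FROM '/tmp/north_2001.copy';  -- hypothetical file path
COMMIT;
-- ... same for table_south_2001, table_east_2001, table_west_2001 ...
-- When 2002 data arrives: create table_north_2002 etc. with the same
-- layout, and recreate the view to include the new tables.
```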
I'm looking for creative suggestions. Otherwise, I think I'll have to
go the route I stated above ... it just seems like it's going to be ...
PS: Has anyone had a chance to test a Data Model / database
structure modeling tool (for creating pretty pictures and relational
info / documentation)?
From: Johnson, Shaunn
Sent: Monday, March 25, 2002 4:15 PM
Subject: Re: [GENERAL] file size issue?
--I think you've answered at least half of my question:
--I'd like to figure out whether Postgres reaches a point where
it will no longer index or vacuum a table based on its size (your answer
tells me 'No' - it will continue until it is done, splitting each
table into 1 Gig segments).
--And if THAT is true, then why am I getting failures when
I'm vacuuming or indexing a table just after it reaches 2 Gig?
--And if it's an OS (or any other) problem, how can I factor that in?
> Has anyone seen if it is a problem with the OS or with the way
> Postgres handles large files (or, if I should compile it again
> with some new options).
What do you mean by "Postgres handles large files"? The filesize
problem isn't related to the size of your table, because Postgres
splits table files into 1 Gig segments.
If it's an output problem, you could see something, but you said you ...
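The 1 Gig splitting means no single table file ever hits an OS filesize limit; a big table is just more segment files on disk (named like `<oid>`, `<oid>.1`, `<oid>.2` under the data directory). A quick sketch of the arithmetic -- the 2.4 GB table size here is an assumed example, while 1 GB is Postgres's default segment size:

```shell
table_bytes=$((2458 * 1024 * 1024))   # assumed table size: ~2.4 GB
seg_bytes=$((1024 * 1024 * 1024))     # Postgres segment size: 1 GB
# round up: a partial final gigabyte still needs its own segment file
segments=$(( (table_bytes + seg_bytes - 1) / seg_bytes ))
echo "segment files: $segments"       # 3 files: <oid>, <oid>.1, <oid>.2
```

So a 2 Gig failure during vacuum or indexing points at something other than the table's heap file itself, e.g. a single large output or temporary file hitting the OS/libc 2 GB limit.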