I'm working on a project with a data set of approximately 6 million rows
covering about 12,000 different elements; each element has 7 columns of data.
I'm wondering which of these would be faster from a scanning perspective
(SELECT statements with some calculations) for this type of setup:

- one table for all the data
- one table for each data element (12,000 tables)
- one table per subset of elements (e.g. all elements that start with
  "a" in one table)
The data is static once it's in the database; only new records are added on a
regular basis.
I'd like to run quite a few different formulated scans over the longer term,
so having efficient scans is a high priority.
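
As an example of the kind of scan I mean (the predicates and the calculation
here are purely illustrative):

    -- aggregate a derived value for one element,
    -- filtering on several of the data columns
    SELECT element_id, avg(col5 * col6) AS score
      FROM element_data
     WHERE element_id = 'abc'
       AND col1 > 10
       AND col2 < 100
     GROUP BY element_id;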
Can I do anything with indexing to help with performance? I suspect that for
the majority of scans I will need to evaluate an outcome based on 4 or 5 of
the 7 columns of data.
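
For instance, would a multicolumn index along these lines help a scan like
the one above (again, the names are placeholders)?

    CREATE INDEX element_data_scan_idx
        ON element_data (element_id, col1, col2);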
Thanks in advance :-)