On Wed, 2005-02-09 at 11:01, Marek Lewczuk wrote:
> Shashi Gireddy wrote:
> > I recently migrated from MySQL. The database size in MySQL was 1.4GB (it is a static database). It generated a dump file (.sql) of 8GB, and it took 2 days to import the whole thing into Postgres. After all that, the response from Postgres is a disaster: it took 40 seconds to run a select count(logrecno) from sf10001; which returned 197569. And it took forever to display the table. How do I optimize the database so that I can expect faster access to data?
> > Each table has 70 cols x 197569 rows (static data), and I have 40 such tables. Everything is static.
> > System configuration: P4 2.8GHz, 512MB RAM, OS: XP, Postgres version: 8.0
> First of all you should run VACUUM FULL ANALYZE on all the tables
> (http://www.postgresql.org/docs/8.0/interactive/sql-vacuum.html) - this
> should solve the problem. However, you should also think about changing
> the table structure, because PostgreSQL needs different indexes than MySQL.
> A few months ago I had the same problem - but after vacuuming and making
> proper indexes everything is working like a charm. Believe me, you
> can achieve the same speed - it is only a matter of good db structure
> and environment settings.
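For reference, the suggested maintenance and a candidate index might look like this (the table and column names come from the OP's message; the index choice is only an illustration, not a tuned recommendation):

```sql
-- Rebuild planner statistics and reclaim dead space for every table
VACUUM FULL ANALYZE;

-- An illustrative index on the column being counted/filtered
CREATE INDEX sf10001_logrecno_idx ON sf10001 (logrecno);

-- Check what plan the planner now chooses
EXPLAIN SELECT count(logrecno) FROM sf10001;
```

Note that on 8.0 an unqualified count() still has to scan the table because of MVCC visibility checks, so the index mainly helps selective queries rather than whole-table counts.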
This user may also benefit from better normalization, as I remember him
having lots of fields in each table. Note that since PostgreSQL has to
hit the tables AFTER hitting the indexes due to MVCC, having wide,
denormalized tables with indexes is a sub-optimal solution, while
narrower tables are often much faster.
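A sketch of what such a split might look like (all column names here are hypothetical, since all we know is that the tables are ~70 columns wide):

```sql
-- Instead of one 70-column table, keep the frequently queried columns
-- in a narrow "core" table...
CREATE TABLE sf10001_core (
    logrecno integer PRIMARY KEY,
    state    char(2),
    county   char(3)
);

-- ...and move the rarely queried bulk of the columns to a side table,
-- joined back on demand by the key.
CREATE TABLE sf10001_detail (
    logrecno  integer REFERENCES sf10001_core (logrecno),
    detail_01 text,
    detail_02 text
    -- remaining columns elided
);
```

Narrower rows pack more tuples per page, so the heap visits forced by MVCC touch far fewer pages.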
It sounded to me like what the OP was doing was more fittingly described
as "batch processing" and might well be better handled without a
database at all.