Thank you very much for the answers to my preceding question. I have obtained a
plain CSV file from MySQL and I have loaded my PostgreSQL table with
this file using the COPY command.
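For reference, the load was done with a command along these lines (the file path and the CSV option shown here are only placeholders, not the exact command):

COPY banche FROM '/path/to/banche.csv' WITH CSV;  -- path is a placeholder
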
I have another question. Now I have a table in PostgreSQL with about
35000 records. The table has the following fields (sorry, the names are in Italian):
CREATE TABLE banche (
    abi       char(5)  NOT NULL,
    cab       char(5)  NOT NULL,
    banca     char(80) NOT NULL,
    filiale   char(60) NOT NULL,
    indirizzo char(80) NOT NULL,
    citta     char(40) NOT NULL,
    cap       char(16) NOT NULL,
    PRIMARY KEY (abi, cab)
);

There is also an index on the field 'banca'.
This table contains the list of all Italian banks.
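The index on 'banca' was created with a statement like this (the index name is only an example):

CREATE INDEX banche_banca_idx ON banche (banca);  -- index name is illustrative
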
Note that I also have the same table in MySQL, because my intention is to run and better understand some SELECT benchmarks on both databases.
On PostgreSQL I have tried:
SELECT * FROM banche ORDER BY banca LIMIT 10 OFFSET 0;
Time: 10,000 ms
Then I have tried:
SELECT * FROM banche ORDER BY banca LIMIT 10 OFFSET 34000;
Time: 2433,000 ms
Why do I get such a big timing? I got similar timings with MySQL as well. I think (or rather, I suppose) that in this situation the database has to do a lot of filtering and seeking to reach the requested offset. Is my supposition correct?
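I imagine the query plan would confirm this; something like the following should show how many rows are scanned and sorted before the offset is reached (just a sketch of how I would check it):

EXPLAIN ANALYZE
SELECT * FROM banche ORDER BY banca LIMIT 10 OFFSET 34000;
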
My final goal is to create a graphical Java application that accesses the database through JDBC. I would like, for example, to use a JTable to show a database table in tabular form.
With such long timings I can't get good performance, especially when I am near the bottom of the table.
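For example, instead of a big OFFSET I could remember the last 'banca' value already shown and ask only for the rows that follow it (just a sketch; it assumes the 'banca' values are unique, which may not be true):

SELECT * FROM banche
WHERE banca > 'last banca value shown'  -- placeholder for the last value already displayed
ORDER BY banca
LIMIT 10;
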
What do you think? Is my approach correct?