I have implemented an FDW module designed to utilize GPU devices to
execute the qualifiers of sequential scans on foreign tables managed
by this module.
It is named PG-Strom, and the following wiki page gives a brief
overview of the module.
In our measurements, it achieves roughly a 10x speedup on sequential
scans with complex qualifiers, although this of course depends heavily
on the type of workload.
The query below counts the number of records with (x, y) located within
a particular range. A regular table 'rtbl' and a foreign table 'ftbl'
contain the same contents, with 10 million records each.
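For reference, a minimal sketch of how such a test table might be
populated (the exact schema and data distribution here are my
assumptions, not taken from the original test case; see the wiki page
for the actual setup):

```
-- Hypothetical setup: the real column types and value ranges may differ.
CREATE TABLE rtbl (id int, x float8, y float8);

INSERT INTO rtbl
    SELECT i, random() * 100.0, random() * 100.0
      FROM generate_series(1, 10000000) AS i;

ANALYZE rtbl;   -- refresh planner statistics after the bulk load
```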
postgres=# SELECT count(*) FROM rtbl WHERE sqrt((x-25.6)^2 + (y-12.8)^2) < 51.2;
Time: 10537.069 ms
postgres=# SELECT count(*) FROM ftbl WHERE sqrt((x-25.6)^2 + (y-12.8)^2) < 51.2;
Time: 744.252 ms
(*) See the "How to use" section of the wiki page to reproduce my test case.
This seems to me a quite good result. However, I wonder whether the
sequential scan on the regular table was tuned appropriately.
Could you give me some hints on tuning sequential scans over large tables?
All I did in the test case was expand shared_buffers to 1024MB, which is
enough to hold the whole of the example tables in memory.
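Concretely, the only tuning applied was the following setting (a sketch
of the relevant postgresql.conf line; a server restart is required for
it to take effect):

```
# postgresql.conf -- enlarge the buffer pool so both example tables
# fit entirely in shared memory, avoiding disk reads during the scan
shared_buffers = 1024MB
```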
KaiGai Kohei <kaigai(at)kaigai(dot)gr(dot)jp>