Re: Improve Seq scan performance

From: PFC <lists(at)peufeu(dot)com>
To: Lutischán Ferenc <lutischanf(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Improve Seq scan performance
Date: 2008-11-16 14:54:08
Message-ID: op.ukpvoi1lcigqcu@soyouz
Lists: pgsql-performance


> Dear List,
>
> I would like to improve seq scan performance. :-)
>
> I have a table with many columns, but I search on only one of them. That
> column is indexed with a btree using text_pattern_ops. The search
> condition is: r like '%aaa%'
> When I make another table containing only that column's values, the
> search time is better when the data is cached, but worse when the data
> isn't in cache.
>
> My thinking is:
> - With a big table that has many columns, a seq scan reads all the
> columns, not only the one being searched.
> - If we had an index holding the full values of the column, a seq scan
> over that index could perform better (less I/O over smaller data).
>
> Is it possible to create such an index on the table and do a sequential
> scan over the index values?

You can fake this (as a test) by creating a separate table containing just
the column of interest plus the row id (primary key), kept in sync via
triggers, and seq scanning that narrow table instead. Seq scanning the
small table should be fast, since far less data has to be read. Of course,
if you have conditions on several columns it gets more complex.
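A rough sketch of what I mean (table and column names here are made up;
adapt them to your schema):

    -- hypothetical wide table:
    --   CREATE TABLE items (id serial PRIMARY KEY, r text, ...many other cols...);

    -- narrow side table holding only the searched column
    CREATE TABLE items_r (id integer PRIMARY KEY, r text);

    -- keep it in sync with a trigger on the main table
    CREATE FUNCTION items_r_sync() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO items_r (id, r) VALUES (NEW.id, NEW.r);
        ELSIF TG_OP = 'UPDATE' THEN
            UPDATE items_r SET r = NEW.r WHERE id = NEW.id;
        ELSIF TG_OP = 'DELETE' THEN
            DELETE FROM items_r WHERE id = OLD.id;
            RETURN OLD;
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_r_sync_trg
    AFTER INSERT OR UPDATE OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_r_sync();

    -- the search then seq scans only the narrow table:
    SELECT id FROM items_r WHERE r LIKE '%aaa%';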

Note that a btree index cannot optimize a non-anchored pattern like
'%aaa%'. You could use trigrams (the pg_trgm contrib module) instead.
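A minimal sketch, assuming a PostgreSQL version where pg_trgm can
accelerate LIKE with a GIN index (column and index names are illustrative):

    -- pg_trgm must be installed (on newer versions: CREATE EXTENSION pg_trgm;)
    CREATE INDEX items_r_trgm_idx ON items USING gin (r gin_trgm_ops);

    -- a non-anchored pattern can then use the trigram index
    -- instead of a seq scan:
    SELECT id FROM items WHERE r LIKE '%aaa%';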
