BUG #16220: FTS queries slow for large table when using low "gin_fuzzy_search_limit"

From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: ade(dot)hey(at)gmail(dot)com
Subject: BUG #16220: FTS queries slow for large table when using low "gin_fuzzy_search_limit"
Date: 2020-01-20 22:33:52
Message-ID: 16220-1a0a4f0cb67cafdc@postgresql.org
Lists: pgsql-bugs

The following bug has been logged on the website:

Bug reference: 16220
Logged by: Adé Heyward
Email address: ade(dot)hey(at)gmail(dot)com
PostgreSQL version: 12.1
Operating system: macOS Catalina
Description:

While testing Postgres full-text search for a website I'm building (one I hope
to be highly scalable), I noticed an issue with the "gin_fuzzy_search_limit"
configuration parameter on search queries. When the set of matching rows is
very large, execution times with a low "gin_fuzzy_search_limit" setting (e.g.
gin_fuzzy_search_limit = 10) are slower than with a high setting (e.g.
gin_fuzzy_search_limit = 10000). I observed this issue on PostgreSQL 12.1.

Below is SQL you can use to reproduce the issue yourselves.

I believe I found the cause, and at the bottom of this report I include a
patch that fixed the issue in my testing, along with an explanation of what I
think is going on. Please excuse the length of this report. I also debated
sending this to pgsql-hackers(at)lists(dot)postgresql(dot)org, because I'm not
sure this would be considered a "bug".

For context, here is an excerpt from the documentation describing the purpose
and meaning of the "gin_fuzzy_search_limit" configuration parameter
(Section 66.5, postgresql.org/docs/current/gin-tips.html):
`...GIN has a configurable soft upper limit on the number of rows returned:
the gin_fuzzy_search_limit configuration parameter. It is set to 0 (meaning
no limit) by default. If a non-zero limit is set, then the returned set is a
subset of the whole result set, chosen at random.

“Soft” means that the actual number of returned results could differ
somewhat from the specified limit, depending on the query and the quality of
the system's random number generator....`

----

-- Here's a table to hold our test data. It includes a "body" column and even
-- a "post_id" column to better simulate a real-world scenario.
-- Unlogged for speedier insertion of test data.
CREATE UNLOGGED TABLE posts (post_id serial, body text, fts_doc tsvector);

-- The following command for generating test data may take a few minutes to run.
-- Pretend the random hashes produced are words. For our test you can query
-- 'c4ca4238a0b923820dcc509a6f75849b', which will almost certainly be one of
-- the random "words" found in the posts.
INSERT INTO posts (body, fts_doc)
SELECT body, to_tsvector(body)
FROM (
    SELECT
        (
            SELECT string_agg(x.word, ' ')
            FROM (
                -- A random "word" out of a set of 100 possible words
                SELECT md5(trunc(random() * 100 + 1)::text) AS word
                FROM generate_series(1, 10) -- For 10-word posts
                -- Ensures the subquery expression is considered volatile and
                -- returns a different result for each row generated by the
                -- outer generate_series
                WHERE series.val = series.val
            ) x
        ) AS body
    FROM generate_series(1, 1000000) AS series(val)
) z;

-- Create full-text search index
CREATE INDEX idx_posts_gin_fts_doc ON posts USING gin (fts_doc);

-- Disable sequential scans to ensure the GIN index's bitmap scan is always used
SET enable_seqscan = OFF;

-- Run these SET and EXPLAIN ANALYZE SELECT commands together to observe the
-- speed of execution when the soft limit is 10.
-- 'c4ca4238a0b923820dcc509a6f75849b' = (SELECT md5('1') AS word)
SET gin_fuzzy_search_limit = 10;
EXPLAIN ANALYZE SELECT COUNT(*) FROM posts
WHERE fts_doc @@ to_tsquery('c4ca4238a0b923820dcc509a6f75849b');

-- Run these SET and EXPLAIN ANALYZE SELECT commands and compare the execution
-- speed to that of the previous pair of commands.
SET gin_fuzzy_search_limit = 10000;
EXPLAIN ANALYZE SELECT COUNT(*) FROM posts
WHERE fts_doc @@ to_tsquery('c4ca4238a0b923820dcc509a6f75849b');

----

One would expect the execution time when limiting the result to around ten
rows (gin_fuzzy_search_limit = 10) to be faster than when limiting it to
around 10000 rows (gin_fuzzy_search_limit = 10000). With the patch at the very
bottom of this report applied, you get results closer to that expectation.
Directly below is my attempt to explain what I think the issue is. I have
simplified some things and omitted some nuances to make it easier to read (and
even then I'm not sure it's easy to make sense of):

First, it's important to note that "gin_fuzzy_search_limit" not only attempts
to limit the returned result set of a text search, but also attempts to
randomize it. At a high level, the randomization works like this: as the scan
iterates through all possible results in a do-while loop, it randomly chooses
to "drop" (basically skip) some of them based on a "dropItem" function called
on each iteration. Items for which "dropItem" returns true are removed from
the result set, and after iterating through all possible results the number of
remaining results should be close to the value of "gin_fuzzy_search_limit".

The "dropItem" function's randomness is partly based on the ratio of
"gin_fuzzy_search_limit" to the amount of predicted total possible results
for a keyword/entry in a search (this ratio represented by
"(double)GinFuzzySearchLimit)/((double)((e)->predictNumberResult)" in that
function's definition, where "e" is the entry). When this ratio is extremely
low, "dropItem" not only increases in its chances of returning true, but it
also increases in its chances of being true successively. Sometimes it will
be true thousands of times in succession before there is finally an
occurrence of dropItem == false so that an item is not dropped and it can be
included in the result set.
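
As a rough illustration of that effect (again not PostgreSQL code, and again
assuming on the order of 100,000 predicted matches for the test data above),
the expected length of a run of consecutive drops follows directly from the
keep probability:

#include <stdio.h>

int
main(void)
{
	double	predicted = 100000.0;		/* assumed predictNumberResult */
	double	limits[] = {10.0, 10000.0};	/* gin_fuzzy_search_limit values */

	for (int i = 0; i < 2; i++)
	{
		/* keep probability p = limit/predicted; a geometric run of drops
		 * has expected length (1 - p) / p before an item survives */
		double	p = limits[i] / predicted;

		printf("limit=%-5.0f keep prob=%.6f  expected consecutive drops=%.0f\n",
			   limits[i], p, (1.0 - p) / p);
	}
	return 0;
}

For limit = 10 that works out to roughly 10,000 consecutive drops on average,
versus about 9 for limit = 10000.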

Since the program doesn't load all items into memory at once, it calls the
"entryLoadMoreItems" function when it needs to get another page of items to
iterate through. The "entryLoadMoreItems" function calls are passed an
"advancePast" variable as an argument. This variable decides what leaf page
in the items/posting tree should more items be retrieved from. Usually when
iterating through all possible results, execution will exit the do-while
loop responsible for iteration in order to perform some important actions
(including the updating of the "advancePast" variable) before returning into
the loop again to iterate over more items. However, when "dropItem" returns
true in succession a great many times, the do-while loop can not be exited
for updating the "advancePast" variable until a non-drop finally occurs.
When this "advancePast" variable is not updated it leads to calls to
"entryLoadMoreItems" repeatedly returning the same items when stuck in the
do-while loop by a high succession of dropped items (because "advancePast"
is never updated to a value after items already iterated through).
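
Here is a toy model of one such stuck episode (not PostgreSQL code; it simply
assumes, per the description above, that every reload keyed on the stale
"advancePast" hands back the same batch of items, with a made-up batch size of
1,000 items and a keep probability of limit/100000):

#include <stdio.h>
#include <stdlib.h>

/* Count how many times the same batch is "reloaded" before any item survives. */
static long
reloads_until_survivor(int batch_size, double keep_prob)
{
	long	reloads = 0;

	for (;;)
	{
		reloads++;					/* "entryLoadMoreItems" returns the same batch */
		for (int i = 0; i < batch_size; i++)
		{
			if ((double) rand() / RAND_MAX < keep_prob)
				return reloads;		/* an item survived: the do-while can exit */
		}
		/* every item was dropped again: reload the very same batch */
	}
}

int
main(void)
{
	const int	trials = 1000;
	const int	batch_size = 1000;			/* items per loaded batch (assumed) */
	double		predicted = 100000.0;
	double		limits[] = {10.0, 10000.0};

	srand(1);
	for (int i = 0; i < 2; i++)
	{
		long	total = 0;

		for (int t = 0; t < trials; t++)
			total += reloads_until_survivor(batch_size, limits[i] / predicted);
		printf("limit=%-5.0f average reloads per stuck episode: %.1f\n",
			   limits[i], (double) total / trials);
	}
	return 0;
}

Under these assumptions a low limit averages around ten reloads of the same
batch per stuck episode, while a high limit almost always gets out after a
single load.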

Along with the issue of returning the same items, there's the issue of how
"entryLoadMoreItems" traverses the posting tree from the root each time it's
called while stuck in the do-while loop. This in particular is the cause of
the poor performance seen for low "gin_fuzzy_search_limit" values. In some
cases the situation is even worse: "advancePast" may point into a page with
relatively few items actually past it, so the loaded batch is exhausted almost
immediately and "entryLoadMoreItems" must be called again. Because
"advancePast" never gets updated, this leads to an even longer succession of
"entryLoadMoreItems" calls (the program loads the same page, iterates over the
same few items, then loads the same page again), with each call requiring a
traversal from the root of the posting tree down to the same leaf page as
before.
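
As a back-of-the-envelope illustration of that cost (assumed numbers, not
measurements: a posting tree three levels deep and a keep probability of
10/100000), each stuck reload then costs a full root-to-leaf descent rather
than a single step to the right, and having only a few items past
"advancePast" inflates the number of reloads:

#include <stdio.h>
#include <math.h>

int
main(void)
{
	const int	height = 3;				/* assumed posting-tree depth */
	const double keep_prob = 10.0 / 100000.0;	/* limit = 10, ~100000 matches */
	int			items_past[] = {1000, 10};	/* items re-offered past advancePast */

	for (int i = 0; i < 2; i++)
	{
		/* expected reloads before any of the re-offered items survives */
		double	reloads = 1.0 / (1.0 - pow(1.0 - keep_prob, items_past[i]));

		printf("items past advancePast=%4d: ~%.0f reloads -> ~%.0f page reads "
			   "descending from the root, vs ~%.0f if each were one step right\n",
			   items_past[i], reloads, reloads * height, reloads);
	}
	return 0;
}

Of course the patch also stops these reloads from re-offering the same items
in the first place, so the reload count itself drops as well; the last column
only illustrates the per-call saving.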

My patch makes it so that, when stuck in the do-while loop after many
successive drops, the program loads genuinely new items by passing the last
dropped item to "entryLoadMoreItems" instead of the "advancePast" variable,
which cannot be appropriately updated while stuck in the loop.
"entryLoadMoreItems" will then load the items ordered right after the last
dropped item. The last dropped item is also the current item ("curItem"), so
"entryLoadMoreItems" can obtain the next page of items by stepping right from
the current page instead of traversing from the root of the posting tree,
which is the most important fix for performance.

To summarize, the reason for the bad performance with
gin_fuzzy_search_limit = 10 is that this case triggers many situations in
which "entryLoadMoreItems" is called repeatedly but neither loads new items
nor loads the items it does get efficiently. Such situations are rare except
when "gin_fuzzy_search_limit" is low and the index has a very large number of
potential results. My patch ensures that in these situations
"entryLoadMoreItems" always loads new items, and loads them efficiently by
following the right link to the next page of the posting tree instead of
searching for the next page from the root.

When testing the patch I saw queries with gin_fuzzy_search_limit = 10 execute
in under 8 milliseconds, whereas before, the same queries on the same dataset
could take as much as 800 ms (two orders of magnitude more). The number of
results also seems to track gin_fuzzy_search_limit more closely (e.g. about
200 results for gin_fuzzy_search_limit = 10 pre-patch vs. 4-15 results
post-patch), though I haven't looked into exactly why that is. All that said,
I could be missing a reason why the patch shouldn't be applied (including any
unintended consequences).

--- /src/backend/access/gin/ginget.c
+++ /src/backend/access/gin/ginget.c
@@ -906,13 +906,20 @@
     }
     else
     {
+        bool        droppedItem = false;
         /* A posting tree */
         do
         {
             /* If we've processed the current batch, load more items */
             while (entry->offset >= entry->nlist)
             {
-                entryLoadMoreItems(ginstate, entry, advancePast, snapshot);
+                /* Load items past curItem if item was dropped and curItem is greater than advancePast */
+                if (droppedItem == true && ginCompareItemPointers(&entry->curItem, &advancePast) > 0)
+                {
+                    entryLoadMoreItems(ginstate, entry, entry->curItem, snapshot);
+                } else {
+                    entryLoadMoreItems(ginstate, entry, advancePast, snapshot);
+                }

                 if (entry->isFinished)
                 {
@@ -923,8 +930,10 @@

             entry->curItem = entry->list[entry->offset++];

+            droppedItem = false;
+
         } while (ginCompareItemPointers(&entry->curItem, &advancePast) <= 0 ||
-                 (entry->reduceResult == true && dropItem(entry)));
+                 (entry->reduceResult == true && (droppedItem = dropItem(entry))));
     }
 }
