From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Markus Schiltknecht <markus(at)bluegap(dot)ch>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Join with an array
Date: 2006-02-23 16:49:21
Message-ID: 22996.1140713361@sss.pgh.pa.us
Lists: pgsql-hackers
Markus Schiltknecht <markus(at)bluegap(dot)ch> writes:
> I'm trying to speed up a query with a lookup table. This lookup table
> gets very big but should still fit into memory. It does not change very
> often. Given these facts I decided to use an array, as follows:
> CREATE TABLE lookup_table (id INT PRIMARY KEY, items INT[] NOT NULL);
> I know this is not considered good database design, but it saves a lot
> of overhead for tuple visibility compared to a 1:1 table.
> To fetch an item via the lookup_table I tried to use the following
> query:
> SELECT i.id, i.title FROM item i
> JOIN lookup_table lut ON i.id = ANY(lut.items)
> WHERE lut.id = $LOOKUP_ID;
> Unfortunately that one seems to always use a sequential scan over items.
FWIW, "indexcol = ANY(array)" searches are indexable in CVS tip.
There's no hope in any existing release though :-(
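For readers following the thread later: a hedged sketch of what the "indexcol = ANY(array)" form looks like once that support exists. The table and column names come from the quoted schema; the literal array stands in for the contents of lut.items, and the index-scan plan is what one would hope for, not a guarantee:

```sql
-- With "indexcol = ANY(array)" indexable, the primary-key btree index
-- on item(id) can satisfy the array membership test directly.
SELECT i.id, i.title
FROM item i
WHERE i.id = ANY (ARRAY[1, 7, 42]);  -- hypothetical item ids; in the
                                     -- original query the array would
                                     -- come from lookup_table.items
```

In releases without this support, the same condition degrades to a filter under a sequential scan, which is exactly the behavior Markus reported.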
> I tried to circumvent the problem with generate_series:
> SELECT i.id, i.title FROM generate_series(0, $MAX) s
> JOIN lookup_table lut ON s = ANY(lut.items)
> JOIN item i ON s = i.id
> WHERE lut.id = $LOOKUP_ID;
Seems like the hard way --- why aren't you searching over array subscripts?
SELECT i.id, i.title FROM generate_series(1, $MAX) s
JOIN lookup_table lut ON s <= array_upper(lut.items, 1)
JOIN item i ON i.id = lut.items[s]
WHERE lut.id = $LOOKUP_ID;
$MAX need only be as large as the widest array in lookup_table.
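A minimal self-contained illustration of the subscript approach (the table contents are invented for the example):

```sql
-- Hypothetical sample data matching the quoted schema
CREATE TABLE item (id INT PRIMARY KEY, title TEXT);
CREATE TABLE lookup_table (id INT PRIMARY KEY, items INT[] NOT NULL);

INSERT INTO item VALUES (1, 'one');
INSERT INTO item VALUES (2, 'two');
INSERT INTO item VALUES (3, 'three');
INSERT INTO lookup_table VALUES (10, ARRAY[2, 3]);

-- generate_series(1, $MAX) enumerates candidate array subscripts;
-- s <= array_upper(lut.items, 1) discards subscripts past the end of
-- this row's array, and lut.items[s] does the actual element lookup.
SELECT i.id, i.title
FROM generate_series(1, 2) s          -- $MAX = 2, the widest array here
JOIN lookup_table lut ON s <= array_upper(lut.items, 1)
JOIN item i ON i.id = lut.items[s]
WHERE lut.id = 10;
-- expected: (2, 'two') and (3, 'three')
```

Because the join condition i.id = lut.items[s] compares the indexed column against a plain scalar expression, each generated subscript can probe item's primary-key index instead of forcing a sequential scan.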
regards, tom lane