I have 4 tables:

create table items(
    item_id serial primary key
    -- other columns omitted
);
create table item_attributes(
    item_id int references items (item_id)
    -- other columns omitted
);
create table baskets(
    basket_id serial primary key
    -- other columns omitted
);
create table basket_items(
    item_id int references items (item_id),
    basket_id int references baskets (basket_id)
    -- other columns omitted
);
I fetch the items in a basket, with some of their attributes, with the
join you would expect:

select [list of columns] from baskets b
join basket_items bi on b.basket_id = bi.basket_id
join items i on i.item_id = bi.item_id
join item_attributes a on a.item_id = i.item_id

where the list of columns may change.
The above query, or small variations of it, is used frequently in
several functions, and a group of functions is generally called on the
same connection with the same $1...
The items table is around 600K records, but as you might expect the
items in a basket are just a handful (1 to 20).
So I have a cluster of functions referring mostly to the same items,
but possibly requiring different attributes.
I was wondering what would be the best way to share the cost of such a
retrieval among all the functions.
One idea could be to use some sort of temp table: the first function
that needs the item list checks whether the table exists and populates
it with most of the required columns, and at the end of the connection
another function cleans it up... Besides that, I still haven't come up
with a solution that would avoid different connections killing each
other's "cache"... So I'm still wondering what the best way would be
to cache the above query, or part of the effort to generate it or some
of its variants, across the cluster of functions.
The information in items and item_attributes is quite static.
The items in a basket are static across the cluster of functions.
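Something like this is what I had in mind as a sketch (the attribute
columns attr_name/attr_value and the 'color' filter are made up, not my
real schema). Note that in PostgreSQL a temporary table is private to
the session that creates it, so concurrent connections each see only
their own copy and cannot clobber each other's "cache", and the table
is dropped automatically when the connection closes:

```sql
-- Private to this session: other connections cannot see or overwrite
-- it, and it is dropped automatically when the connection closes.
create temp table basket_cache as
select bi.basket_id, i.item_id, a.attr_name, a.attr_value
from basket_items bi
join items i on i.item_id = bi.item_id
join item_attributes a on a.item_id = i.item_id
where bi.basket_id = $1;

-- Later functions on the same connection just read the cache:
select item_id, attr_value
from basket_cache
where attr_name = 'color';
```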
Ivan Sergio Borgonovo
pgsql-general by date