Re: caching query results

From: "alex b(dot)" <mailinglists1(at)gmx(dot)de>
To: Postgresql General <pgsql-general(at)postgresql(dot)org>
Subject: Re: caching query results
Date: 2003-05-21 18:26:55
Message-ID: 3ECBC4EF.1060201@gmx.de
Lists: pgsql-general

hello dear people without shaved necks!

as many of you have already told me, cursors are the way to go - now I know!

there it is, kindly provided by BILL G.:

BEGIN;
DECLARE <cursorname> CURSOR FOR <query>;
FETCH <number_of_rows> FROM <cursorname>;
MOVE {FORWARD|BACKWARD} <number_of_rows> IN <cursorname>;
CLOSE <cursorname>;
COMMIT;
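
for example (the table "big_table", cursor name "big_cur" and the page size of 50 are just made-up placeholders), paging through one big query with a cursor could look something like this:

BEGIN;
-- SCROLL lets us move backward as well as forward
DECLARE big_cur SCROLL CURSOR FOR
    SELECT id, name FROM big_table ORDER BY name;
FETCH 50 FROM big_cur;        -- rows 1-50   (first "page")
FETCH 50 FROM big_cur;        -- rows 51-100 (next "page")
MOVE BACKWARD 50 IN big_cur;  -- step back one page
FETCH 50 FROM big_cur;        -- rows 51-100 again
CLOSE big_cur;
COMMIT;

the cursor (and so the "cached" result) only lives as long as the transaction, so all the FETCHes have to happen on the same connection.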

THANK YOU ALL VERY VIEL (much, in German)!!!

I will now have to implement session IDs in my CGIs...

oh by the way... let's say a transaction has begun and was never
committed... what will happen to it?

is there an automatic rollback after a certain time?
or would there be tons of open transactions?

Darko Prenosil wrote:
> On Wednesday 21 May 2003 16:34, alex b. wrote:
>
>>hello all
>>
>>
>>I've been wondering if it is possible to cache query results...
>>
>>the reason I ask is a script I wrote recently... each request
>>takes up to 4 seconds... that's ok, because it's quite a bit of data... but
>>instead of collecting the data again and again, some kind of cache
>>wouldn't be a bad thing.
>>
>>assuming that all queries are always the same, except for the OFFSET -
>>the LIMIT stays the same.
>>
>>cheers, alex
>>
>
>
> The only way is to use a cursor or a temp table.
