
Re: Out of memory error when doing an update with IN clause

From: Sean Shanny <shannyconsulting(at)earthlink(dot)net>
To: pgsql-general(at)postgresql(dot)org
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Out of memory error when doing an update with IN clause
Date: 2003-12-29 19:17:32
Message-ID: 3FF07DCC.4070905@earthlink.net
Lists: pgsql-general
Tom,

As you can see, I had to reduce the number of arguments in the IN clause
just to get the EXPLAIN to run.

explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94);

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using idx_commerce_impressions_servlet,
   idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
   idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
   idx_commerce_impressions_servlet, idx_commerce_impressions_servlet
   on f_commerce_impressions  (cost=0.00..1996704.34 rows=62287970 width=59)
   Index Cond: ((servlet_key = 68) OR (servlet_key = 69) OR
                (servlet_key = 70) OR (servlet_key = 71) OR
                (servlet_key = 87) OR (servlet_key = 90) OR
                (servlet_key = 94))
(2 rows)
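
One workaround that occurs to me (just a sketch, not something I have
tried yet): since the hash table only seems to come from the multiple
index scan that the OR conditions produce, splitting the IN list into
single-value updates should let each statement run as a plain index
scan with no visited-tuple tracking at all:

update f_commerce_impressions set servlet_key = 60 where servlet_key = 68;
update f_commerce_impressions set servlet_key = 60 where servlet_key = 69;
-- ...and likewise for 70, 71, 87, 90, and 94, plus the values I trimmed
-- out above.  Each statement matches a single servlet_key, so the planner
-- can use idx_commerce_impressions_servlet once per statement instead of
-- OR-ing seven index scans together.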


Tom Lane wrote:

> Sean Shanny <shannyconsulting(at)earthlink(dot)net> writes:
>> There are no FK's or triggers on this or any of the tables in our
>> warehouse schema.  Also I should have mentioned that this update will
>> produce 0 rows as these values do not exist in this table.
>
> Hm, that makes no sense at all ...
>
>> Here is output from the /usr/local/pgsql/data/serverlog when this fails:
>> ...
>> DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks);
>> 534742296 used
>
> Okay, so here's the problem: this hash table has expanded to 500+MB, which
> is enough to overflow your ulimit setting.  Some digging in the source
> code shows only two candidates for such a hash table: a tuple hash table
> used for grouping/aggregating, which doesn't seem likely for this query,
> or a tuple-pointer hash table used for detecting already-visited tuples
> in a multiple index scan.
>
> Could we see the EXPLAIN output (no ANALYZE, since it would fail) for
> the problem query?  That should tell us which of these possibilities
> it is.
>
> 			regards, tom lane
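
If the multiple-index-scan hash table is indeed what is blowing the
ulimit, one experiment I could run (a sketch, assuming enable_indexscan
affects UPDATE plans the same way it does plain SELECTs) is to disable
index scans for the session and let the planner fall back to a
sequential scan, which visits each tuple exactly once and so never
builds the visited-tuple hash table:

set enable_indexscan = off;
explain update f_commerce_impressions set servlet_key = 60
    where servlet_key in (68,69,70,71,87,90,94);
-- expected shape: Seq Scan on f_commerce_impressions
--                   Filter: ((servlet_key = 68) OR ... OR (servlet_key = 94))
set enable_indexscan = on;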

