
UPDATE fails on large table

From: Kostis <pgsql(at)pobox(dot)gr>
To: pgsql-bugs(at)postgresql(dot)org
Subject: UPDATE fails on large table
Date: 2001-11-22 15:04:50
Message-ID:
Lists: pgsql-bugs
                        POSTGRESQL BUG REPORT TEMPLATE

Your name		:	Kostis Pangalos
Your email address	:	pgsql(at)pobox(dot)gr

System Configuration
  Architecture (example: Intel Pentium)  	:	Dual AMD Athlon

  Operating System (example: Linux 2.0.26 ELF) 	:	SuSE 7.2 (upgraded kernel 2.4.12 SMP)

  PostgreSQL version (example: PostgreSQL-7.1.3):   PostgreSQL-7.1.3

  Compiler used (example:  gcc 2.95.2)		:   2.95.3

Please enter a FULL description of your problem:
A simple UPDATE on a largish table fails to complete after running for a couple of hours:
update order set customer_id = customer.id from customer where customer.name = order.customer_name;

In general, even the simplest UPDATEs without WHERE clauses take unbelievably long to complete on large tables.
This is a dual Athlon machine with 512MB of DDR RAM, SCSI160, and a 4.9ms SCSI drive dedicated to the DB alone!
I find that sometimes the only way to do UPDATEs on large tables is to 'COPY' the table out to a dump file, process it
with Perl, and load it in again. :-(
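The COPY-out/process/COPY-in workaround above could look something like the sketch below. It is written in Python rather than Perl purely for illustration; the dump file layout, column order, and the helper name fill_customer_ids are assumptions, not details from this report. It assumes tab-separated dumps produced by COPY customer TO '...' and COPY "order" TO '...', with NULLs written as \N.

```python
def fill_customer_ids(order_dump, customer_dump):
    """Fill order.customer_id by matching customer_name against customer.name.

    Both arguments are the text of tab-separated COPY dumps (assumed layout):
      customer: id, name, email
      order:    id, customer_name, customer_id, product_id, quantity, stamp
    Returns the rewritten order dump, ready to COPY back in.
    """
    # Build a name -> id lookup from the customer dump.
    lookup = {}
    for line in customer_dump.splitlines():
        cid, name, _email = line.split("\t")
        lookup[name] = cid

    out = []
    for line in order_dump.splitlines():
        cols = line.split("\t")
        # Replace customer_id (3rd column) if the name is known,
        # otherwise keep whatever was there (e.g. \N for NULL).
        cols[2] = lookup.get(cols[1], cols[2])
        out.append("\t".join(cols))
    return "\n".join(out)
```

The rewritten file would then be loaded with COPY "order" FROM '...' into a freshly truncated (or newly created) table, which avoids the long-running in-place UPDATE entirely.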

Please describe a way to repeat the problem.   Please try to provide a
concise reproducible example, if at all possible: 

Please note that the table names and column names have been changed for the sake of confidentiality, but the statistics are from the real tables. Each table has a couple more columns that I do not list, since they are neither UPDATEd nor included in the WHERE clause.
shop=# \d order
                                      Table "order"
   Attribute   |           Type           |                     Modifier
---------------+--------------------------+--------------------------------------------------
 id            | integer                  | not null default nextval('"order_id_seq"'::text)
 customer_name | character varying(50)    |
 customer_id   | integer                  |
 product_id    | integer                  | not null
 quantity      | integer                  | not null
 stamp         | timestamp with time zone | not null default timeofday()
Indices: order_customer_name_in,

shop=# \d customer
                                   Table "customer"
 Attribute |         Type          |                       Modifier
-----------+-----------------------+-----------------------------------------------------
 id        | integer               | not null default nextval('"customer_id_seq"'::text)
 name      | character varying(50) |
 email     | character varying(50) |
Indices: customer_id_key,

shop=# select count(*) from order;
(1 row)

shop=# select count(*) from customer;
(1 row)

shop=# explain update order set customer_id = customer.id from customer where customer.name = order.customer_name;

Hash Join  (cost=767.92..89821.05 rows=724104 width=140)
  ->  Seq Scan on order  (cost=0.00..33402.04 rows=724104 width=124)
  ->  Hash  (cost=545.74..545.74 rows=30074 width=16)
        ->  Seq Scan on customer  (cost=0.00..545.74 rows=30074 width=16)

shop=# update order set customer_id = customer.id from customer where customer.name = order.customer_name;
(....a long time goes by.... then:)
ERROR:  Deadlock detected.
        See the lock(l) manual page for a possible cause.

If you know how this problem might be fixed, list the solution below:
Sorry, not a clue. I am in trouble too. I upgraded to 7.1.3 because I really needed TOAST, and now it's too late to go back.

