From: mikeo <mikeo(at)spectrumtelecorp(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: short query becomes long
Date: 2000-06-01 18:24:39
Message-ID: 3.0.1.32.20000601142439.0094b320@pop.spectrumtelecorp.com
Lists: pgsql-sql
hi,
we have a weird situation here. we have a table of approx. 10k rows
representing accumulated activity by specific customers. as information
is gathered, those customers' rows are updated. the number of rows does not
increase unless we get a new customer, so that is not a factor. the table
is defined as follows:
Table "account_summary_02"
Attribute | Type | Modifier
-------------+-------------+----------
bill_br_id | bigint | not null
cust_id | varchar(15) | not null
btn_id | varchar(15) | not null
ln_id | varchar(15) | not null
ct_key | float8 | not null
as_quantity | float8 | not null
as_charges | float8 | not null
as_count | float8 | not null
Index: account_summary_02_unq_idx
the index is on the first 5 columns. here's the situation: after about 50,000
updates, which fly right along, the process begins to really bog down. we
perform a vacuum analyze and it speeds right up again. my question is, is
there a way to perform these updates, potentially 500k to 1 million in a day,
without having to vacuum so frequently? maybe some setting or parameter to be
changed? the update query is doing an index scan.
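for reference, the updates are of roughly this shape (a sketch only — the key
values and increments below are invented for illustration, not taken from our
actual data):

```sql
-- illustrative only: key values and increments are made up
UPDATE account_summary_02
   SET as_quantity = as_quantity + 1,
       as_charges  = as_charges + 0.05,
       as_count    = as_count + 1
 WHERE bill_br_id = 1
   AND cust_id = 'CUST001'
   AND btn_id  = 'BTN001'
   AND ln_id   = 'LN001'
   AND ct_key  = 1.0;

-- the workaround that restores speed:
VACUUM ANALYZE account_summary_02;
```

(since postgres keeps the old row version around on every update, 50k updates
leave roughly 50k dead tuples behind even though the live row count stays at
10k — vacuum is what reclaims them.)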
mikeo