Re: Suggestion for optimization

From: Jeff Davis <list-pgsql-hackers(at)dynworks(dot)com>
To: Justin Clift <justin(at)postgresql(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Suggestion for optimization
Date: 2002-04-05 23:22:50
Message-ID: 200204052323.PAA18857@smtp.ucsd.edu
Lists: pgsql-hackers

> I don't think your idea would work for a concurrent user setup where
> people have different transactions started at different times with
> different amounts of changes inside each transaction.
>
> That's why it would have to be tracked on a "per connection" basis for
> all the tables.

I tried it out with concurrent connections and it seemed to hold up just
fine; I think MVCC took care of everything. Each transaction got a different
count depending on whether it could see the inserted values or not, and once
those inserts were committed, all transactions could see the new table count.

Can you provide a case where it wouldn't?

I imagine this causes some major performance issues, not to mention that the
dead tuples would pile up fast in tuple_count, but it seems to work just fine.

My SQL is below.

Regards,
Jeff

jdavis=> create table tuple_count(tuples int);
CREATE
jdavis=> create table c1(a int);
CREATE
jdavis=> create function f1() returns opaque as '
jdavis'> BEGIN
jdavis'> UPDATE tuple_count set tuples=tuples+1;
jdavis'> RETURN NEW;
jdavis'> END;
jdavis'> ' language 'plpgsql';
CREATE
jdavis=> create function f2() returns opaque as '
jdavis'> BEGIN
jdavis'> UPDATE tuple_count set tuples=tuples-1;
jdavis'> RETURN OLD;
jdavis'> END;
jdavis'> ' language 'plpgsql';
CREATE
jdavis=> create trigger t1 after insert on c1 for each row execute procedure
f1();
CREATE
jdavis=> create trigger t2 after delete on c1 for each row execute procedure
f2();
CREATE
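For anyone who wants to try the recipe without a live PostgreSQL session, here
is a minimal sketch of the same trigger-maintained counter using SQLite through
Python's sqlite3 module. This is only an illustration: the PL/pgSQL functions
become plain SQLite triggers, and SQLite does not reproduce PostgreSQL's MVCC
visibility behavior. One detail worth noting either way: tuple_count needs a
seed row, or the UPDATE statements in the triggers match no rows; the
transcript above does not show that INSERT.

```python
# Trigger-maintained row counter, sketched in SQLite as a stand-in for
# the PostgreSQL setup above (MVCC behavior is NOT reproduced here).
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE tuple_count (tuples INT);
INSERT INTO tuple_count VALUES (0);   -- seed row: the UPDATEs need a target
CREATE TABLE c1 (a INT);

-- equivalents of f1()/t1 and f2()/t2 above
CREATE TRIGGER t1 AFTER INSERT ON c1
BEGIN
    UPDATE tuple_count SET tuples = tuples + 1;
END;
CREATE TRIGGER t2 AFTER DELETE ON c1
BEGIN
    UPDATE tuple_count SET tuples = tuples - 1;
END;
""")

cur.executemany("INSERT INTO c1 VALUES (?)", [(1,), (2,), (3,)])
cur.execute("DELETE FROM c1 WHERE a = 2")
count = cur.execute("SELECT tuples FROM tuple_count").fetchone()[0]
print(count)  # 3 inserts - 1 delete = 2
```

The same seed-row caveat applies to the PostgreSQL transcript: an
`INSERT INTO tuple_count VALUES (0);` is needed before the triggers can
keep a running count.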
