Re: Implementing "thick"/"fat" databases

From: Chris Travers <chris(dot)travers(at)gmail(dot)com>
To: Karl Nack <karlnack(at)futurityinc(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Implementing "thick"/"fat" databases
Date: 2011-07-23 23:51:02
Message-ID: CAKt_Zfu61xtgwpnOchkoH2brjSEvdUf0F+u4_AU9qjGB6+6FyA@mail.gmail.com
Lists: pgsql-general

On Sat, Jul 23, 2011 at 3:51 PM, Karl Nack <karlnack(at)futurityinc(dot)com> wrote:
>> In LedgerSMB, we take this a step further by making the procedures
>> into discoverable interfaces, so the application logic itself is a
>> sort of thin glue between a UI layer and the database procedure layer.
>>  One thing I would suggest is to try to keep API calls as atomic as
>> possible.  You want to enforce consistency and so you need to have all
>> relevant inputs passed to the function.  See below for a suggested
>> change to your API.
>
> Would you say LedgerSMB follows the thick/fat database principle? If so,
> I shall have to spend some time with your source code. I'd be very
> curious to see how you handle this.

The current svn trunk (to be 1.3) does. Older code in trunk or 1.2
follows the "survival" principle (we inherited one heck of a codebase
when we forked) :-).
>
>> > BEGIN;
>> > SELECT create_transaction(1, current_date, 'Transaction 1');
>> > SELECT create_line_item(1, 1, 50);
>> > SELECT create_line_item(1, 1, -50);
>> > END;
>>
>> Now, if you are doing double-entry bookkeeping this doesn't provide
>> enough consistency, IMO.  You can't check inside the function to
>> ensure that the transaction is balanced.
>
> Yes, but I'd implement the constraint "all transactions must balance" as
> a trigger that fires when the transaction is complete. This would
> enforce data integrity regardless of whether or not the database API is
> used, which I think is also important.

That's problematic to do in PostgreSQL because statement-level
triggers don't have access to the rows the statement affected, and I
don't believe they can be deferred. Note the GL stuff is not on the
thick db system yet, but we are working on it (for 1.4).
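
For comparison, the nearest thing that does work today is a deferred
row-level constraint trigger that re-checks the affected transaction at
commit time. The following is only a rough, untested sketch -- the
acc_trans(trans_id, amount) layout is assumed, and because constraint
triggers are row-level only, the check re-runs once per inserted row,
which is part of why it scales poorly:

-- Hypothetical sketch, not LedgerSMB code: verify that each GL
-- transaction balances before the enclosing transaction commits.
CREATE OR REPLACE FUNCTION check_transaction_balanced() RETURNS trigger AS
$BODY$
BEGIN
    IF (SELECT sum(amount) FROM acc_trans
         WHERE trans_id = NEW.trans_id) <> 0 THEN
        RAISE EXCEPTION 'Transaction % does not balance', NEW.trans_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$BODY$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER acc_trans_balanced
    AFTER INSERT OR UPDATE ON acc_trans
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_transaction_balanced();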

Here's an example of the atomic API approach we take instead:

CREATE OR REPLACE FUNCTION payment_bulk_post
(in_transactions numeric[], in_batch_id int, in_source text, in_total numeric,
 in_ar_ap_accno text, in_cash_accno text,
 in_payment_date date, in_account_class int, in_payment_type int,
 in_exchangerate numeric, in_curr text)
RETURNS int AS
$$
DECLARE
    out_count      int;
    t_voucher_id   int;
    t_trans_id     int;
    t_amount       numeric;
    t_ar_ap_id     int;
    t_cash_id      int;
    t_currs        text[];
    t_exchangerate numeric;
BEGIN
    -- Bulk payments must always be posted from a batch.
    IF in_batch_id IS NULL THEN
        -- t_voucher_id := NULL;
        RAISE EXCEPTION 'Bulk Post Must be from Batch!';
    ELSE
        INSERT INTO voucher (batch_id, batch_class, trans_id)
        VALUES (in_batch_id,
                (SELECT batch_class_id FROM batch WHERE id = in_batch_id),
                in_transactions[1][1]);

        t_voucher_id := currval('voucher_id_seq');
    END IF;

    SELECT string_to_array(value, ':') INTO t_currs
      FROM defaults
     WHERE setting_key = 'curr';

    -- The default currency is the first entry (arrays are 1-based).
    IF (in_curr IS NULL OR in_curr = t_currs[1]) THEN
        t_exchangerate := 1;
    ELSE
        t_exchangerate := in_exchangerate;
    END IF;

    CREATE TEMPORARY TABLE bulk_payments_in (id int, amount numeric);

    SELECT id INTO t_ar_ap_id FROM chart WHERE accno = in_ar_ap_accno;
    SELECT id INTO t_cash_id  FROM chart WHERE accno = in_cash_accno;

    -- Copy the (invoice id, amount) pairs into the temporary table.
    FOR out_count IN
        array_lower(in_transactions, 1) ..
        array_upper(in_transactions, 1)
    LOOP
        EXECUTE $E$
            INSERT INTO bulk_payments_in(id, amount)
            VALUES ($E$ || quote_literal(in_transactions[out_count][1])
                || $E$, $E$ ||
                quote_literal(in_transactions[out_count][2])
                || $E$)$E$;
    END LOOP;

    -- Post one leg of each payment against the cash or AR/AP account.
    EXECUTE $E$
        INSERT INTO acc_trans
            (trans_id, chart_id, amount, approved, voucher_id, transdate,
             source, payment_type)
        SELECT id,
            CASE WHEN $E$ || quote_literal(in_account_class) || $E$ = 1
                 THEN $E$ || t_cash_id || $E$
                 WHEN $E$ || quote_literal(in_account_class) || $E$ = 2
                 THEN $E$ || t_ar_ap_id || $E$
                 ELSE -1 END,
            amount * $E$ || quote_literal(t_exchangerate) || $E$,
            CASE WHEN $E$ || t_voucher_id || $E$ IS NULL THEN true
                 ELSE false END,
            $E$ || t_voucher_id || $E$, $E$ || quote_literal(in_payment_date)
            || $E$, $E$ || COALESCE(quote_literal(in_source), 'NULL')
            || $E$, $E$ || COALESCE(quote_literal(in_payment_type), 'NULL') || $E$
        FROM bulk_payments_in WHERE amount <> 0 $E$;

    -- Post the offsetting leg with the sign reversed.
    EXECUTE $E$
        INSERT INTO acc_trans
            (trans_id, chart_id, amount, approved, voucher_id, transdate,
             source, payment_type)
        SELECT id,
            CASE WHEN $E$ || quote_literal(in_account_class) || $E$ = 1
                 THEN $E$ || t_ar_ap_id || $E$
                 WHEN $E$ || quote_literal(in_account_class) || $E$ = 2
                 THEN $E$ || t_cash_id || $E$
                 ELSE -1 END,
            amount * -1 * $E$ || quote_literal(t_exchangerate) || $E$,
            CASE WHEN $E$ || t_voucher_id || $E$ IS NULL THEN true
                 ELSE false END,
            $E$ || t_voucher_id || $E$, $E$ || quote_literal(in_payment_date)
            || $E$, $E$ || COALESCE(quote_literal(in_source), 'NULL')
            || $E$, $E$ || COALESCE(quote_literal(in_payment_type), 'NULL') || $E$
        FROM bulk_payments_in WHERE amount <> 0 $E$;

    -- Update the paid totals on the AP (class 1) or AR (class 2) side.
    IF in_account_class = 1 THEN
        EXECUTE $E$
            UPDATE ap
               SET paid = paid + (SELECT amount FROM bulk_payments_in b
                                   WHERE b.id = ap.id)
             WHERE id IN (SELECT id FROM bulk_payments_in) $E$;
    ELSE
        EXECUTE $E$
            UPDATE ar
               SET paid = paid + (SELECT amount FROM bulk_payments_in b
                                   WHERE b.id = ar.id)
             WHERE id IN (SELECT id FROM bulk_payments_in) $E$;
    END IF;

    EXECUTE $E$ DROP TABLE bulk_payments_in $E$;
    PERFORM unlock_all();
    RETURN out_count;
END;
$$ LANGUAGE plpgsql;
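
For illustration only (this call is not taken from the application, and
the ids, account numbers, and dates below are made up), paying two
invoices from one batch might look like:

SELECT payment_bulk_post(
    ARRAY[[10531, 150.00], [10532, 75.00]],  -- (invoice id, amount) pairs
    42,            -- in_batch_id
    'CHK 1001',    -- in_source
    225.00,        -- in_total
    '2100',        -- in_ar_ap_accno
    '1060',        -- in_cash_accno
    '2011-07-23',  -- in_payment_date
    1,             -- in_account_class (1 = AP per the update branch above)
    1,             -- in_payment_type
    1,             -- in_exchangerate
    'USD'          -- in_curr
);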

in_transactions is an n-by-2 array of numeric values. The first element
of each pair is the invoice id to be paid (as an integer); the second is
the amount to be paid on that invoice. The temporary table proved
necessary because of cache misses when looping through the array
directly when hundreds of invoices were paid to one vendor. The code
here is still far from ideal; in the future we will probably just query
against the array using generate_series.
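
As a very rough, untested sketch of that direction, the per-row loop
above could collapse into a single set-based insert along these lines:

-- Sketch only: expand the n-by-2 array with generate_series
-- instead of looping and EXECUTE-ing one INSERT per invoice.
INSERT INTO bulk_payments_in (id, amount)
SELECT in_transactions[i][1]::int,
       in_transactions[i][2]
  FROM generate_series(array_lower(in_transactions, 1),
                       array_upper(in_transactions, 1)) AS i;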

Hope this helps.
Chris Travers
