Re: Turning off transactions completely.

From: Francisco Reyes <lists(at)natserv(dot)com>
To: Arsalan Zaidi <azaidi(at)directi(dot)com>
Cc: Maarten(dot)Boekhold(at)reuters(dot)com, <pgsql-general(at)postgresql(dot)org>
Subject: Re: Turning off transactions completely.
Date: 2002-01-16 15:23:48
Message-ID: 20020116100849.X19885-100000@zoraida.natserv.net
Lists: pgsql-general

On Tue, 8 Jan 2002, Arsalan Zaidi wrote:
> Just to re-iterate.
> 1. I've done quite a bit of tweaking with the WAL*, shared buffers and sort
> mem over several weeks. They're about as optimum as I can get them.

Could you expand on that? What makes you say they are "optimum"?

> 3. The queries are just about as good as I can get them and have been
> thoroughly EXPLAIN'ed with live/large amounts of data in the tables. No IN's
> used, only EXISTS (where required).

I am far from an SQL expert, but it may not hurt to share these queries
with the list. Perhaps even post them.
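For readers following along, here is a rough sketch of the kind of IN-to-EXISTS rewrite being discussed. The tables and columns are made up for illustration; they are not from the poster's actual schema. In the PostgreSQL releases current at the time of this thread, EXISTS subqueries were often planned much better than IN subqueries:

```sql
-- Hypothetical example: find orders from customers in New York.

-- Version with IN (often slow on old PostgreSQL releases):
SELECT o.id, o.total
FROM orders o
WHERE o.customer_id IN
      (SELECT c.id FROM customers c WHERE c.state = 'NY');

-- Equivalent version with EXISTS:
SELECT o.id, o.total
FROM orders o
WHERE EXISTS (SELECT 1
              FROM customers c
              WHERE c.id = o.customer_id
                AND c.state = 'NY');
```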

> 4. Temp tables are used to simplify complex queries (and speed them up I
> hope).

Could you expand on that?

> 5. RAID-0 (SCSI/HW) + Dual Proc + 1GB RAM. Linux 2.4.17(smp) (pure Linus. No
> other patches). Swap is on a separate IDE drive.

How about more memory? This would certainly help substantially.
What is the speed of the HDs? 10K RPM? 15K RPM?
Is the swap ever hit? Hitting swap always hurts, but since you put swap on
an IDE drive it would slow you down even more.
What is the speed of the CPUs?
What type of memory PC100/PC133/Rambus/DDR???
How many disks in your RAID 0?

Although there is much you could perhaps do through software configuration,
once you get to the levels you are describing there is no substitute for
fast hardware. Probably the cheapest upgrade you can do is adding more
memory. This will likely help.

You also need to pay close attention to your schema. You want to have your
most heavily used data in a small table. To give you an example, let me
make up a scenario similar to something I did.

Let's say I have a table of people (not particularly correct SQL, just to
give you an idea):
id serial
name varchar(20)
last varchar(20)
address varchar(30)
address2 varchar(30)
state char(2)
zip int
comment varchar(30)
education char(1)

Now let's say that I use this table extremely heavily when doing joins
with other tables (i.e. OLAP-type analyses against other tables).
What I did was break it into two tables and keep only the fields
which I used 90%+ of the time. Something like:
id
name
zip

Then I made another table with the rest of the fields, linked by id. I
didn't do any particular timing analysis, but I got a substantial
improvement. In my actual case I had about 50 fields which totalled
something in the neighborhood of 300 bytes per row. After breaking it up,
the abbreviated version of the table was less than 50 bytes.
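The split described above could be sketched roughly like this (a hedged example based on the made-up schema earlier; the table names are illustrative, not from my actual case):

```sql
-- Narrow "hot" table holding only the heavily joined columns.
CREATE TABLE people_core (
    id    serial PRIMARY KEY,
    name  varchar(20),
    zip   int
);

-- Wide "cold" table with everything else, linked back by id.
CREATE TABLE people_detail (
    id         int REFERENCES people_core (id),
    last       varchar(20),
    address    varchar(30),
    address2   varchar(30),
    state      char(2),
    comment    varchar(30),
    education  char(1)
);
```

Joins against other tables then touch only people_core, so far more rows fit in each disk page and in the buffer cache.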

> 7. Driver app was multi-threaded. It made things worse.

Expand on this. This is where the number of buffers vs. the amount of
memory comes into play. If by firing multiple copies you ended up hitting
swap, that would have made things much worse. Remember, PostgreSQL forks
processes; it doesn't use threads.

>BTW, the apps jobs consists largely of firing off SQL queries
>in the correct sequence;

It may help if you tell us more about these queries, in particular the one
you mentioned that takes 36 hours.
