From: "kelvan" <kicmcewen(at)windowslive(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: database tuning
Date: 2007-12-11 00:29:14
Message-ID: fjki5a$uf1$1@news.hub.org
Lists: pgsql-performance


""Scott Marlowe"" <scott(dot)marlowe(at)gmail(dot)com> wrote in message
news:dcc563d10712100858j7b55e68co5d0da0f8b82c19b1(at)mail(dot)gmail(dot)com(dot)(dot)(dot)
> On Dec 7, 2007 1:13 PM, kelvan <kicmcewen(at)windowslive(dot)com> wrote:
>
>> OK, here's the thing: I don't have a choice, I just have to work with
>> what's given, whether it is good or not. The reason I need these
>> overheads is for block and tablespace calculations; I have to keep
>> everything in a very small area on the HDD for head-read speed, as the
>> server I am forced to use is a piece of crap, so I need to do my
>> calculations to resolve this.
>>
>> It is not that I don't know how to do my job. I understand effective
>> indexing, materialized views and all the other aspects of database
>> tuning; it was the major focus of my study. I just need to know the
>> numbers to do what I have to do.
>>
>> I am new to Postgres. I have used many other database management
>> systems, and I know the overheads for all of them, just not this one,
>> so I would appreciate it if someone could be of assistance.
>>
>> Let me give a brief outline of what I have without breaking my
>> confidentiality agreement:
>>
>> Mac server, Mac OS 10.x
>> Postgres 8.2.5 (apologies, I just got updated documentation with the
>> errors fixed in it)
>> 70 GB HDD
>> 5 GB RAM
>> 4 CPUs (not that it matters, as Postgres is not multi-threading)
>
> Uh, yeah, it matters: PostgreSQL can use multiple backends just fine.
> But this will be the least of your problems.
>
>> and I have to support approximately anywhere from 5,000 to 30,000
>> users, all using it concurrently
>
> You are being set up to fail. No matter how you examine things like
> the size of individual fields in a pg database, this hardware cannot
> possibly handle that kind of load. Period. Not with PostgreSQL, nor
> with Oracle, nor with Teradata, nor with any other DB.
>
> If you need to have 30k users actually connected directly to your
> database, you most likely have a design flaw somewhere. If you can use
> connection pooling to get the number of connections down to some
> fraction of that, then you might get it to work. However, being forced
> to use a single 70 GB hard drive on an OS X machine with 5 GB of RAM
> is suboptimal.
>
>> As you can see, this server wouldn't be my first choice (or my last
>> choice), but as I said, I have no choice at this time.
>
> Then you need to quit. Now. And find a job where you are not being
> set up to fail. Seriously.
>
>> The interface programmer and I have come up with ways to solve certain
>> performance problems that this server produces, but I still need to
>> tune the database.
>
> You're being asked to take a school bus and tune it to compete at the
> Indy 500.
>

Look, I know this won't work; hell, I knew that from day one. In all
regards, this is a temporary standpoint. After things start taking off, I
am going to blow up that Mac and burn Postgres, as I need a more powerful
DBMS, one that can handle multi-threading.

As I have said, it's not my choice. I know 5 GB of RAM wouldn't start a
hot air balloon, let alone support the user base I will have. This is not
a permanent job for me, but I take high regard in my work and want to do
the best job possible; that, and the money is good, as I am between jobs
as it stands.

For now I only need to support a few thousand users, and they are going
to be behind a web interface. As it stands, we cannot configure Postgres
on the Mac to go over 200 connections, for God knows what reason, but we
have found ways around that on the Mac.
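
(If anyone else hits the same wall: my guess is OS X's default SysV
shared-memory and semaphore limits, which cap how many backends the
postmaster can start. A sketch of /etc/sysctl.conf settings that might
lift it; the figures below are assumptions for a 5 GB machine, not
values we have tested, and the five shm* values apparently need to be
set together:

    kern.sysv.shmmax=536870912   # 512 MB of shared memory
    kern.sysv.shmmin=1
    kern.sysv.shmmni=32
    kern.sysv.shmseg=8
    kern.sysv.shmall=131072      # in 4 kB pages; 131072 pages = 512 MB
    kern.sysv.semmns=512         # Postgres uses roughly one semaphore
                                 # per allowed connection

After a reboot you should then be able to raise max_connections in
postgresql.conf.)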

I have already calculated that the HDD is nowhere near up to what we
need and will die after about six months, but in that time the Mac
server is going to be killed and we will then have a real server; I'll
do some data migration and then move to a different DBMS. Until then I
have to make a buffer to keep things alive. -_-

The 30,000 is just the number of queries that the web interface will be
sending at its high point, when there are many users in the database. By
"users" I mean users at the web interface, not at the back end, so treat
them as queries.
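
(To Scott's pooling point: a connection pooler in front of Postgres
would let all those web-side queries share a small set of backends. A
minimal sketch with pgbouncer, purely illustrative; the database name
and pool sizes here are assumptions, not our actual setup:

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = trust            ; fine for a sketch; use md5 in practice
    pool_mode = transaction      ; one backend serves many web clients
    max_client_conn = 2000       ; web-side connections accepted
    default_pool_size = 50       ; actual Postgres backends used

The web tier then connects to port 6432 instead of 5432.)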

So, as you can see, I'll need as fast a read time as possible for every
query. I am using a lot of codes, using smallint and bit types in my
database, and de-normalising everything to keep the connections down and
the amount of data read down, but that can only do so much. We have no
problem supporting that many users from a web standpoint; my problem is
read time, which is why I want to compact the Postgres blocks as much as
possible, keeping the data of the database in as small an area as
possible.
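
For the block arithmetic itself, here is a rough sketch of the numbers I
am working from (assumed from the 8.2 documentation: 8192-byte pages
with a page header of roughly 20 bytes, and a 23-byte heap tuple header
plus a 4-byte line pointer per row; alignment padding will shift the
real figures, so verify against your own tables):

    -- pg_column_size() reports the on-disk size of any value:
    SELECT pg_column_size(1::smallint) AS smallint_bytes,   -- 2
           pg_column_size(1::integer)  AS integer_bytes,    -- 4
           pg_column_size(B'1010')     AS bit_bytes;        -- header + data

    -- Estimated rows per 8 kB block for a hypothetical 40-byte payload:
    SELECT floor((8192 - 20) / (23 + 4 + 40)) AS rows_per_block;  -- ~121

    -- Cross-check against reality after loading data and running ANALYZE:
    SELECT relpages, reltuples, reltuples / relpages AS rows_per_page
    FROM pg_class WHERE relname = 'some_table';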

regards
kelvan
