On Fri, Mar 30, 2001 at 10:18:56AM -0500,
Mitch Vincent <mitch(at)venux(dot)net> wrote:
> If you could post the schema of your tables that you do the query against
> and an EXPLAIN of the queries you're doing, perhaps we could further tune
> your queries in addition to beefing up the memory usage of the backend..
This is a bit more than I was expecting. People who do this kind of thing
generally get paid lots of money.
However, if you really want, all of the information on queries and schema
is available at http://wolff.to/area/ . That is the old box which has a
lot less memory and a much slower processor. The database schema build
script is available as well as the source to the perl scripts that handle
the queries. The especially slow (about 20 seconds before rows are returned
- reduced to about 1 second on the new box) queries are the full lists of
people sorted by name or ID (the ID sort isn't as slow).
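For slow "full list sorted by a column" queries like these, one standard thing to try is an index on the sort column and a fresh VACUUM ANALYZE so the planner has up-to-date statistics. A minimal sketch follows; the table and column names (`people`, `name`) are assumptions for illustration, not taken from the actual schema at the URL above:

```sql
-- Hypothetical sketch: index the column the slow query sorts on.
CREATE INDEX people_name_idx ON people (name);

-- Refresh planner statistics (needed on 7.x for the planner to
-- consider the new index realistically).
VACUUM ANALYZE people;

-- Compare the plan before and after; look for whether the sort step
-- is replaced by an index scan.
EXPLAIN SELECT * FROM people ORDER BY name;
```

Note that on 7.x the planner may still prefer a sequential scan plus sort when the whole table is being returned, so the index mainly helps queries that also use LIMIT or a WHERE clause.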
Almost all of the data is available. However, the people data is accessed
through a view, and there is one person whose name is anonymized.
At this point I wasn't as worried about inefficiencies in the queries
themselves, but rather about how to tell the database server and/or Linux
to make the best use of memory. The data in the database should easily fit
into memory.
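For the memory side specifically, the usual knobs on a 7.x backend are the shared buffer cache, per-sort memory, and the planner's estimate of the OS cache. A hedged sketch of the relevant settings (the values below are illustrative guesses for a machine with plenty of RAM, not a recommendation tuned to this workload):

```
# postgresql.conf sketch (7.1-style); units are noted per setting.
shared_buffers = 8192        # 8192 * 8kB pages = 64MB shared buffer cache
sort_mem = 8192              # kB available to each sort before spilling to disk
effective_cache_size = 65536 # planner hint: OS cache size, in 8kB pages

# Raising shared_buffers may also require raising the kernel's shared
# memory limit, e.g. on Linux (value in bytes):
#   echo 134217728 > /proc/sys/kernel/shmmax
```

Since the data fits in memory, the OS filesystem cache will hold most of it regardless; `effective_cache_size` mainly tells the planner that repeated page fetches are cheap, which biases it toward index scans.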
> Check this link out too.
I will look through that site again. I looked at it previously, but wasn't
specifically looking for efficient use of memory.