Re: select count() out of memory

From: tfinneid(at)student(dot)matnat(dot)uio(dot)no
To: "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: select count() out of memory
Date: 2007-10-26 15:56:31
Message-ID: 63953.134.32.140.234.1193414191.squirrel@webmail.uio.no
Lists: pgsql-general

> "Sam Mason" <sam(at)samason(dot)me(dot)uk> writes:
>
>> On Fri, Oct 26, 2007 at 08:25:09AM +0200, Thomas Finneid wrote:
>>> Gregory Stark wrote:
>>> > Tom's point is that if you have 55k tables then just *finding* the
>>> > newest child table is fairly expensive. You're accessing a not
>>> > insignificant-sized index and table of tables. And the situation is
>>> > worse when you consider the number of columns all those tables have,
>>> > all the indexes those tables have, all the column keys those indexes
>>> > have, etc.
>>>
>>> Yes, I got that. But I name the child tables so that when my server
>>> receives read requests, I retrieve details from the request to be able
>>> to figure out the exact child table name, without the system needing to
>>> do any internal searches to find the newest table.
>>
>> I think the lookup that is being referred to is the fact that if you've
>> got 55k (plus) files in a directory then the filesystem still has to
>> perform a search in the directory to locate the actual file associated
>> with the filename. There probably isn't going to be much difference
>> between the filesystem performing this lookup vs the database descending
>> a few levels of its index structure.
>
> That's true but it's in *addition* to the database having to find the
> catalog records for the table, which involves an index lookup itself.
>
> Actually many index lookups, since it has to look up the catalog record
> for the table, for all the columns of the table, for all indexes of the
> table, for all the index keys of those indexes, all constraints of the
> table, all triggers of the table, all dependencies on other objects and
> of other objects on this table (this latter is why I suggest not using
> inheritance).
>
> Each of these lookups is using an index to find the table out of 55k
> records, which is just the same work that you're saving in the top level
> of the index tree.
>
> If you think there's one obvious solution then you just haven't analyzed
> the problem seriously. In serious engineering there are always tradeoffs.
> The cleanest prettiest solution is not necessarily -- and in cases where
> you're dealing with large numbers like this almost certainly isn't -- the
> optimal choice.

Serious engineering does not imply perfect engineering; I have analyzed it
and made my tradeoffs. What you are forgetting here is that you clearly
don't understand the entire solution, so I will try to explain it again.
And if you still think it's bonkers, then I urge you to come up with a
solution that works with the requirements.

Every predefined X seconds (e.g. at seconds 3, 6, 9, 12, etc.) a bunch of
data arrives, which must be stored by discrete time groups, e.g. second 3,
6, 9, 12. The data arrives at approx. 4MB per second, so in this case it's
12MB. This has to be processed by the server and written to the db within
1 second. There can be up to 5 writers at the same time. Within that same
second, at least 16 readers should be able to read all the data, *each*.
Writers and readers are only concerned with the latest data, i.e. data
from the latest time group, e.g. second 9.
This has to go on every predefined X seconds for the next 6-12 weeks,
without stop, pause or any errors. These are the requirements.
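
To make that concrete: a reader can derive the child table name directly
from the time group in the request, along these lines (a minimal sketch
only; the "data_t" prefix and method name are invented examples, the real
name is derived from details in the request):

    public final class TableNames {
        // Illustrative naming scheme: one child table per discrete time
        // group, so no catalog search is needed to find the newest table.
        static String childTableFor(long timeGroupSecond) {
            return "data_t" + timeGroupSecond;  // e.g. second 9 -> "data_t9"
        }
    }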

When I ran performance tests, I found several unwanted effects across the
test scenarios. Here are the most important ones:

- single large table, with indexes created when the table is created.
- this leads to the performance of an insert degrading as more data is
  added; when I got to 1 billion rows, it took 50 seconds to add the data.

My lessons from this are:
- single inserts can never be efficient enough
- indexes cause a linear performance drop as data volume increases

So I tried a different approach, which would address both issues (see the
sketch after the lessons below):

- separate tables for each bulk of data
- use of bulk inserts through JDBC COPY
- adding indexes to the newly created table after the copy is finished

My lessons from this are:
- inserts take constant time, no matter how much data is in the database
- adding the indexes after the insert takes constant time, i.e. some
  milliseconds
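
The load step looks roughly like this (a minimal sketch using the
PostgreSQL JDBC driver's CopyManager API; the table layout and the "id"
column are illustrative, not my actual schema):

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.Statement;

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class BulkLoad {
        // Sketch: stream one time group's 12MB batch in with COPY, then
        // build the index over the freshly loaded table.
        static void loadTimeGroup(Connection conn, String table, String csv)
                throws Exception {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
            copy.copyIn("COPY " + table + " FROM STDIN WITH CSV",
                        new StringReader(csv));
            try (Statement st = conn.createStatement()) {
                // Indexing after the copy is what keeps index maintenance
                // from slowing the inserts down as data volume grows.
                st.execute("CREATE INDEX " + table + "_id_idx ON "
                           + table + " (id)");
            }
        }
    }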

From this I realised that using either standalone tables or partitions is
the way to go, since in normal situations I only need to access the latest
data, i.e. the newest table.

After thinking about it and discussing it with this group, I found that
using partitions would be more practical for two reasons:
- changes to the parent table are automatically propagated to all child
  tables, so the schema remains consistent and the server won't break
  because of differences between the tables.
- it is more maintainable to keep the "create with inheritance" SQL in the
  source code than the entire DDL of the table (see the sketch below).
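
Creating each child then only needs the inheritance DDL, roughly like this
(a sketch; "data_parent" is an illustrative name):

    import java.sql.Connection;
    import java.sql.Statement;

    public class Partitions {
        // Sketch: one child table per time group, inheriting the parent's
        // schema, so later changes to the parent cascade to all children.
        static void createChild(Connection conn, String child)
                throws Exception {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE " + child
                           + " () INHERITS (data_parent)");
            }
        }
    }

The flip side is that a select on the parent expands to a scan over every
child table, which is what the select below runs into.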

So now I have tested the server 24/7 for a week and a half, with 1 writer
and 16 readers processing all the mentioned data, and everything works
fine, except for the select on the parent table, which now runs out of
memory. That in itself is not a problem, since I will never use the parent
table in production in any case.

regards

tom
