
Re: Equivalents in PostgreSQL of MySQL's "ENGINE=MEMORY" "MAX_ROWS=1000"

From: Hannes Dorbath <light(at)theendofthetunnel(dot)de>
To: Arnau <arnaulist(at)andromeiberica(dot)com>
Subject: Re: Equivalents in PostgreSQL of MySQL's "ENGINE=MEMORY" "MAX_ROWS=1000"
Date: 2007-04-18 20:04:56
Message-ID: 462679E8.8010202@theendofthetunnel.de
Lists: pgsql-performance
Arnau wrote:
> Hi Thor,
> 
> Thor-Michael Støre wrote:
>> On 2007-04-04 Arnau wrote:
>>> Josh Berkus wrote:
>>>> Arnau,
>>>>
>>>>> Is there anything similar in PostgreSQL? The idea behind this
>>>>> is to have tables in PostgreSQL that I can query very often,
>>>>> something like every few seconds, and get results very fast
>>>>> without overloading the postmaster.
>>>> If you're only querying the tables every few seconds, then you
>>>> don't really need to worry about performance.
>>
>>> Well, the idea behind this is to have an events table, and a
>>> monitoring system that polls that table every few seconds. I'd like
>>> to have a kind of FIFO queue. From "the events producer" point of
>>> view, he'll be pushing rows into that table; when it's full, the
>>> oldest row will be removed to make room for the newest one. From
>>> "the consumer" point of view, he'll read all the contents of that
>>> table.
>>
>>> So I'll not only be querying the tables, I'll also need to modify
>>> those tables.
>>
>> Please try to refrain from doing this. This is the "Database as an
>> IPC" antipattern (antipatterns are "commonly reinvented bad solutions
>> to problems", i.e. you can be sure someone has tried this very thing
>> before and found it to be a bad solution):
>>
>> http://en.wikipedia.org/wiki/Database_as_an_IPC
>>
>> The best solution is (as Ansgar hinted) to use a real IPC system.
>>
>> Of course, I've done it myself (not on PostgreSQL though) when working
>> at a large corporation where corporate politics prevented me from
>> introducing any new interdependency between systems (like having two
>> start talking with each other when they previously didn't); the only
>> "common ground" for systems that needed to communicate was a
>> database, and one of the systems was only able to run simple SQL
>> statements and not stored procedures.
> 
> 
>   First of all, thanks for your interest, but let me explain what I
> need to do.
> 
>   We have a web application where customers want to monitor how it's
> performing: not in terms of speed, but how many customers are
> currently browsing the application, how many have paid browsing
> sessions, how many payments have been made, and so on. More or less,
> it's a control panel. The difference is that they want the information
> displayed in a web browser to be "real-time", i.e. a query every
> 1-10 seconds.


Though it has been suggested earlier: why not use pgmemcache and push
each event as a new key? As memcached evicts old entries by design, that
is exactly what you ask for. Besides, memcached is so fast that your OS
is busier handling all those TCP connections than running memcached.
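A minimal sketch of that approach, assuming pgmemcache is loaded and its
memcache_set(key, value) function is available; the events table, its
columns, and the key layout below are hypothetical:

```sql
-- Hypothetical events table; the trigger copies each new event into
-- memcached under its own key, so the monitor can poll memcached
-- instead of hitting the events table in PostgreSQL.
CREATE OR REPLACE FUNCTION push_event() RETURNS trigger AS $$
BEGIN
    -- memcache_set(key, value) is provided by pgmemcache
    PERFORM memcache_set('event:' || NEW.id::text, NEW.payload);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_to_memcached
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE push_event();
```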

And in case you'd like to display statistical data rather than tailing
events, let PG push that to memcached keys as well. Think of memcached
as a materialized view in that case.
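A sketch under the same assumption (pgmemcache loaded); the sessions and
payments tables and the stats key names are made up for illustration:

```sql
-- Recompute the dashboard numbers and overwrite fixed memcached keys;
-- the web tier then reads memcached every 1-10 seconds instead of
-- running these aggregate queries against PostgreSQL each time.
CREATE OR REPLACE FUNCTION refresh_stats() RETURNS void AS $$
BEGIN
    PERFORM memcache_set('stats:active_sessions',
        (SELECT count(*) FROM sessions WHERE active)::text);
    PERFORM memcache_set('stats:payments_today',
        (SELECT count(*) FROM payments
         WHERE created >= current_date)::text);
END;
$$ LANGUAGE plpgsql;
```

Call refresh_stats() from a cron job, or from triggers on the underlying
tables, whichever matches your update rate.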

As middleware I'd recommend lighttpd with mod_magnet.

You should be able to deliver that admin page well over 5000 times per
second on outdated desktop hardware. If that's not enough, read up on
things like
http://blog.lighttpd.net/articles/2006/11/27/comet-meets-mod_mailbox


-- 
Best regards,
Hannes Dorbath
