
Re: Monitoring new records

From: "Sean Davis" <sdavis2(at)mail(dot)nih(dot)gov>
To: messias <messias(at)fw(dot)hu>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Monitoring new records
Date: 2008-02-29 11:07:28
Message-ID: 264855a00802290307h3c400906i6975c478cbd90a62@mail.gmail.com
Lists: pgsql-novice
2008/2/29 messias <messias(at)fw(dot)hu>:
> --- Sean Davis <sdavis2(at)mail(dot)nih(dot)gov> wrote (2008.02.28 22:56:08): ---
>
> > 2008/2/28 messias <messias(at)fw(dot)hu>:
> > > Hello
> > >
> > > I am new to SQL programming and I have a problem. I would like to
> > > monitor the new records that are inserted into an SQL table.
> > >
> > > I found a solution that uses a queue (log) table filled by a
> > > trigger. The client would read from that queue table and delete
> > > the data it has read.
> > >
> > > I think this solution can work with only one client. But I will
> > > have more than one client at the same time, and I do not know how
> > > I can delete the data from the queue table.
> > >
> > > I could make a queue table for every client, but I do not think
> > > that is a nice solution. I am afraid of losing a client...
> > >
> > > I am interested in any solution that can solve my problem.
> >
> > Why not simply add a timestamp column to each row?  Then your
> > client could query for all rows created or modified after a certain
> > time/date.
>
> Yes, it could be a good solution, but that table can have more than
> 100000 records.

100000 records is not large by any means.  I am routinely using tables
with 500 million rows and sometimes even larger.


> > I don't think a trigger and a separate table would be necessary,
> > but it is hard to know without more information.
> >
> > Sean
>
> I think if I made a copy of the new records into a separate table,
> the selection would be more efficient. However, somebody has to
> delete from that table.

I think what you are suggesting with triggers, deletes, etc., is
probably "premature optimization" and is actually likely to be slower
for many operations than the single table; it is definitely more
complicated.  If I were you, I would simply make a single table, load
some test data, and then try running your queries.  I was talking
about a timestamp column as a way to get the most recent records, so
you will want to index that column.  The index for 100000 timestamps
is likely to be small enough to fit into memory on even a laptop, so I
would expect that performance is likely to be very good for doing
selects of the most recent records.
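The approach Sean describes can be sketched in SQL roughly as follows (the table, column, and index names here are hypothetical, not from the thread):

```sql
-- Hypothetical table with a timestamp column that defaults to the
-- insert time.
CREATE TABLE records (
    id         serial PRIMARY KEY,
    payload    text,
    created_at timestamp NOT NULL DEFAULT now()
);

-- Index the timestamp column so queries for recent rows can avoid a
-- full table scan.
CREATE INDEX records_created_at_idx ON records (created_at);

-- Each client remembers when it last polled and asks only for rows
-- newer than that, instead of consuming from a shared queue table.
SELECT id, payload, created_at
FROM records
WHERE created_at > '2008-02-29 11:00:00'
ORDER BY created_at;
```

Because each client tracks its own last-seen timestamp, any number of clients can poll the same table independently and nothing ever needs to be deleted.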

Sean

