
From: elein <elein(at)varlena(dot)com>
To: sfpug(at)postgresql(dot)org
Cc: elein <elein(at)varlena(dot)com>
Subject: [jeffery@CS.Berkeley.EDU: DB Seminar, Sept 8th, 380 Soda: Magdalena Balazinska]
Date: 2006-09-07 17:51:19
Lists: sfpug
----- Forwarded message from Shawn Jeffery <jeffery(at)CS(dot)Berkeley(dot)EDU> -----

From: Shawn Jeffery <jeffery(at)CS(dot)Berkeley(dot)EDU>
To: dblunch(at)triplerock(dot)CS(dot)Berkeley(dot)EDU
Subject: DB Seminar, Sept 8th, 380 Soda: Magdalena Balazinska

New Directions in Database Research Seminar Series

Friday, September 8th, 2006
380 Soda Hall

Speaker: Magdalena Balazinska, University of Washington

Title: Quality Monitoring Systems


In a monitoring application (e.g., sensor-based environment
monitoring, RFID-based equipment tracking, computer system
monitoring), a user continuously observes the state of a system and
receives alerts when interesting combinations of events occur. Over
the past few years, these types of applications have grown in
popularity and a new class of data management systems, called stream
processing engines, have been developed to support their needs.

Because users rely on stream processing engines to assess the state of
a system and because they make decisions based on their assessment, it
is important that these engines provide high-quality information. This
goal is challenging for two reasons. First, many monitoring
applications rely on devices such as RFID antennas or sensors to
provide them information about the physical world. These devices,
however, are unreliable; they produce streams of information where
portions of data may be missing, duplicated, or erroneous. In order to
provide quality information, a stream processing engine must thus be
able to clean, at least probabilistically, the input data before
processing it. Second, real-time information about a system is useful,
but we can significantly improve the quality of that information if
the monitoring engine complements new results with relevant and timely
historical data. For example, every time an event occurs, a user might
need to see the k most similar events that occurred in the past.
The challenge is that continuous monitoring can quickly produce
terabyte-size data logs that are difficult to explore in real time.
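The abstract does not spell out either technique, but the two ideas can be illustrated with a small sketch (hypothetical data and function names; this is not the speaker's actual method): a simple window-based vote to clean a duplicated and lossy RFID read stream, and a brute-force lookup of the k most similar past events.

```python
from collections import Counter

def clean_window(readings, threshold=0.5):
    """Probabilistically clean one window of raw RFID read cycles.

    A tag is reported as present if it appears in at least `threshold`
    of the window's read cycles, which suppresses one-off spurious reads
    and fills in short runs of missed reads. (Illustrative only; real
    cleaning schemes are considerably more sophisticated.)
    """
    n_cycles = len(readings)
    counts = Counter(tag for cycle in readings for tag in set(cycle))
    return {tag for tag, c in counts.items() if c / n_cycles >= threshold}

def k_most_similar(history, event, k=3):
    """Return the k past events closest to `event` (squared Euclidean distance)."""
    return sorted(
        history,
        key=lambda past: sum((a - b) ** 2 for a, b in zip(past, event)),
    )[:k]

# Example: four read cycles; tag "B" is read only intermittently,
# as an unreliable antenna might report it.
window = [["A"], ["A", "B"], ["A"], ["A", "B"]]
present = clean_window(window)  # both "A" and "B" are reported present

# Example: find the two past events most similar to a new one.
history = [(1, 1), (5, 5), (2, 2), (9, 9)]
nearest = k_most_similar(history, (1.5, 1.5), k=2)
```

The point of the sketch is the division of labor: cleaning happens before query processing sees the data, while similarity search runs against the (potentially very large) historical log after each new result.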

In this talk, we briefly review the goals and functionality of a
stream processing engine and discuss how cleaning input data and
exploiting history can improve the quality of the results it produces. We
then present the approaches we are investigating to address these
challenges.


Magdalena Balazinska received a PhD from MIT in February 2006. She is
now an assistant professor in the Computer Science and Engineering
Department at the University of Washington. Magdalena's research
interests are broadly in the fields of databases and systems. Her
current work focuses on developing high-quality monitoring engines,
experimenting with a building-wide RFID-based infrastructure, and
helping users on the Internet organize and share their data.

----- End forwarded message -----
