> My point was that there are two failure cases --- one where the cache
> is slightly out of date compared to the db server --- these are the
> cases where the cache update lands slightly before/after the commit.
I was thinking about ways to minimize this even further: have memcache
clients add data, and adopt a policy that only the database deletes
data. This sets the database up as the bottleneck again, but it gives
you a degree of transactionality that couldn't previously be achieved
with the database issuing replace commands. For example:
1) client checks the cache for data and gets a cache lookup failure
2) client begins transaction
3) client SELECTs data from the database
4) client adds the key to the cache
5) client commits transaction
This assumes that the client won't roll back or have a transaction
failure. Again, in 50M transactions, I doubt one of them would fail
(sure, it's possible, but that's a symptom of bigger problems:
memcached isn't an RDBMS).
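The read-miss flow above can be sketched as follows. This is a minimal
simulation, not real client code: a dict stands in for memcached, the
`db` dict and the `user:N` key naming are made up for illustration, and
steps 2, 3, and 5 (begin, SELECT, commit) are collapsed into a plain
lookup.

```python
cache = {}

def cache_add(key, value):
    # memcached "add": store only if the key is absent, so a reader
    # never overwrites an entry another client already refreshed
    if key in cache:
        return False
    cache[key] = value
    return True

def read_user(db, user_id):
    key = "user:%d" % user_id
    hit = cache.get(key)
    if hit is not None:          # 1) check the cache
        return hit
    row = dict(db[user_id])      # 2-3) begin transaction, SELECT the row
    cache_add(key, row)          # 4) add the key to the cache
    return row                   # 5) commit
```

The important detail is that the client uses memcached's `add` command
rather than `set`: `add` only succeeds when the key is absent, which is
what lets the "clients add, database deletes" policy hold.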
The update case being:
1) client begins transaction
2) client updates data
3) database deletes record from memcache
4) client commits transaction
5) client adds data to memcache
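The update flow can be sketched the same way, under the same stand-ins
(dicts for the cache and the database); the database-side delete of
step 3, which in practice would come from something like a pgmemcache
trigger, is simulated inline.

```python
cache = {}

def update_user(db, user_id, new_name):
    key = "user:%d" % user_id
    # 1-2) begin transaction, update the row
    db[user_id]["name"] = new_name
    # 3) the database deletes the record from memcache
    cache.pop(key, None)
    # 4) commit
    # 5) client adds the fresh value back to memcache
    cache[key] = dict(db[user_id])
```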
> The second is
> where the cache update happens and the commit later fails, or the
> commit happens and the cache update never happens.
Having pgmemcache delete, rather than replace, data addresses this
second issue.
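A sketch of why delete beats replace when the commit fails: with
replace, a cache write that precedes a failed commit leaves a value
the database never accepted; with delete, the worst case is a cache
miss that sends the next reader back to the database. Same stand-ins
as before (dicts for the cache and database; the failure is forced
with an exception).

```python
cache = {}
db = {1: {"id": 1, "name": "alice"}}

def update_with(policy, user_id, new_name):
    key = "user:%d" % user_id
    if policy == "replace":
        # cache written before the commit
        cache[key] = {"id": user_id, "name": new_name}
    else:
        # cache key deleted before the commit
        cache.pop(key, None)
    raise RuntimeError("commit failed")  # transaction rolls back

cache["user:1"] = dict(db[1])
try:
    update_with("replace", 1, "bob")
except RuntimeError:
    pass
# replace: cache now says "bob" while the database still says "alice"

cache["user:1"] = dict(db[1])
try:
    update_with("delete", 1, "bob")
except RuntimeError:
    pass
# delete: the key is simply gone; the next read repopulates from the db
```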