Re: stored proc and inserting hundreds of thousands of rows

From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: stored proc and inserting hundreds of thousands of rows
Date: 2011-05-01 23:31:15
Message-ID: 4DBDED43.4050303@2ndQuadrant.com

On 04/30/2011 09:00 PM, Samuel Gendler wrote:
> Some kind of in-memory cache of doc/ad mappings which the ad server
> interacts with will serve you in good stead and will be much easier to
> scale horizontally than most relational db architectures lend
> themselves to...Even something as simple as a process that pushes the
> most recent doc/ad mappings into a memcache instance could be
> sufficient - and you can scale your memcache across as many hosts as
> is necessary to deliver the lookup latencies that you require no
> matter how large the dataset.

Many of the workloads I see people switching over to NoSQL key/value
stores would, on the performance side, be served equally well by a
memcache layer between the application and the database. If you can map
the problem onto key/value pairs for NoSQL, you can almost certainly do
the same in a layer above PostgreSQL instead.
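To make the idea concrete, here is a minimal sketch of that cache-aside
pattern for the doc/ad mapping case. Plain dicts stand in for the
memcached instance and the PostgreSQL store, and the names (get_ad,
db_lookup_ad) are hypothetical, not from any particular library:

```python
# Cache-aside sketch: consult the key/value cache first, fall back to
# the database on a miss, then populate the cache for later requests.
# Dicts stand in for a memcache client and a DB connection here.

cache = {}                       # stand-in for a memcached instance
database = {"doc:42": "ad:7"}    # stand-in for the relational store


def db_lookup_ad(doc_key):
    """Hypothetical stand-in for a SQL query against PostgreSQL."""
    return database.get(doc_key)


def get_ad(doc_key):
    """Return the ad mapped to doc_key, checking the cache first."""
    ad = cache.get(doc_key)
    if ad is None:               # cache miss: go to the database
        ad = db_lookup_ad(doc_key)
        if ad is not None:
            cache[doc_key] = ad  # warm the cache for the next request
    return ad
```

The first lookup for a key hits the database; repeat lookups are served
entirely from the cache, which is the part that scales horizontally.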

The main downside, and what people seem to object to, is that this
leaves two pieces of software to maintain, where the NoSQL solutions
need only one. If you also have more complicated queries to run,
though, the benefit of using a more capable database should outweigh
that extra complexity.

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
