
Re: about multiprocessing mass data

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "superman0920" <superman0920(at)gmail(dot)com>, "pgsql-admin" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: about multiprocessing mass data
Date: 2012-04-04 14:11:04
Lists: pgsql-admin
superman0920 <superman0920(at)gmail(dot)com> wrote:
> I have a table with 8,500,000 rows, and I run 30 threads to
> update its records.
> The database processes the data very slowly: each thread takes
> 260 s to update 1000 rows.
> How can I configure the database to make processing faster?

Performance issues are best addressed on the pgsql-performance list,
not pgsql-admin.  Before posting there, please read the following
page so that you can post enough information for people to make
useful suggestions:
For perspective, in benchmarks on my own machines I have seen
complex data-modifying transactions running at 3000 transactions per
second, and we have production systems applying millions of complex
transactions per day against tables with hundreds of millions of
rows while serving web applications running tens of millions of
queries.  So, my first thought is to wonder what is different
about your environment, and which of those differences might be
causing the problem.  To figure that out, I need to know more
about your setup.
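For perspective on the kind of difference that can matter here: one common cause of update rates this slow is committing each row in its own transaction, which forces a WAL flush per row on a real database. The sketch below illustrates the batching principle with Python's stdlib sqlite3 rather than PostgreSQL, so it runs anywhere; the table name `t` and the workload are hypothetical stand-ins, and this is not the poster's actual schema. (On an in-memory database the timing gap is small; on a disk-backed database with per-commit fsync it is typically dramatic.)

```python
import sqlite3
import time

# Hypothetical stand-in table: sqlite3 instead of PostgreSQL so the
# sketch is self-contained.  The principle -- grouping many single-row
# UPDATEs into one transaction instead of one commit per row -- carries
# over to any database that flushes the log on commit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, 0)", [(i,) for i in range(1000)])
conn.commit()

def update_per_row_commit(ids):
    # One transaction per row: on a real database, one log flush each.
    for i in ids:
        conn.execute("UPDATE t SET val = val + 1 WHERE id = ?", (i,))
        conn.commit()

def update_batched(ids):
    # All updates inside a single transaction: one commit total.
    for i in ids:
        conn.execute("UPDATE t SET val = val + 1 WHERE id = ?", (i,))
    conn.commit()

ids = list(range(1000))
t0 = time.perf_counter()
update_per_row_commit(ids)
t1 = time.perf_counter()
update_batched(ids)
t2 = time.perf_counter()
print(f"per-row commits: {t1 - t0:.4f}s, one batched commit: {t2 - t1:.4f}s")
```

Lock contention between the 30 threads updating the same table is another candidate, which is why the environment details requested above matter.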

