
Re: Re(2): Re(2): db server is too slow, real slow

From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: pgsql-admin <pgsql-admin(at)fc(dot)emc(dot)com(dot)ph>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Re(2): Re(2): db server is too slow, real slow
Date: 2000-10-20 14:50:51
Message-ID: Pine.BSF.4.10.10010200739460.96077-100000@megazone23.bigpanda.com
Lists: pgsql-admin
On Fri, 20 Oct 2000, pgsql-admin wrote:

> However, I got one problem (see below)... but when I started the psql
> monitor again, it did not appear. I'm worried that it would happen again.
> 
> applieddata=> SELECT COUNT(*) FROM office, office_application
> applieddata-> WHERE office.code = office_application.office_code
> applieddata-> AND office_application.active
> applieddata-> AND NOT office.deleted;
> NOTICE:  Message from PostgreSQL backend:
>         The Postmaster has informed me that some other backend died
> abnormally and possibly corrupted shared memory.
>         I have rolled back the current transaction and am going to
> terminate your database system connection and exit.
>         Please reconnect to the database system and repeat your query.
> pqFlush() -- backend closed the channel unexpectedly.
>         This probably means the backend terminated abnormally before or
> while processing the request.

This probably means another backend crashed while your connection was
open.  In general, I'd suggest removing -S from the postmaster command
line and redirecting STDOUT/STDERR to a file so you can get at the log
information.  You can also try looking in your database directory
(<pgdata>/base/<database>) for a core file.
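
Just as a rough sketch (the data directory path and the other options
below are placeholders, not taken from your setup), a postmaster
invocation without -S that keeps the log in a file might look like:

postmaster -D /usr/local/pgsql/data -i > /usr/local/pgsql/server.log 2>&1 &

Then the next time a backend dies you should at least have its error
output sitting in server.log.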

> 
> >
> >> Here is just one query executed from Embperl pages with Perl DBI:
> >> officedata=> explain SELECT * FROM office, office_application
> >> officedata-> WHERE office.code = office_application.office_code
> >> officedata-> AND office_application.active
> >> officedata-> AND NOT office.deleted;
> >> NOTICE:  QUERY PLAN:
> >> 
> >> Hash Join  (cost=5524.86 rows=644 width=262)
> >>   ->  Seq Scan on office_application  (cost=4261.49 rows=9767 width=245)
> >>   ->  Hash  (cost=908.88 rows=854 width=17)
> >>         ->  Seq Scan on office  (cost=908.88 rows=854 width=17)
> >> 
> >> note: office.code and office_application.office_code are serial data types.
> >>       office_application.active and office.deleted are boolean.
> >>       others are char, inet, oid (for lobj), and date data types.
> >
> >How many rows does the query actually give you, 
> applieddata=> SELECT COUNT(*) FROM office, office_application
> applieddata-> WHERE office.code = office_application.office_code
> applieddata-> AND office_application.active
> applieddata-> AND NOT office.deleted;
> count
> -----
> 12534
> 
> >and how many rows in office_application have active true 
> applieddata=> SELECT COUNT(*) FROM office_application
> applieddata-> WHERE office_application.active;
> count
> -----
> 12973
> 
> >and how many rows in office have deleted false?  
> applieddata=> SELECT COUNT(*) FROM office
> applieddata-> WHERE NOT office.deleted;
> count
> -----
> 12534

Well, it's badly misestimating the number of rows for office, although
that would probably only make it *more* likely to do a seq scan if it
thought 12000 rows were going to match.  I'm assuming these rows make up
the largest portion of these tables.  One thing to try would be a
vacuum analyze to see if that makes any difference.
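
For what it's worth, using the table names from your queries, that would
just be something like:

applieddata=> VACUUM ANALYZE office;
applieddata=> VACUUM ANALYZE office_application;

and then re-running the EXPLAIN should show whether the row estimates
(and maybe the plan) change.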



