Re: Error inserting a lot of records

From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: Ronan Lucio <ronanl(at)melim(dot)com(dot)br>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Error inserting a lot of records
Date: 2002-08-01 17:36:19
Message-ID: 20020801103441.O29944-100000@megazone23.bigpanda.com
Lists: pgsql-general

On Thu, 1 Aug 2002, Ronan Lucio wrote:

> We have a FreeBSD-4.3 box with Postgresql-7.0.
>
> We also have a program that reads a txt file and
> inserts the data into a Postgres database.
>
> The system works fine but, many times, when I try to
> insert a lot of records (about 500 records), it gives
> me an error, and the system only accepts inserts
> again after a vacuum.
>
> When it happens, the Python script shows me the following error:
>
> Traceback (innermost last):
>   File "/usr/local/www/cgi-bin/admin/listalocimp.py", line 172, in ?
>     foundfilme = fclib.query ("select cod, titulo from filme where
>       lower(titpesq) like lower('%%%s%%')" % (linhaseek))
>   File "fclib.py", line 34, in query
> _pg.error: pqReadData() -- backend closed the channel unexpectedly.
>   This probably means the backend terminated abnormally before or
>   while processing the request.
>
> Looking for some thing weird in pgsql.log, I've found this:
>
> Sorry, too many clients already
> Sorry, too many clients already
> Sorry, too many clients already
> Sorry, too many clients already
> Sorry, too many clients already
> Sorry, too many clients already

That looks just like you're exceeding the specified number of backends.
You probably also want to look in the log for something about a backend
exiting with a signal (probably 11).

In general, you should probably consider upgrading and seeing whether it
still occurs, since 7.0 is two versions out of date.
