From: andrea gelmini <andrea(dot)gelmini(at)linux(dot)it>
To: pgsql-bugs(at)postgresql(dot)org
Subject: ehm...
Date: 2002-01-04 12:13:02
Message-ID: 20020104121302.GA22232@gelma.lugbs.linux.it
Lists: pgsql-bugs

Well, I will be a little verbose, but I am afraid this could be a big bug
in PostgreSQL, so...

I've made a script in Python that, via the psycopg module, puts the freecddb
archive into a relational DB. It does a few SELECTs and a lot of INSERTs for
each freecddb file (something like 500,000 files).
The process is slow (most of the time is spent reading small files, parsing,
and generating the right queries), so I have tried it just 8 times, and now
I need to know if I am doing something wrong or if it is a real PostgreSQL
problem (having seen the known problem with duplicate keys, and so on).
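Just to give an idea, the core of the script is something like this (a much
simplified sketch: the path, the table and column names, and the parsing here
are made up for illustration, the real script does more):

    import os
    import psycopg

    ARCHIVE = '/archive/freecddb'   # made-up path to the freecddb files

    def parse_cddb_file(path):
        # made-up, very simplified parser: freecddb entries are roughly
        # "KEY=value" text lines, so just collect them into a dict
        fields = {}
        for line in open(path):
            if '=' in line and not line.startswith('#'):
                key, value = line.split('=', 1)
                fields[key.strip()] = value.strip()
        return fields

    conn = psycopg.connect('dbname=freecddb')
    curs = conn.cursor()

    for name in os.listdir(ARCHIVE):        # ~500,000 small files
        fields = parse_cddb_file(os.path.join(ARCHIVE, name))
        # in a freecddb entry, DTITLE is "artist / title"
        artist = fields.get('DTITLE', '').split(' / ')[0]

        # a few SELECTs per file, to deduplicate related rows...
        curs.execute('SELECT id FROM author WHERE name = %s', (artist,))
        # ...and a lot of INSERTs
        if curs.fetchone() is None:
            curs.execute('INSERT INTO author (name) VALUES (%s)', (artist,))
        conn.commit()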

Well, just to make this a little bit more of a bug report:

os: linux 2.4.18pre1 (also tried with 2.4.17)
distribution: debian unstable
gcc: gcc version 3.0.3 (also tried with 3.0.2)
python: 2.1
psycopg: http://packages.debian.org/unstable/non-us/python2.1-psycopg.html
postgresql: various CVS versions over the last 8 days

Now, what happens is simple:
I run my script and, after a lot of inserts (the error at the bottom appeared
after 76,242 files parsed), if I run vacuumdb I get various errors.
For example:

vacuumdb -a -v -f -z

NOTICE: Analyzing author
FATAL 2: open of /home/postgres/db/pg_clog/0000 failed: No such file or directory
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost
vacuumdb: vacuum freecddb failed

I've got errors like this running vacuum with and without '-f', both with the
DB working and with the DB doing nothing else than the vacuum.
Sometimes I've got duplicate-key errors (an already known problem), but at the
time I didn't cut & paste the error, believing it was my fault due to some
mistake in my schema.

Now, I can investigate more deeply, but I need your opinion on whether it is
worth doing or not. As I said, it takes a long time to reproduce this (and
maybe I'm doing something wrong).

Thanks a lot for your work,
andrea

N.B.: if I run other DBs on the same postgres I get no problems, but they are
a lot smaller and have few read accesses and even fewer write accesses per day
(3-4 inserts, without foreign keys, and a hundred selects).
