Re: Problem retrieving large records (bytea) data from a table

From: pasman pasmański <pasman(dot)p(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Problem retrieving large records (bytea) data from a table
Date: 2011-07-20 14:31:45
Message-ID: CAOWY8=ZMJO_GQjk-+MsqKecD-e_XBnXHmwaDDboF+4Yd+3JSTA@mail.gmail.com
Lists: pgsql-admin
You could take a backup (dump) of this table, then use UltraEdit to
search the dump for the oversized documents and remove them.
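Alternatively, the oversized rows can usually be found and removed in
SQL without editing a dump by hand. A sketch, assuming the column names
from the table definition quoted below (id, doc_data); the 1 MB slice
size is arbitrary:

```sql
-- List the largest documents without fetching the bytea contents:
SELECT id, octet_length(doc_data) AS bytes
FROM doc_table
ORDER BY octet_length(doc_data) DESC
LIMIT 10;

-- Delete (or NULL out) an offending row directly:
-- DELETE FROM doc_table WHERE id = ...;

-- To read a huge document without materializing it all at once,
-- fetch it in slices with substring() on bytea:
SELECT substring(doc_data FROM 1 FOR 1048576)  -- first 1 MB
FROM doc_table
WHERE id = ...;
```

octet_length() and substring() only touch the TOASTed value as needed,
so these queries should not hang the way a full select * does.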

2011/7/5, jtkells(at)verizon(dot)net <jtkells(at)verizon(dot)net>:
> I am getting a hang every time I try to retrieve large (bytea)
> records from a table.
> The OS is Solaris (5.11 snv_134 i86pc) with 4GB of memory,
> running PostgreSQL 8.4.3 with a standard postgresql.conf file (nothing
> has been changed).
> I have the following table called doc_table
>      Column    |              Type              |        Modifiers          | Storage
> ---------------+--------------------------------+---------------------------+----------
>  id            | numeric                        | not null                  | main
>  file_n        | character varying(4000)        |                           | extended
>  create_date   | timestamp(6) without time zone | not null default          | plain
>                |                                |   (clock_timestamp())     |
>                |                                |   ::timestamp(0)          |
>                |                                |   without time zone       |
>  desc          | character varying(4000)        |                           | extended
>  doc_cc        | character varying(120)         | not null                  | extended
>  by            | numeric                        | not null                  | main
>  doc_data      | bytea                          |                           | extended
>  mime_type_id  | character varying(16)          | not null                  | extended
>  doc_src       | text                           |                           | extended
>  doc_stat      | character varying(512)         | not null default          | extended
>                |                                |   'ACTIVE'::character     |
>                |                                |   varying                 |
> Indexes:
>    "documents_pk" PRIMARY KEY, btree (document_id)
>
>
> A while ago some developers inserted several records whose documents
> (stored in doc_data) were around 400 - 450 MB each. Now
> when you do a select * (all) from this table, you get a hang and the
> system becomes unresponsive.  Prior to these inserts, a select * (all,
> no where clause) worked.  I'm also told a select * from doc_table
> where id = xxx still works.  I haven't seen any error messages in the
> postgresql log files.
> So I'm not sure how to find these bad records or why I am getting a
> hang.  Since this postgresql is running with the default config files,
> could I be running out of a resource?  If so, I'm not sure how, or by
> how much, to increase these resources to fix this problem, since I have
> very little memory on this system.  Does anyone have any ideas why I
> am getting a hang?  Thanks
>
> --
> Sent via pgsql-admin mailing list (pgsql-admin(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>


-- 
------------
pasman

