Re: Problem retrieving large records (bytea) data from a table

From: Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Problem retrieving large records (bytea) data from a table
Date: 2011-07-20 15:08:49
Message-ID: 201107201808.49293.achill@matrix.gatewaynet.com
Lists: pgsql-admin

He could use something like this to get an idea of the sizes:
SELECT count(*),
       CASE WHEN length(doc_data) < 50000000 THEN '<=50 MB'
            WHEN length(doc_data) < 100000000 THEN '<=100 MB'
            ELSE '>100 MB' END
FROM doc_table GROUP BY 2;

and then, based on the above, run finer queries to find the large rows; see the sketch below.
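
For instance, something along these lines (just a sketch, reusing the id and
doc_data columns from the table shown further down; adjust the threshold as
needed) would list the offending rows together with their sizes in bytes:

SELECT id, length(doc_data) AS doc_bytes
  FROM doc_table
 WHERE length(doc_data) > 100000000
 ORDER BY doc_bytes DESC;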

However, I don't know whether cursor-based queries (like the ones used by JDBC) would be affected by the size;
normally they would not be. Querying this way from psql, though, will definitely be slow.
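
If someone does have to pull the large rows through psql, an explicit cursor
(roughly what the JDBC driver does when a fetch size is set) at least avoids
materializing the whole result set at once. A minimal sketch, with a made-up
cursor name:

BEGIN;
DECLARE doc_cur CURSOR FOR SELECT id, doc_data FROM doc_table;
FETCH 10 FROM doc_cur;   -- repeat until no more rows come back
CLOSE doc_cur;
COMMIT;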

On Wednesday 20 July 2011 17:31:45, pasman pasmański wrote:
> You may do a backup of this table. Then with ultraedit search your
> documents and remove them.
>
> 2011/7/5, jtkells(at)verizon(dot)net <jtkells(at)verizon(dot)net>:
> > I am having a hang condition every time I try to retrieve large
> > records' (bytea) data from a table.
> > The OS is Solaris 5.11 snv_134 (i86pc) with 4 GB of memory,
> > running PostgreSQL 8.4.3 with a standard postgresql.conf file (nothing
> > has been changed).
> > I have the following table called doc_table
> >     Column    |              Type              |                              Modifiers                               | Storage  | Description
> >  -------------+--------------------------------+-----------------------------------------------------------------------+----------+-------------
> >   id           | numeric                        | not null                                                              | main     |
> >   file_n       | character varying(4000)        |                                                                       | extended |
> >   create_date  | timestamp(6) without time zone | not null default (clock_timestamp())::timestamp(0) without time zone | plain    |
> >   desc         | character varying(4000)        |                                                                       | extended |
> >   doc_cc       | character varying(120)         | not null                                                              | extended |
> >   by           | numeric                        | not null                                                              | main     |
> >   doc_data     | bytea                          |                                                                       | extended |
> >   mime_type_id | character varying(16)          | not null                                                              | extended |
> >   doc_src      | text                           |                                                                       | extended |
> >   doc_stat     | character varying(512)         | not null default 'ACTIVE'::character varying                          | extended |
> > Indexes:
> > "documents_pk" PRIMARY KEY, btree (document_id)
> >
> >
> > A while ago some developers inserted several records, each with a
> > document (stored in doc_data) of around 400-450 MB. Now
> > when you do a select * (all) from this table you get a hang and the
> > system becomes unresponsive. Prior to these inserts, a select * (all,
> > no where clause) worked. I'm also told a select * from doc_table
> > where id = xxx still works. I haven't seen any error message in the
> > postgresql log files.
> > So I'm not sure how to find these bad records and why I am getting a
> > hang. Since this postgresql is running with the default config files
> > could I be running out of a resource? If so I'm not sure how to or
> > how much to add to these resources to fix this problem since I have
> > very little memory on this system. Does anyone have any ideas why I
> > am getting a hang? Thanks.
> >
> > --
> > Sent via pgsql-admin mailing list (pgsql-admin(at)postgresql(dot)org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-admin
> >
>
>
> --
> ------------
> pasman
>

--
Achilleas Mantzios
