
column size too large, is this a bug?

From: Qing Zhao <qzhao(at)quotefx(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: column size too large, is this a bug?
Date: 2004-03-25 17:58:33
Message-ID: 082D0538-7E86-11D8-8B9C-000A95AB8896@quotefx.net
Lists: pgsql-performance
I have a query which gets data from a single table. When I try to get data for an RFQ which has around 5000 rows, it breaks off at the 18th row. If I reduce some columns, it returns all the rows, and not so slowly. I have tried different sets of columns, and there is no pattern based on which columns are included.

But one thing is sure: once the size of a row grows beyond some number of bytes, the records do not get returned. Now the following query returns all 5001 rows to me pretty fast:


  select
    _level_ as l,
    nextval('seq_pk_bom_detail') as bom_detail,
    prior nextval('seq_pk_bom_detail') as parent_subassembly,
    parent_part_number,
    customer_part_number,
    /* mfr_name,
       mfr_part,
       description, */
    commodity,
    needs_date,
    target_price,
    comments,
    case
      when qty_per is null then 0.00001
      when qty_per = 0 then 0.00001
      else qty_per
    end,
    qty_multiplier1,
    qty_multiplier2,
    qty_multiplier3,
    qty_multiplier4,
    qty_multiplier5
  from bom_detail_work_clean
  where (0=0)
    and bom_header = 20252
    and file_number = 1
  start with customer_part_number = 'Top Assembly 1'
  connect by parent_part_number = prior customer_part_number;
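A note on the qty_per expression: in a simple CASE, each WHEN value is compared with =, so a `when null` arm can never fire (NULL = NULL does not evaluate to true); the null case needs a searched CASE or, equivalently, coalesce/nullif. A minimal, self-contained sketch of the equivalent guard in standard SQL:

```sql
-- Treat a NULL or 0 qty_per as 0.00001:
-- nullif(qty_per, 0) maps 0 to NULL, then coalesce substitutes the default.
select coalesce(nullif(qty_per, 0), 0.00001) as qty_per_safe
from (values (null::numeric), (0), (2.5)) as t(qty_per);
-- returns 0.00001, 0.00001, 2.5
```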


But if I uncomment the description column, it returns only 18 rows:

  select
    _level_ as l,
    nextval('seq_pk_bom_detail') as bom_detail,
    prior nextval('seq_pk_bom_detail') as parent_subassembly,
    parent_part_number,
    customer_part_number,
    /* mfr_name,
       mfr_part, */
    description,
    commodity,
    needs_date,
    target_price,
    comments,
    case
      when qty_per is null then 0.00001
      when qty_per = 0 then 0.00001
      else qty_per
    end,
    qty_multiplier1,
    qty_multiplier2,
    qty_multiplier3,
    qty_multiplier4,
    qty_multiplier5
  from bom_detail_work_clean
  where (0=0)
    and bom_header = 20252
    and file_number = 1
  start with customer_part_number = 'Top Assembly 1'
  connect by parent_part_number = prior customer_part_number;

Now, these 18 rows are the level-2 records in the hierarchical query. I have a feeling the server has some memory paging mechanism, and if it cannot handle more than a certain number of bytes, it just returns what it has.

During your investigation of PostgreSQL optimization, did you come across any setting that might help us?
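One way to test the row-size hunch directly would be to measure the byte width of the suspect columns; a sketch against the same table (octet_length() reports a string's length in bytes):

```sql
-- Rank rows by the byte size of the wide text columns, to see whether
-- the rows that fail to come back are the largest ones.
select customer_part_number,
       octet_length(description) as description_bytes,
       octet_length(comments)    as comments_bytes
  from bom_detail_work_clean
 where bom_header = 20252
   and file_number = 1
 order by description_bytes desc;
```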

Thanks!

Qing

PS: I just reloaded the file after reducing the content in the description column, and the file got uploaded. So it looks like the problem is the size of the record being inserted.

Responses

pgsql-performance by date

Next: From: Tom Lane, Date: 2004-03-25 18:20:32, Subject: Re: column size too large, is this a bug?
Previous: From: markw, Date: 2004-03-25 17:16:40, Subject: Re: [HACKERS] fsync method checking
