From: Charles Martin <ssappeals(at)gmail(dot)com>
To: Daniel Verite <daniel(at)manitou-mail(dot)org>
Cc: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Trouble Upgrading Postgres
Date: 2018-11-06 14:29:55
Message-ID: CAFw6=U0945LmJ2VDM0=uVZv-8VwVkWrG-=KYrYSNA7nphKco2w@mail.gmail.com
Lists: pgsql-general
Thanks, Daniel.
Using your idea, I found the records over 400MB and deleted them in the
application. The largest two were in inactive matters, and the third is
still available elsewhere if needed. I'll try pg_dump again after work
hours and see if it works now. Hopefully it will, now that I've deleted
the largest records.
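
For anyone following along, a query along these lines is enough to list
them (docfileid is just a placeholder for whatever the table's primary
key is; avoid SELECT * here because of the huge column):

SELECT docfileid, octet_length(docfilecontents) AS bytes
FROM docfile
WHERE octet_length(docfilecontents) > 1024*1024*400
ORDER BY bytes DESC;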
Adrian, I'll try changing shared_buffers the next time I can restart
postgres, at least if deleting the largest records and adding VM hasn't
worked.
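
The change itself should just be a one-line edit in postgresql.conf
followed by a full restart; the value below is only illustrative, not a
recommendation for this server:

# postgresql.conf -- shared_buffers only takes effect after a restart
shared_buffers = 2GB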
On Tue, Nov 6, 2018 at 6:47 AM Daniel Verite <daniel(at)manitou-mail(dot)org>
wrote:
> Charles Martin wrote:
>
> > but the second one returned this:
> >
> > 0 "623140"
> > 1 "53"
> > 2 "12"
> > 3 "10"
> > 4 "1"
> > 5 "1"
> > 7 "1"
> > [null] "162"
> >
> > Not quite sure what that means, but if there is just a small number of
> > overly-large records, I might be able to delete them. If I can find them.
>
> The query was:
>
> SELECT octet_length(docfilecontents)/(1024*1024*100),
> count(*)
> FROM docfile
> GROUP BY octet_length(docfilecontents)/(1024*1024*100);
>
> The results above show that there is one document weighing over 700 MB
> (the first column being the multiple of 100MB), one between 500 and
> 600 MB, one between 400 MB and 500 MB, 10 between 300 and 400 MB, and
> so on.
>
> The hex expansion performed by COPY must allocate twice that size,
> plus the rest of the row, and if that resulting size is above 1GB, it
> will error out with the message you mentioned upthread:
> ERROR: invalid memory alloc request size <some value over 1 billion>.
> So there's no way it can deal with the contents over 500MB, and the
> ones just under that limit may also be problematic.
>
> A quick and dirty way of getting rid of these contents would be to
> nullify them. For instance, nullify anything over 400MB:
>
> UPDATE docfile SET docfilecontents=NULL
> WHERE octet_length(docfilecontents) > 1024*1024*400;
>
> Or a cleaner solution would be to delete them with the application if
> that's possible. You may turn the above query into a SELECT that
> retrieves the fields of interest (avoid SELECT * because of the huge
> column).
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>
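
To put numbers on the hex expansion Daniel describes: COPY writes bytea
in hex, so a value of N bytes needs roughly 2*N bytes for the output
text, and the document over 700 MB would need more than 1.4 GB, well
past the 1 GB allocation limit. A query along these lines (same
placeholder assumptions about the docfile table) shows which rows are
at risk:

SELECT docfileid,
       octet_length(docfilecontents) AS raw_bytes,
       octet_length(docfilecontents) * 2 AS hex_bytes
FROM docfile
WHERE octet_length(docfilecontents) * 2 > 1024*1024*1024;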