From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Mark Lai <mark(dot)lai(at)integrafec(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #15333: pg_dump error on large table -- "pg_dump: could not stat file...Unknown error"
On Fri, Aug 17, 2018 at 10:53:11AM -0400, Tom Lane wrote:
> Mark Lai <mark(dot)lai(at)integrafec(dot)com> writes:
>> I ran the dump on the large table with no jobs flag and got the same error.
>> The dump was successful on a small table.
> Weird indeed. Can any Windows developers reproduce this and poke into it?
> I have a sneaking suspicion that this is related to Windows' known issues
> with concurrently-opened files, but it's pretty hard to see why there
> would be a dependency on the size of the file.
When it comes to pg_dump, the error message reported seems to come from
src/common/file_utils.c, in walkdir when processing links. On Windows
we map lstat() to stat(), which is itself pgwin32_safestat().
If you use pg_dump --no-sync, the error could be bypassed, but that's
hardly a fix. This could be a failure in GetFileAttributesEx(). Which
file system are you using?