From: | Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Jaime Soler <jaime(dot)soler(at)gmail(dot)com>, Dave Cramer <pg(at)fastcrypt(dot)com>, Brad DeJong <bpd0018(at)gmail(dot)com>, List <pgsql-jdbc(at)postgresql(dot)org> |
Subject: | Re: ResultSet storing all rows with defaulftFetchSize to 5000 |
Date: | 2018-02-12 19:42:25 |
Message-ID: | CAB=Je-EWAhFWnogg3f7eMLRJThuPW4xaF6SZkB3dRmeFOe0kpQ@mail.gmail.com |
Lists: | pgsql-jdbc |
Tom>So if that's what you want, don't commit the transaction.
What are the options then?
I would argue users expect cursors to keep working across commits, and they expect
the database to stream the data instead of materializing it.
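For reference, this is the streaming behaviour pgJDBC already gives within a single
transaction: with autocommit off and a non-zero fetch size, the driver pulls rows
from a portal in batches instead of materializing the whole result on the client.
A rough sketch (the URL, table name and process() helper are made up for illustration):

import java.sql.*;

public class StreamingFetchSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {
            con.setAutoCommit(false);      // required for cursor-based fetching in pgJDBC
            try (Statement st = con.createStatement()) {
                st.setFetchSize(5000);     // fetch 5000 rows per round trip, not everything at once
                try (ResultSet rs = st.executeQuery("select * from big_table")) {
                    while (rs.next()) {
                        process(rs);       // rows arrive in batches, nothing is kept client-side
                    }
                }
            }
            con.commit();                  // commit after the result set is fully consumed
        }
    }

    private static void process(ResultSet rs) throws SQLException {
        // application-specific row handling
    }
}

The question under discussion is what should happen when commit() is called while such
a ResultSet is still open.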
Tom>so that the rows the cursor still needed to fetch
Tom>would not get cleaned by VACUUM
That is understandable.
Tom>If we didn't materialize, then we'd
Tom>have to retain the locks held by the transaction
It is not clear why the locks would have to be retained. Transaction commit should
release the locks, shouldn't it?
Tom>nor would the tables containing them get
Tom>dropped or significantly altered
One does not usually drop a table that is being selected from, but when (s)he does, it
happens in production. What I mean is that I do not see why "committing a
transaction" should impose restrictions on dropping or altering the table.
It looks like all of those drops/alters can already happen with the current implementation
(e.g. one connection starts a "select", and another connection tries to
drop/alter the table). I expect "fetch across commit" to work exactly like
a cursor opened from a separate connection with its own transaction.
----
What if 1000 sessions open an innocent "select * from big_table" WITH
HOLD cursor (e.g. to process the table in a streaming fashion)? That would
easily fill all the disk space on the server side.
In my experience, "fetch across commit" is typically used to process data
in batches. That is, the cursor provides IDs to be processed, and the
transaction is committed from time to time to avoid having to redo
everything from scratch in case the transaction fails.
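A rough sketch of that pattern through pgJDBC, using an explicit WITH HOLD cursor
(the URL, table/column names and processId() are made up, and error handling is omitted):

import java.sql.*;

public class BatchOverWithHoldCursorSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // WITH HOLD keeps the cursor usable after COMMIT
                st.execute("declare ids cursor with hold for select id from big_table");
                boolean more = true;
                while (more) {
                    more = false;
                    try (ResultSet rs = st.executeQuery("fetch 1000 from ids")) {
                        while (rs.next()) {
                            more = true;
                            processId(rs.getLong(1)); // application-specific work for each id
                        }
                    }
                    con.commit(); // commit the partial work; the cursor keeps its position
                }
                st.execute("close ids");
                con.commit();
            }
        }
    }

    private static void processId(long id) {
        // application-specific processing
    }
}

Each commit releases the locks and persists the batch while the cursor keeps its
position, which is exactly the behaviour I would like to see for a plain ResultSet as well.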
Commit is needed, well, to commit the (partial) work. Of course, locks are
released; I just do not expect the cursor to disappear.
A long-running cursor makes it possible to process the data serially, and it allows
using just a simple table with no additional indices.
I do understand it is a bad idea to keep a long-running "with hold" cursor
on a table that is actively changing. However, that would be awful anyway,
so I do not expect sane applications to do that kind of thing (even against
Oracle DB).
Vladimir