From: Vladimir Ryabtsev <greatvovan(at)gmail(dot)com>
To: psycopg(at)postgresql(dot)org
Subject: Memory
Date: 2024-12-21 10:45:45
Message-ID: CAMqTPqn8y8Y+uDY0FPvX6ghD1DftLyz2nD6n6HhGOg-gHP4JdA@mail.gmail.com
Lists: psycopg
Hi community,
I am reading a big dataset using code similar to this:
import psycopg

query = '''
    SELECT timestamp, data_source, tag, agg_value
    FROM my_table
'''
batch_size = 10_000_000

with psycopg.connect(cs, cursor_factory=psycopg.ClientCursor) as conn:
    with conn.cursor('my_table') as cur:
        cur.execute(query)
        while True:
            rows = cur.fetchmany(batch_size)
            if not rows:
                break
            # ... process the batch
The code runs on a Databricks node, if that matters, and the psycopg
version is the latest.
I found that despite fetching in batches, memory consumption grows
continuously across the loop iterations and eventually the node goes
OOM. My code does not keep any references to the fetched rows, so it
might be something internal to the library.
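For what it is worth, here is a minimal sketch of how I observe the growth per batch (it assumes psutil is installed; the measurement code is only illustrative and not part of my actual job):

import os
import psutil  # assumed available; used only to report process memory

proc = psutil.Process(os.getpid())
batch_no = 0
while True:
    rows = cur.fetchmany(batch_size)
    if not rows:
        break
    batch_no += 1
    # with ClientCursor the RSS keeps climbing; with ServerCursor it stays flat
    print(f'batch {batch_no}: rss = {proc.memory_info().rss / 2**20:.0f} MiB')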
If I change the factory to ServerCursor, the issue goes away: memory does
not grow after the first iteration.
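For completeness, this is roughly the variant that keeps memory flat (a sketch; if I read the docs right, giving the cursor a name makes psycopg use the connection's server_cursor_factory, which is ServerCursor by default):

with psycopg.connect(cs) as conn:
    with conn.cursor(name='my_table') as cur:  # named cursor => server-side
        cur.execute(query)
        while True:
            rows = cur.fetchmany(batch_size)  # only this batch is held client-side
            if not rows:
                break
            # ... process the batch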
I looked through the documentation, but did not find specifics about the
performance differences between server-side and client-side cursors.
I am fine with ServerCursor, but I need to ask: is it by design that with
ClientCursor the whole result set is copied into memory despite the
fetchmany() limit? A client-side cursor is the default, so it may be worth
documenting the difference (sorry if I missed that).
Thank you.