BUG #15700: PG 10 vs. 11: Large increase in memory usage when selecting BYTEA data (maybe memory leak)

From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: mo(at)otterbach(dot)eu
Subject: BUG #15700: PG 10 vs. 11: Large increase in memory usage when selecting BYTEA data (maybe memory leak)
Date: 2019-03-18 09:44:34
Message-ID: 15700-8c408321a87d56bb@postgresql.org
Lists: pgsql-bugs

The following bug has been logged on the website:

Bug reference: 15700
Logged by: Matthias Otterbach
Email address: mo(at)otterbach(dot)eu
PostgreSQL version: 11.2
Operating system: Ubuntu 18.04.2 LTS
Description:

I am currently testing an application that previously ran on PostgreSQL 10 against the current PostgreSQL 11.2 release. During the tests I experienced out-of-memory errors in the PostgreSQL database that I could not explain (one session runs out of memory, which causes the server to terminate all connections). They seem to affect tables containing BYTEA data (many rows; some have no content at all in the BYTEA column, some use up to 25 MB).

I was able to narrow the problem down to a small example query that shows much higher memory usage with PostgreSQL 11 than with previous versions (I also tested 10.7).

For my comparison I worked on a local VM with only 4 GB of memory configured, a plain Ubuntu 18.04.2 installation (including all updates as of last week) and the current PostgreSQL 10.7 and 11.2 packages installed from http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/. Almost all configuration options are at their defaults (at least on my testing VM), e.g. work_mem = 4MB and shared_buffers = 128MB. I created a table containing a large amount of BYTEA data using these statements:

> create table public.a (b bytea);
> insert into public.a select repeat('0', 1024 * 1024 * 100)::bytea from generate_series(1, 300);
> select pg_size_pretty(sum(length(b))) from public.a;
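The same numbers should also be visible from the server side; assuming the default settings mentioned above, something like the following can be used to confirm them and to check the on-disk size of the table (including the TOAST storage that holds the large bytea values):

> show work_mem;
> show shared_buffers;
> select pg_size_pretty(pg_total_relation_size('public.a'));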

In total I now have ~29 GB of data (300 rows of 100 MB each) and start selecting all of it with a small Java program (current JDBC driver 42.2.5):

String sql = "SELECT b FROM public.a";
try (Connection connection = DriverManager.getConnection(
        "jdbc:postgresql://127.0.0.1:5432/postgres?currentSchema=public",
        "username", "password")) {
    connection.setAutoCommit(false); // needed so the driver honors the fetch size and streams rows
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setFetchSize(5); // we do not want to run out of Java heap space
        try (ResultSet rs = ps.executeQuery()) {
            int i = 0;
            while (rs.next()) {
                // just loop over all data and do something with it: print a line every
                // 10 rows containing the length (the other conditions are never true
                // with my test data)
                i++;
                byte[] b = rs.getBytes(1);
                if (i % 10 == 0 || b == null || b.length <= i) {
                    System.err.println("Row " + i + ": " + (b != null ? b.length : null));
                }
            }
        }
    }
}

The Java program just executes "SELECT b FROM public.a" and keeps streaming rows, doing something with the content. The execution plan for the query is also very simple: it is just a sequential scan (on both versions).
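A roughly equivalent reproduction without the JDBC driver should be possible in psql by fetching from an explicit cursor in small batches. This only approximates the driver's behavior (with autocommit off and a fetch size set, the driver fetches through a portal of the extended protocol rather than DECLARE CURSOR), but it keeps the same row-at-a-time access pattern:

> begin;
> declare c no scroll cursor for select b from public.a;
> fetch 5 from c;  -- repeat this until it returns no rows
> close c;
> commit;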

With PostgreSQL 10.7 the program ran through fine, with plenty of free memory on my database VM: including the OS, never more than 1 GB was in use. With PostgreSQL 11.2 the memory of the postgres process serving my session keeps increasing, and the backend finally crashes after I have fetched only about 8 GB of the data (about 80 rows):

2019-03-13 10:05:03.386 UTC [1303] LOG: database system is ready to accept
connections
2019-03-13 10:05:03.927 UTC [1311] [unknown](at)[unknown] LOG: incomplete
startup packet
TopMemoryContext: 67424 total in 5 blocks; 12656 free (10 chunks); 54768
used
TableSpace cache: 8192 total in 1 blocks; 2096 free (0 chunks); 6096
used
TopTransactionContext: 8192 total in 1 blocks; 7744 free (1 chunks); 448
used
pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1456 free
(0 chunks); 6736 used
RowDescriptionContext: 8192 total in 1 blocks; 6896 free (0 chunks); 1296
used
MessageContext: 8192 total in 1 blocks; 6752 free (1 chunks); 1440 used
Operator class cache: 8192 total in 1 blocks; 560 free (0 chunks); 7632
used
smgr relation table: 16384 total in 2 blocks; 4600 free (2 chunks); 11784
used
TransactionAbortContext: 32768 total in 1 blocks; 32512 free (0 chunks);
256 used
Portal hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used
TopPortalContext: 8192 total in 1 blocks; 7392 free (0 chunks); 800 used
PortalContext: 1024 total in 1 blocks; 552 free (0 chunks); 472 used:
C_1
ExecutorState: 4294976384 total in 17 blocks; 4080 free (0 chunks);
4294972304 used
printtup: 314581120 total in 3 blocks; 7936 free (8 chunks);
314573184 used
ExprContext: 8192 total in 1 blocks; 7936 free (0 chunks); 256
used
PortalContext: 1024 total in 1 blocks; 744 free (1 chunks); 280 used:
Relcache by OID: 16384 total in 2 blocks; 3512 free (2 chunks); 12872
used
CacheMemoryContext: 524288 total in 7 blocks; 133056 free (5 chunks);
391232 used
index info: 2048 total in 2 blocks; 680 free (1 chunks); 1368 used:
pg_toast_16384_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_index_indrelid_index
CachedPlan: 2048 total in 2 blocks; 200 free (0 chunks); 1848 used:
SELECT b FROM public.a
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_statistic_ext_relid_index
CachedPlanSource: 4096 total in 3 blocks; 2296 free (5 chunks); 1800
used: SELECT b FROM public.a
unnamed prepared statement: 8192 total in 1 blocks; 3664 free (10
chunks); 4528 used
CachedPlan: 1024 total in 1 blocks; 264 free (0 chunks); 760 used:
BEGIN
index info: 2048 total in 2 blocks; 728 free (2 chunks); 1320 used:
pg_db_role_setting_databaseid_rol_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_opclass_am_name_nsp_index
index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used:
pg_foreign_data_wrapper_name_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_enum_oid_index
index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used:
pg_class_relname_nsp_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_foreign_server_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_publication_pubname_index
index info: 2048 total in 2 blocks; 592 free (3 chunks); 1456 used:
pg_statistic_relid_att_inh_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_cast_source_target_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_language_name_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_transform_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_collation_oid_index
index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used:
pg_amop_fam_strat_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_index_indexrelid_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_ts_template_tmplname_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_ts_config_map_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_opclass_oid_index
index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used:
pg_foreign_data_wrapper_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_event_trigger_evtname_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_statistic_ext_name_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_publication_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_ts_dict_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_event_trigger_oid_index
index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used:
pg_conversion_default_index
index info: 3072 total in 2 blocks; 1216 free (3 chunks); 1856 used:
pg_operator_oprname_l_r_n_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_trigger_tgrelid_tgname_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_enum_typid_label_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_ts_config_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_user_mapping_oid_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_opfamily_am_name_nsp_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_foreign_table_relid_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_type_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_aggregate_fnoid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_constraint_oid_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_rewrite_rel_rulename_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_ts_parser_prsname_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_ts_config_cfgname_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_ts_parser_oid_index
index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used:
pg_publication_rel_prrelid_prpubid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_operator_oid_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_namespace_nspname_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_ts_template_oid_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_amop_opr_fam_index
index info: 2048 total in 2 blocks; 672 free (3 chunks); 1376 used:
pg_default_acl_role_nsp_obj_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_collation_name_enc_nsp_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_publication_rel_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_range_rngtypid_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_ts_dict_dictname_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_type_typname_nsp_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_opfamily_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_statistic_ext_oid_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_class_oid_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_proc_proname_args_nsp_index
index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used:
pg_partitioned_table_partrelid_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_transform_type_lang_index
index info: 2048 total in 2 blocks; 680 free (2 chunks); 1368 used:
pg_attribute_relid_attnum_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_proc_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_language_oid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_namespace_oid_index
index info: 3072 total in 2 blocks; 1136 free (2 chunks); 1936 used:
pg_amproc_fam_proc_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_foreign_server_name_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_attribute_relid_attnam_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_conversion_oid_index
index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used:
pg_user_mapping_user_server_index
index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used:
pg_subscription_rel_srrelid_srsubid_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_sequence_seqrelid_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_conversion_name_nsp_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_authid_oid_index
index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used:
pg_auth_members_member_role_index
index info: 1024 total in 1 blocks; 48 free (0 chunks); 976 used:
pg_subscription_oid_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_tablespace_oid_index
index info: 2048 total in 2 blocks; 704 free (3 chunks); 1344 used:
pg_shseclabel_object_index
index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used:
pg_replication_origin_roname_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_database_datname_index
index info: 2048 total in 2 blocks; 760 free (2 chunks); 1288 used:
pg_subscription_subname_index
index info: 1024 total in 1 blocks; 16 free (0 chunks); 1008 used:
pg_replication_origin_roiident_index
index info: 2048 total in 2 blocks; 728 free (1 chunks); 1320 used:
pg_auth_members_role_member_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_database_oid_index
index info: 2048 total in 2 blocks; 952 free (1 chunks); 1096 used:
pg_authid_rolname_index
WAL record construction: 49768 total in 2 blocks; 6368 free (0 chunks);
43400 used
PrivateRefCount: 8192 total in 1 blocks; 2624 free (0 chunks); 5568 used
MdSmgr: 8192 total in 1 blocks; 7720 free (0 chunks); 472 used
LOCALLOCK hash: 8192 total in 1 blocks; 560 free (0 chunks); 7632 used
Timezones: 104120 total in 2 blocks; 2624 free (0 chunks); 101496 used
ErrorContext: 8192 total in 1 blocks; 7936 free (3 chunks); 256 used
Grand total: 4610628736 bytes in 192 blocks; 315312 free (137 chunks);
4610313424 used
2019-03-13 10:11:54.628 UTC [1350] postgres(at)postgres ERROR: out of memory
2019-03-13 10:11:54.628 UTC [1350] postgres(at)postgres DETAIL: Failed on
request of size 268435456 in memory context "ExecutorState".
2019-03-13 10:11:54.628 UTC [1350] postgres(at)postgres STATEMENT: SELECT b
FROM public.a

I know that I'm also using the JDBC driver here, which could of course be the cause; it might be a JDBC driver bug that I should report elsewhere. But since I used the same driver to test both PostgreSQL 10.7 and 11.2, I suspect that a change in PostgreSQL itself, and not the JDBC driver, is causing this memory usage. I also suspect a bug in PostgreSQL itself because it is the database that runs out of memory, which in my opinion should not happen when there is only one connection and work_mem is configured quite low.
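For completeness, the growth can also be observed on the server side independently of the client: the backend pid of the session can be obtained with the query below and that process watched with top or ps while the rows are being fetched.

> select pg_backend_pid();  -- watch this pid's resident memory while the SELECT is streaming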
