From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Pietro Pugni <pietro(dot)pugni(at)gmail(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Postgres not using all RAM (Huge Page activated on a 96GB RAM system)
On Fri, Mar 24, 2017 at 2:47 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Fri, Mar 24, 2017 at 3:58 AM, Pietro Pugni <pietro(dot)pugni(at)gmail(dot)com> wrote:
>> Hi there,
>> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel
>> 4.4.0-66-generic). Hardware is:
>> - 2 x Intel Xeon E5-2690
>> - 96GB RAM
>> - Software mdadm RAID10 (6 x SSDs)
>> Postgres is used in a sort of DWH application, so all the resources are
>> assigned to it and the aim is to maximize the single transaction performance
>> instead of balancing between multiple connections.
>> The configuration variables I changed are the following ones:
>> checkpoint_completion_target = 0.9
>> data_directory = '/mnt/raid10/pg_data_9.6.2'
>> default_statistics_target = 1000
>> effective_cache_size = 72GB
>> effective_io_concurrency = 1000
>> listen_addresses = '127.0.0.1,192.168.2.90'
>> maintenance_work_mem = 1GB
>> shared_buffers = 24GB
>> work_mem = 512MB
>> The kernel configuration in /etc/sysctl.conf is:
>> # 24GB = (24*1024*1024*1024)
>> kernel.shmmax = 25769803776
>> # 6M pages = 24GB / 4096, where 4096 is "getconf PAGE_SIZE"
>> kernel.shmall = 6291456
>> kernel.sched_migration_cost_ns = 5000000
>> kernel.sched_autogroup_enabled = 0
>> vm.overcommit_memory = 2
>> vm.overcommit_ratio = 90
>> vm.swappiness = 4
>> vm.zone_reclaim_mode = 0
>> vm.dirty_ratio = 15
>> vm.dirty_background_ratio = 3
>> vm.nr_hugepages = 12657
>> Huge pages are being used on this machine and Postgres allocates 24GB
>> immediately after starting up, as set by vm.nr_hugepages = 12657.
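The arithmetic behind those sysctl values can be sanity-checked with a few lines of shell. This is only a sketch assuming x86-64 defaults (a 4 KB base page and a 2 MB huge page; check `Hugepagesize` in /proc/meminfo on the actual box):

```shell
# Sanity-check the quoted kernel settings against shared_buffers = 24GB.
shared_buffers_bytes=$(( 24 * 1024 * 1024 * 1024 ))

echo "kernel.shmmax = $shared_buffers_bytes"          # 25769803776

shmall=$(( shared_buffers_bytes / 4096 ))             # in 4 KB base pages
echo "kernel.shmall = $shmall"                        # 6291456

hugepages=$(( shared_buffers_bytes / (2 * 1024 * 1024) ))
echo "vm.nr_hugepages >= $hugepages"                  # 12288; the quoted 12657
                                                      # leaves headroom for other
                                                      # shared memory structures
```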
>> My concern is that it never uses more than 24GB. For example, I’m running 16
>> queries that are CPU-intensive (they do time series expansion and some
>> arithmetic). I estimate they will generate a maximum of 2.5 billion
>> rows. Those queries have been running for 48 hours and I don’t know when
>> they will finish, but RAM usage has never exceeded those 24GB (plus some for the system).
>> Output from free -ht:
>>               total        used        free      shared  buff/cache
>> Mem:            94G         28G         46G         17M         19G
>> Swap:           15G          0B         15G
>> Total:         109G         28G         61G
> Looks normal to me. Note that the OS is caching 19G of data.
> PostgreSQL is only going to allocate extra memory 512MB at a time for
> big sorts; any sort bigger than that will spill to disk. Given that
> top and vmstat seem to show you as being CPU bound, I don't think the
> amount of memory PostgreSQL is using is your problem.
> You'd be better off asking for help optimizing your queries, IMHO.
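Scott's point about work_mem can be made concrete with a little arithmetic. This is a rough upper bound assuming one 512 MB sort or hash node per session; a single query plan can allocate work_mem several times, so treat it as a sketch:

```shell
# Rough extra memory above shared_buffers for the 16 running queries,
# assuming one work_mem-sized sort/hash apiece (real plans may use more).
sessions=16
work_mem_mb=512
extra_mb=$(( sessions * work_mem_mb ))
echo "${extra_mb} MB"     # 8192 MB, i.e. ~8GB on top of 24GB of shared_buffers
```

That lines up with the `free` output above: ~28G used is the 24GB of huge-page shared_buffers plus a few GB of per-backend work memory, with the rest left to the OS page cache.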
+1 to this. Absent evidence, there is no reason to believe more memory is
needed. Memory is not magic pixie dust that makes queries go faster;
good data structure choices and algorithms remain the most important
determinants of query performance.