From: "Ideriha, Takeshi" <ideriha(dot)takeshi(at)jp(dot)fujitsu(dot)com>
Subject: Global shared meta cache
My customer created hundreds of thousands of partitioned tables and selected data from hundreds of applications,
which resulted in enormous memory consumption, because relation caches are loaded into each backend's local memory.
The total usage is the number of backends multiplied by the per-backend cache size (e.g., 100 backends x 1 GB = 100 GB).
To address this issue, I'm trying to move metadata caches such as the catcache and relcache into shared memory.
This topic seems to have been discussed several times.
For instance this thread:
As I understand it, that thread discussed moving the catcache and relcache from backend-local memory into shared memory,
and the main concern was performance overhead.
Robert Haas wrote:
> I think it would be interested for somebody to build a prototype here
> that ignores all the problems but the first and uses some
> straightforward, relatively unoptimized locking strategy for the first
> problem. Then benchmark it. If the results show that the idea has
> legs, then we can try to figure out what a real implementation would
> look like.
> (One possible approach: use Thomas Munro's DHT stuff to build the shared cache.)
Inspired by this comment, I'm now developing a prototype (please see attached),
but I haven't yet put the cache structure itself into shared memory.
Instead, I put dummy data into shared memory, initialized at startup,
and acquire/release a lock just before/after searching for or creating a catcache entry.
I haven't handled the relcache or the catcache lists yet.
It's difficult to do everything at once and in the right direction,
so I'm starting with a small prototype to check whether I'm on the right track.
I ran pgbench to compare the master branch with my patch.
- RHEL 7.4
- 16 cores
- 128 GB memory
1) Initialized with pgbench -i -s10
2) Ran each benchmark 3 times and took the average TPS.
                                  master branch  prototype  proto/master (%)
pgbench -c48 -T60 -Msimple -S            131297     130541              101%
pgbench -c48 -T60 -Msimple                 4956       4965               95%
pgbench -c48 -T60 -Mprepared -S          129688     132538               97%
pgbench -c48 -T60 -Mprepared               5113       4615               84%
These results suggest that, except for the prepared protocol on the read-write (not SELECT-only) workload, the prototype doesn't make much difference.
What do you think about it?
Before I dig deeper, I want to hear your thoughts.