Re: PG Sharding

From: Matej <gmatej(at)gmail(dot)com>
To: Thomas Boussekey <thomas(dot)boussekey(at)gmail(dot)com>
Cc: Rakesh Kumar <rakeshkumar464(at)aol(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: PG Sharding
Date: 2018-01-31 11:59:57
Message-ID: CAJB+8mbjN6jSnjj84Qgnzuda7r5j12_9S56zCHybcumz_rNKTQ@mail.gmail.com
Lists: pgsql-general

Thanks Thomas.

Still fancying the manual approach a little bit more.

Will probably go with 8 databases and 32 schemas per machine. This way we
stay within the limits for administration tools as well as autovacuum, and
will also be ready for connection pooling, as 8 databases is not too many.
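
For illustration, a minimal sketch of how the per-database schema layout
could be created (the database name tenant_db_0 and the shard_NN schema
names are only assumptions, nothing is decided yet):

-- connected to one of the 8 databases, e.g. tenant_db_0
DO $$
BEGIN
  FOR i IN 0..31 LOOP
    -- create shard_00 .. shard_31 in this database
    EXECUTE format('CREATE SCHEMA IF NOT EXISTS %I',
                   'shard_' || lpad(i::text, 2, '0'));
  END LOOP;
END
$$;

Repeated once per database, this gives the 8 x 32 layout.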

This will give us 256 shards per machine, but it will be tunable. The lower
number will also prevent too much memory/disk fragmentation and, with it,
bad cache hit ratios.
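
For example, a tenant key could be routed to a (database, schema) pair with
a simple hash-modulo rule (the rule itself is only an assumption here, the
routing is still open):

-- 256 logical shards: value div 32 picks the database, mod 32 the schema
SELECT (abs(hashtext('tenant-42')) % 256) / 32 AS db_index,
       (abs(hashtext('tenant-42')) % 256) % 32 AS schema_index;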

Will also use monthly partitioning per shard, to reduce the chance of big
tables forming.
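
For the monthly partitioning, a minimal sketch using PostgreSQL 10
declarative partitioning could look like this (table and column names are
made up for the example):

CREATE TABLE shard_00.events (
    tenant_id  bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- one partition per month, created ahead of time or by a maintenance job
CREATE TABLE shard_00.events_2018_01 PARTITION OF shard_00.events
    FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
CREATE TABLE shard_00.events_2018_02 PARTITION OF shard_00.events
    FOR VALUES FROM ('2018-02-01') TO ('2018-03-01');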

Thanks everyone.

2018-01-30 15:26 GMT+01:00 Thomas Boussekey <thomas(dot)boussekey(at)gmail(dot)com>:

> Using citusdb enterprise, you can replicate the table shards.
>
> Here is the link to the documentation:
> https://docs.citusdata.com/en/v7.2/reference/user_defined_functions.html#replicate-table-shards
>
> Regards,
> Thomas
>
>
> 2018-01-30 12:18 GMT+01:00 Matej <gmatej(at)gmail(dot)com>:
>
>> As already said, it's missing 2-level sharding and is restricted by a
>> SPOF.
>>
>> BR
>>
>> Matej
>>
>> 2018-01-30 12:05 GMT+01:00 Rakesh Kumar <rakeshkumar464(at)aol(dot)com>:
>>
>>>
>>>
>>>
>>> >We are looking for multi-tenancy but at scale. That's why the sharding
>>> and partitioning. It depends on how you look at the distributed part.
>>>
>>> Citusdb.
>>>
>>
>>
>
