Sorry, I think I haven't described my case precisely enough.
"Randomly" is not purely random in my case.
My solution is planned to be used on different servers with different DBs. The initial data of the base table depends on the DB, but I know that the key value of new rows keeps increasing; not monotonically, but still increasing.
I need a common solution for all DBs.
The size of the base table can vary widely (from millions to hundreds of billions of rows). For tests I've used two different dumps.
Ranges that were suitable for the first dump produce, for the second, exactly the situation I've described (2-3 partitions holding 95% of the data), and vice versa.
Besides, the constantly increasing key values of new rows mean that some ranges will keep growing,
while others will keep the same amount of data or even shrink (outdated data is cleared out).
Hash partitioning gives me partitions that are not exactly the same size, but similar enough, and this result holds for both dumps.
That is why I've decided to use hash partitioning.
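
For reference, a minimal sketch of the layout I have in mind, in PostgreSQL syntax. The table name, column names, and the partition count (4) are just placeholders, not my actual schema:

  -- Hypothetical base table, partitioned by hash of its ever-increasing key.
  CREATE TABLE base_table (
      id         bigint NOT NULL,
      payload    text,
      created_at timestamptz NOT NULL DEFAULT now()
  ) PARTITION BY HASH (id);

  -- One partition per remainder; rows are spread roughly evenly
  -- regardless of how the key values grow over time.
  CREATE TABLE base_table_p0 PARTITION OF base_table
      FOR VALUES WITH (MODULUS 4, REMAINDER 0);
  CREATE TABLE base_table_p1 PARTITION OF base_table
      FOR VALUES WITH (MODULUS 4, REMAINDER 1);
  CREATE TABLE base_table_p2 PARTITION OF base_table
      FOR VALUES WITH (MODULUS 4, REMAINDER 2);
  CREATE TABLE base_table_p3 PARTITION OF base_table
      FOR VALUES WITH (MODULUS 4, REMAINDER 3);
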
Thank you,
Iana Golubeva