I ran some simulations for four cases of coefficients (I called them Ci).

Case A: C = {2 5 1 18 3.5 7 7 5 15 2 1 2.5 12 16 13 1 3.5 1.5 0.5 0.5}

Case B: C = {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}

Case C: C = {0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 9.6 9.7 9.8 9.9 10.0 10.1 10.2 10.3 10.4 10.5}

Case D: C = {1 1 1 1 1 1.5 1.5 1.5 1.5 1.5 2 2 2 2 2 10 10 10 10 10}

For each case, I ran ten simulations of 100000 tests each and plotted the results. I generated the X values as follows:

Let x1, x2, ..., x20 be 20 random values pulled from a flat (uniform) distribution on [0, 1).

Let Xi = xi/(x1 + x2 + … + x20), so that the Xi sum to 1.

Each test's result is then the weighted sum S = C1·X1 + C2·X2 + … + C20·X20.
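For anyone who wants to reproduce this, here is a minimal sketch of one simulation in NumPy. The function name `run_simulation` is mine, the Case A coefficients are retyped from above, and the weighted sum is my reading of the procedure described in this post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coefficients for Case A (retyped from the post)
C = np.array([2, 5, 1, 18, 3.5, 7, 7, 5, 15, 2,
              1, 2.5, 12, 16, 13, 1, 3.5, 1.5, 0.5, 0.5])

def run_simulation(C, n_tests=100_000, rng=rng):
    """One simulation: n_tests weighted sums sum_i Ci*Xi, where the
    Xi are 20 uniform [0, 1) values normalized to sum to 1."""
    x = rng.random((n_tests, len(C)))      # flat distribution on [0, 1)
    X = x / x.sum(axis=1, keepdims=True)   # normalize each row of 20 values
    return X @ C                           # the n_tests weighted sums

sums = run_simulation(C)
print(sums.mean(), C.mean())   # sample mean of the sums vs. mean of the Ci
print(sums.std())              # standard deviation of the sums
```

With 100000 tests, the printed mean of the sums should land within a few thousandths of the mean of the Ci.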

In each simulation, the distribution of the 100000 sums looked superficially like a normal distribution (although at N=20, a normal and a binomial distribution are hard to tell apart).

Also in each simulation, as expected, the mean of the sums was very close to the mean of the Ci coefficients.

A bit more of a mystery for the moment: in each simulation, the standard deviation of the 100000 sums was very close to 13.2% or 13.3% of the population standard deviation of the 20 Ci coefficients.
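That ratio can be checked numerically across all four cases at once. This is a sketch under my reading of the procedure (the case lists are retyped from above, and `np.std` with its default `ddof=0` gives the population standard deviation):

```python
import numpy as np

rng = np.random.default_rng(1)

cases = {
    "A": [2, 5, 1, 18, 3.5, 7, 7, 5, 15, 2,
          1, 2.5, 12, 16, 13, 1, 3.5, 1.5, 0.5, 0.5],
    "B": list(range(1, 21)),
    "C": [0.5 + 0.1 * k for k in range(10)] + [9.6 + 0.1 * k for k in range(10)],
    "D": [1] * 5 + [1.5] * 5 + [2] * 5 + [10] * 5,
}

ratios = {}
for name, coeffs in cases.items():
    C = np.array(coeffs, dtype=float)
    x = rng.random((100_000, 20))                  # flat on [0, 1)
    sums = (x / x.sum(axis=1, keepdims=True)) @ C  # normalized weighted sums
    ratios[name] = sums.std() / C.std()            # SD of sums / population SD of Ci
    print(name, round(ratios[name], 4))
```

For what it's worth, a first-order (delta-method) expansion of the ratio of two sums suggests SD(S) ≈ σ_C/√(3·20) ≈ 12.9% of σ_C regardless of the particular Ci, which is at least in the right neighborhood of the observed 13.2–13.3%; I leave checking that derivation to the reader.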

I may think on that last point for a while, since I teach introductory Gaussian-distribution statistics in one of my classes, and it could be instructive to work out how to predict that value.