Unit 11 - Study Design and Simple Sample Size

Study Planning and Design

The very first thing you must have for any study, no matter what the area of research, is a question. Let me repeat: every research project must start with A QUESTION. Throughout my career, whenever anyone came to me for statistical input, the first thing they heard was "What is the question?"

I hope this is clear. If not, please read the first paragraph again.

Here are the major topics that will be discussed in the video:

The number of observations you make will also determine the validity and accuracy of your conclusions.

So, this unit will look at some basic concepts of study design (the plan to collect data) and sample size (how many observations we need to achieve a certain amount of confidence in the conclusions subsequently drawn).

Now we have to ask how representative our subjects are of the larger group from which they were selected. This question is answered by the sampling technique used. Clearly, if all our subjects are recruited at a hospital, there will be a bias towards ill people, or people associated with ill people. However the subjects are selected, our results from treating our two groups are only generalizable to the "parent population" from which they were randomized.

Remember! So far, our data have all been independently selected. Thus, we cannot (yet) analyse data that include several measurements on the same patient.

Sample Size

First exercise:

We'll start the sample size discussion with a game. In the section of the page below, you may replace the 100 flips of the coin by any number you wish. Your first objective is to approximate the probability of heads (it's not 50%). If each flip represented the treatment of a patient with a really expensive procedure, the laboratory would obviously want you to minimize the number of "flips".

What is your strategy to reduce the number of flips to a minimum? What risks are you willing to take?

The Game:

Second exercise:

As you probably deduced from the first exercise, the probability of success in the game is 0.30, or 30%. Now, choose a number of flips (small number, say, under pressure from your boss to minimize costs of your study). Decide on an accuracy that you would like to report (like 0.02) and repeat the game 10 times with the same number of flips you chose. Count the number of results that are within your desired precision. How many were good?
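The game above can be sketched in R, assuming a success probability of 0.30 (the value revealed above); the function name play_game is my own, purely for illustration:

```r
# Simulate n flips of a biased coin: P(heads) = 0.30
play_game <- function(n_flips, p_heads = 0.30) {
  flips <- rbinom(n_flips, size = 1, prob = p_heads)  # vector of 0s and 1s
  mean(flips)                                         # observed proportion of heads
}

set.seed(42)                                  # reproducible results
estimates <- replicate(10, play_game(50))     # repeat the game 10 times with 50 flips
good <- sum(abs(estimates - 0.30) <= 0.02)    # how many landed within +/- 0.02?
good
```

Try changing the 50 to smaller and larger values and watch how often the estimate lands within your chosen precision.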

The coin toss is equivalent to, say, a survey where we try to estimate the percentage of the population who will vote for a democratic socialist in the next election. It is a binary "yes/no" outcome:

If I question n=80 people and 37 say they will vote democratic socialist, then I estimate that 37/80 or 46.25% is the predicted percentage, let's call this p̂. But this number has a margin of error. How can we estimate it?

The formula for the error (at the traditional 95% confidence level) is:

error = 1.96 × √( p̂ (1 − p̂) / n )

Putting p̂=0.4625 and n=80 into the formula, we get error = 0.1093. So, our estimate is 0.4625 ± 0.1093, which as an interval would be between 0.3532 and 0.5718. Not good enough for the newspapers!
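This calculation can be done in R with the standard 95% margin-of-error formula, 1.96 × √(p̂(1 − p̂)/n); the function name margin_of_error is my own:

```r
# 95% margin of error for an estimated proportion p_hat from n observations
margin_of_error <- function(p_hat, n, z = 1.96) {
  z * sqrt(p_hat * (1 - p_hat) / n)
}

p_hat <- 37 / 80                       # 0.4625
err <- margin_of_error(p_hat, n = 80)
c(estimate = p_hat, lower = p_hat - err, upper = p_hat + err)
```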

In political television programmes, we like the estimate to be within ±0.03. If we solve the equation above for n, we get a formula for calculating the number of individuals to survey to estimate the percentage within a given error:

n = 1.96² × p̂ (1 − p̂) / error²

So, taking an error of 0.03 and supposing we are estimating a value around 0.4625, the formula above tells us to question n=1062 people.
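The sample size calculation can be sketched in R (the function name sample_size is my own); rounding up gives the n=1062 quoted above:

```r
# Sample size needed to estimate a proportion p within a given error,
# at 95% confidence: n = z^2 * p * (1 - p) / error^2, rounded up
sample_size <- function(p, error, z = 1.96) {
  ceiling(z^2 * p * (1 - p) / error^2)
}

sample_size(p = 0.4625, error = 0.03)   # 1062 people to question
```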

The video will give more information, along with R programmes.

Review of and comments on the video:

Here is a zip file with the R-code for our examples, including the simulations, and the outline for this unit's video.
Please download it and have a look.

Programmes in R were explored for plotting the bell curve, and the bell curve with vertical lines marking the limits that demarcate the tails at a significance level of alpha.

Programmes were also proposed for calculating the error in estimating a percentage after n trials (flips of a coin, people questioned about voting, etc.).

Finally, a Forest Plot programme was shown that allows a comparison of a measure across several studies.

Click "more info" to get more explanation about the relation to the normal curve:

The key is to first look at our old friend the bell curve:

For any population that follows a normal curve with mean μ and standard deviation σ, the transformation z = (x-μ)/σ yields the "standard normal", that is to say, a normal with mean 0 and standard deviation equal to 1.

So, when we take a sample and calculate its mean and standard deviation (in R: mean(x), sd(x)), the calculation:

(x - mean)/sd > 1.96 or (x - mean)/sd < -1.96 would mean that the observation is too unexpected (alpha=5% traditionally yields the 1.96 cutoff) to have plausibly come from that normal distribution.

So, either x < mean - 1.96*sd, or

x > mean + 1.96*sd
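This tail check can be written as a short R function (is_outlier is my own name for it); qnorm() gives the critical value, so alpha = 0.05 reproduces the traditional 1.96:

```r
# Flag values falling outside the central (1 - alpha) region of a normal curve
is_outlier <- function(x, m, s, alpha = 0.05) {
  z_crit <- qnorm(1 - alpha / 2)   # 1.96 when alpha = 0.05
  abs((x - m) / s) > z_crit
}

is_outlier(2.5, m = 0, s = 1)   # TRUE:  |2.5| > 1.96
is_outlier(1.0, m = 0, s = 1)   # FALSE: |1.0| < 1.96
```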

If the population standard deviation is σ, the standard deviation of the mean of a sample of size n (the standard error) is σ divided by the square root of n.

Since the division by sqrt(n) shrinks the standard error, the larger the number of observations, the smaller the standard error of the mean. Here is an animation showing the effect of increasing numbers of observations:
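The shrinking effect shown in the animation can be checked numerically in R: the formula σ/√n, and a small simulation comparing it with the empirical spread of sample means (the variable names here are my own):

```r
sigma <- 10
n_values <- c(10, 100, 1000)

# Theoretical standard error of the mean: sigma / sqrt(n)
se <- sigma / sqrt(n_values)
se   # each tenfold increase in n cuts the standard error by sqrt(10)

# Empirical check: spread of 2000 simulated sample means for each n
set.seed(1)
for (n in n_values) {
  means <- replicate(2000, mean(rnorm(n, mean = 0, sd = sigma)))
  cat(sprintf("n = %4d  sd of sample means = %.3f  sigma/sqrt(n) = %.3f\n",
              n, sd(means), sigma / sqrt(n)))
}
```

The simulated standard deviations track σ/√n closely, which is the point of the animation.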

Here is a zip file with the details from the More info button.

Contact me at: dtudor@germinalknowledge.com

© Germinal Knowledge. All rights reserved