Sampling and Validity

Statistics is as much art as science. For many of our clients, this is most evident when drawing a sample from a larger population of clients, target audiences, or stakeholders, or when determining which research design will best elicit the information you need. At each of these steps, you must work within a set of real-world constraints – budgetary, temporal, legal, sociopolitical, human capital – to develop the best possible information for your organization’s decision making.

At Precision, we help our clients bridge many of these constraints by harnessing our elite team of statisticians. Over more than a decade, we’ve cultivated a team of analysts who are creative and innovative, if not truly artistic, when it comes to developing the most rigorous sample and study design possible for your budget, timeline, and legal or sociopolitical parameters.

There are 3 ways to initiate contact with us:

  1. Please review and submit the following form. Someone from our team will contact you within 1 hour (during business hours), or at your requested time.
  2. Alternatively, our consulting team is available via telephone Monday through Friday from 8:00 A.M. to 8:00 P.M. Eastern Time (5:00 A.M. to 5:00 P.M. Pacific Time), and from 8:00 A.M. to 7:00 P.M. Eastern Time on Saturday (5:00 A.M. to 4:00 P.M. Pacific Time). Feel free to call us at (702) 708-1411!
  3. We also pride ourselves on prompt, in-depth e-mail responses, 365 days per year. We normally answer urgent queries promptly, including late-night and weekend requests. You can email us at

Please be prepared to discuss the specifics of your project, your timeline for assistance, and any other relevant information regarding your proposed consultation. We respect the confidentiality of your project and will, at your request, supply you with a Non-Disclosure Agreement before discussing specifics.

Whether you’re just starting, already have a sample and research design in mind, or have your data and results in-hand, Precision’s statistical consulting team can provide detailed, customized reports on optimal sampling, design, methodology, and measurement, including expert commentary and recommendations on the following:

  • Sampling frame development
  • Random sampling
  • Stratified random sampling
  • Weighted samples
  • Nonprobability sampling
  • Snowball and convenience sampling
  • Purposive sampling
  • Sample validity
  • Sample reliability
  • Sampling bias
  • Non-response bias
  • Statistical power and sample size testing
  • Post hoc power analysis
  • Experimental research design
  • Quasi-experimental research design
  • Pre-experimental research design
  • Non-experimental research design
  • Internal validity
  • External validity
  • Statistical conclusion validity
  • Theoretical validity
  • Measurement validity
  • Measurement reliability
  • Convergent validity
  • Divergent validity
  • Inter-item validity
  • Inter-coder reliability

We provide expertise across the gamut of research designs and quantitative modeling, analytic, and measurement techniques, alongside the right PhD-level subject matter expertise in fields including finance, macroeconomics, microeconomics, psychology, business, education, law, and public policy. Our team includes not only dedicated advanced degree methodologists, but also academics and subject matter experts who can bridge the gap between statistics and real life, providing the modeling and measurement critiques that reflect both statistical and intuitive strengths and shortfalls.

After working closely with clients to finalize the basic structure and general aims of a planned research design and methodology, we always advise statistical tests of model reliability and statistical power. At Precision, we ensure our clients’ analysis rests on sound statistical methodology: broad reliability and validity, results that remain robust under changes in underlying assumptions, and high statistical power (without sacrificing confidence level).

Just as important, our statistical consulting team emphasizes strong experimental and quasi-experimental research designs to enable valid causal inferences, so that your conclusions do not rest on the statistics alone and can withstand skepticism about simultaneity bias and omitted variable bias. Precision can assist you with ensuring that each of these critical elements is present in your research design and methodology, provide you with a full report outlining potential weaknesses and improvements to your model, and (via our SEET Division) defend this opinion in court.

Sample Size and Selection Validity: Precision’s team has also provided specific assistance to dozens of firms with preemptive subsample selection, as well as post hoc analysis for firms mediating disputes over subsample validity. One question we see repeatedly in our statistical consulting work is that of minimum required sample size. For clients seeking to estimate the needed sample size for a prospective study, we conduct an a priori statistical power analysis to determine a minimum sample size, and then randomly select the sample from the target population to meet these criteria (with or without stratification).
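As a rough illustration, here is a minimal sketch of that workflow in base R, assuming a two-sample comparison with a standardized effect size of 0.5, a 5% significance level, and a target power of 80%; the effect size, sampling frame, and seed are illustrative assumptions, not values from any particular engagement.

```r
# Illustrative sketch only: effect size, power target, frame, and seed are assumptions.

# A priori power analysis: minimum sample size per group for a two-sample t-test.
pwr <- power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
                    type = "two.sample", alternative = "two.sided")
n_required <- ceiling(pwr$n)   # round up to the next whole observation

# Simple random selection of that many cases from a hypothetical sampling frame.
set.seed(20240101)             # a documented seed keeps the draw reproducible
population_ids <- 1:10000      # placeholder frame of population member IDs
sampled_ids <- sample(population_ids, size = n_required)
```

For a stratified draw, the same power calculation would simply be followed by a separate random selection within each stratum.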

For clients who have already completed statistical analysis and need to resolve a dispute about the validity of the subsample, model, and/or measures, our statisticians can perform a post hoc power analysis. We’ll rigorously compare your sample demographic and descriptive characteristics to the population to check for discrepancies, and compute and report the effective power of the results.
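The post hoc side can be sketched in the same spirit, again with placeholder numbers: computing the observed power of a completed two-sample comparison and checking the sample’s demographic composition against known population shares with a goodness-of-fit test.

```r
# Placeholder values throughout: the observed effect, group size, counts,
# and population shares below are hypothetical.

# Post hoc (observed) power for a completed two-sample comparison.
power.t.test(n = 40, delta = 0.35, sd = 1, sig.level = 0.05)$power

# Compare sample demographic composition with known population shares
# (e.g., age bands) using a chi-squared goodness-of-fit test.
sample_counts    <- c(young = 18, middle = 45, older = 37)
population_share <- c(young = 0.25, middle = 0.45, older = 0.30)
chisq.test(sample_counts, p = population_share)
```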

If you have specific questions about how we might provide statistical consulting services for your current sampling or validity needs, don’t hesitate to contact us for an initial consultation. (For clients specifically seeking assistance with the validity and reliability of new or modified survey tools, please visit our Survey Development page.) With twelve years of experience providing highly complex statistical analysis and consulting across a broad range of industries, we can provide you with tailored support on virtually any timeline.


Let’s keep it a secret…

Before sharing your materials with us, we will send you our Non-Disclosure Agreement, which guarantees that your work materials, and even your identity as a client, will never be shared with a third party.
Video script

In this tutorial, we’ll talk about doing a lot with very little. Sampling and extrapolation is a common statistical methodology in our consulting work, and it has many applications in research settings where an effect must be detected within large amounts of data.

It can be costly and time-consuming to scrutinize datasets that run millions of rows long, and it is often far more efficient, in terms of both money and time, to consider a subset of the data in order to make an informed decision.

One prime example of sampling and extrapolation in practice would be in the setting of fraud detection, where our interest is in estimating the cost of fraudulent activities across a large number of items through statistical analysis. This is very common in audits done by the United States federal government to verify whether contractors of social security services have been inflating their costs in order to increase the funds received from the government.

Consider this real-world example from one of our clients. Medicare and Medicaid patients go to hospitals to get their medical diagnoses and treatments. Doctors at those hospitals determine the optimal treatment, apply it, and then they bill the government for the cost of that treatment. Here, we have what is called an agency problem, in which the hospital has an incentive to use expensive treatments and then bill them to Medicaid or Medicare.

Given this negative incentive, the government conducts audits on those bills to determine whether the treatment used was indeed the best for the patient and not just the most expensive one. However, there is an important issue, which is that the government receives millions of these bills corresponding to millions of treatments. Determining whether a bill is fraudulent requires a lot of time and care; for example, by having a medical expert review the patient’s clinical history, assess the symptoms, determine the correct treatment, and compare that to the treatment used by the hospital.

Data analytics assistance just makes sense here: verifying all claims one by one would be extremely expensive for the government in this example, or for any company in a similar position. Fortunately, it is possible to use statistical extrapolation to simplify this process. This involves analyzing only a small sample of the claims; often just 30 to 50 claims drawn from a population of thousands are enough for a statistically valid result.

In this video, I am going to go into some detail regarding how to select this sample in order to ensure that the extrapolation is as cost-effective and accurate as possible, and how to obtain an estimate of the total cost of the fraudulent activities.

To recap, sampling and statistical extrapolation in this setting applies generally in cases where there are a large number of items to review and where some of those items might have problems of some sort. Reviewing each item to detect problems is costly, making a review of the entire population of items impractical.

I gave an example with Medicaid and Medicare, but these statistical methods apply in any situation with these characteristics. We’ve consulted on credit card fraud detection, in which each charge might require an investigation to determine whether it’s legitimate. Yet another application could be a company that wishes to conduct an internal audit of its purchasing department, to determine whether it is minimizing costs properly.

Let’s now see how extrapolation actually works. Conducting a statistical extrapolation involves three basic analysis steps.

First is the sampling part, in which we need to take a relatively small random sample of items from the population. For example, out of two thousand hospital bills to Medicare, we could randomly select fifty of them for review.

Second, we would examine each of the items in the sample and determine the corresponding error, if any. This would require an expert review. For instance, in the Medicare bills example, a doctor would review the clinical history of the patient to determine whether the treatment used was actually called for. Or a credit card company would likely use an investigator to talk with all parties involved in a charge, to see if it was legitimate.

The audit’s assessment of each item generally comes in one of two varieties. The first is typically measured in terms of money. In Medicare-related consulting, for instance, a bill might be for a treatment worth two thousand dollars. The auditor could determine that a cheaper treatment should have been used and conclude that there was an overpayment of fifteen hundred dollars. Or, they could find that the treatment was correct, resulting in an overpayment of zero dollars.

The other variety of audit result for each item in the sample could be a “yes or no” type of conclusion. For example, if we are interested in assessing the proportion of fraudulent items, and don’t care about the specific dollar amount, it would suffice for the audit to state whether the item is indeed fraudulent, or if it is not.

The final step in the extrapolation is to, well, extrapolate the results from the previous step to the full population. We will go over the statistics formulas and some specifics of our consulting work later in this video, but the overall idea is quite simple. Suppose we had one thousand bills and that we randomly drew a sample of fifty bills. Next, suppose the auditor reviewed those fifty bills, and the average overpayment across those claims was one hundred dollars. The key is that we now expect the average overpayment to be a hundred dollars across *all* one thousand bills, not just the sampled ones. Therefore, the extrapolated value would be one hundred thousand dollars. That’s a lot!
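In symbols, this naive point extrapolation is simply the sample mean scaled up to the population, with N bills in the universe and a sample mean overpayment of x̄:

```latex
\hat{T} \;=\; N \cdot \bar{x} \;=\; 1000 \times \$100 \;=\; \$100{,}000
```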

Obviously, it is not as simple as that, and a statistician can help take into account the fact that, well, some precision is lost by looking at a small sample rather than the whole population. But the main idea here is that we expect our sample findings to match whatever happened with the population.

Let’s now go into some detail on how random sampling works. Generally, it is extremely important to set up the sample selection procedure very carefully, so that it accurately represents the population. Note that I’m talking about setting up a procedure carefully, rather than choosing the sample carefully. A sample should not be chosen in the sense of looking at all items and specifically picking ones that look relevant. Knowing the difference is tricky but critically important, and in our statistical consulting, we often need to help clients set up these procedures.

An appropriate sampling procedure involves statistical randomization to ensure that the auditor is not purposefully picking items that could bias the analysis results in one direction or the other. Still, there are various types of random sampling procedures, and a lot of care should be put into choosing one that ensures the final sample satisfies the following conditions. First, the sample must be as representative of the population as possible, so that the results derived from it are as accurate as possible.

Second, we want the sample to be replicable. The audited party can always claim that the auditor chose or manipulated the sample in order to get a specific result. Therefore, making sure that the sample selection process is truly random, and can itself be audited and replicated, is important to avoid these potential issues. A statistician might do this by using some software package that generates random numbers, documenting the code properly so that the same set of random numbers can be obtained at a later time if someone wished to review that code.
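As a sketch of what that documentation might look like (the seed, claim universe, and file name here are arbitrary placeholders), a few lines of R are usually enough to make a draw fully replicable:

```r
# Arbitrary placeholders: seed, claim universe, and output file name.
set.seed(8675309)                            # documented seed: re-running reproduces the same draw
claim_ids    <- sprintf("CLM-%04d", 1:2000)  # hypothetical universe of 2,000 claim IDs
audit_sample <- sample(claim_ids, size = 50)

# Archive the selected claims so the draw itself can be audited later.
write.csv(data.frame(claim_id = audit_sample), "audit_sample.csv", row.names = FALSE)
```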

Finally, the sample size must be large enough for the extrapolation to be as accurate as possible. Statistically, it is possible to conduct extrapolations with a sample size as low as 3 or 4 items. However, as we shall see when we examine the formulas for analysis, this is not desirable as it has very negative effects on the accuracy of the extrapolation, potentially rendering it useless.

I said before that there are various possible types of sampling, to improve the representativeness of the sample. In our data analytics work, the most common ones are simple random sampling and stratified random sampling. One of the most common questions from our clients, both in academia and for companies, is when it is best to use each of these.

The short answer is that you can never go wrong by using simple random sampling, but stratified sampling offers advantages in terms of accuracy in some cases where the data are not normally distributed. The tradeoff is that stratified sampling typically requires a larger sample, and thus it is more expensive to conduct.

Consider this simple example. Let’s assume that we have medical claims for three hundred dollars on average. Let’s also assume that they happen to all be fraudulent, so that the overpayment we are interested in is also three hundred dollars on average. Finally, let’s assume that the dollar amounts of the claims are more or less normally distributed, or look like a nice bell curve.

Now, if we get a random sample from this distribution, we would also expect the average overpayment in that sample to be three hundred dollars, and we would expect the sample to follow the same normal distribution. To assess the accuracy of our sample estimate, we would compare our calculated average with the overpayment value of each individual claim we sampled. This represents the error we incur by using an average.

As we can see, the error is relatively small, because in a normal distribution, all observations cluster relatively tightly around their average value. This is a case in which using simple random sampling would be fine.

Now consider a different case we encounter often in our consulting, and in statistical terms this gets a little more complicated. Let’s say there are two types of medical claims: small ones and big ones. Small claims are around one hundred dollars, and big claims are around five hundred. Let’s also assume that half of the claims are small and half are big. Finally, like before, for simplicity let’s assume that they are all fraudulent. In such a case, the distribution of overpayments would be something like this.

Note that the average overpayment is still three hundred dollars like before. However, look at what happened with our sampling error. The difference between this average of three hundred and each individual claim is now quite high compared to the previous case.

The effect of these large errors is that the extrapolation will be quite inaccurate, because the standard deviation will be relatively high. As we will see later, this generally results in recouping a smaller amount of the overpayments, which is costly for the auditing party, particularly given the expense of the expert review and statistical analysis.

The best alternative if the data are distributed in this way would be to use stratified sampling.

When using stratified sampling, we consider various sub-samples separately, calculating the associated dispersion separately for each stratum. After extrapolating each sub-sample separately, we would then aggregate the results through statistical formulas defined for that end.
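For reference, under simple random sampling within each stratum h, the standard stratified estimator of the population total and its estimated variance take roughly this form, where N_h and n_h are the stratum’s population and sample sizes and x̄_h and s_h are its sample mean and standard deviation:

```latex
\hat{T}_{\text{strat}} = \sum_{h=1}^{H} N_h \,\bar{x}_h,
\qquad
\widehat{\mathrm{Var}}\big(\hat{T}_{\text{strat}}\big)
  = \sum_{h=1}^{H} N_h^{2}\,\frac{s_h^{2}}{n_h}\left(1 - \frac{n_h}{N_h}\right)
```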

Again, this sounds complex, and a consultant can help to clarify how this works for your situation. But the important thing to keep in mind is that stratifying greatly reduces the dispersion around the mean.

So, even though the aggregated average will still be the correct amount of three hundred dollars, the dispersion will be much lower when using stratified sampling, improving the accuracy of the estimate.

Of course, although this example involved only big and small claims, the same argument applies when you have three or more types of claims. In general, when you have a multi-modal distribution with two or more clearly defined groups, stratified sampling is a good alternative to keep in mind to improve the extrapolation accuracy. This all presupposes that you know a multi-modal distribution is what you have, and that can be difficult to sort out as a first step. A statistical consultant, especially one with experience in your industry, can help to demystify this and all your other steps.

Besides the sample selection procedure, remember that I also mentioned sample size as an important factor. Generally speaking, because reviewing each item in the sample is costly, you want the smallest sample size possible, subject to the constraint that the margin of error of the extrapolation stays below some pre-defined level.

There are numerous mathematical formulas to compute the optimal sample size, which depend on the distribution of the data, the sampling procedure used, the size of the population, and other factors. Let me show you a common one so you can see intuitively what the sample size depends on.

This is one of the simplest sample size formulas. It applies for data that are normally distributed, and for cases with a very large population. As you can see, the sample size increases if there is a large variability in the individual overpayments. Likewise, if we expect all overpayments to be roughly the same size, then the sample size will tend to be lower.

Also, importantly, the sample size depends inversely on M, the target margin of error. If we want the statistical analysis to have a smaller margin of error; that is, to be more accurate, then we would require a larger sample size.
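The formula itself appears on screen rather than in this transcript, but for normally distributed data and a very large population it is presumably the usual margin-of-error formula, where z_{α/2} is the standard normal critical value (about 1.96 at 95% confidence), sd is the anticipated standard deviation of the individual overpayments, and M is the target margin of error:

```latex
n = \left(\frac{z_{\alpha/2}\,\mathrm{sd}}{M}\right)^{2}
```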

We have now seen how to define the sampling procedure and how to choose the sample size. Once the sampling type and size have been decided, we can draw a random sample using any suitable statistics software, such as SPSS or R. Next, we would take those sampled claims and have the auditor review them one by one to determine the overpayment in each. Finally, once the audits have been conducted on those sampled claims, it is time to conduct the extrapolation itself.

The extrapolation begins by computing a confidence interval on the average overpayments observed in the sample. This is the standard formula for a confidence interval, when using simple random sampling, which is the simplest possible case.
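The formula is shown on screen rather than reproduced in this transcript; for a simple random sample it is presumably the familiar t-based interval for the mean overpayment per claim, with sample mean x̄, sample standard deviation sd, and sample size n:

```latex
\bar{x} \;\pm\; t_{\alpha/2,\,n-1}\,\frac{\mathrm{sd}}{\sqrt{n}}
```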

In most auditing applications, we typically care only about the lower bound of the confidence interval. This is done so that the outcome is as favorable as possible to the audited party.

Once the lower bound of the average overpayment per claim has been determined, we multiply this value by the total number of claims in the universe. This will be the final extrapolated amount.
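Putting the pieces together, a minimal R sketch of this lower-bound extrapolation might look like the following; the overpayment figures are simulated placeholders rather than client data.

```r
# Simulated placeholders, not client data.
set.seed(42)
overpayments <- round(runif(50, min = 0, max = 300))  # audited overpayment for each sampled claim
N <- 2000                                             # total number of claims in the universe

n      <- length(overpayments)
x_bar  <- mean(overpayments)
s      <- sd(overpayments)
t_crit <- qt(0.95, df = n - 1)                        # critical value for a one-sided 95% lower bound

lower_bound_per_claim <- x_bar - t_crit * s / sqrt(n)
extrapolated_total    <- N * lower_bound_per_claim    # amount attributed to the full universe
extrapolated_total
```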

The formula demonstrates the importance of the sample size (n). If it is too low, the second term in the formula becomes large, and the lower bound might be way too low. The auditor will thus be able to recoup a relatively small amount of money. A larger sample size will allow for a lower bound that is more similar to the average observed overpayments, allowing the auditing party to recoup more money. In consulting with the government and private businesses, we’ve found that recouping more money is certainly the ideal.

This statistical formula also illustrates the importance of setting up the sampling procedure so as to improve accuracy. The (sd) term in the confidence interval equation represents the standard deviation of the sample overpayments. Remember my previous example with simple random sampling versus stratified sampling? Generally speaking, the standard deviation will be lower when using stratified sampling, if the strata have been defined properly. As you can see from the formula, a lower standard deviation results in a tighter confidence interval. This will result in a higher lower bound of the interval, meaning that the auditing party will be able to recover a larger portion of the overpayments.

In conclusion, sampling and extrapolation is a very powerful tool that can be used when it’s impractical to conduct a manual review of all items. As long as great care is used to choose the sampling type, and as long as the correct sample size is used to conduct the analysis, it is possible to get accurate conclusions about the entire universe of items based on a small fraction of them, minimizing the costs of obtaining the data.

Again, this is complex stuff, but as we’ve talked about, it’s incredibly important, especially in financial terms. Give us a call to talk further about these methods and your particular needs; we look forward to hearing from you. Thanks!
