Author: Harvey J. Motulsky

Edition: 3rd (2014)

Website: http://www.intuitivebiostatistics.com/

The Twelve Most Important Concepts in Statistical Inference

[Formulated in the following way, statistical inference covers estimation, regression and hypothesis testing. Model selection may also be included.]

Statistics is not intuitive.

People frequently see patterns in random data and often jump to unwarranted conclusions.

Decisions about how to analyze data should be made in advance.

Beware of HARKing (hypothesizing after results are known).

Statistical inference lets you make general conclusions from limited data. Statistical conclusions are always presented in terms of probability.

A confidence interval quantifies precision, and is easy to interpret.
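
As a minimal sketch (with made-up numbers), a 95% confidence interval for a mean can be computed like this; the data values are purely hypothetical:

```python
# 95% confidence interval for a mean, using a t interval (made-up data).
import numpy as np
from scipy import stats

measurements = np.array([4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.2, 4.4])  # hypothetical sample
mean = measurements.mean()
sem = stats.sem(measurements)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(measurements) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

The interval quantifies precision: with more data it shrinks, and roughly 95% of intervals constructed this way would contain the true mean.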

All statistical tests are based on assumptions.

If your data are not representative of a larger set of data you could have collected (but didn't), then statistical inference makes no sense.

A p-value tests a null hypothesis, and is hard to understand at first. "Statistically significant" does not mean the effect is large or scientifically important. "Not significantly different" does not mean the effect is absent, small, or scientifically irrelevant.
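
A minimal sketch of what a p-value is computed from (the numbers are invented): an unpaired t-test of the null hypothesis that two groups share the same population mean.

```python
# Unpaired t-test of the null hypothesis of equal population means (made-up data).
import numpy as np
from scipy import stats

control = np.array([5.2, 4.9, 5.5, 5.1, 5.0, 5.3])
treated = np.array([5.8, 6.1, 5.6, 6.0, 5.9, 6.2])

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# The p-value is the probability of data at least this extreme if the null
# hypothesis were true; by itself it says nothing about how large or
# important the difference is.
```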

The concept of statistical significance is designed to help make a decision based on one result.

If a difference is not statistically significant, you can conclude that the observed results are not inconsistent with the null hypothesis. You cannot conclude that the null hypothesis is true.

Multiple comparisons [tests] make it hard to interpret statistical results.
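
A quick illustration of why: if each test has a 5% false-positive rate, the chance of at least one false positive grows rapidly with the number of independent tests. A Bonferroni-style correction (one simple, conservative option) shrinks the per-test threshold accordingly.

```python
# Family-wise false-positive probability for independent tests at alpha = 0.05,
# and the corresponding Bonferroni per-test threshold.
alpha = 0.05
for n_tests in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:3d} tests: P(>=1 false positive) = {p_any:.2f}, "
          f"Bonferroni threshold = {alpha / n_tests:.4f}")
```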

Correlation does not mean causation.

Observational studies can be useful but are rarely definitive.

Published statistics tend to be optimistic.

Experimental Design

  1. Sample size (see the power-analysis sketch after this list)
  2. Parametric/Nonparametric test
  3. Outlier handling
  4. Data transformation
  5. Normalization to external control values
  6. Adjust for regressors
  7. Weighting factors in regression
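
For item 1, a minimal power-analysis sketch using statsmodels; the effect size (Cohen's d = 0.5), alpha, and power below are hypothetical planning inputs, not recommendations.

```python
# Sample size per group for an unpaired t-test at the stated (hypothetical)
# effect size, alpha, and power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"approximately {n_per_group:.0f} subjects per group")
```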

Observational Study

Compared to most experiments, observational studies often require more complicated analyses and yield less certain results.

Part I: Putting it all together

Chap. 45 Statistical Traps

  1. Don't get distracted by p-values and conclusions about statistical significance without also thinking about the effect size.

Think of statistical significance (and the p-value) as resolution: it only tells you whether your sample size is large enough to distinguish an effect from noise. The effect size (the size of the difference, association, or correlation) must be compared with some pre-determined reference value to judge whether the effect is nontrivial.
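
A minimal simulated example of "significance as resolution": with a very large sample, a trivial true difference produces a tiny p-value while the effect size stays negligible. The distribution parameters are invented.

```python
# Tiny p-value despite a negligible effect size, thanks to a huge sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=100.0, scale=15.0, size=200_000)
b = rng.normal(loc=100.3, scale=15.0, size=200_000)  # true difference = 2% of an SD

t_stat, p_value = stats.ttest_ind(a, b)
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p = {p_value:.1e}, Cohen's d = {cohens_d:.3f}")
```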

[The null is typically the hypothesis to be rejected. And hypothesis testing is useful only in this way.]

  2. Don't fall for the ecological fallacy.

Conclusions about individuals cannot be drawn from group-level data.
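
A simulated sketch of the fallacy (all numbers invented): the correlation among group means is strongly positive even though the correlation within every group, i.e., among individuals, is negative.

```python
# Group-level correlation positive, within-group (individual-level) correlations negative.
import numpy as np

rng = np.random.default_rng(1)
xs, ys = [], []
for center in (0.0, 5.0, 10.0):                     # three hypothetical groups
    x = center + rng.normal(size=300)
    y = center - 0.8 * (x - center) + rng.normal(scale=0.5, size=300)
    xs.append(x)
    ys.append(y)

group_r = np.corrcoef([x.mean() for x in xs], [y.mean() for y in ys])[0, 1]
within_r = [np.corrcoef(x, y)[0, 1] for x, y in zip(xs, ys)]
print(f"correlation of group means: {group_r:.2f}")
print("within-group correlations:", [round(r, 2) for r in within_r])
```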

  3. Don't focus only on mean values while ignoring variation and outliers.

Classical regression, however, models only the conditional expectation function (the mean of the outcome given the predictors), so variation and outliers must be examined separately.

  4. Statistical significance of differences is not transitive.

Inference about the difference between two differences needs to be based on a single test on that exact quantity, not tests on the component differences.
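
A minimal sketch (simulated data, statsmodels formula API): to ask whether a treatment effect differs between two strata, test the interaction term directly rather than comparing the two within-stratum p-values. The data-generating numbers are invented.

```python
# Test the difference between two treatment effects via the interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),
    "stratum": rng.integers(0, 2, size=n),
})
true_effect = np.where(df["stratum"] == 0, 1.0, 0.4)   # hypothetical effects
df["y"] = true_effect * df["treated"] + rng.normal(size=n)

fit = smf.ols("y ~ treated * stratum", data=df).fit()
print("interaction estimate:", round(fit.params["treated:stratum"], 2))
print("interaction p-value: ", round(fit.pvalues["treated:stratum"], 3))
```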

  5. Don't pool data from different populations (whenever distinguishable).

Two populations are distinct if they differ in population-level attributes, in unit-level attributes that are homogeneous within each population, or in the mechanisms that generate the observed attributes. Combining samples from such populations can confound population-specific trends or mechanisms.
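
A simulated illustration (all numbers invented): two populations with different baselines each show a positive within-population association, but pooling them flips the sign, a Simpson's-paradox-like artifact.

```python
# Pooling two distinguishable populations reverses the sign of the association.
import numpy as np

rng = np.random.default_rng(4)

def sample_population(x_center, y_offset, slope=1.0, n=300):
    x = x_center + rng.normal(size=n)
    y = y_offset + slope * (x - x_center) + rng.normal(scale=0.5, size=n)
    return x, y

x1, y1 = sample_population(x_center=0.0, y_offset=5.0)   # population 1
x2, y2 = sample_population(x_center=5.0, y_offset=0.0)   # population 2, lower baseline

print("within population 1 r:", round(np.corrcoef(x1, y1)[0, 1], 2))
print("within population 2 r:", round(np.corrcoef(x2, y2)[0, 1], 2))
pooled_x = np.concatenate([x1, x2])
pooled_y = np.concatenate([y1, y2])
print("pooled r:             ", round(np.corrcoef(pooled_x, pooled_y)[0, 1], 2))
```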


🏷 Category=Statistics