Abstract
The measured genotype approach can be used to estimate the variance contributions of specific candidate loci to quantitative traits of interest. We show here that both the naive estimator of measured-locus heritability, obtained by invoking infinite-sample theory, and an estimator obtained from a bias-corrected variance estimate based on finite-sample theory, produce biased estimates of heritability. We identify the sources of bias and quantify their effects. The two sources of bias are: (1) the estimation of heritability from population samples as the ratio of two estimated variances, and (2) the existence of sampling error. We show that neither heritability estimator is less biased (in absolute value) than the other in all situations; the choice of an ideal estimator therefore depends on the sample size and on the magnitude of the locus-specific contribution to the overall phenotypic variance. In most cases the bias is small, so the practical implications of using either estimator are expected to be minimal.
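As a minimal sketch of the first source of bias (the notation is ours, not taken from the paper): writing \(\hat{\sigma}^2_{\ell}\) for the estimated locus-specific variance and \(\hat{\sigma}^2_{P}\) for the estimated total phenotypic variance, the naive measured-locus heritability estimator is the ratio

\[
  \hat{h}^2_{\ell} \;=\; \frac{\hat{\sigma}^2_{\ell}}{\hat{\sigma}^2_{P}},
\]

and because the expectation of a ratio is not, in general, equal to the ratio of expectations, \(\hat{h}^2_{\ell}\) can be biased even when both variance estimates are themselves unbiased.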