IQ scores based on the Stanford-Binet IQ test are normally distributed with a mean of 100 and a standard deviation of 15. If you were to obtain 100 different simple random samples of size 20 from the population of all adult humans and construct a 95% confidence interval from each, how many of the intervals would you expect to include 100? Since each interval has a 95% probability of capturing the true mean, one would expect 100 × 0.95 = 95 of the 100 intervals to include the mean of 100.
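A quick simulation illustrates this expectation. The sketch below (variable names and the seed are illustrative choices) draws 100 samples of size 20 from N(100, 15), builds a z-interval for each using the known σ = 15, and counts how many intervals contain 100:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed for reproducibility
mu, sigma, n, trials = 100, 15, 20, 100
z = 1.96  # critical value for a 95% confidence level

count = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    xbar = sample.mean()
    # sigma is known, so a z-interval (not a t-interval) applies
    half_width = z * sigma / np.sqrt(n)
    if xbar - half_width <= mu <= xbar + half_width:
        count += 1

print(count)  # typically close to 95 of the 100 intervals
```

The observed count varies from run to run (it follows a Binomial(100, 0.95) distribution), but it clusters around the expected value of 95.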