Hypothesis testing refers to the ability
to make statistical statements of certainty or precision about
parameter estimates, based on assumptions about statistical distributions
such as normality. This is one of the main advantages of regression
analysis. Let's say sales revenue is a function of advertising
dollars and the price of your product. You could estimate the
following regression equation if you collected sufficient data:
Sales $$ = Constant + B1(Advertising $$) - B2(Price)
Sales $$ = 12212 + .00133 * Advertising $$ - .02 * Price
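The fitted equation can be used directly to predict sales. A minimal sketch, assuming the dropped operator before the price term is a subtraction (the inputs below are hypothetical, not from the text):

```python
# Predicted sales from the fitted equation above. The coefficients are
# the illustrative values from the text; the subtraction of the price
# term is an assumption about the equation's intended sign.
def predicted_sales(advertising, price):
    """Sales $ = 12212 + .00133 * Advertising $ - .02 * Price."""
    return 12212 + 0.00133 * advertising - 0.02 * price

# e.g. $500,000 of advertising at a price of $25
print(predicted_sales(500_000, 25))  # 12876.5
```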
How sure can we be that .02 is really a good estimate of the price
effect, and not just an artifact of sampling error or the way the
data were collected? Regression
packages compute the standard errors (precision measures) of each
parameter estimate. By using the concept of hypothesis testing,
we can test to see if those errors are sufficiently large or small
enough to make confidence statements about our results.
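The standard errors a regression package reports come from the ordinary least squares machinery itself. A minimal sketch of that computation, using made-up data and coefficient values loosely patterned on the example above:

```python
import numpy as np

# Simulate hypothetical sales data (all values are invented for
# illustration; they are not from the text).
rng = np.random.default_rng(0)
n = 50
advertising = rng.uniform(10_000, 100_000, n)
price = rng.uniform(10, 40, n)
sales = 12000 + 0.0013 * advertising - 0.02 * price + rng.normal(0, 50, n)

# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), advertising, price])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Standard errors: residual variance times the diagonal of (X'X)^-1.
resid = sales - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])   # unbiased residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)       # coefficient covariance matrix
std_errors = np.sqrt(np.diag(cov))          # one standard error per estimate
print(std_errors)
```

Each entry of `std_errors` measures the precision of the corresponding coefficient estimate; a larger sample or less noisy data shrinks these values.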
One common hypothesis test, the t-test, tells us whether the .02
price estimate is significantly different from zero. In other
words, if the test indicates that the .02 estimate is really
no different from zero, then the sampling error was so big that
we should have little confidence that fluctuations in price over
time had anything to do with trends or shifts in sales revenue.
This particular test is computed by dividing the estimated price
coefficient by its standard error. This ratio can be compared
to a value in a statistical table (given the sample size and degree
of certainty you wish) to determine if sampling error played too
big a role in coming up with your estimate. For this type of test,
the rule of thumb is that the ratio should be greater than 2.0
(in absolute value) for the estimate to be considered statistically
meaningful.
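The t-ratio computation described above can be sketched in a few lines. The standard error below is a hypothetical value a package might report, chosen only to illustrate the rule of thumb:

```python
# t-ratio: estimated coefficient divided by its standard error.
# Both numbers here are illustrative; the .02 comes from the text's
# example, while the standard error is an assumed value.
coefficient = -0.02
standard_error = 0.007

t_ratio = coefficient / standard_error
significant = abs(t_ratio) > 2.0   # the rule-of-thumb cutoff from the text

print(round(t_ratio, 2), significant)  # -2.86 True
```

Because |−2.86| exceeds 2.0, this hypothetical coefficient would be judged significantly different from zero; with a standard error of, say, 0.02, the ratio would fall to −1.0 and the estimate would not clear the cutoff.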
