The Wald test applies to the set of coefficients for each interaction term taken together; that is, it tests whether all of the coefficients are simultaneously zero. Since the Wald test is significant, the interaction between the two variables does add to the predictive power of the model. See below for an example taken from http://www.ats.ucla.edu/stat/mult_pkg/faq/general/nested_tests.htm :

The Wald test

The Wald test approximates the likelihood ratio (LR) test, but with the advantage that it only requires estimating one model. The Wald test works by testing the null hypothesis that a set of parameters is equal to some value. In the model being tested here, the null hypothesis is that the two coefficients of interest are simultaneously equal to zero. If the test fails to reject the null hypothesis, this suggests that removing the variables from the model will not substantially harm the fit of that model, since a predictor with a coefficient that is very small relative to its standard error is generally not doing much to help predict the dependent variable.

The formula for a Wald test is a bit more daunting than the formula for the LR test, so we won't write it out here (see Fox, 1997, p. 569, or other regression texts if you are interested). To give you an intuition about how the test works, it measures how far the estimated parameters are from zero (or any other value under the null hypothesis) in standard errors, similar to the hypothesis tests typically printed in regression output. The difference is that the Wald test can be used to test multiple parameters simultaneously, while the tests typically printed in regression output test only one parameter at a time.

Returning to our example, we will use a statistical package to run our model and then to perform the Wald test. Below we see output for the model with all four predictors (the same output as model 2 above).

Logistic regression                             Number of obs   =        200
                                                LR chi2(4)      =     105.99
                                                Prob > chi2     =     0.0000
Log likelihood = -84.419842                     Pseudo R2       =     0.3857

------------------------------------------------------------------------------
     hiwrite |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      female |   1.805528   .4358101     4.14   0.000     .9513555      2.6597
        read |   .0529536   .0275925     1.92   0.055    -.0011268      .107034
        math |   .1319787   .0318836     4.14   0.000      .069488     .1944694
     science |   .0577623    .027586     2.09   0.036     .0036947     .1118299
       _cons |  -13.26097   1.893801    -7.00   0.000    -16.97275    -9.549188
------------------------------------------------------------------------------
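As an aside, if you want to reproduce a model of this form outside of Stata, a minimal sketch in Python with statsmodels might look like the following. The simulated data and the coefficient values used to generate it are my own illustrative assumptions; the original example uses UCLA's hsb2 dataset, which is not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Stand-in data with the same variable names as the example above.
# The real example uses UCLA's hsb2 data; these values are simulated.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "read": rng.normal(52, 10, n),
    "math": rng.normal(52, 10, n),
    "science": rng.normal(52, 10, n),
})
# Generate a binary outcome from an arbitrary logistic model so the
# fit below has something sensible to estimate.
eta = (-13 + 1.8 * df["female"] + 0.05 * df["read"]
       + 0.13 * df["math"] + 0.06 * df["science"])
df["hiwrite"] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

# Fit the logistic regression with all four predictors.
X = sm.add_constant(df[["female", "read", "math", "science"]])
model = sm.Logit(df["hiwrite"], X).fit()
print(model.summary())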
After running the logistic regression model, the Wald test can be used. The output below shows the results of the Wald test.

 ( 1)  math = 0
 ( 2)  science = 0

       chi2(  2) =    27.53
     Prob > chi2 =    0.0000

The first thing listed in this particular output (the method of obtaining the Wald test and the output may vary by package) is the specific parameter constraints being tested (i.e., the null hypothesis), which is that the coefficients for math and science are simultaneously equal to zero. Below the list of constraints we see the chi-squared value generated by the Wald test, as well as the p-value associated with a chi-squared of 27.53 with two degrees of freedom. The p-value is less than the generally used criterion of 0.05, so we are able to reject the null hypothesis, indicating that the coefficients are not simultaneously equal to zero. Because including statistically significant predictors should lead to better prediction (i.e., better model fit), we can conclude that including math and science results in a statistically significant improvement in the fit of the model.
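Continuing the Python sketch above, the joint Wald test itself can be obtained either from statsmodels' built-in wald_test method or computed directly from the coefficient vector and its covariance matrix. This is a sketch of the general recipe, not the original analysis, and the exact numbers will differ from the Stata output because the data are simulated.

from scipy import stats

# Built-in joint Wald test of H0: math = 0 and science = 0 simultaneously.
print(model.wald_test("math = 0, science = 0"))

# The same statistic by hand: W = (Rb)' [R V R']^(-1) (Rb), which is
# chi-squared with rank(R) degrees of freedom under the null.
names = list(X.columns)            # ['const', 'female', 'read', 'math', 'science']
R = np.zeros((2, len(names)))
R[0, names.index("math")] = 1      # constraint (1): math = 0
R[1, names.index("science")] = 1   # constraint (2): science = 0
b = model.params.to_numpy()
V = model.cov_params().to_numpy()
Rb = R @ b
W = float(Rb @ np.linalg.solve(R @ V @ R.T, Rb))
p = stats.chi2.sf(W, df=R.shape[0])
print(f"Wald chi2(2) = {W:.2f}, Prob > chi2 = {p:.4f}")

This mirrors the logic of Stata's post-estimation test command (e.g., test math science after logit), which is how joint Wald output like the block above is typically produced.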