Visualizing the regression function - the conditional mean
It is instructive to work through an example showing that even the plain old mean squared error (MSE), the objective most common in the regression setting, falls under the same umbrella: it is the cross-entropy between the empirical data distribution $\hat{p}_{\text{data}}$ and a Gaussian model. Please follow the discussion in Section 5.5.1 of Ian Goodfellow's Deep Learning book or Section 20.2.4 of Russell & Norvig's book, and consider the following figure to help visualize the relationship between the observed targets $y$ and the conditional mean $\mathbb{E}[y \mid x]$.
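To make the connection explicit, recall the standard identity (stated here as a reminder; the notation follows the deep learning literature) that the cross-entropy between the empirical data distribution and the model is the expected negative log-likelihood:

$$H\big(\hat{p}_{\text{data}},\, p_{\text{model}}\big) = -\,\mathbb{E}_{(x,\,y) \sim \hat{p}_{\text{data}}}\big[\log p_{\text{model}}(y \mid x)\big]$$

so minimizing this cross-entropy against a Gaussian model is the same as maximum likelihood estimation, which the derivation below reduces to MSE.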
Key Insight: MSE as Cross-Entropy
When we assume the model distribution is Gaussian with a fixed noise variance $\sigma^2$:

$$p(y \mid x) = \mathcal{N}\big(y;\, \hat{y}(x),\, \sigma^2\big)$$

the negative log-likelihood becomes:

$$-\log p(y \mid x) = \frac{\big(y - \hat{y}(x)\big)^2}{2\sigma^2} + \frac{1}{2}\log\big(2\pi\sigma^2\big)$$

Averaged over the training set, this is proportional to the mean squared error (MSE), up to an additive constant that does not depend on $\hat{y}$. Therefore, minimizing MSE is equivalent to maximum likelihood estimation under a Gaussian noise assumption.
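As a quick numerical check, here is a minimal sketch in plain NumPy (the linear model $\hat{y} = w x$, the noise scale, and the grid search are illustrative assumptions, not part of the text above). It evaluates the MSE and the Gaussian negative log-likelihood over a grid of candidate slopes and confirms they pick out the same minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 2x + Gaussian noise (illustrative setup)
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)

sigma = 0.5  # fixed noise scale assumed by the Gaussian model

def mse(w):
    """Mean squared error of the linear model y_hat = w * x."""
    return np.mean((y - w * x) ** 2)

def gaussian_nll(w):
    """Average negative log-likelihood of y under N(w * x, sigma^2)."""
    resid = y - w * x
    return np.mean(resid ** 2 / (2 * sigma ** 2)
                   + 0.5 * np.log(2 * np.pi * sigma ** 2))

# Grid search over slopes: both objectives select the same w
ws = np.linspace(0.0, 4.0, 1001)
w_mse = ws[np.argmin([mse(w) for w in ws])]
w_mle = ws[np.argmin([gaussian_nll(w) for w in ws])]
print(w_mse, w_mle)  # identical values
```

This works for any fixed $\sigma$: the two objectives differ only by a positive scale $1/(2\sigma^2)$ and an additive constant, so their minimizers coincide.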
References

- Marginal Maximum Likelihood - Introduction to MLE for marginal distributions
- MLE of Gaussian Parameters - Detailed derivation of MLE for Gaussian parameters
- Section 5.5.1, Conditional Log-Likelihood and Mean Squared Error - Deep Learning (Goodfellow, Bengio, Courville)
- Section 20.2.4 of Artificial Intelligence: A Modern Approach (Russell & Norvig)

