Ordinary Least Squares (OLS)
→ Context: supervised learning (we observe labeled x–y pairs)
→ Fit a line of the form y = mx + c, written here as y = b0 + b1x
→ Concept of actual y (yᵢ) and estimated y (ŷᵢ)
→ Minimize the squared deviation between the actual and estimated values.
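The fit described in the bullets above can be sketched numerically. This is a minimal illustration, assuming NumPy is available; the data points are made up so that they lie exactly on a known line:

```python
import numpy as np

# Made-up sample data lying exactly on y = 2x + 1
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# np.polyfit with degree 1 performs an ordinary least-squares line fit
# and returns the coefficients [slope, intercept], i.e. [b1, b0]
b1, b0 = np.polyfit(x, y, 1)
print(b0, b1)  # intercept ≈ 1.0, slope ≈ 2.0
```

Because the data are exactly linear here, the squared deviation at the fitted line is zero; with noisy data, the fit instead lands on the line that makes the sum of squared deviations as small as possible.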
The Derivation:
- Our goal is to minimize the sum of squared errors (SSE).
- We use a basic idea from calculus: take the first derivative with respect to each coefficient and set it equal to 0.
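In symbols, with the estimate ŷᵢ = b0 + b1xᵢ, the objective and the two first-order conditions are:

```latex
\mathrm{SSE} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2
             = \sum_{i=1}^{n} (y_i - b_0 - b_1 x_i)^2
```

Setting both partial derivatives to zero gives the two "normal equations": one from ∂SSE/∂b0 = 0 and one from ∂SSE/∂b1 = 0, solved below for b0 and b1 respectively.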
Derivation for b0
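Differentiating SSE with respect to b0, setting the result to zero, and dividing through by n:

```latex
\frac{\partial\,\mathrm{SSE}}{\partial b_0}
  = -2 \sum_{i=1}^{n} (y_i - b_0 - b_1 x_i) = 0
\;\Rightarrow\;
n b_0 = \sum_{i=1}^{n} y_i - b_1 \sum_{i=1}^{n} x_i
\;\Rightarrow\;
b_0 = \bar{y} - b_1 \bar{x}
```

So the fitted line always passes through the point of means (x̄, ȳ).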
Derivation for b1
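Differentiating with respect to b1, substituting b0 = ȳ − b1x̄, and simplifying yields the familiar slope formula:

```latex
\frac{\partial\,\mathrm{SSE}}{\partial b_1}
  = -2 \sum_{i=1}^{n} x_i (y_i - b_0 - b_1 x_i) = 0
\;\Rightarrow\;
b_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
           {\sum_{i=1}^{n} (x_i - \bar{x})^2}
```

That is, b1 is the sample covariance of x and y divided by the sample variance of x.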