We want to predict a variable $\mathbf{y}$; for this, we need to define a model that describes the relationship between the output $\mathbf{y}$ and the input $\mathbf{x}$. Once the parameters $\{\theta\}$ of the model have been refined iteratively, the output $\mathbf{y}$ can be inferred.
$\mathbf{y}$ → output tensor/vector (the inference or prediction); its shape depends on the model
$\mathbf{x}$ → input tensor/vector; its shape depends on the model
“model” → the relationship between $\mathbf{x}$ and $\mathbf{y}$
$$ \mathbf{y} = f(\mathbf{x},\{\theta\}) $$
“parameters” → the set $\{\theta\}$ that determines how $\mathbf{x}$ is scaled (more generally, transformed) by the model
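As a concrete sketch of $\mathbf{y} = f(\mathbf{x},\{\theta\})$, assuming a simple linear form for $f$ (an illustrative choice; the notes do not fix a particular model):

```python
import numpy as np

# A minimal sketch of y = f(x, {theta}) with a linear model and
# theta = (theta0, theta1). The linear form is an assumption made
# here for illustration; f can be any parameterised function.
def f(x, theta):
    theta0, theta1 = theta
    return theta0 + theta1 * x

x = np.array([0.0, 1.0, 2.0])
theta = (1.0, 2.0)   # hypothetical parameter values
y = f(x, theta)      # -> array([1., 3., 5.])
```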
Here $\mathbf{x}$ and $\mathbf{y}$ can range from single-valued variables to multidimensional tensors, depending on the problem at hand. To judge how well a given choice of $\{\theta\}$ fits a dataset $\{(x_i, y_i)\}_{i=1}^{N}$, we define a loss function, here the sum of squared errors:
$$
L(\{\theta\}) = \sum_{i=1}^N [f(x_i,\{\theta\}) - y_i]^2
$$
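The loss $L(\{\theta\})$ above can be sketched directly in code; the linear form of $f$ below is an assumption for illustration:

```python
import numpy as np

# Sum-of-squares loss L({theta}) = sum_i [f(x_i, theta) - y_i]^2,
# shown for an assumed linear model f(x, theta) = theta0 + theta1 * x.
def f(x, theta):
    return theta[0] + theta[1] * x

def loss(theta, xs, ys):
    residuals = f(xs, theta) - ys   # f(x_i, theta) - y_i for all i at once
    return float(np.sum(residuals ** 2))

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 5.0])
print(loss((1.0, 2.0), xs, ys))  # perfect fit -> 0.0
print(loss((0.0, 2.0), xs, ys))  # residuals (-1, -1, -1) -> 3.0
```

An optimizer would then adjust $\{\theta\}$ to drive this number down.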
So, before tackling a machine learning algorithm, we need to divide the problem into two parts: the model and the optimization algorithm. (Gathering the dataset is a prerequisite, but its discussion is beyond the scope of this lecture, so we confine ourselves to these two parts.)
I hope you have all done least-squares fitting at school or college, so you might wonder what the difference is between regression and machine learning. In a least-squares fit we mainly consider a closed-form solution. See the following appendices for details.
<aside> 🗒️
What is a closed form solution?
→ A closed form solution is an exact solution to an equation using a finite number of mathematical operations and functions.
</aside>
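To make the closed-form idea concrete: for a linear model, the least-squares parameters can be computed exactly via the normal equations, $\theta = (X^\top X)^{-1} X^\top \mathbf{y}$, in a finite number of operations, with no iterative refinement. A minimal sketch with hypothetical data:

```python
import numpy as np

# Closed-form least-squares fit via the normal equations:
# theta = (X^T X)^{-1} X^T y. No iteration is needed.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 2.9, 5.2, 6.8])   # hypothetical noisy data

X = np.column_stack([np.ones_like(xs), xs])   # design matrix: columns [1, x]
theta = np.linalg.solve(X.T @ X, X.T @ ys)    # solve the normal equations
print(theta)  # [intercept, slope], approximately [1.09, 1.94]
```

This is precisely what distinguishes classical regression from the iterative parameter refinement described above.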
Appendices:
So, in this case our model or the relationship between $\mathbf{x}$ and $\mathbf{y}$ is