LEAST SQUARES APPROXIMATION

In high-school linear algebra we learn to solve the equation Ax = b by techniques such as echelon-form reduction and Cramer's rule. If there are no free variables (the columns of A are independent), a unique solution exists; when the columns of A are dependent and b is not in their span, elimination reaches an impossible equation (such as 1 = 0), which means there is no solution.

Let's look at this from a CFD perspective. The governing equations are converted from strong to weak form and written in a discrete FVM formulation; the resulting system can be written as Ax = b, where the x-vector holds the variables we are solving for, the b-vector is called the source term, and the matrix A depends on the discretization scheme used to obtain the equations. Because of errors (truncation, round-off) and approximations, in practice Ax \neq b; instead there is some error e = Ax - b. When e = 0 we get the exact solution, which can be written as x = A^{-1} b. In other words, we never have exact measurements; there is always some noise, and that noise induces this error.
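The point above can be illustrated numerically. The sketch below uses a small hypothetical 3x2 system: with exact data the residual e = Ax - b vanishes, but once the right-hand side carries measurement noise the same x no longer satisfies the equations.

```python
import numpy as np

# Hypothetical overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
x_true = np.array([2.0, 0.5])

b_exact = A @ x_true                                  # consistent right-hand side
b_noisy = b_exact + np.array([0.01, -0.02, 0.015])    # add measurement noise

e_exact = A @ x_true - b_exact   # residual with exact data
e_noisy = A @ x_true - b_noisy   # residual with noisy data

print(np.allclose(e_exact, 0.0))  # True: exact data gives zero error
print(np.allclose(e_noisy, 0.0))  # False: noise makes Ax = b unsolvable
```

The noisy b has left the column space of A, which is exactly the situation least squares is designed to handle.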

In conclusion, the vector b does not lie in the column space of the matrix A, so we want a method that approximates the solution by projecting b onto the column space of A, and that is what the method of least squares is about.

PROJECTIONS:

The error vector (e) can be written as e = b - p by the triangle law of vector addition. Using the definition of orthogonality we require a \cdot (b - a\hat{x}) = 0, which gives \hat{x} = \frac{a^Tb}{a^Ta}. From this we get p = \hat{x}a, which is the projection of b onto the line spanned by the vector a.
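The formulas above can be checked directly. With hypothetical vectors a and b, computing \hat{x} = a^Tb / a^Ta and p = \hat{x}a leaves an error e = b - p that is perpendicular to a:

```python
import numpy as np

# Hypothetical vectors to project b onto the line spanned by a.
a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 1.0, 4.0])

x_hat = (a @ b) / (a @ a)   # scalar coefficient a^T b / a^T a
p = x_hat * a               # projection of b onto a
e = b - p                   # error vector

print(np.isclose(a @ e, 0.0))  # True: e is perpendicular to a
```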

[Figure: Least Square Method]

Now the aim is to make the error as small as possible, and this is done by choosing the best possible vector A\hat{x} in the column space of matrix A, for which the error becomes perpendicular to the column space. Geometrically, we are searching for the point in the column space closest to b, which is the point p. Hence the output vector b splits into two parts, error (e) plus projection (p), from which it follows that Ax = b = p + e is not solvable, while the part A\hat{x} = p is solvable.
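Imposing orthogonality of e = b - A\hat{x} to every column of A gives the normal equations A^TA\hat{x} = A^Tb, which is how \hat{x} is computed in practice. A minimal sketch with hypothetical data, cross-checked against NumPy's built-in least-squares solver:

```python
import numpy as np

# Hypothetical overdetermined system: b is not in the column space of A.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations A^T A x_hat = A^T b
p = A @ x_hat                              # projection of b onto C(A)
e = b - p                                  # error vector

print(np.allclose(A.T @ e, 0.0))           # True: e is perpendicular to C(A)

# Cross-check against NumPy's least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_lstsq))         # True: both give the same x_hat
```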

The least-squares method exploits orthogonality to minimize the error and best approximate the output vector b.

One commonly used method in CFD for calculating the gradient at each cell is the "Least Squares Gradient Scheme", which is based on the same principle of error minimization.
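As a rough sketch of that idea (with hypothetical 2D data): for a cell with value phi_c and neighbours at offsets d_i with values phi_i, a first-order Taylor expansion gives grad \cdot d_i \approx phi_i - phi_c. Stacking one such row per neighbour produces an overdetermined system that is solved in the least-squares sense:

```python
import numpy as np

# Hypothetical linear field; its exact gradient is (2, 3).
phi = lambda x, y: 2.0 * x + 3.0 * y

cell = np.array([0.0, 0.0])                 # cell centre
neighbors = np.array([[1.0, 0.0],           # hypothetical neighbour centres
                      [0.0, 1.0],
                      [-1.0, 0.0],
                      [1.0, 1.0]])

D = neighbors - cell                                        # displacement vectors d_i
dphi = np.array([phi(*n) for n in neighbors]) - phi(*cell)  # value differences

# Least-squares solve of D g = dphi for the cell gradient g.
g, *_ = np.linalg.lstsq(D, dphi, rcond=None)
print(np.allclose(g, [2.0, 3.0]))  # True: recovers the exact linear gradient
```

For a linear field the scheme is exact, which is why least-squares gradients are popular on unstructured meshes.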
