This section is aimed at students in upper secondary education in the Danish
school system, so some objects will be simplified and some details omitted.
Polynomials
Polynomials are probably the class of functions with the most
interesting growth patterns so far.
Laurent Polynomials
The most general version of a
polynomial in the current setting is the so-called Laurent polynomial. It is
defined as a sum of power functions with integer exponents, i.e.
$$p(x)=\sum_{k\in\mathbb{Z}}a_kx^k$$
where \(a_k\in\mathbb{C}\) are the coefficients, only finitely many of which
are nonzero. If infinitely many coefficients are allowed, as in the example
below, the object is strictly speaking called a Laurent series.
Consider the following Laurent polynomial
$$p(x)=2+2x^{-1}+\sum_{k=-\infty}^{-2}\frac{x^k}{(-k)!}+\sum_{k=1}^{\infty}
\frac{x^k}{(k+1)!}$$
It has a nasty singularity at \(x=0\), since infinitely many of its terms blow
up there, but everywhere else it is equal to the function
$$p(x)=\frac{e^x}{x}+e^{1/x}$$
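As a sanity check, this identity can be tested numerically. The following Python sketch (the function names are my own) truncates the two infinite sums and compares them to the closed form away from the singularity:

```python
import math

# Truncate the two infinite sums of the Laurent series at k = 30 (a sketch);
# the tails are negligible for moderate x since the factorials dominate.
def laurent_truncated(x, terms=30):
    s = 2 + 2 / x
    s += sum(x**(-k) / math.factorial(k) for k in range(2, terms))
    s += sum(x**k / math.factorial(k + 1) for k in range(1, terms))
    return s

def closed_form(x):
    return math.exp(x) / x + math.exp(1 / x)

for x in (0.5, 1.0, 2.0, -1.5):
    assert abs(laurent_truncated(x) - closed_form(x)) < 1e-9
```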
These are relatively
complex objects, so I will restrict myself to real coefficients, with most of
them being zero, including all the ones with negative index. In fact, what's
left is the "normal" polynomial.
Polynomials
When a Laurent polynomial has \(a_k=0\) for \(k< 0\) and \(k>n\) and
\(a_k\in\mathbb{R}\) for \(0\leq k\leq n\), we omit
the "Laurent" and just call it a polynomial with the following expression
$$p(x)=\sum_{k=0}^n a_kx^k=a_0+a_1x+\cdots+a_nx^n$$
where some of these coefficients might still be zero, just not the last one,
\(a_n\neq0\).
The so-called degree of this polynomial is the highest exponent, \(n\).
There is an important theorem, named the fundamental theorem of algebra,
which states that the polynomial always has exactly \(n\) complex roots, when
they are counted with their so-called "multiplicity". With respect to the
roots the polynomial can
always be written in a factorized form as
$$p(x)=a_n\prod_{k=1}^n(x-r_k)$$
where
$$p(0)=a_0=a_n\prod_{k=1}^n(-r_k)=(-1)^na_n\prod_{k=1}^nr_k$$
where \(r_k\) are all the roots, and their multiplicity is how many times
they appear in this product. If a polynomial has more than one distinct real
root then its graph must intersect the x-axis at least twice, which means
that the slope of the graph must change sign at least once. This means that
there are polynomials that are not strictly monotonic, unlike power- and
exponential functions.
Now consider the following polynomial
$$p(x)=1+5x+10x^2+10x^3+5x^4+x^5$$
It can actually be factorized into
$$p(x)=(x+1)^5$$
with
$$(-1)^5a_5\prod_{k=1}^5 r_k=(-1)^5\cdot1\cdot(-1)^5=1=a_0=p(0)$$
since every root is \(r_k=-1\).
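A quick numerical check of this example in Python (a sketch; the coefficient list is copied from the polynomial above):

```python
import math

# The coefficients of p(x) = 1 + 5x + 10x^2 + 10x^3 + 5x^4 + x^5 are the
# binomial coefficients C(5, k), so p(x) should equal (x+1)^5 everywhere.
coeffs = [1, 5, 10, 10, 5, 1]
assert coeffs == [math.comb(5, k) for k in range(6)]

def p(x):
    return sum(a * x**k for k, a in enumerate(coeffs))

for x in (-2.0, -1.0, 0.0, 1.0, 3.0):
    assert abs(p(x) - (x + 1) ** 5) < 1e-9
```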
Derivatives
We can differentiate the polynomial term-wise and get
$$p'(x)=\sum_{k\in\mathbb{Z}}ka_kx^{k-1}$$
We can also use the product rule on the factorized form
$$p'(x)=a_n\sum_{k=1}^n\prod_{j\neq k}(x-r_j)$$
with
$$p'(0)=a_1=(-1)^{n-1}a_n\sum_{k=1}^n\prod_{j\neq k}r_j$$
Let's continue with the polynomial from the previous example
$$p(x)=1+5x+10x^2+10x^3+5x^4+x^5$$
Then differentiating it would yield
\begin{align}
p'(x)=&5+20x+30x^2+20x^3+5x^4\\
=&5(1+4x+6x^2+4x^3+x^4)\\
=&a_5\sum_{k=1}^5\prod_{j\neq k}(x+1)\\
=&1\sum_{k=1}^5(x+1)^4=5(x+1)^4
\end{align}
which means that
$$(x+1)^4=1+4x+6x^2+4x^3+x^4$$
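The term-wise differentiation rule can be checked in the same way (a Python sketch):

```python
import math

# Term-wise derivative: d/dx sum a_k x^k = sum k*a_k x^(k-1).
# For p(x) = (x+1)^5 the result should be 5(x+1)^4, whose coefficients
# are 5*C(4, k), matching the identity above.
p_coeffs = [1, 5, 10, 10, 5, 1]
dp_coeffs = [k * a for k, a in enumerate(p_coeffs)][1:]
assert dp_coeffs == [5 * math.comb(4, k) for k in range(5)]
```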
Binomial Coefficients
The two polynomials from the previous example can be generalized as
$$p(x)=(x+1)^n=\sum_{k=0}^n{n\choose k}x^k$$
which has the root \(x=-1\) with multiplicity \(n\). The expression
comes from expanding the parenthesis, since a power \(x^k\) must take an
\(x\) from \(k\) factors and a 1 from the rest. This in effect means
that we have to choose \(k\) out of \(n\) factors, which is done in
\(n\choose k\) ways by combinatorics. If we
subtract an \(x\) from the 1 inside the parenthesis, we get
$$(x+1-x)^n=1^n=1=\sum_{k=0}^n{n\choose k}x^k(1-x)^{n-k}$$
which verifies that the binomial
distribution is in fact a probability distribution: for \(x=p\) the terms
are exactly the probabilities \({n\choose k}p^k(1-p)^{n-k}\) of the possible
outcomes, and they sum to 1.
If we consider
$$(1-1)^n=0^n=0=\sum_{k=0}^n{n\choose k}1^k(-1)^{n-k}=\sum_{k=0}^n
{n\choose k}(-1)^{n-k}$$
we can observe that if we alternately add and subtract the numbers in a row
of Pascal's triangle, the result is 0. This is obvious for the rows with an
even number of entries, by symmetry, but not equally obvious for the rows
with an odd number of entries.
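This can be confirmed for the first rows with a small Python sketch:

```python
import math

# Alternating sum of row n of Pascal's triangle: sum_k (-1)^(n-k) C(n, k).
# It should be 0 for every n >= 1, for both even and odd row lengths.
for n in range(1, 12):
    assert sum((-1) ** (n - k) * math.comb(n, k) for k in range(n + 1)) == 0
```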
Quadratics
The one class of polynomials we will spend most of our time investigating
are the ones of degree 2, which are known as quadratics, with the
convention of writing \(a_2=a\), \(a_1=b\) and \(a_0=c\), i.e.
$$p(x)=ax^2+bx+c$$
These three coefficients have interesting graphical interpretations but
only the one for \(c\) is immediately obvious.
$$p(0)=a⋅0^2+b⋅0+c=c$$
so it's simply the initial value or the y-intercept. For \(b\) consider
the derivative of both the normal and factorized form
$$\boxed{p'(x)=2ax+b}=a(x-r_2+x-r_1)=\boxed{2ax-a(r_1+r_2)}$$
which means that
$$p'(0)=2a⋅0+b=b$$
So the interpretation of \(b\) is that it is the slope of the graph at the
y-axis, and it is related to the sum of the roots by \(b=-a(r_1+r_2)\). The
last part
of that is the first half of Viète's formula for quadratics, the full
formula following directly from the factorization form
$$p(x)=a(x-r_1)(x-r_2)=ax^2-a(r_1+r_2)x+ar_1r_2$$
so
$$-\frac{b}{a}=r_1+r_2$$
and
$$\frac{c}{a}=r_1r_2$$
This is extremely useful for finding roots, especially when \(a=1\), since
finding two whole numbers whose product is one number and whose sum is
another is extremely easy in a number of important cases.
An example of this is the following polynomial and its immediate factorization
$$p(x)=x^2-6x-27=(x+3)(x-9)$$
by Viète's method, so the roots are \(r_1=-3\) and \(r_2=9\).
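Viète's method can even be mechanized as a small search. The Python sketch below (the helper name is my own) scans integer candidates for a monic quadratic:

```python
# For a monic quadratic x^2 + bx + c, Viète gives r1 + r2 = -b and r1*r2 = c,
# so we can scan integers r1 and check whether the matching r2 fits.
def viete_roots(b, c, limit=100):
    for r1 in range(-limit, limit + 1):
        r2 = -b - r1
        if r1 * r2 == c:
            return r1, r2
    return None

# the example p(x) = x^2 - 6x - 27
assert sorted(viete_roots(-6, -27)) == [-3, 9]
```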
The Roots and Vertex Theorem
Every quadratic can be written as
$$p(x)=a(x-r_-)(x-r_+)=a(x-V_x)^2+V_y$$
where the first expression is just the factorized form and the second I
will call the vertex form, which represents parallel-shifting the
function \(ax^2\) by \(V_x\) horizontally and \(V_y\) vertically.
Furthermore the roots can be calculated as
$$r_\pm=\frac{-b\pm\sqrt{Δ}}{2a}$$
with the so-called "discriminant"
$$Δ=b^2-4ac$$
Additionally the graph has an extremum, or vertex, at
\(V=(V_x,V_y)\), which is a minimum for \(a>0\), a maximum for
\(a< 0\), and its coordinates can be calculated as follows
$$V=\left(-\frac{b}{2a},-\frac{Δ}{4a}\right)$$
The graph is also symmetric about the line \(x=V_x\).
It may be weird to include so many things in the theorem, but all the
statements follow from the same argument.
This theorem provides us with a graphical interpretation of \(a\) namely
that its sign determines whether the graph branches upwards, \(a>0\),
or downwards, \(a< 0\), and how "steep" the "branches" are.
Let's take the polynomial from the previous example
$$p(x)=x^2-6x-27$$
and assume we don't know Viète's method and apply the theorem. Let's start
by calculating the discriminant
$$\Delta=(-6)^2-4⋅1⋅(-27)=36+108=144$$
followed by the roots
\begin{align}
r_\pm=&\frac{-(-6)\pm\sqrt{\Delta}}{2⋅1}\\
=&\frac{6\pm12}{2}=3\pm6\\
=&-3\vee9
\end{align}
yielding the factorization
$$p(x)=1⋅(x-(-3))(x-9)$$
as previously noted. Let's now calculate the vertex
\begin{align}
V=&\left(-\frac{b}{2a},-\frac{\Delta}{4a}\right)\\
=&\left(-\frac{-6}{2⋅1},-\frac{144}{4⋅1}\right)\\
=&(3,-36)
\end{align}
with
$$p(x)=1⋅(x-3)^2-36$$
This makes sense since the parabola is symmetric about the vertex, and 3 is
halfway between -3 and 9, and since \(b< 0\) we know that the graph is going
downward at the y-axis while intercepting it at \(y=-27\), so it makes sense
that it might reach \(y=-36\) at the vertex.
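The whole calculation can be packed into a few lines of Python (a sketch; the function name is my own):

```python
import math

# Discriminant, roots and vertex of p(x) = ax^2 + bx + c,
# straight from the Roots and Vertex Theorem.
def roots_and_vertex(a, b, c):
    disc = b * b - 4 * a * c
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    return (r1, r2), (-b / (2 * a), -disc / (4 * a))

# the worked example p(x) = x^2 - 6x - 27
(r1, r2), (vx, vy) = roots_and_vertex(1, -6, -27)
assert (r1, r2) == (-3.0, 9.0) and (vx, vy) == (3.0, -36.0)
```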
Proof
First off, we start by dividing both sides by \(a\) and complete the
square
\begin{align}
&&p(x)=&ax^2+bx+c\\
\implies&&\frac{p(x)}{a}=&x^2+\frac{b}{a}x+\frac{c}{a}\\
&&=&x^2+2\frac{b}{2a}x+\left(\frac{b}{2a}\right)^2
-\left(\frac{b}{2a}\right)^2+\frac{4ac}{4a^2}\\
&&=&\left(x+\frac{b}{2a}\right)^2+\frac{4ac-b^2}{4a^2}\\
\implies&&p(x)=&a\left(x+\frac{b}{2a}\right)^2-\frac{Δ}{4a}
\end{align}
From this point, I will branch the proof into two directions, starting
with the vertex.
Vertex
The x-value is only present in the square, so the polynomial only
varies with the square term. The square is always nonnegative, so the
sign of the square term is determined by the sign of the a-value.
For \(a< 0\) the first term is negative, which means that for any
non-zero value of the square, the value of the polynomial
decreases, which makes the x-value that makes the square zero a
maximum. Similarly, for \(a>0\) the same value is a minimum. In
other words
$$V_x+\frac{b}{2a}=0\implies V_x=-\frac{b}{2a}$$
and
$$V_y=p(V_x)=\cancel{a\left(-\frac{b}{2a}+\frac{b}{2a}\right)^2}
-\frac{Δ}{4a}$$
Roots
A root is an x-value, \(r\) such that \(p(r)=0\), i.e.
\begin{align}
&&a\left(r+\frac{b}{2a}\right)^2=&\frac{Δ}{4a}\\
\implies&&r+\frac{b}{2a}=&\pm\sqrt{\frac{Δ}{4a^2}}\\
\implies&&r=&-\frac{b}{2a}\pm\frac{\sqrt{Δ}}{2a}=\frac{-b\pm\sqrt{Δ}}{2a}
\end{align}
now all I have to show, by Viète, is that the sum of these is \(-\frac{b}{a}\)
$$\frac{-b\cancel{-\sqrt{Δ}}}{2a}+\frac{-b\cancel{+\sqrt{Δ}}}{2a}=\frac{-\cancel{2}b}{\cancel{2}a}$$
and that their product is \(\frac{c}{a}\)
\begin{align}
\frac{-b-\sqrt{Δ}}{2a}\cdot\frac{-b+\sqrt{Δ}}{2a}=&\frac{(-b)^2-\sqrt{Δ}^2}{4a^2}\\
=&\frac{\cancel{b^2}-(\cancel{b^2}-4ac)}{4a^2}=\frac{4ac}{4a^2}=\frac{c}{a}
\end{align}
where the numerator is simplified by the difference of squares rule.
∎
There is a slight issue with these roots. If we want real roots, which
is what we can usually reasonably interpret, we have to require the
discriminant to be nonnegative, i.e. \(Δ\geq0\), since we're taking the
square root. If the discriminant is \(Δ=0\) then we have only one root
with multiplicity 2.
Three Point Theorem
Given three points in the plane, you can find the coefficients of the
unique quadratic that passes through them by the following formulas
\begin{align}
a=&\frac{a_{ij}-a_{ik}}{\Delta x_{jk}}\\
b=&a_{ij}-a(x_i+x_j)\\
c=&y_i-ax_i^2-bx_i
\end{align}
where
$$a_{ij}=\frac{\Delta y_{ij}}{\Delta x_{ij}}=\frac{y_i-y_j}{x_i-x_j}$$
is the slope of the line from point i to point j.
Proof
Like any other point theorem, we start by putting the points into the
function expression which yields
$$y_i=ax_i^2+bx_i+c$$
We remove c by subtracting these two-by-two, which yields
\begin{align}
&&y_i-y_j=&a(x_i^2-x_j^2)+b(x_i-x_j)\\
&&=&a(x_i+x_j)(x_i-x_j)+b(x_i-x_j)\\
\iff&&a_{ij}=\frac{\Delta y_{ij}}{\Delta x_{ij}}=&a(x_i+x_j)+b
\end{align}
Subtracting two of these slope equations removes \(b\) and determines \(a\);
once I have \(a\), a slope equation gives \(b\), and the original point
equation gives \(c\). The subtraction yields
\begin{align}
a_{ij}-a_{ik}=a(\cancel{x_i}+x_j\cancel{-x_i}-x_k)
\end{align}
∎
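The theorem translates directly into code; the following Python sketch recovers a quadratic from three of its points (the function name is my own):

```python
# Recover a, b, c from three points using the Three Point Theorem:
# a from the difference of two slopes, then b, then c.
def quadratic_through(p1, p2, p3):
    (xi, yi), (xj, yj), (xk, yk) = p1, p2, p3
    a_ij = (yi - yj) / (xi - xj)      # slope from point i to point j
    a_ik = (yi - yk) / (xi - xk)
    a = (a_ij - a_ik) / (xj - xk)
    b = a_ij - a * (xi + xj)
    c = yi - a * xi**2 - b * xi
    return a, b, c

# three points on p(x) = x^2 - 6x - 27 from the earlier example
pts = [(x, x**2 - 6 * x - 27) for x in (0.0, 1.0, 5.0)]
assert quadratic_through(*pts) == (1.0, -6.0, -27.0)
```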
Ballistics
One application of quadratics is the ballistics of projectiles, i.e. the path
they take through the air, ironically assuming there is no air for simplicity.
Although the gravitational force varies slightly with height, and position
for that matter, on the scale of most projectiles it is a reasonable approximation
to assume that it is constant. The same goes for the associated acceleration,
named the gravitational acceleration
$$a(t)=g≈10\frac{m}{s^2}$$
This number tells us that any object that is only acted on by the gravitational
force will have its velocity increased by \(10\frac{m}{s}\) each second towards
the ground, i.e.
$$v(t)=gt+v_0≈10t+v_0$$
This in turn describes the rate at which the position changes towards the
ground. Since the velocity is not constant, we have to consider the following
relation
\begin{align}
&&v(t)=&s'(t)\\
\implies&&s(t)=&\int v(t)dt\\
&&≈&\int (10t+v_0)dt\\
&&=&\frac{10}{2}t^2+v_0t+s_0
\end{align}
which tells us how to calculate the vertical position using the gravitational
acceleration, time, and initial vertical velocity and position. Not only that
but it turns out to be a quadratic in time, which means we can quickly calculate
roots, which are the times where the vertical position is zero, i.e. when the
object is on the ground. Often an object is launched off the ground so that
\(s_0=0\), and in the opposite direction of the acceleration, so \(v_0< 0\),
which leaves us with
$$s(t)=5t^2-v_0t=(5t-v_0)t$$
where \(v_0\) now represents the speed and not the velocity. This obviously
has the roots \(t=0\vee\frac{v_0}{5}\) which in turn gives us the vertex \(V_t=\frac{v_0}{10}\),
since the vertex is halfway between the roots. This makes sense since the
projectile loses \(10\frac{m}{s}\) per second, so after \(V_t\) seconds, it
has lost all its vertical velocity and is starting to fall down. If we draw
the graph of this quadratic it will have time on the first axis, but often
we would be interested in the actual path the projectile takes, i.e. in a
traditional coordinate system. To do this, I use the
parametric function which represents a vector quadratic
\begin{align}
\underline{r}(t)=&\frac{1}{2}\underline{a}t^2+\underline{v}_0t+\underline{s}_0\\
=&\frac{1}{2}\begin{pmatrix}a_x\\a_y\end{pmatrix}t^2+\begin{pmatrix}
{v_0}_x\\{v_0}_y\end{pmatrix}t+\begin{pmatrix}x_0\\y_0\end{pmatrix}\\
=&\begin{pmatrix}
v_xt+x_0\\5t^2-v_yt+y_0\end{pmatrix}
\end{align}
where the x-function is the previously mentioned quadratic with \(a_x=0\),
and the y-function has \(a_y=10≈g\) and \({v_0}_y=-v_y\), since the vertical
launch speed \(v_y\) points opposite to the acceleration.
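As a small Python sketch of this parametric path (following the component form above, where negative y-values are above the launch height):

```python
# Parametric projectile position with g ≈ 10 m/s², horizontal speed vx and
# vertical launch speed vy; in this convention negative y is above the start.
def position(t, vx, vy, x0=0.0, y0=0.0):
    return (vx * t + x0, 5 * t**2 - vy * t + y0)

# vertical speed 20 m/s: back at launch height at t = 20/5 = 4 s,
# and at the vertex (the highest point) at t = 20/10 = 2 s
assert position(4.0, vx=3.0, vy=20.0) == (12.0, 0.0)
assert position(2.0, vx=3.0, vy=20.0) == (6.0, -20.0)
```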
Angle of projectile
Given an initial velocity v, and target position \((x,y)\), the angle
with which you should launch an unassisted projectile can be calculated with
Cartesian coordinates as
$$\theta=\left(\arcsin\left(\frac{v^2y-ax^2}{v^2\sqrt{x^2+y^2}}\right)+\arctan\left(\frac{y}{x}\right)\right)/2$$
and in polar coordinates as
$$\theta=\left(\arcsin\left(\sin\varphi-\cos^2\varphi\frac{ar}{v^2}\right)+\varphi\right)/2$$
Proof
Let's start by isolating the time in the first coordinate function
$$v\cos\theta t=x\iff t=\frac{x}{v\cos\theta}$$
Now I put that into the second coordinate function
\begin{align}
&&y=&\frac{1}{2}at^2+v\sin\theta t\\
\iff&&2y=&a\left(\frac{x}{v\cos\theta}\right)^2+2\cancel{v}\sin\theta\frac{x}{\cancel{v}\cos\theta}\\
\iff&&2y\cos^2\theta=&a\frac{x^2}{v^2}+2x\cos\theta\sin\theta\\
\iff&&y(\cos(2\theta)+1)=&a\frac{x^2}{v^2}+x\sin(2\theta)\\
\iff&&y-a\frac{x^2}{v^2}=&x\sin(2\theta)-y\cos(2\theta)\\
&&=&\det\left(\begin{pmatrix}x\\y\end{pmatrix},\begin{pmatrix}\cos(2\theta)\\\sin(2\theta)\end{pmatrix}\right)\\
&&=&\left|\begin{pmatrix}x\\y\end{pmatrix}\right|\cdot1\sin\varphi\\
\iff&&2\theta-\arctan\left(\frac{y}{x}\right)=\varphi=&\arcsin\left(\frac{v^2y-ax^2}{v^2\sqrt{x^2+y^2}}\right)\\
\end{align}
where I use the two trigonometric identities \(2\cos\theta\sin\theta=\sin(2\theta)\) and \(2\cos^2\theta=\cos(2\theta)+1\), and
\(\varphi\) is the angle between the vector from the origin to the target point and the unit vector \((\cos(2\theta),\sin(2\theta))\).
∎
The sine obviously yields two values, and the second value can be calculated by the less stylish formulas
$$\theta=\left(\pi+\arctan\left(\frac{y}{x}\right)-\arcsin\left(\frac{v^2y-ax^2}{v^2\sqrt{x^2+y^2}}\right)\right)/2$$
and
$$\theta=\left(\pi+\varphi-\arcsin\left(\sin\varphi-\cos^2\varphi\frac{ar}{v^2}\right)\right)/2$$
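The formula can be verified numerically: compute the angle for some target, plug it back into the trajectory, and check that the projectile actually hits. A Python sketch, with \(a\) as the signed vertical acceleration used in the proof:

```python
import math

# Launch angle towards a target (x, y) with speed v and vertical
# acceleration a (negative when y points upward), then a round-trip check.
def launch_angle(v, x, y, a):
    s = (v**2 * y - a * x**2) / (v**2 * math.hypot(x, y))
    return (math.asin(s) + math.atan2(y, x)) / 2

v, x, y, a = 30.0, 40.0, 10.0, -10.0
theta = launch_angle(v, x, y, a)
t = x / (v * math.cos(theta))          # time of flight from the x-equation
assert abs(0.5 * a * t**2 + v * math.sin(theta) * t - y) < 1e-9
```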
Quadratic Regression
For a dataset \((x_i,y_i)\) for \(i=0,1,2,\ldots,n\) the coefficients for the best fit quadratic are as follows
\begin{align}
a=&\frac{x_{32}yx-x_{21}yx^2}{x_{32}^2-x_{21}x_{42}}\\
=&30\frac{n(n-1)\Sigma y-6n\Sigma yk+6\Sigma yk^2}{\Delta x^2(n-1)n(n+1)(n+2)(n+3)}\\
b=&\frac{yx-ax_{32}}{x_{21}}\\
=&6\frac{-3n(n-1)(2n+1)\Sigma y+2(2n+1)(8n-3)\Sigma yk-30n\Sigma yk^2}{\Delta x(n-1)n(n+1)(n+2)(n+3)}\\
c=&\frac{\Sigma y-a\Sigma x^2-b\Sigma x}{n+1}\\
=&3\frac{(3n^2+3n+2)\Sigma y-6(2n+1)\Sigma yk+10\Sigma yk^2}{(n+1)(n+2)(n+3)}\\
\end{align}
where
$$x_{ij}=(n+1)\sum_{k=0}^n x_k^i-\sum_{k=0}^nx_k^j\sum_{k=0}^nx_k^{i-j}$$
and
$$yx^i=(n+1)\sum_{k=0}^ny_kx_k^i-\sum_{k=0}^ny_k\sum_{k=0}^nx_k^i$$
The last equality in each coefficient assumes the x-values are equally spaced by \(\Delta x\), which also implies that
\begin{align}
x_{21}=&\Delta x^2\frac{n(n+1)^2(n+2)}{12}\\
x_{32}=&\Delta xx_{21}n\\
x_{42}=&\Delta x^2x_{21}\frac{(2n+1)(8n-3)}{15}\\
\end{align}
Proof
Let's minimize the sum of squared residuals,
\(K(a,b,c)=\sum_{i=0}^n(ax_i^2+bx_i+c-y_i)^2\), by setting its partial
derivatives to zero
\begin{align}
0=\frac{\partial K}{\partial a}(a,b,c)=&\sum_{i=0}^n2(ax_i^2+bx_i+c-y_i)x_i^2\\
=&2a\Sigma x^4+2b\Sigma x^3+2c\Sigma x^2-2\Sigma yx^2\\
0=\frac{\partial K}{\partial b}(a,b,c)=&\sum_{i=0}^n2(ax_i^2+bx_i+c-y_i)x_i\\
=&2a\Sigma x^3+2b\Sigma x^2+2c\Sigma x-2\Sigma yx\\
0=\frac{\partial K}{\partial c}(a,b,c)=&\sum_{i=0}^n2(ax_i^2+bx_i+c-y_i)\\
=&2a\Sigma x^2+2b\Sigma x+2(n+1)c-2\Sigma y
\end{align}
Now I use the equal coefficients method to eliminate \(c\)
\begin{align}
0=&a((n+1)\Sigma x^3-\Sigma x\Sigma x^2)+b((n+1)\Sigma x^2-(\Sigma x)^2)+\Sigma x\Sigma y-(n+1)\Sigma yx\\
=&ax_{32}+bx_{21}-yx\\
0=&a((n+1)\Sigma x^4-(\Sigma x^2)^2)+b((n+1)\Sigma x^3-\Sigma x^2\Sigma x)+\Sigma x^2\Sigma y-(n+1)\Sigma yx^2\\
=&ax_{42}+bx_{32}-yx^2
\end{align}
where
$$x_{ij}=(n+1)\Sigma x^i-\Sigma x^j\Sigma x^{i-j}$$
and
$$yx^k=(n+1)\Sigma yx^k-\Sigma y\Sigma x^k$$
We do it again to eliminate \(b\)
\begin{align}
&&0=&a(x_{32}^2-x_{21}x_{42})-x_{32}yx+x_{21}yx^2\\
\iff&&a=&\frac{x_{32}yx-x_{21}yx^2}{x_{32}^2-x_{21}x_{42}}
\end{align}
Then we isolate \(b\)
$$b=\frac{yx-ax_{32}}{x_{21}}$$
and \(c\)
$$c=\frac{\Sigma y-a\Sigma x^2-b\Sigma x}{n}$$
∎
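The formulas from the proof can be implemented directly. The following Python sketch fits a quadratic through exact data and recovers the coefficients (the function name is my own):

```python
# Least-squares quadratic fit using the x_ij and yx^i shorthands from the
# proof, with N = n + 1 data points.
def quad_fit(xs, ys):
    N = len(xs)
    Sx = [sum(x**i for x in xs) for i in range(5)]   # Sx[i] = sum of x^i
    Sy = sum(ys)
    Syx = sum(y * x for x, y in zip(xs, ys))
    Syx2 = sum(y * x * x for x, y in zip(xs, ys))
    x21 = N * Sx[2] - Sx[1] ** 2
    x32 = N * Sx[3] - Sx[2] * Sx[1]
    x42 = N * Sx[4] - Sx[2] ** 2
    yx = N * Syx - Sy * Sx[1]
    yx2 = N * Syx2 - Sy * Sx[2]
    a = (x32 * yx - x21 * yx2) / (x32**2 - x21 * x42)
    b = (yx - a * x32) / x21
    c = (Sy - a * Sx[2] - b * Sx[1]) / N
    return a, b, c

# data lying exactly on y = 2x^2 - 3x + 1 should be recovered exactly
xs = [0.0, 1.0, 2.0, 3.0, 5.0]
a, b, c = quad_fit(xs, [2 * x**2 - 3 * x + 1 for x in xs])
assert max(abs(a - 2), abs(b + 3), abs(c - 1)) < 1e-9
```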
If you space out the x-values carefully, e.g. in regular time intervals, so that \(x_k=k\Delta x\) then many of these terms
simplify by the following formulas
\begin{align}
\sum_{k=1}^nk=&\frac{n(n+1)}{2}\\
\sum_{k=1}^nk^2=&\frac{n(n+1)(2n+1)}{6}\\
\sum_{k=1}^nk^3=&\left(\frac{n(n+1)}{2}\right)^2\\
\sum_{k=1}^nk^4=&\frac{n(n+1)(2n+1)(3n^2+3n-1)}{30}
\end{align}
and
$$\sum_{k=1}^nx_k^i=\Delta x^i\sum_{k=1}^nk^i$$
so
\begin{align}
x_{21}=&(n+1)\Sigma x^2-(\Sigma x)^2\\
=&(n+1)\Delta x^2\frac{n(n+1)(2n+1)}{6}-\Delta x^2\left(\frac{n(n+1)}{2}\right)^2\\
=&\Delta x^2\frac{n(n+1)^2}{2}\left(\frac{2n+1}{3}-\frac{n}{2}\right)\\
=&\Delta x^2\frac{n(n+1)^2}{2}\frac{4n+2-3n}{6}\\
=&\Delta x^2\frac{n(n+1)^2}{12}(n+2)\\
x_{32}=&\Delta x^3\left((n+1)\left(\frac{n(n+1)}{2}\right)^2-\frac{n(n+1)(2n+1)}{6}\frac{n(n+1)}{2}\right)\\
=&\Delta x^3\left(\frac{n(n+1)}{2}\right)^2\frac{3n+3-2n-1}{3}\\
=&\Delta x^3\left(\frac{n(n+1)}{2}\right)^2\frac{n+2}{3}\\
=&x_{21}\Delta xn\\
x_{42}=&\Delta x^4\left((n+1)\frac{n(n+1)(2n+1)(3n^2+3n-1)}{30}-\left(\frac{n(n+1)(2n+1)}{6}\right)^2\right)\\
=&\Delta x^4\frac{n(n+1)^2(2n+1)}{6}\frac{18n^2+18n-6-10n^2-5n}{30}\\
=&\Delta x^4\frac{n(n+1)^2(2n+1)}{180}(8n^2+13n-6)\\
=&\Delta x^4\frac{n(n+1)^2(n+2)(2n+1)(8n-3)}{180}\\
=&\Delta x^2x_{21}\frac{(2n+1)(8n-3)}{15}\\
x_{32}^2-x_{21}x_{42}=&\Delta x^2x_{21}^2\left(n^2-\frac{(2n+1)(8n-3)}{15}\right)\\
=&\frac{\Delta x^2x_{21}^2}{15}(15n^2-16n^2-2n+3)\\
=&\frac{\Delta x^2x_{21}^2}{15}(-n^2-2n+3)\\
=&\frac{\Delta x^2x_{21}^2}{15}(1-n)(n+3)
\end{align}
moreover
\begin{align}
yx=(n+1)\Delta x\Sigma yk-\frac{\Delta xn(n+1)}{2}\Sigma y=&\frac{\Delta x(n+1)}{2}(2\Sigma yk-n\Sigma y)\\
yx^2=(n+1)\Delta x^2\Sigma yk^2-\frac{\Delta x^2n(n+1)(2n+1)}{6}\Sigma y=&\frac{\Delta x^2(n+1)}{6}(6\Sigma yk^2-n(2n+1)\Sigma y)\\
n\Delta x\,yx-yx^2=\frac{\Delta x^2(n+1)}{6}\left(6n\Sigma yk-3n^2\Sigma y-6\Sigma yk^2+2n^2\Sigma y+n\Sigma y\right)=&\frac{\Delta x^2(n+1)}{6}(n(1-n)\Sigma y+6n\Sigma yk-6\Sigma yk^2)
\end{align}
All-in-all, this means that
\begin{align}
a=&\frac{x_{32}yx-x_{21}yx^2}{x_{32}^2-x_{21}x_{42}}\\
=&15\frac{\cancel{x_{21}}}{x_{21}^{\cancel{2}}}\frac{\Delta xn\,yx-yx^2}{\Delta x^2(1-n)(n+3)}\\
=&15\frac{12\Delta x^2(n+1)(n(1-n)\Sigma y+6n\Sigma yk-6\Sigma yk^2)}{6\Delta x^4(1-n)n(n+1)^2(n+2)(n+3)}\\
=&30\frac{n(n-1)\Sigma y-6n\Sigma yk+6\Sigma yk^2}{\Delta x^2(n-1)n(n+1)(n+2)(n+3)}\\
b=&\frac{yx-ax_{32}}{x_{21}}\\
=&\frac{yx}{x_{21}}-an\Delta x\\
=&6\frac{2\Sigma yk-n\Sigma y}{\Delta xn(n+1)(n+2)}-30n\Delta x\frac{n(1-n)\Sigma y+6n\Sigma yk-6\Sigma yk^2}{\Delta x^2(1-n)n(n+1)(n+2)(n+3)}\\
=&\frac{6}{\Delta x(1-n)n(n+1)(n+2)(n+3)}\left((2\Sigma yk-n\Sigma y)(1-n)(n+3)-5n^2(1-n)\Sigma y-30n^2\Sigma yk+30n\Sigma yk^2\right)\\
=&\frac{6}{\Delta x(1-n)n(n+1)(n+2)(n+3)}\left(3n(n-1)(2n+1)\Sigma y-2(2n+1)(8n-3)\Sigma yk+30n\Sigma yk^2\right)\\
=&6\frac{-3n(n-1)(2n+1)\Sigma y+2(2n+1)(8n-3)\Sigma yk-30n\Sigma yk^2}{\Delta x(n-1)n(n+1)(n+2)(n+3)}\\
c=&\frac{\Sigma y-a\Sigma x^2-b\Sigma x}{n+1}\\
=&\frac{\Sigma y}{n+1}-5\frac{(2n+1)(n(n-1)\Sigma y-6n\Sigma yk+6\Sigma yk^2)}{(n-1)(n+1)(n+2)(n+3)}-3\frac{-3n(n-1)(2n+1)\Sigma y+2(2n+1)(8n-3)\Sigma yk-30n\Sigma yk^2}{(n-1)(n+1)(n+2)(n+3)}\\
=&\frac{(n-1)((n+2)(n+3)+4n(2n+1))\Sigma y+6(2n+1)(5n-8n+3)\Sigma yk+6(15n-10n-5)\Sigma yk^2}{(n-1)(n+1)(n+2)(n+3)}\\
=&3\frac{(3n^2+3n+2)\Sigma y-6(2n+1)\Sigma yk+10\Sigma yk^2}{(n+1)(n+2)(n+3)}
\end{align}
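With equally spaced x-values only the three sums \(\Sigma y\), \(\Sigma yk\) and \(\Sigma yk^2\) are needed, which the following Python sketch demonstrates (the function name is my own):

```python
# Quadratic fit for equally spaced data x_k = k*dx, k = 0..n, using the
# closed-form coefficient formulas with S0 = Σy, S1 = Σyk, S2 = Σyk².
def quad_fit_equal(ys, dx):
    n = len(ys) - 1
    S0 = sum(ys)
    S1 = sum(k * y for k, y in enumerate(ys))
    S2 = sum(k * k * y for k, y in enumerate(ys))
    D = (n - 1) * n * (n + 1) * (n + 2) * (n + 3)
    a = 30 * (n * (n - 1) * S0 - 6 * n * S1 + 6 * S2) / (dx**2 * D)
    b = 6 * (-3 * n * (n - 1) * (2 * n + 1) * S0
             + 2 * (2 * n + 1) * (8 * n - 3) * S1 - 30 * n * S2) / (dx * D)
    c = 3 * ((3 * n**2 + 3 * n + 2) * S0 - 6 * (2 * n + 1) * S1 + 10 * S2) \
        / ((n + 1) * (n + 2) * (n + 3))
    return a, b, c

# six equally spaced samples of y = 2x^2 - 3x + 1 with dx = 0.5
dx = 0.5
ys = [2 * (k * dx) ** 2 - 3 * (k * dx) + 1 for k in range(6)]
a, b, c = quad_fit_equal(ys, dx)
assert max(abs(a - 2), abs(b + 3), abs(c - 1)) < 1e-9
```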
In the following p5.js program, a cannon shoots with the initial speed
indicated by the first text box, using the scale set in the second box. The
program tracks red objects by their center of mass and does a regression
on each of the coordinates. When you press space, the cannon will shoot
towards the red object, if it can reach it, and simultaneously starts
gathering a new dataset and displays the result from the last period.
Now imagine we have an object on a parabolic path, and we want the cannon to track and shoot it after a short period of time.
This problem can be formulated as
$$s(t)=r(t+\Delta t)$$
where \(s(t)\) is the vector-polynomial describing the movement of the cannonball, and \(r(t)\) is the vector-polynomial describing
the path of the object. This leads to the following equations
\begin{align}
&&\frac{1}{2}\underline{a}_1t^2+\underline{b}_1t+\underline{c}_1=&\frac{1}{2}\underline{a}_2(t+\Delta t)^2+\underline{b}_2(t+\Delta t)+\underline{c}_2\\
&&=&\frac{1}{2}\underline{a}_2t^2+\underline{a}_2t\Delta t+\frac{1}{2}\underline{a}_2\Delta t^2+\underline{b}_2t+\underline{b}_2\Delta t+\underline{c}_2\\
\iff&&0=&\Delta\underline{a}t^2+2(\underline{a}_2\Delta t+\Delta\underline{b})t+\underline{a}_2\Delta t^2+2\underline{b}_2\Delta t+2\Delta\underline{c}
\end{align}
Let's take the scalar product with the vector perpendicular to the
coefficient of the quadratic term, which makes the quadratic term vanish.
\begin{align}
0=&\begin{pmatrix}a_{1y}-a_{2y}\\a_{2x}-a_{1x}\end{pmatrix}\cdot\left(2\begin{pmatrix}b_{2x}-b_{1x}+a_{2x}\Delta t\\b_{2y}-b_{1y}+a_{2y}\Delta t\end{pmatrix}t
+\begin{pmatrix}a_{2x}\Delta t^2+2b_{2x}\Delta t+2c_{2x}-2c_{1x}\\a_{2y}\Delta t^2+2b_{2y}\Delta t+2c_{2y}-2c_{1y}\end{pmatrix}\right)
\end{align}
which is a linear equation in \(t\), and solving it gives the time at which
the cannonball can intercept the object.