Non-linear Analysis. Preliminary

by Nirakar Neo

Hello everyone. I’ll be posting a series of lecture notes for a course that I am taking for credit this semester. The course is named Non-Linear Analysis, and is being taught by Prof PC Das. The structure of the course is as follows –

Calculus in Banach spaces, inverse and implicit function theorems, fixed point theorems of Brouwer, Schauder and Tychonoff, fixed point theorems for non-expansive and set-valued maps, degree theory, compact vector fields, homotopy, homotopy extension, invariance theorems and applications.

In this preliminary post, I’ll outline the prerequisites that will be needed. They include a basic course in functional analysis, where one learns the four important theorems – the Hahn-Banach theorem, the open mapping theorem, the closed graph theorem and the uniform boundedness principle – some Hilbert space theory, and some results on compact operators. That said, I will state each theorem at the point where I use it. One should, of course, be familiar with real analysis. Multivariable calculus is not strictly necessary, but one will definitely benefit from seeing the theorems in a more general context if one has studied it. It is also assumed that the reader knows the basic concepts of point set topology (such as completeness and compactness).

The story starts with calculus in {\mathbb{R}}, that is, differentiation and integration of functions {f:\mathbb{R}\rightarrow\mathbb{R}}, which is familiar to everyone. For scalar-valued functions {f:\mathbb{R}^{n}\rightarrow\mathbb{R}} we may define the derivative as the gradient vector, whereas for a vector-valued function {f:\mathbb{R}\rightarrow\mathbb{R}^{n}} we may define the derivative as the vector {(\frac{df_{1}}{dt},...,\frac{df_{n}}{dt})}, where {f_{1},...,f_{n}} are the components of {f}. However, it is not immediately clear how to define the derivative of a function {f:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}}, where {m>1,n>1}. It turns out that the derivative of such a function is actually a linear transformation from {\mathbb{R}^{m}} to {\mathbb{R}^{n}}, and this fact generalises all the previous cases. Once one digests this fact, it is routine to prove the extensions of the one-variable results to the multivariable case: the chain rule (where composing functions now corresponds to composing linear transformations), the inverse and implicit function theorems (which are considerably more complicated), and Taylor's theorem (which few people actually work through by hand). A good reference for this material is [Rudin, PoMA].
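To make the "derivative as a linear transformation" idea concrete, here is a small numerical sketch. The map {f:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}} below is a made-up example (not from the course): its derivative at a point is the linear map given by the Jacobian matrix, and we check the defining property that the remainder {f(a+h)-f(a)-Df(a)h} is small compared to {\|h\|}.

```python
import math

# A hypothetical map f : R^2 -> R^2, chosen only for illustration.
def f(x, y):
    return (x * y, x + y ** 2)

# Its derivative at (x, y) is the linear transformation given by the
# Jacobian matrix [[y, x], [1, 2y]]; here we apply it to an increment (h1, h2).
def Df(x, y, h1, h2):
    return (y * h1 + x * h2, h1 + 2 * y * h2)

a = (1.0, 2.0)
h = (3e-6, -4e-6)          # a small increment

fa = f(*a)
fah = f(a[0] + h[0], a[1] + h[1])
Dh = Df(*a, *h)

# Defining property of the derivative: the remainder is o(||h||),
# so this ratio should be tiny for small h.
rem = (fah[0] - fa[0] - Dh[0], fah[1] - fa[1] - Dh[1])
ratio = math.hypot(*rem) / math.hypot(*h)
print(ratio)
```

Note that no single partial derivative captures this behaviour; the whole matrix, viewed as one linear map, is the derivative.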

It is then natural to ask whether calculus extends to more general topological vector spaces. The first question we’ll handle is how to generalise calculus from {\mathbb{R}^{n}} to arbitrary Banach spaces. A Banach space {X} is a vector space equipped with a norm which makes {X} into a complete metric space. Completeness is necessary if we wish to perform limit operations on the space. The most important property that fails in a general Banach space is that compact sets are no longer characterised as the closed and bounded sets, which tends to make the proofs more complicated. As we shall see, the derivative is again a linear transformation, which now comes with a fancy name – the Fréchet derivative. Again, we’ll routinely carry out the proofs of the basic theorems listed above.
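For concreteness, here is the definition we are heading towards (stated slightly informally; details will come in the next post). A map {f:X\rightarrow Y} between Banach spaces is Fréchet differentiable at {x\in X} if there is a bounded linear map {A:X\rightarrow Y} such that

\displaystyle \lim_{\|h\|_{X}\rightarrow 0}\frac{\|f(x+h)-f(x)-Ah\|_{Y}}{\|h\|_{X}}=0,

and we then write {Df(x)=A}. When {X=\mathbb{R}^{m}} and {Y=\mathbb{R}^{n}}, this recovers the Jacobian described above.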

If one follows the proof of the inverse function theorem [Rudin, PoMA, p. 220], one notices that a fixed point theorem is used. In general, a fixed point theorem asserts the existence (and sometimes the uniqueness) of a point {x_{0}\in X} such that {f(x_{0})=x_{0}}. These theorems come with different hypotheses suited to different situations, and they are so important in many areas of analysis that entire books have been devoted to them. We shall investigate a few such results.
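The fixed point theorem used in Rudin's proof is the contraction mapping principle: a map {g} on a complete metric space satisfying {d(g(x),g(y))\le c\,d(x,y)} with {c<1} has a unique fixed point, obtained as the limit of the iterates {x_{n+1}=g(x_{n})}. A quick numerical sketch, with the standard illustrative example {g(x)=\cos x} (a contraction on {[0,1]}, since {|g'(x)|\le\sin 1<1} there):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = g(x_n); for a contraction this converges
    to the unique fixed point regardless of the starting guess."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# g(x) = cos(x) is a contraction on [0, 1], so the iteration converges.
p = fixed_point(math.cos, 0.5)
print(p)   # p satisfies cos(p) = p
```

The convergence is geometric with ratio {c}, which is exactly the estimate exploited in the proof of the inverse function theorem.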

I would welcome any comments, suggestions for improvement, or any questions that come up. Please also point out any typographical/factual errors.