Sep 19, 2014
0 notes

Qual problems from today:

Let \(A \in M_{n}(\mathbb{C})\); show there is a \(B \in M_n(\mathbb{C})\) such that \(A + B\) and \(B\) have no eigenvalues in common.

Solution. Pick \(B\) lower-triangular whose strictly lower-triangular part cancels that of \(A\), so that \(A + B\) is upper-triangular, and give \(B\) diagonal entries \(\lambda_i - a_{ii}\). The eigenvalues of \(A + B\) are then the \(\lambda_i\), since the eigenvalues of an upper-triangular matrix are precisely its diagonal entries. The eigenvalues of \(B\) are the \(\lambda_j - a_{jj}\): the conjugate transpose \(B^*\) is upper-triangular with diagonal entries \(\overline{\lambda_j - a_{jj}}\), and the eigenvalues of \(B\) are the conjugates of those of \(B^*\). So it suffices to pick the \(\lambda_i\) with \(\lambda_i \neq \lambda_j - a_{jj}\) for all \(i,j\). \(_\square\)
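
A quick numerical sanity check of this construction (a sketch; here the \(\lambda_i\) are chosen at random, which works generically, rather than via the explicit condition above):

```python
import numpy as np

# Sketch: check the construction above on a random A.  (A random choice of
# the lambda_i avoids lambda_i == lambda_j - a_jj generically; the proof
# chooses them explicitly.)
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lam = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# B cancels the strictly lower-triangular part of A and has diagonal
# entries lambda_i - a_ii, so A + B is upper-triangular with diagonal lam.
B = -np.tril(A, k=-1) + np.diag(lam - np.diag(A))

eig_sum = np.linalg.eigvals(A + B)   # should be the lambda_i
eig_B = np.linalg.eigvals(B)         # should be the lambda_i - a_ii

# The two spectra should be disjoint (up to numerical tolerance).
gap = min(abs(x - y) for x in eig_sum for y in eig_B)
print("smallest distance between the spectra:", gap)
assert gap > 1e-8
```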

Let \(A,B\) be closed subsets of Euclidean space. If \(A \cup B\) and \(A \cap B\) are connected, show \(A\) is connected.

Solution. Suppose \(A\) is not connected; then \(A = C_1 \sqcup C_2\) with \(C_1, C_2\) nonempty, disjoint, and closed in \(A\), hence closed in the ambient space since \(A\) is closed. The connected set \(A \cap B\) is contained in \(C_1 \sqcup C_2\), so WLOG \(A \cap B \subseteq C_2\), since otherwise \(C_1, C_2\) would induce a disconnection of \(A \cap B\). Then \(C_1 \cap B = C_1 \cap (A \cap B) \subseteq C_1 \cap C_2 = \emptyset\), so \(A \cup B = C_1 \sqcup (C_2 \cup B)\) exhibits \(A \cup B\) as a disjoint union of two nonempty closed sets; hence \(A \cup B\) is not connected, a contradiction. \(_\square\)

Let \((f_i) \to f\) on a closed interval \([a,b]\), with the \(f_i\) and \(f\) all monotone. Show that if the convergence in the previous statement is pointwise, it is also uniform.

Solution. The difference of monotone functions is also monotone, so \(|f_i - f|\) attains its maximum at an endpoint of \([a,b]\); hence it suffices to look at the endpoints. Fix \(\epsilon > 0\) and let \(N_a, N_b\) answer the epsilon-challenge for the pointwise convergences \(f_i(a) \to f(a)\) and \(f_i(b) \to f(b)\). I claim \(N = \max(N_a, N_b)\) gives uniform convergence for the same \(\epsilon\). Indeed, let \(i \geq N\) and \(x \in [a,b]\); then \(d(f_i(x), f(x)) \leq \max(d(f_i(a), f(a)), d(f_i(b),f(b)))\), which is less than \(\epsilon\) by our choice of \(N\). \(_\square\)
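
A small numerical illustration of the endpoint bound (a sketch with the hand-picked family \(f_i(x) = x + x/i\) and \(f(x) = x\), for which the difference \(f_i - f = x/i\) really is monotone):

```python
import numpy as np

# Sketch: for f_i(x) = x + x/i and f(x) = x on [0, 1], the difference is
# monotone, so sup |f_i - f| should be controlled by the endpoint errors --
# the inequality used in the solution above.
xs = np.linspace(0.0, 1.0, 1001)
f = xs
for i in range(1, 6):
    fi = xs + xs / i
    sup_err = np.max(np.abs(fi - f))
    endpoint_err = max(abs(fi[0] - f[0]), abs(fi[-1] - f[-1]))
    print(i, sup_err, endpoint_err)
    assert sup_err <= endpoint_err + 1e-12
```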

Sep 18, 2014
1 note

Any continuous function on a compact metric space into another metric space is uniformly continuous

Let \(f:K \to L\) be as stated above. Fix \(\epsilon > 0\). By continuity, \(\forall x \in K \exists \delta_x : y \in B(x, \delta_x) \implies f(y) \in B(f(x), \epsilon)\). Now,

\[\bigcup_{x \in K} B(x, \tfrac{1}{2}\delta_x) \supseteq K \implies \exists n,\, x_1, \dots, x_n \in K : \bigcup_{i \leq n} B(x_i, \tfrac{1}{2}\delta_i) \supseteq K\]

by compactness, where \(\delta_i\) abbreviates \(\delta_{x_i}\). I claim \(\delta = \tfrac{1}{2} \min_{i \leq n} \delta_i\) works for uniform continuity. Let \(a,b \in K\) with \(d(a,b) < \delta\). Then \(a \in B(x_i, \tfrac{1}{2}\delta_i)\) for some \(i\). Observe

\[d(a,x_i) < \tfrac{1}{2}\delta_i, \quad d(b,x_i) \leq d(a,x_i) + d(a,b) < \tfrac{1}{2}\delta_i + \delta \leq \delta_i \implies d(f(a),f(x_i)) < \epsilon,\; d(f(b),f(x_i)) < \epsilon\]

and so

\[d(f(a),f(b)) \leq d(f(a),f(x_i)) + d(f(b),f(x_i)) < 2\epsilon\]

and since epsilon is arbitrary, we are done. \(_\square\)
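
A crude numerical counterpart (a sketch, not a proof): for a fixed \(\epsilon\), probe a uniform \(\delta\) for \(f(x) = \sqrt{x}\) on the compact interval \([0,1]\) by a grid search.

```python
import numpy as np

# Sketch: numerically probe a uniform delta for f(x) = sqrt(x) on K = [0, 1]
# for a fixed epsilon, by checking all pairs of grid points.
f = np.sqrt
eps = 0.1
xs = np.linspace(0.0, 1.0, 401)
fx = f(xs)

dist_x = np.abs(xs[:, None] - xs[None, :])
dist_f = np.abs(fx[:, None] - fx[None, :])

delta = 1.0
# halve delta until  d(a, b) < delta  =>  d(f(a), f(b)) < eps  on the grid
while delta > 1e-6 and np.any(dist_f[dist_x < delta] >= eps):
    delta /= 2
print("epsilon =", eps, "-> delta ≈", delta)
```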

Sep 18, 2014
0 notes

Another random basic problem:

Let \(f:[0,1] \to \mathbb{R}\) be infinitely differentiable with the property that \(\forall x \in [0,1]\), \(\exists m \geq 0\) an integer such that \(f^{(m)}(x) \neq 0\). Prove the following stronger property: \(\exists M\) such that \(\forall x \in [0,1], \exists m \leq M : f^{(m)}(x) \neq 0\).

Assume towards a contradiction the negation of what is to be proved, i.e.

\[\forall M\ \exists x \in [0,1] : \forall m \in \{0, 1, \dots, M\},\ f^{(m)}(x) = 0.\]

Let \((M_i) = 0, 1, 2, 3, \dots\); we then obtain a sequence \((x_i) = x_0, x_1, x_2, \dots\) such that for each \(n\),

\[f^{(0)}(x_n) = 0, \dots, f^{(n)}(x_n) = 0.\]

Now, (recall I use the notation \(\iota(x_i)\) to denote a subsequence of \((x_i)\)) by compactness, there must exist a convergent subsequence \(\iota(x_i) \to x\), some \(x \in [0,1]\). I claim \(\forall n \geq 0, f^{(n)}(x) = 0\), which would then contradict the given property of \(f\).

Indeed, suppose for some \(k\), \(|f^{(k)}(x)| = \kappa > 0\). Fix \(0 < \epsilon < \frac{1}{2} \kappa\). Since \(f\) is infinitely differentiable, every derivative of \(f\) is continuous, so by the continuity of \(f^{(k)}\) there exists some \(\delta_\epsilon\) such that \(f^{(k)}(y) \in B(f^{(k)}(x), \epsilon)\) for every \(y \in B(x, \delta_\epsilon)\). But infinitely many terms of the subsequence lie inside \(B(x, \delta_\epsilon)\); picking one, say \(x_j\) with \(j \geq k\), we have \(f^{(k)}(x_j) = 0\), forcing \(\kappa = |f^{(k)}(x)| < \epsilon < \frac{1}{2}\kappa\), a contradiction. \(_\square\)

Sep 18, 2014
0 notes

Some random basic problem:

Let \(V \overset{T}{\to} W \overset{S}{\to} X\) be linear maps between finite-dimensional real vector spaces. Then prove

\[\mathrm{rank}(T) + \mathrm{rank}(S) - \dim W \leq \mathrm{rank}(S \circ T) \leq \min (\mathrm{rank}(T), \mathrm{rank}(S)).\]

Proof. Observe \[\mathrm{im}(S \circ T) = \mathrm{im}\left(S|_{\mathrm{im}(T)}\right) \implies \mathrm{rank}(S \circ T) = \dim \mathrm{im}(S \circ T) = \dim \mathrm{im}(T) - \dim (\mathrm{im}(T) \cap \mathrm{ker}(S))\] by rank-nullity applied to \(S|_{\mathrm{im}(T)}\).

And so we obtain the right-hand inequality, since \[\dim \mathrm{im}(T) - \dim (\mathrm{im}(T) \cap \mathrm{ker}(S)) \leq \dim \mathrm{im}(T) = \mathrm{rank}(T),\] while \(\mathrm{im}(S \circ T) \subseteq \mathrm{im}(S)\) gives \(\mathrm{rank}(S \circ T) \leq \mathrm{rank}(S)\); together these give \(\mathrm{rank}(S \circ T) \leq \min(\mathrm{rank}(T), \mathrm{rank}(S))\).

To see the left-hand side, note \(\mathrm{im}(T) \cap \mathrm{ker}(S) \subseteq \mathrm{ker}(S)\), so

\[\mathrm{rank}(S \circ T) = \dim \mathrm{im}(T) - \dim(\mathrm{im}(T) \cap \mathrm{ker}(S)) \geq \dim \mathrm{im}(T) - \dim \mathrm{ker}(S)\]
\[= \mathrm{rank}(T) - (\dim W - \mathrm{rank}(S)) = \mathrm{rank}(T) + \mathrm{rank}(S) - \dim W. \ _\square \]
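
A quick numerical check of both inequalities on random rank-deficient matrices (a sketch; the dimensions and number of trials are arbitrary):

```python
import numpy as np

# Sketch: check the rank inequalities on random real matrices.  Here
# T: V -> W and S: W -> X are matrices, so S o T is S @ T.
rng = np.random.default_rng(1)
dim_V, dim_W, dim_X = 7, 5, 6

for _ in range(100):
    # build maps of random rank by multiplying thin factors
    r1, r2 = rng.integers(1, dim_W + 1, size=2)
    T = rng.standard_normal((dim_W, r1)) @ rng.standard_normal((r1, dim_V))
    S = rng.standard_normal((dim_X, r2)) @ rng.standard_normal((r2, dim_W))
    rT, rS, rST = (np.linalg.matrix_rank(M) for M in (T, S, S @ T))
    assert rT + rS - dim_W <= rST <= min(rT, rS)
print("all trials satisfy the inequalities")
```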

Sep 17, 2014
0 notes

Preliminary definitions for the Stone-Weierstrass theorem:

Define an algebra \(\mathscr{A}\) of complex (or real) functions on a set \(E\),

\[\mathscr{A} \subseteq \{f : E \to \mathbb{C}\}\]

(or \(\mathbb{R}\), respectively), as a family of functions closed under pointwise addition, pointwise multiplication, and scalar multiplication. \(\mathscr{A}\) is uniformly closed if every uniformly Cauchy sequence of functions \((f_i)\) in \(\mathscr{A}\) converges (uniformly) to a member of \(\mathscr{A}\). One can then define the uniform closure \(\bar{\mathscr{A}}\) of an algebra \(\mathscr{A}\) as the smallest uniformly closed superalgebra of \(\mathscr{A}\); equivalently, it is the set of all uniform limits of sequences from \(\mathscr{A}\).

An algebra separates points if \[(\forall x_1, x_2 \in E)(\exists f \in \mathscr{A})[x_1 \neq x_2 \implies f(x_1) \neq f(x_2)]\] and vanishes at no point if \[(\forall x \in E)(\exists f \in \mathscr{A})[f(x) \neq 0].\] For example, one sees the family of polynomial functions on an interval \([a,b]\) satisfies both these properties.

Combining these two properties gives us the following useful lemma: if \(c_1, c_2 \in \mathbb{R}\) or \(\mathbb{C}\), and \(\mathscr{A}\) separates points and vanishes at no point, then \(\forall x_1 \neq x_2 \in E, \exists f \in \mathscr{A}\) such that \(f(x_1) = c_1, f(x_2) = c_2\).

Proof. Let \(f_0\) separate \(x_1, x_2\), \(f_1\) nonzero at \(x_1\), \(f_2\) nonzero at \(x_2\). One checks that

\[f(t) = c_1 \dfrac{f_1(t)(f_0(x_2) - f_0(t))}{f_1(x_1)(f_0(x_2) - f_0(x_1))} + c_2 \dfrac{f_2(t)(f_0(x_1) - f_0(t))}{f_2(x_2)(f_0(x_1) - f_0(x_2))}\] works. \(_\square\)
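
A quick check of this interpolation formula in the algebra of polynomials on \([0,1]\) (a sketch; the particular \(f_0, f_1, f_2\), points, and constants below are arbitrary choices satisfying the hypotheses):

```python
# Sketch: verify the interpolation formula with hand-picked members of the
# polynomial algebra: f0 separates points, f1(x1) != 0, f2(x2) != 0.
f0 = lambda t: t
f1 = lambda t: 1 + t
f2 = lambda t: 2 - t

x1, x2 = 0.3, 0.8
c1, c2 = -5.0, 7.0

def f(t):
    term1 = c1 * f1(t) * (f0(x2) - f0(t)) / (f1(x1) * (f0(x2) - f0(x1)))
    term2 = c2 * f2(t) * (f0(x1) - f0(t)) / (f2(x2) * (f0(x1) - f0(x2)))
    return term1 + term2

print(f(x1), f(x2))                   # should print c1 and c2
assert abs(f(x1) - c1) < 1e-12 and abs(f(x2) - c2) < 1e-12
```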

Sep 15, 2014
1 note

The spectral theorems:

Lemma 1. Let \(T \in \operatorname{End}(V)\) for \(V \in \mathrm{FinVect}_\mathbb{C}\). Then there exists an upper-triangular matrix representation of \(T\) with respect to some basis of \(V\).

Proof. By induction on \(n = \dim V\). The case \(n = 1\) is clear. Since we’re over \(\mathbb{C}\), \(T\) has an eigenvalue \(\lambda\). Let \(U = \mathrm{im}(T - \lambda I)\); since \(T - \lambda I\) is not injective, it is not surjective, so \(\dim U < \dim V\). If \(u \in U\) then \(Tu = (T - \lambda I) u + \lambda u \in U\), so \(U\) is \(T\)-invariant. Then by the induction hypothesis, \(T|_U\) has an upper-triangular matrix representation with respect to some basis of \(U\), say \(u_1, \dots, u_m\). Extend this to a basis \(u_1, \dots, u_m, v_1, \dots, v_k\) of \(V\). For each \(j\), \(Tv_j = (T - \lambda I)v_j + \lambda v_j\), which is of the form \(u + \lambda v_j\) for some \(u \in U\), so \(Tv_j \in \mathrm{span}(u_1, \dots, u_m, v_1, \dots, v_j)\) for each \(j\). Since the matrix of \(T|_U\) with respect to \(u_1, \dots, u_m\) is already upper-triangular, we have \(Tu_j \in \mathrm{span}(u_1, \dots, u_j)\) for each \(1 \leq j \leq m\). Combining these two statements, we see \(T\) is upper-triangular with respect to \(u_1, \dots, u_m, v_1, \dots, v_k\), as desired. \(_\square\)
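
Numerically, the complex Schur decomposition exhibits such an upper-triangular representation (in fact with respect to an orthonormal basis); a minimal sketch, assuming scipy is available:

```python
import numpy as np
from scipy.linalg import schur

# Sketch: over C, every operator is upper-triangular in some basis.  The
# complex Schur decomposition A = Q T Q* exhibits such a basis (the columns
# of Q), with T upper-triangular and the eigenvalues on its diagonal.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

T, Q = schur(A, output='complex')
assert np.allclose(np.tril(T, k=-1), 0)      # T is upper-triangular
assert np.allclose(Q @ T @ Q.conj().T, A)    # A = Q T Q*
print(np.round(np.diag(T), 3))               # eigenvalues on the diagonal
```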

Lemma 2. Let \(T \in \mathrm{End}(V)\) for \(V\) a finite-dimensional real inner-product space. Then whenever \(T\) is self-adjoint, \(T\) has an eigenvalue.

Proof. Let \(\dim V = n\) and fix \(0 \neq v \in V\). The \(n+1\) vectors \(v, Tv, \dots, T^n v\) are linearly dependent, so there exist \(a_0, \dots, a_n\), not all zero, such that \[\Big(\sum_{0 \leq i \leq n} a_i T^i\Big)v = 0.\] Write \(p(x) = \sum_{0 \leq i \leq n} a_i x^i\) as a member of \(\mathbb{R}[x]\); \(p(x)\) admits a factorization into irreducible quadratic and linear factors of the form \[p(x) = c \prod q_i(x) \cdot \prod l_j(x).\] We see that if each \(q_i(T)\) is invertible, then we are done: there must then be at least one linear factor, and one of the linear factors \(l_j(x) = x - \lambda_j\) must have \(l_j(T)\) non-injective, giving us our desired eigenvalue. Proceeding: fix \(i\); since \(q_i\) is irreducible, writing \(q_i = x^2 + a_1 x + a_0\) gives \(a_1^2 < 4a_0\), since the quadratic formula would otherwise give a further factorization into linear factors. Now, using self-adjointness, the Cauchy-Schwarz inequality, and completing the square: \[\langle q_i(T)v,v \rangle = \langle T^2 v, v \rangle + a_1 \langle Tv, v \rangle + a_0 \langle v, v \rangle = \langle Tv, Tv \rangle + a_1 \langle Tv, v \rangle + a_0 \|v\|^2\] \[\geq \|Tv\|^2 - |a_1| \,\|Tv\|\,\|v\| + a_0 \|v\|^2 = \Big(\|Tv\| - \frac{|a_1|\,\|v\|}{2}\Big)^2 + \Big(a_0 - \frac{a_1^2}{4}\Big)\|v\|^2 > 0\]

so that \(q_i(T)v \neq 0\) whenever \(v \neq 0\), so that \(q_i(T)\) has trivial kernel and is hence invertible, which completes the proof. \(_\square\)
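
The key inequality says \(q_i(T)\) is positive definite; a quick numerical check for a random symmetric \(T\) and a hand-picked irreducible quadratic (a sketch):

```python
import numpy as np

# Sketch: for a symmetric T and an irreducible real quadratic
# q(x) = x^2 + a1*x + a0 (i.e. a1^2 < 4*a0), q(T) should be positive
# definite, hence invertible -- the key step in Lemma 2.
rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n))
T = (M + M.T) / 2                      # self-adjoint

a1, a0 = 1.0, 2.0                      # a1^2 = 1 < 8 = 4*a0
qT = T @ T + a1 * T + a0 * np.eye(n)

eigs = np.linalg.eigvalsh(qT)          # q(T) is symmetric, so eigvalsh applies
print(eigs)
assert np.all(eigs > 0)                # positive definite => trivial kernel
```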

Theorem 1. (The complex spectral theorem.) Let \(V\) be a finite-dimensional \(\mathbb{C}\) inner-product space, and \(T\) an endomorphism of \(V\). Then \(V\) has an orthonormal basis of eigenvectors of \(T\) if and only if \(T\) is normal, i.e. commutes with its adjoint.

Proof. \((\Leftarrow)\) Applying Gram-Schmidt to a basis \(u_1, \dots, u_n\) with respect to which \(T\) has an upper-triangular matrix (such a basis exists by Lemma 1) yields an orthonormal basis \(b_1, \dots, b_n\) such that for each \(k \leq n\), \(\mathrm{span}(b_1, \dots, b_k) = \mathrm{span}(u_1, \dots, u_k)\); each of these spans is \(T\)-invariant, so the matrix of \(T\) with respect to \(b_1, \dots, b_n\) is also upper-triangular, say

\[M = (a_{ij}).\] Now, \(Tb_1 = a_{11} b_1\), so \(||Tb_1||^2 = |a_{11}|^2\); on the other hand, by the Pythagorean theorem, \(||T^*b_1||^2 = \sum_{i \leq n} |a_{1i}|^2\). Since \(T\) is normal, \(||T^*b_1|| = ||Tb_1||\) (indeed \(\langle T^*v, T^*v \rangle = \langle TT^*v, v \rangle = \langle T^*Tv, v \rangle = \langle Tv, Tv \rangle\)), so \(\sum_{1 < i \leq n} |a_{1i}|^2 = 0 \implies a_{12} = 0, \dots, a_{1n} = 0\). Iterating this along the rows, we see all the nondiagonal entries of \(M\) are zero, and hence each \(b_i\) is an eigenvector of \(T\), as desired.

\((\Rightarrow)\) Let \(e_1, \dots, e_n\) be an orthonormal basis of \(V\) consisting of eigenvectors of \(T\). Then \(T\) has a diagonal matrix with respect to this basis, and \(T^*\) has the conjugate (also diagonal) matrix; diagonal matrices commute, so \(T\) is normal. \(_\square\)
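
A numerical check of the \((\Leftarrow)\) direction on a normal but non-Hermitian matrix, using the complex Schur form (a sketch, assuming scipy is available):

```python
import numpy as np
from scipy.linalg import schur

# Sketch: build a normal (non-Hermitian) matrix A = U D U* from a random
# unitary U and complex diagonal D, and check that it is unitarily
# diagonalizable -- its complex Schur form should come out diagonal.
rng = np.random.default_rng(4)
n = 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
D = np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n))
A = U @ D @ U.conj().T

assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal
T, Q = schur(A, output='complex')
assert np.allclose(T, np.diag(np.diag(T)))           # Schur form is diagonal
print(np.round(np.diag(T), 3))                       # the eigenvalues
```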

Theorem 2. (The real spectral theorem.) Let \(V\) be a finite-dimensional \(\mathbb{R}\) inner-product space, and \(T\) an endomorphism of \(V\). Then \(V\) has an orthonormal basis of eigenvectors of \(T\) if and only if \(T\) is self-adjoint.

Proof. \((\Leftarrow)\) Induct on \(\dim V\); the case \(\dim V = 1\) is immediate. By Lemma 2, \(T\) has an eigenvalue \(\lambda\) with a unit eigenvector \(u\). Consider \(U = u^{\perp}\). For all \(v \in U\), \(\langle u, Tv \rangle = \langle Tu, v \rangle = \langle \lambda u, v \rangle = 0\), so \(Tv \in U\); thus \(T|_U\) is an operator on \(U\), which has dimension less than that of \(V\). Moreover \(T|_U\) is self-adjoint, as \(\langle T|_U v, w \rangle = \langle Tv, w \rangle = \langle v, Tw \rangle = \langle v, T|_U w \rangle\) for every \(v,w \in U\). By the induction hypothesis, \(U\) has an orthonormal basis \(v_1, \dots, v_m\) consisting of eigenvectors of \(T|_U\). Each \(v_i\) is also an eigenvector of \(T\), so adjoining \(u\) to this list completes the proof.

\((\Rightarrow)\) Immediate: with respect to an orthonormal basis of eigenvectors, the matrix of \(T\) is real and diagonal, hence equal to its conjugate transpose, so \(T = T^*\). \(_\square\)
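
In code, numpy's `eigh` realizes the real spectral theorem for a symmetric matrix; a minimal sketch:

```python
import numpy as np

# Sketch: np.linalg.eigh returns real eigenvalues and an orthonormal
# eigenbasis for a symmetric matrix, i.e. A = Q diag(w) Q^T with Q orthogonal.
rng = np.random.default_rng(5)
n = 5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                              # self-adjoint

w, Q = np.linalg.eigh(A)
assert np.allclose(Q.T @ Q, np.eye(n))         # columns are orthonormal
assert np.allclose(Q @ np.diag(w) @ Q.T, A)    # A is diagonalized by Q
print(np.round(w, 3))
```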

Sep 13, 2014
3 notes

The Cayley-Hamilton theorem:

Let \(A \in M_m(k)\) for \(k\) an algebraically closed (hence infinite) field. Then \(A\) satisfies its characteristic polynomial \(C_A(x) \in k[x]\), given by \(C_A(x) = \operatorname{det}(x\, \mathrm{id} - A)\).

Proof. Let \(n = m^2\) and write \(M_m(k)\) as affine \(n\)-space \(\mathbb{A}^n\) by identifying matrices with lexicographically sorted tuples of matrix entries. Each entry of \(C_A(A)\) is a polynomial, with coefficients in \(k\), in the entries of \(A\); so the set \(Y\) of all \(A \in \mathbb{A}^n\) with \(C_A(A) = 0\) is the common vanishing locus of \(m^2\) polynomials, and is hence Zariski-closed.

Next, suppose \(A\) is diagonalizable, say \(A = Q^{-1} \mathrm{diag}(\lambda_1, \dots, \lambda_m) Q\). Then \(C_A = \prod_{i \leq m} (x - \lambda_i)\), so \(C_A(A) = Q^{-1} C_A(\mathrm{diag}(\lambda_1, \dots, \lambda_m)) Q = 0\), and \(Y\) contains all diagonalizable matrices.

Now, we can identify monic polynomials of degree \(m\) with points of affine \(m\)-space \(\mathbb{A}^m\) via \(x^m + a_1 x^{m-1} + \dots + a_m \mapsto (a_1, \dots, a_m)\); since the discriminant is a polynomial function of the coefficients, the set of points of \(\mathbb{A}^m\) whose corresponding degree-\(m\) monic polynomials have zero discriminant is hence also Zariski-closed.

Now, the coefficients of \(C_A\) are themselves polynomials in the entries of \(A\), so the map \(\mathbb{A}^n \to \mathbb{A}^m\) that takes \(A \mapsto C_A\) is a morphism, and in particular Zariski-continuous. Then the set \(Z\) of all \(m \times m\) matrices whose characteristic polynomial has zero discriminant, being the preimage of a Zariski-closed set, is also Zariski-closed.

When the discriminant of \(C_A\) is nonzero, \(A\) has \(m\) distinct eigenvalues and is hence diagonalizable; so we can write \(\mathbb{A}^n = Y \cup Z\). Since \(\mathbb{A}^n\) is Zariski-irreducible, \(\mathbb{A}^n\) is either \(Y\) or \(Z\). Considering any \(\mathrm{diag}(\lambda_1, \dots, \lambda_m)\) with the \(\lambda_i\) distinct renders the latter option absurd, so we are done. \(_\square\)
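
A numerical sanity check of the theorem over \(\mathbb{R} \subset \mathbb{C}\) (a sketch; `np.poly` returns the coefficients of the characteristic polynomial, which we then evaluate at the matrix):

```python
import numpy as np

# Sketch: verify Cayley-Hamilton numerically for a random matrix.
# np.poly(A) gives the coefficients of det(x*I - A) in descending powers of x;
# we evaluate that polynomial at A by Horner's rule.
rng = np.random.default_rng(6)
m = 5
A = rng.standard_normal((m, m))

coeffs = np.poly(A)                 # [1, c_{m-1}, ..., c_0]
P = np.zeros((m, m))
for c in coeffs:                    # Horner: P <- P*A + c*I
    P = P @ A + c * np.eye(m)

print(np.max(np.abs(P)))            # should be ~0 up to floating-point error
assert np.allclose(P, 0, atol=1e-8)
```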

Sep 11, 2014
5 notes

One thing I learned from Terry that I was not taught in school is the importance of bad proofs. I would say “I think this is true”, work on it, see that there was no nice proof, and give up. Terry would say “Here’s a criterion that eliminates most of the problem. Then in what’s left, here’s a worse one that handles most of the detritus. One or two more epicycles. At that point it comes down to fourteen cases, and I checked them.” Yuck. But we would know it was true, and we would move on. (Usually these would get cleaned up a fair bit before publication.)

Or else I would say “I wonder if this is true” and Terry would say “Oh, it is for a while, but it starts to fail in six dimensions” where I hadn’t hardly exhausted the 3-dimensional case. That would get kind of spooky sometimes.

Allen Knutson, in response to a Quora question about what it’s like to work with Terence Tao
Sep 11, 2014
2 notes

The Baire category theorem

eatsleepmath:

Neat proof; much like, etc.

Theorem. (Baire) Let \((E_\alpha)_{\alpha \in A}\) be an at most countable family of subsets of a complete metric space \(X\); if \[\exists x,r : B(x,r) \subset \displaystyle \bigcup_{\alpha \in A} E_\alpha,\] then at least one of the \(E_\alpha\) is dense in some subball \(B' \subset B(x,r)\) (in particular, it ain’t nowhere dense). Taking the contrapositive of this, we have the other form in which the BCT is usually stated, namely that a countable union of nowhere dense sets cannot contain a ball.

Proof. Suppose towards a contradiction not, i.e. \(\exists x_0, r_0: B(x_0,r_0) \subset \displaystyle \bigcup_{\alpha \in A} E_\alpha\) with each \(E_\alpha\) nowhere dense; enumerate the \(E_\alpha\) as \(E_1, E_2, \dots\). Now, \(E_i\) nowhere dense \(\Rightarrow\) every ball \(B\) contains a subball \(B' \subset B\) with \(B' \cap E_i = \emptyset\) (proof: otherwise every subball of \(B\) meets \(E_i\), i.e. \(E_i\) is dense in \(B\), a contradiction). So, starting from \(B_0 = B(x_0, r_0)\), we can recursively construct balls \(B_i = B(x_i,r_i)\) with \(B_i \cap E_i = \emptyset\) and \(B_{i+1} \subset B_i\) for all \(i \in \mathbb{N}\); shrinking \(r_{i+1}\) as necessary, we may also assume \(\overline{B_{i+1}} \subset B_i\) and \(r_{i+1} \leq \dfrac{1}{10} r_{i}\). The sequence of centers \((x_i)\) is then Cauchy (the radii shrink geometrically), so by completeness \((x_i) \rightarrow x\) for some \(x \in \bigcap_i \overline{B_{i+1}} \subset \bigcap_i B_i\); but then \(x \in B(x_0,r_0) \setminus \bigcup_{\alpha \in A} E_\alpha,\) a contradiction. \(_\square\)
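
A toy illustration of the nested-ball construction, with \(X = [0,1]\) and the nowhere dense sets taken to be singletons of rationals (a sketch; the enumeration below is an arbitrary finite truncation):

```python
from fractions import Fraction

# Sketch: run the nested-interval construction from the proof against the
# nowhere dense sets E_i = {q_i}, for an enumeration q_1, q_2, ... of some
# rationals in [0, 1].  Each step shrinks to a third that misses q_i, so the
# final interval (hence its common point) avoids every listed rational.
rationals = [Fraction(p, q) for q in range(1, 12) for p in range(0, q + 1)]

lo, hi = Fraction(0), Fraction(1)
for q in rationals:
    third = (hi - lo) / 3
    if not (lo <= q <= lo + third):
        hi = lo + third               # left third avoids q
    else:
        lo, hi = hi - third, hi       # otherwise the right third does
print(float(lo), float(hi))           # a tiny interval missing every q_i
assert all(not (lo <= q <= hi) for q in rationals)
```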

Sep 10, 2014
1 note

Arzela-Ascoli:

(Adapted from Wikipedia.)

Let \((f_i)\) be continuous on \([a,b]\) such that

a) for some \(M\), \(|f_i| \leq M\) for each \(i\), and

b) \(\forall \epsilon > 0, \exists \delta : x \in B(y,\delta) \implies f_i(x) \in B(f_i(y),\epsilon)\) for each \(i\) (equicontinuity).

Then \((f_i)\) has a uniformly convergent subsequence.

Note \([-M,M]\) is compact. Let \((q_i)\) be an enumeration of the rationals in \([a,b]\). We recursively obtain a family of subsequences \(\iota_n(f_i)\) of \((f_i)\) as follows:

\(\iota_1(f_i)\) is a subsequence along which the values at \(q_1\) converge (such a subsequence exists by Bolzano-Weierstrass, since the values lie in \([-M,M]\)); and \(\iota_n(f_i)\) is a subsequence of \(\iota_{n-1}(f_i)\) along which the values at \(q_n\) converge, again by Bolzano-Weierstrass.

Consider the diagonal of this array of sequences: the sequence \((g_n)_{n \in \mathbb{N}}\) whose \(n\)th term \(g_n\) is the \(n\)th member of \(\iota_n(f_i)\). By construction this converges at every rational point in \([a,b]\). Now fix \(\epsilon > 0\). By equicontinuity, there is a \(\delta\) such that \(d(x,y) < \delta \implies |f_i(x) - f_i(y)| < \epsilon/3\) for every \(i\); for each \(x \in [a,b]\) set \(U_x = B(x, \delta/2)\). Now, \(\bigcup_{x \in [a,b]} U_x\) covers \([a,b]\) and so admits a finite subcover, say \(U_1, \dots U_m\).

Now, since the rationals are dense in \([a,b]\), there exists some finite \(K\) such that every \(U_j\) contains at least one \(q_k\) with \(k \leq K\).

Let \(t \in [a,b]\). Then \(t \in U_j\) for some \(1 \leq j \leq m\), and there exists \(k \leq K\) such that \(\{q_k, t\} \subseteq U_j\); in particular \(d(q_k, t) < \delta\).

Now, some geometric intuition: \(|a - b| \leq |a - c| + |c - b|\), because the quantity on the left is the length of the straight segment between \(a\) and \(b\), while the quantity on the right is the length of the segmented path that connects \(a \to c \to b\); this triangle inequality can be iterated.

So, taking \(n, m\) large enough that \(|g_n(q_k) - g_m(q_k)| < \epsilon/3\) for every \(k \leq K\) (possible since there are only finitely many such \(q_k\), and the diagonal sequence converges at each of them), we have \[|g_n(t) - g_m(t)| \leq |g_n(t) - g_n(q_k)| + |g_n(q_k) - g_m(q_k)| + |g_m(q_k) - g_m(t)| < 3 \cdot \epsilon / 3 = \epsilon\]

so the diagonal sequence \((g_n)\) is uniformly Cauchy and so converges uniformly, as desired. \(_\square\)
