Finite topological spaces

One of my favourite bits of point set topology is messing around with easy topological spaces. What could be easier than finite topological spaces? The main result (below) is that the category of finite topological spaces is equivalent to the category of finite preorders.

Recall (e.g. from algebraic geometry) the following definition:

Definition. Let X be a topological space. Then the specialisation preorder on (the underlying set of) X is the relation x \leq y if and only if x \in \overline{\{y\}}.

Note that it is indeed a preorder: clearly x \leq x, and if x \leq y and y \leq z, then \{y\} \subseteq \overline{\{z\}}, hence \overline{\{y\}} \subseteq \overline{\{z\}} since \overline{\{z\}} is closed, so x \in \overline{\{y\}} \subseteq \overline{\{z\}}, showing x \leq z. We denote this preorder by X^{\operatorname{sp}}.

Note that the relation x \leq y is usually denoted y \rightsquigarrow x in algebraic geometry, which is pronounced “y specialises to x”.

Definition. Given a preorder (X,\leq), the Alexandroff topology on X is the topology whose opens U \subseteq X are the cosieves, i.e. the upwards closed sets (meaning x \in U and x \leq y implies y \in U).

To see that this defines a topology, note that an arbitrary (possibly empty) union or intersection of cosieves is a cosieve. A base for the topology is given by the principal cosieves X_{\geq x} = \{y \in X\ |\ y \geq x\} for x \in X: indeed, any cosieve U is the union \bigcup_{x \in U} X_{\geq x}. We denote the set X with its Alexandroff topology by X^{\operatorname{Alex}}.

Likewise, the closed sets in X are the sieves (or downwards closed sets); for instance the principal sieves X_{\leq x} = \{y \in X\ |\ y \leq x\}. The closure of S \subseteq X is the sieve X_{\leq S} = \bigcup_{s \in S} X_{\leq s} generated by S; for instance the closure of a singleton \{x\} is the principal sieve X_{\leq x}.
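
For finite spaces both constructions can be carried out mechanically. Below is a minimal Python sketch (the encoding of a preorder as a Boolean function leq and the helper names alexandroff_opens and specialisation_leq are my own, purely for illustration), tested on the poset [1] = \{0 \leq 1\} that reappears in the Sierpiński example below.

    from itertools import chain, combinations

    def alexandroff_opens(points, leq):
        """All upward closed subsets (cosieves) of the finite preorder (points, leq)."""
        def is_cosieve(U):
            return all(y in U for x in U for y in points if leq(x, y))
        subsets = chain.from_iterable(combinations(points, r)
                                      for r in range(len(points) + 1))
        return [U for U in map(frozenset, subsets) if is_cosieve(U)]

    def specialisation_leq(points, opens):
        """x <= y iff x lies in the closure of {y}, i.e. every open set containing x also contains y."""
        return lambda x, y: all(y in U for U in opens if x in U)

    # The poset [1] = {0 <= 1}: its Alexandroff topology is the Sierpinski space,
    points = [0, 1]
    leq = lambda x, y: x == y or (x, y) == (0, 1)
    opens = alexandroff_opens(points, leq)
    print([set(U) for U in opens])                                # [set(), {1}, {0, 1}]
    # and its specialisation preorder is the original order (part 3 of the theorem below):
    sp = specialisation_leq(points, opens)
    print([(x, y) for x in points for y in points if sp(x, y)])   # [(0, 0), (0, 1), (1, 1)]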

Theorem. Let F \colon \mathbf{PreOrd} \to \mathbf{Top} be the functor X \mapsto X^{\operatorname{Alex}}, and G \colon \mathbf{Top} \to \mathbf{PreOrd} the functor Y \mapsto Y^{\operatorname{sp}}.

  1. Let (X,\leq) be a preorder, Y a topological space, and f \colon X \to Y a function. Then f is a monotone function X \to Y^{\operatorname{sp}} if and only if f is a continuous function X^{\operatorname{Alex}} \to Y.
  2. The functors F and G are adjoint: F \dashv G.
  3. The composition GF \colon \mathbf{PreOrd} \to \mathbf{PreOrd} is equal (not just isomorphic!) to the identity functor.
  4. The restriction of FG \colon \mathbf{Top} \to \mathbf{Top} to the category \mathbf{Top}^{\operatorname{fin}} of finite topological spaces is equal to the identity functor.
  5. If Y is a topological space, then Y is T_0 if and only if Y^{\operatorname{sp}} is a poset.
  6. If (X,\leq) is a preorder, then X is a poset if and only if X^{\operatorname{Alex}} is T_0.
  7. The functors F and G give rise to adjoint equivalences

        \begin{align*}F\!:\mathbf{PreOrd}^{\operatorname{fin}} &\leftrightarrows \mathbf{Top}^{\operatorname{fin}}:\!G \\F\!:\mathbf{Pos}^{\operatorname{fin}} &\leftrightarrows \mathbf{Top}^{\operatorname{fin}}_{T_0}:\!G.\end{align*}

Proof. (1) Suppose f \colon X \to Y^{\operatorname{sp}} is monotone, let Z \subseteq Y be a closed subset, and let W = f^{-1}(Z). Suppose b \in W and a \leq b. Since f is monotone and Z is closed, we get f(a) \leq f(b), i.e. f(a) \in \overline{\{f(b)\}} \subseteq Z. We conclude that a \in W, so W is downward closed, hence closed in X^{\operatorname{Alex}}.

Conversely, suppose f \colon X^{\operatorname{Alex}} \to Y is continuous, and suppose a \leq b in X. Then a \in \overline{\{b\}}, so by continuity we get f(a) \in f\left(\overline{\{b\}}\right) \subseteq \overline{\{f(b)\}}, so f(a) \leq f(b).

(2) This is a restatement of (1): the map

    \begin{align*}\operatorname{Hom}_{\mathbf{PreOrd}}\big(X,G(Y)\big) &\stackrel\sim\to \operatorname{Hom}_{\mathbf{Top}}\big(F(X),Y\big) \\f &\mapsto f\end{align*}

is a bijection.

(3) Since \overline{\{x\}} = X_{\leq x}, we conclude that y \in \overline{\{x\}} if and only if y \leq x, so the specialisation preorder on X^{\operatorname{Alex}} is the original preorder on X.

(4) In general, the counit FG(Y) \to Y is a continuous map that is the identity on underlying sets, so the topology on FG(Y) is finer than that of Y. Conversely, suppose Z \subseteq FG(Y) is closed, i.e. Z is a sieve for the specialisation preorder on Y. This means that if y \in Z, then x \in \overline{\{y\}} implies x \in Z; in other words \overline{\{y\}} \subseteq Z. If Y and therefore Z is finite, there are only finitely many such y, so Z is the finite union

    \[Z = \bigcup_{y \in Z} \overline{\{y\}}\]

of closed subsets of Y. Thus any closed subset of FG(Y) is closed in Y, so the topologies agree.

(5) The relations x \leq y and y \leq x mean x \in \overline{\{y\}} and y \in \overline{\{x\}}. This is equivalent to the statement that a closed subset Z \subseteq Y contains x if and only if it contains y. The result follows since a poset is a preorder where the first statement only happens if x = y, and a T_0 space is a space where the second statement only happens if x = y.

(6) Follows from (5) applied to Y = F(X) since X = G(Y) by (3).

(7) The equivalence \mathbf{PreOrd}^{\operatorname{fin}} \leftrightarrows \mathbf{Top}^{\operatorname{fin}} follows from (3) and (4), and the equivalence \mathbf{Pos}^{\operatorname{fin}} \leftrightarrows \mathbf{Top}^{\operatorname{fin}}_{T_0} then follows from (5) and (6). \qedsymbol

Example. The Alexandroff topology on the poset [1] = \{0 \leq 1\} is the Sierpiński space S = \{0,1\} with topology \{\varnothing, \{1\}, S\}. As explained in this post, continuous maps X \to S from a topological space X to S are in bijection with open subsets U \subseteq X, where f \colon X \to S is sent to f^{-1}(1) \subseteq X (and U \subseteq X to the indicator function \mathbf 1_U \colon X \to S).

Example. Let X = \{x,y\} be a set with two elements. There are 4 possible topologies on X, sitting in the following diagram (where vertical arrows indicate inclusion bottom to top):

    \[{\arraycolsep=-1em\begin{array}{ccccc} & & \{\varnothing,\{x\},\{y\},X\} & & \\ & \ \ / & & \backslash\ \  & \\ \{\varnothing,\{x\},X\} & & & & \{\varnothing,\{y\},X\} \\ & \ \ \backslash & & /\ \  & \\ & & \{\varnothing,X\}.\! & & \end{array}}\]

These correspond to 4 possible preorder relations \{(a,b)\ |\ a \leq b\} \subseteq X\times X, sitting in the following diagram (where vertical arrows indicate inclusion top to bottom):

    \[{\arraycolsep=-1.5em\begin{array}{ccccc} & & \{(x,x),(y,y)\} & & \\ & \ \ / & & \backslash\ \ & \\ \{(x,x),(x,y),(y,y)\} & & & & \{(x,x),(y,x),(y,y)\} \\ & \ \ \backslash & & /\ \ & \\ & & \{(x,x),(x,y),(y,x),(y,y)\}.\!\! & & \end{array}}\]

We see that finer topologies (more opens) have stronger relations (fewer inequalities).

Example. The statement in (4) is false for infinite topological spaces. For instance, if Y is a curve with its Zariski topology, then any set of closed points is downwards closed, but it is only closed if it’s finite. Or if Y is a Hausdorff space, then the specialisation preorder is just the equality relation \Delta_Y \subseteq Y \times Y, whose Alexandroff topology is the discrete topology.

I find the examples useful for remembering which way the adjunction goes: topological spaces generally have fewer opens than Alexandroff topologies on posets, so the continuous map should go X^{\operatorname{Alex}} \to Y.

Remark. On any topological space X, we can define the naive constructible topology as the topology with a base given by locally closed sets U \cap Z for U \subseteq X open and Z \subseteq X closed. In the Alexandroff topology, a base for this topology is given by the locally closed sets X_{\lessgtr x} := X_{\geq x} \cap X_{\leq x}: indeed these sets are clearly naive constructible, and any set of the form S = U \cap Z for U upward closed and Z downward closed has the property x \in S \Rightarrow X_{\lessgtr x} \subseteq S.

Thus, if X is the Alexandroff topology on a preorder, we see that the naive constructible topology is discrete if and only if the preorder is a poset, i.e. if and only if X is T_0.

Application of Schur orthogonality

The post that made me google ‘latex does not exist’.

Lemma. Let \epsilon be a finite group of order \Sigma, and write \equiv for the set of irreducible characters of \epsilon. Then

  1.     \[\forall (,) \in \epsilon : \hspace{1em} \sum_{\Xi \in \equiv} \Xi(()\overline\Xi()) = \begin{cases}|C_\epsilon(()|, & \exists \varepsilon \in \epsilon: (\varepsilon = \varepsilon), \\ 0, & \text{else}.\end{cases}\]

  2.     \[\forall \Xi,\underline\Xi \in \equiv : \hspace{1em} \Sigma^{-1}\sum_{\text O)) \in \epsilon} \Xi(\text O)))\overline{\underline\Xi}(\text O))) = \begin{cases}1, & \Xi = \underline\Xi,\\ 0, &\text{else}.\end{cases}\]

Proof. First consider the case \epsilon = 1. This is just an example; it could also be something much better. Then the second statement is obvious, and the first is left as an exercise to the reader. The general case is similar. \qedsymbol

Here is a trivial consequence:

Corollary. Let \mathbf R be a positive integer, and let f \in \mathbf C^\times[\mathbf R] \setminus \{1\}. Then

    \[\sum_{X = 1}^{\mathbf R} f^X = 0.\]

Proof 1. Without loss of generality, f has exact order \mathbf R > 1. Set \epsilon = \mathbf Z/\mathbf {RZ}, let ((,)) = (1,0) \in \epsilon^2, and note that

    \[\nexists \varepsilon \in \epsilon : (\varepsilon = \varepsilon).\]

Part 1 of the lemma gives the result. \qedsymbol

Proof 2. Set \epsilon = \mathbf Z/\mathbf {RZ} as before, let \Xi \colon \epsilon \to \mathbf C^\times be the homomorphism \varepsilon \mapsto f^{3\varepsilon}, and \underline \Xi \colon \epsilon \to \mathbf C^\times the homomorphism \varepsilon \mapsto f^{2\varepsilon}. Then part 1 of the lemma does not give the result, but part 2 does. \qedsymbol

In fact, the corollary also implies the lemma, because both are true (\mathbf 1 \Rightarrow \mathbf 1).
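
In plain notation, the corollary simply says that a nontrivial R-th root of unity f satisfies \sum_{X=1}^{R} f^X = 0, which is easy to check numerically. A throwaway Python sketch (the modulus 12 is an arbitrary choice of mine):

    import cmath

    R = 12
    for k in range(1, R):                      # f runs over the nontrivial R-th roots of unity
        f = cmath.exp(2j * cmath.pi * k / R)
        total = sum(f ** X for X in range(1, R + 1))
        assert abs(total) < 1e-9, (k, total)
    print("sum_{X=1}^R f^X = 0 for every R-th root of unity f != 1")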

Graph colourings and Hedetniemi’s conjecture II: universal colouring

In my previous post, I stated the recently disproved Hedetniemi’s conjecture on colourings of product graphs (see this post for my conventions on graphs). In the next few posts, I will explain some of the ideas of the proof from an algebraic geometer’s perspective.

Today we will start with the universal colouring on G \times \mathbf{Hom}(G, K_n).

Lemma. Let G be a graph. Then there exists an n-colouring \phi_{\operatorname{univ}} on G \times \mathbf{Hom}(G, K_n) such that for every graph H and every n-colouring \phi on G \times H, there is a unique morphism f \colon H \to \mathbf{Hom}(G, K_n) such that f^*\phi_{\operatorname{univ}} = \phi.

Proof. By this post, we have the adjunction

(1)   \[\operatorname{Hom}(G \times H, K_n) \cong \operatorname{Hom}(H, \mathbf{Hom}(G, K_n)).\]

In particular, the identity \mathbf{Hom}(G,K_n) \to \mathbf{Hom}(G,K_n) gives an n-colouring \phi_{\operatorname{univ}} \colon G \times \mathbf{Hom}(G, K_n) \to K_n under this adjunction. If H is any other graph, (1) gives a bijection between morphisms H \to \mathbf{Hom}(G, K_n) and n-colourings of G \times H, which by naturality of (1) is given by f \mapsto f^* \phi_{\operatorname{univ}} := \phi_{\operatorname{univ}} \circ (\operatorname{id}_G \times f). \qedsymbol
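
Unwinding the adjunction, \phi_{\operatorname{univ}} is simply evaluation: the vertex (g, f) of G \times \mathbf{Hom}(G, K_n) receives the colour f(g). The following Python sketch (the encoding of graphs by vertex and edge lists and the helper name hom_graph_to_Kn are my own, purely for illustration) builds \mathbf{Hom}(G, K_n) for a small graph and checks that evaluation is indeed a proper n-colouring of the product.

    from itertools import product

    def hom_graph_to_Kn(G_vertices, G_edges, n):
        """The exponential graph Hom(G, K_n): a vertex is a function
        V(G) -> {0, ..., n-1}, stored as a tuple indexed like G_vertices;
        f and g are adjacent iff {x, y} in E(G) implies f(x) != g(y)."""
        idx = {v: i for i, v in enumerate(G_vertices)}
        V = list(product(range(n), repeat=len(G_vertices)))
        def adjacent(f, g):
            return all(f[idx[x]] != g[idx[y]] and f[idx[y]] != g[idx[x]]
                       for (x, y) in G_edges)
        E = [(f, g) for f in V for g in V if adjacent(f, g)]
        return V, E, idx

    # Example: G = C_5, the 5-cycle (chromatic number 3), and n = 3.
    G_vertices = list(range(5))
    G_edges = [(i, (i + 1) % 5) for i in range(5)]
    n = 3
    HV, HE, idx = hom_graph_to_Kn(G_vertices, G_edges, n)

    # Every edge of G x Hom(G, K_n) joins some (x, f) to some (y, g) with
    # {x, y} in E(G) and {f, g} in E(Hom(G, K_n)); the definition of the latter
    # then guarantees f(x) != g(y), i.e. phi_univ is a proper colouring:
    assert all(f[idx[x]] != g[idx[y]] for (x, y) in G_edges for (f, g) in HE)
    print(f"phi_univ is a proper {n}-colouring of G x Hom(G, K_n)")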

Corollary. To prove Hedetniemi’s conjecture, it suffices to treat the ‘universal’ case H = \mathbf{Hom}(G,K_n), for every n and every loopless graph G.

Proof. Suppose by contradiction that there is a counterexample (G,H), i.e. there are loopless graphs G and H such that

(2)   \[n = \chi(G \times H) < \min(\chi(G), \chi(H)).\]

Then there exists an n-colouring \phi \colon G \times H \to K_n, so the lemma gives a map f \colon H \to \mathbf{Hom}(G,K_n) such that \phi = f^*\phi_{\operatorname{univ}}. This forces \chi(H) \leq \chi(\mathbf{Hom}(G,K_n)) since an m-colouring on \mathbf{Hom}(G,K_n) induces an m-colouring on H by pullback. Thus, (2) implies

    \[\chi(G \times \mathbf{Hom}(G,K_n)) \leq n < \min(\chi(G),\chi(H)) \leq \min(\chi(G), \chi(\mathbf{Hom}(G,K_n))),\]

showing that (G,\mathbf{Hom}(G,K_n)) is a counterexample as well. \qedsymbol

Corollary. Hedetniemi’s conjecture is equivalent to the statement that for any loopless graph G and any n \in \mathbf Z_{>0}, either G or \mathbf{Hom}(G,K_n) admits an n-colouring. \qedsymbol

Example. By the final example of my previous post and the proof of the first corollary above, the cases n \leq 2 are trivially true. We can also check this by hand:

  • If G does not have a 1-colouring, then it has an edge. Then \mathbf{Hom}(G,K_1) has no edges by construction, since K_1 has no edges. See also Example 2 of this post.
  • If G does not have a 2-colouring, then it has an odd cycle C_m \subseteq G. We need to produce a 2-colouring on \mathbf{Hom}(G,K_2). Choose identifications V(K_2) \cong \mathbf Z/2 and V(C_m) \cong \mathbf Z/m with adjacencies \{i,i+1\}. Consider the map

        \begin{align*}\Sigma \colon \mathbf{Hom}(G,K_2) &\to K_2\\f &\mapsto \sum_{c \in C_m} f(c) \in \mathbf Z/2.\end{align*}

    To show this is a graph homomorphism, we must show that for adjacent f, g we have \Sigma(f) \neq \Sigma(g). If two maps f, g \colon G \to K_2 are adjacent, then for adjacent x, y \in G we have f(x) \neq g(y). Taking (x,y) = (c_i, c_{i+1}) shows that f(c_i) = g(c_{i+1}) + 1, so

        \[\Sigma(f) = \sum_{i = 1}^m f(c_i) = \sum_{i=1}^m \Big(g(c_{i+1}) + 1 \Big) = \Sigma(g)  + 1 \in \mathbf Z/2,\]

    since m is odd. \qedsymbol

The case n = 3 is treated in [EZS85], which seems to be one of the first places where the internal Hom of graphs appears (in the specific setting of \mathbf{Hom}(-,K_n)).


References.

[EZS85] M. El-Zahar and N. Sauer, The chromatic number of the product of two 4-chromatic graphs is 4. Combinatorica 5.2, p. 121–126 (1985).

Internal Hom in the category of graphs

In this earlier post, I described what products in the category of graphs look like. In my previous post, I gave some basic examples of internal Hom. Today we will combine these and describe the internal Hom in the category of graphs.

Definition. Let G and H be graphs. Then the graph \mathbf{Hom}(G, H) has vertices \operatorname{Map}(V(G), V(H)), and an edge from f \colon V(G) \to V(H) to g \colon V(G) \to V(H) if and only if \{x,y\} \in E(G) implies \{f(x), g(y)\} \in E(H) (where we allow x = y as usual).

Lemma. If G, H, and K are graphs, then there is a natural isomorphism

    \[\operatorname{Hom}(G \times H, K) \stackrel\sim\to \operatorname{Hom}(G, \mathbf{Hom}(H, K)).\]

In other words, \mathbf{Hom}(H,K) is the internal Hom in the symmetric monoidal category (\mathbf{Grph}, \times).

Proof. There is a bijection

    \begin{align*}\alpha \colon \operatorname{Map}(V(G), V(\mathbf{Hom}(H, K))) &\stackrel\sim\to \operatorname{Map}(V(G \times H), V(K))\\\phi &\mapsto \bigg((g,h) \mapsto \phi(g)(h)\bigg).\end{align*}

So it suffices to show that \phi is a graph homomorphism if and only if \psi = \alpha(\phi) is. The condition that \phi is a graph homomorphism means that for any \{x,y\} \in E(G), the functions \phi(x), \phi(y) \colon V(H) \to V(K) have the property that \{a,b\} \in E(H) implies \{\phi(x)(a), \phi(y)(b)\} \in E(K). This is equivalent to \{\psi(x,a),\psi(y,b)\} \in E(K) for all \{x,y\} \in E(G) and all \{a,b\} \in E(H). By the construction of the product graph G \times H, this is exactly the condition that \psi is a graph homomorphism. \qedsymbol
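
For small graphs one can also verify the lemma by brute force. Here is a short Python sketch (graphs are encoded as pairs (V, E) with E a set of frozensets of size 1 or 2, matching the conventions of the ‘Limits in the category of graphs’ post; all helper names are mine, purely for illustration): it counts homomorphisms on both sides of the adjunction.

    from itertools import product

    def is_hom(f, A, B):
        """Is the dict f: V(A) -> V(B) a graph homomorphism?"""
        return all(frozenset(f[x] for x in e) in B[1] for e in A[1])

    def all_maps(VA, VB):
        return [dict(zip(VA, vals)) for vals in product(VB, repeat=len(VA))]

    def product_graph(A, B):
        V = [(a, b) for a in A[0] for b in B[0]]
        E = {frozenset({v, w}) for v in V for w in V
             if frozenset({v[0], w[0]}) in A[1] and frozenset({v[1], w[1]}) in B[1]}
        return V, E

    def hom_graph(A, B):
        """The exponential graph Hom(A, B): a vertex is a function V(A) -> V(B),
        stored as a tuple of (vertex, value) pairs so that it is hashable."""
        V = [tuple(zip(A[0], vals)) for vals in product(B[0], repeat=len(A[0]))]
        def pairs(e):
            xs = list(e)
            return [(xs[0], xs[0])] if len(xs) == 1 else [(xs[0], xs[1]), (xs[1], xs[0])]
        def adjacent(f, g):
            df, dg = dict(f), dict(g)
            return all(frozenset({df[x], dg[y]}) in B[1]
                       for e in A[1] for (x, y) in pairs(e))
        E = {frozenset({f, g}) for f in V for g in V if adjacent(f, g)}
        return V, E

    G = ([0, 1], {frozenset({0, 1})})    # K_2
    H = ([0, 1], {frozenset({0, 1})})    # K_2
    K = (list(range(3)), {frozenset({i, j}) for i in range(3) for j in range(3) if i != j})    # K_3

    GxH, HK = product_graph(G, H), hom_graph(H, K)
    lhs = [f for f in all_maps(GxH[0], K[0]) if is_hom(f, GxH, K)]
    rhs = [f for f in all_maps(G[0], HK[0]) if is_hom(f, G, HK)]
    print(len(lhs), len(rhs))    # 36 36, as the lemma predicts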

Because the symmetric monoidal structure on \mathbf{Grph} is given by the categorical product, it is customary to refer to the internal \mathbf{Hom}(G,H) as the exponential graph H^G.

Example 1. Let S^{\operatorname{disc}} be the discrete graph on a set S. Then \mathbf{Hom}(S^{\operatorname{disc}}, K) is the complete graph with loops on the set V(K)^S. Indeed, the condition for two functions f, g \colon S \to V(K) to be adjacent is vacuous since S^{\operatorname{disc}} has no edges.

In particular, any function V(G) \to V(\mathbf{Hom}(S^{\operatorname{disc}}, K)) is a graph homomorphism. Under the adjunction above, this corresponds to the fact that any function V(G \times S^{\operatorname{disc}}) \to V(H) is a graph homomorphism, since G \times S^{\operatorname{disc}} is a discrete graph.

Example 2. Conversely, \mathbf{Hom}(H, S^{\operatorname{disc}}) is discrete as soon as H has an edge, and complete with loops otherwise. Indeed, the condition

    \[\{x,y\} \in E(H) \Rightarrow \{f(x),g(y)\} \in E(S^{\operatorname{disc}}) = \varnothing\]

can only be satisfied if E(H) = \varnothing, and in that case is true for all f and g.

In particular, a function V(G) \to V(\mathbf{Hom}(H, S^{\operatorname{disc}})) is a graph homomorphism if and only if either G or H has no edges. Under the adjunction above, this corresponds to the fact that a function V(G \times H) \to S is a graph homomorphism to S^{\operatorname{disc}} if and only if G \times H has no edges, which means either G or H has no edges.

Example 3. Let S^{\operatorname{loop}} be the discrete graph on a set S with loops at every point. Then \mathbf{Hom}(S^{\operatorname{loop}}, K) = K^S is the S-fold power of K. Indeed, the condition that two functions f, g \colon S \to V(K) are adjacent is that \{f(s), g(s)\} \in E(K) for all s \in S, which means exactly that \{\pi_s(f), \pi_s(g)\} \in E(K) for each of the projections \pi_s \colon K^S \to K.

In particular, graph homomorphisms f \colon G \to \mathbf{Hom}(S^{\operatorname{loop}}, K) correspond to giving S graph homomorphisms f_s \colon G \to K. Under the adjunction above, this corresponds to the fact that a graph homomorphism g \colon G \times S^{\operatorname{loop}} \to K is the same thing as S graph homomorphisms g_s \colon G \to K, since G \times S^{\operatorname{loop}} is the S-fold disjoint union of G.

Example 4. Let * = \{\operatorname{pt}\}^{\operatorname{loop}} be the terminal graph consisting of a single point with a loop (note that we used * instead for \{\operatorname{pt}\}^{\operatorname{disc}} in this earlier post). The observation above that G \times * \cong G also works the other way around: * \times G \cong G. Then the adjunction gives

    \[\operatorname{Hom}(G, K) \cong \operatorname{Hom}(*, \mathbf{Hom}(G,K)).\]

This is actually true in any symmetric monoidal category with internal hom and identity object *. We conclude that a function f \colon V(G) \to V(K) is a graph homomorphism if and only if \mathbf{Hom}(G, K) has a loop at f. This is also immediately seen from the definition: \mathbf{Hom}(G, K) has a loop at f if and only if \{x,y\} \in E(G) implies \{f(x), f(y)\} \in E(K).

Example 5. Let K_n and K_m be the complete graphs on n and m vertices respectively. Then \mathbf{Hom}(K_n, K_m) has as vertices all n-tuples (a_1,\ldots,a_n) \in \{1,\ldots,m\}^n, and an edge from (a_1,\ldots,a_n) to (b_1,\ldots,b_n) if and only if a_i \neq b_j when i \neq j. For example, for n = 2 we get an edge between (a_1, a_2) and (b_1, b_2) if and only if a_1 \neq b_2 and a_2 \neq b_1.

Internal Hom


This is an introductory post about some easy examples of internal Hom.

Definition. Let (\mathscr C, \otimes) be a symmetric monoidal category, i.e. a category \mathscr C with a functor \otimes \colon \mathscr C \times \mathscr C \to \mathscr C that is associative, unital, and commutative up to natural isomorphism. Then an internal Hom in \mathscr C is a functor

    \[\mathbf{Hom}(-,-) \colon \mathscr C^{\operatorname{op}} \times \mathscr C \to \mathscr C\]

such that -\otimes Y is a left adjoint to \mathbf{Hom}(Y,-) for any Y \in \mathscr C, i.e. there are functorial isomorphisms

    \[\operatorname{Hom}(X \otimes Y, Z) \stackrel\sim\to \operatorname{Hom}(X, \mathbf{Hom}(Y,Z)).\]

Remark. In the easiest examples, we typically think of \mathbf{Hom}(Y,Z) as ‘upgrading \operatorname{Hom}(Y,Z) to an object of \mathscr C’:

Example. Let R be a commutative ring, and let \mathscr C = \mathbf{Mod}_R be the category of R-modules, with \otimes the tensor product. Then \mathbf{Hom}(M,N) = \operatorname{Hom}_R(M,N) with its natural R-module structure is an internal Hom, by the usual tensor-Hom adjunction:

    \[\operatorname{Hom}_R(M \otimes_R N, K) \cong \operatorname{Hom}_R(M, \mathbf{Hom}(N, K)).\]

The same is true when \mathscr C =\!\ _R\mathbf{Mod}_R is the category of (R,R)-bimodules for a not necessarily commutative ring R.

However, we cannot do this for left R-modules over a noncommutative ring, because there is no natural R-module structure on \operatorname{Hom}_R(M,N) for left R-modules M and N. In general, the tensor product takes an (A,B)-bimodule M and a (B,C)-bimodule N and produces an (A,C)-bimodule M \otimes_B N. Taking A = C = \mathbf Z gives a way to tensor a right R-module with a left R-module, but there is no standard way to tensor two left R-modules, let alone equip it with the structure of a left R-module.

Example. Let \mathscr C = \mathbf{Set}. Then \mathbf{Hom}(X,Y) = \operatorname{Hom}(X,Y) = Y^X is naturally a set, making it into an internal Hom for (\mathscr C, \times):

    \[\operatorname{Hom}(X \times Y, Z) \stackrel\sim\to \operatorname{Hom}(X, \mathbf{Hom}(Y,Z)).\]

When \otimes is the categorical product \times, the internal \mathbf{Hom}(X,Y) (if it exists) is usually called an exponential object, in analogy with the case \mathscr C = \mathbf{Set} above.

Example. Another example of exponential objects comes from topology. If X is a locally compact Hausdorff space, then the compact-open topology makes \mathbf{Hom}(X,Y) := Y^X into an exponential object in \mathbf{Top}: continuous maps Z \times X \to Y correspond naturally to continuous maps Z \to Y^X. (There are mild generalisations of this beyond the locally compact Hausdorff case, but for a general topological space X the functor - \times X need not preserve colimits, so it cannot always admit a right adjoint.)

Example. An example of a slightly different nature is chain complexes: let R be a commutative ring, and let \mathscr C = \mathbf{Ch}(\mathbf{Mod}_R) be the category of cochain complexes

    \[\ldots \to C^{i-1} \to C^i \to C^{i+1} \to \ldots\]

of R-modules (meaning each C^i is an R-module, and the d^i \colon C^i \to C^{i+1} are R-linear maps satisfying d \circ d = 0). Homomorphisms f \colon C \to D are commutative diagrams

    \[\begin{array}{ccccccc}\ldots & \to & C^i & \to & C^{i+1} & \to & \ldots \\ & & \!\!\!\!\! f^i\downarrow & & \downarrow f^{i+1}\!\!\!\!\!\!\! & & \\ \ldots & \to & D^i & \to & D^{i+1} & \to & \ldots,\!\!\end{array}\]

and the tensor product is given by the direct sum totalisation of the double complex of componentwise tensor products.

There isn’t a natural way to ‘endow \operatorname{Hom}(C, D) with the structure of a chain complex’, but there is an internal Hom given by

    \[\mathbf{Hom}(C, D)^i = \prod_{m \in \mathbf Z} \operatorname{Hom}(C^m, D^{m+i}),\]

with differentials given by

    \[d^if = d_D f - (-1)^i f d_C.\]
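
Although not spelled out above, it is immediate that these differentials square to zero: for f \in \mathbf{Hom}(C, D)^i we have

    \begin{align*}d^{i+1}(d^i f) &= d_D\big(d_D f - (-1)^i f d_C\big) - (-1)^{i+1}\big(d_D f - (-1)^i f d_C\big)d_C \\ &= -(-1)^i d_D f d_C + (-1)^i d_D f d_C = 0,\end{align*}

using d_D d_D = 0 = d_C d_C and (-1)^{i+1} = -(-1)^i.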

Then we get for example

    \[\operatorname{Hom}(R[0], \mathbf{Hom}(C, D)) \cong \operatorname{Hom}(C, D),\]

since a morphism R[0] \to \mathbf{Hom}(C, D) is given by an element f \in \mathbf{Hom}(C, D)^0 such that df = 0, i.e. d_Df = f d_C, meaning that f is a morphism of cochain complexes.

Example. The final example for today is presheaves and sheaves. If X is a topological space, then the category \mathbf{Ab}(X) of abelian sheaves on X has an internal Hom given by

    \[\mathbf{Hom}(\mathscr F, \mathscr G)(U) = \operatorname{Hom}(\mathscr F|_U, \mathscr G|_U),\]

with the obvious transition maps for inclusions V \subseteq U of open sets. This is usually called the sheaf Hom. A similar statement holds for presheaves.

Limits in the category of graphs

This is a first post about some categorical properties of graphs (there might be a few more).

Definition. For us, a graph is a pair G = (V, E) where V is a set and E \subseteq \mathcal P(V) is a collection of subsets of V of size 1 or 2. An element \{x,y\} \in E with x \neq y is called an edge from x to y, and a singleton \{x\} \in E is a loop at x (or sometimes an edge from x to itself). If G = (V,E), it is customary to write V(G) = V and E(G) = E.

A morphism of graphs f \colon G \to H is a map f \colon V(G) \to V(H) such that f(e) \in E(H) for all e \in E(G). The category of graphs will be denoted \mathbf{Grph}, and V \colon \mathbf{Grph} \to \mathbf{Set} will be called the forgetful functor.

Example. The complete graph K_n on n vertices is the graph (V,E) where V = \{1,\ldots,n\} and E = {V \choose 2} is the set of 2-element subsets of V. In other words, there is an edge from x to y if and only if x \neq y.

Then a morphism G \to K_n is exactly an n-colouring of G: the condition f(e) \in E(K_n) for e \in E(G) forces f(x) \neq f(y) whenever x and y are adjacent. Conversely, a morphism f \colon K_n \to G to a graph G without loops is exactly an n-clique in G: the condition that G has no loops forces f(x) \neq f(y) for x \neq y.

Lemma. The category \mathbf{Grph} has and the forgetful functor V \colon \mathbf{Grph} \to \mathbf{Set} preserves all small limits.

Proof. Let D \colon \mathscr J \to \mathbf{Grph} be a functor from a small category \mathscr J, and let V = \lim V \circ D be the limit of the underlying sets, with cone maps f_j \colon V \to V(D(j)). We will equip V with a graph structure G = (V,E) such that the maps G \to D(j) for j \in \mathscr J are morphisms and then show that the constructed G is a limit of D in \mathbf{Grph}.

To equip V with an edge set E, simply let E be the set of subsets e \subseteq V of size 1 or 2 such that f_j(e) \in E(D(j)) for all j \in \mathscr J. Then this clearly makes G = (V,E) into a graph such that the f_j \colon G \to D(j) are graph morphisms for all j \in \mathscr J. Moreover, these maps make G into the limit cone over D: for any other cone g_j \colon H \to D(j), the underlying maps V(g_j) \colon V(H) \to V(D(j)) factor uniquely through a map g \colon V(H) \to V by the definition of V, and the construction of E(G) shows that g \colon V(H) \to V is actually a morphism of graphs g \colon H \to G. \qedsymbol

Remark. Note however that V does not create limits. On top of the construction above, this would mean that there is a unique graph structure G = (V,E) on V such that the maps f_j \colon G \to D(j) form a cone over D. However, there are many such structures on V, because we can remove edges all we want (on the same vertex set V).

Example. As an example, we explicitly describe the product G \times H of two graphs G and H: by the lemma its vertex set is V(G \times H) = V(G) \times V(H). The ‘largest graph structure’ such that both projections p \colon G \times H \to G and q \colon G \times H \to H are graph morphisms is given by e \in E(G \times H) if and only if |e| \in \{1,2\} and p(e) \in E(G) and q(e) \in E(H). This corresponds to the structure found in the proof of the lemma.

For a very concrete example, note that the product of two intervals/edges G = H = K_2 is a disjoint union of two intervals, corresponding to the diagonals in \{1,2\} \times \{1,2\}. This is the local model to keep in mind.
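
As a quick illustration, here is a small Python sketch computing this product (the encoding of a graph as a pair (V, E) with E a set of frozensets, and the helper name product_graph, are my own, purely for illustration):

    def product_graph(A, B):
        """Categorical product: vertex set V(A) x V(B); a subset e of size 1 or 2
        is an edge iff both of its projections are edges (or loops)."""
        V = [(a, b) for a in A[0] for b in B[0]]
        E = {frozenset({v, w}) for v in V for w in V
             if frozenset({v[0], w[0]}) in A[1] and frozenset({v[1], w[1]}) in B[1]}
        return V, E

    K2 = ([1, 2], {frozenset({1, 2})})
    V, E = product_graph(K2, K2)
    print(sorted(tuple(sorted(e)) for e in E))
    # [((1, 1), (2, 2)), ((1, 2), (2, 1))] -- the two diagonals, i.e. a disjoint union of two edges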

The literature also contains other types of product graphs, which all have the underlying set V(G) \times V(H). Some authors use the notation G \times H for the categorical product or tensor product we described. The Cartesian product G \square H is defined by E(G \square H) = (E(G) \times V(H)) \cup (V(G) \times E(H)), so that the product of two intervals is a box. The strong product G \boxtimes H is the union of the two, so that the product of two intervals is a box with diagonals. There are numerous other notions of products of graphs.

Remark. Analogously, we can also show that \mathbf{Grph} has and V preserves all small colimits: just equip the set-theoretic colimit with the edges coming from one of the graphs in the diagram.

Example. For a concrete example of a colimit, let’s carry out an edge contraction. Let G be a graph, and let e = \{x,y\} be an edge. The only way to contract e in our category is to create a loop: let * be the one-point graph without edges, and let f_x, f_y \colon * \to G be the maps sending * to x and y respectively. Then the coequaliser of the parallel pair f_x, f_y \colon * \rightrightarrows G is the graph H whose vertices are V(G)/\sim, where \sim is the equivalence relation a \sim b if and only if a = b or \{a,b\} = \{x,y\}, and whose edges are exactly the images of edges in G. In particular, the edge \{x,y\} gives a loop at the image z = [x] = [y] \in V(H).

Remark. Note that the preservation of limits also follows since V has a left adjoint: to a set S we can associate the discrete graph S^{\operatorname{disc}} with vertex set S and no edges. Then a morphism S^{\operatorname{disc}} \to G to any graph G is just a set map S \to V(G).

Similarly, the complete graph with loops gives a right adjoint to V, showing that all colimits that exist in \mathbf{Grph} must be preserved by V. However, these considerations do not actually tell us which limits or colimits exist.

Epimorphisms of groups

In my previous post, we saw that injections (surjections) in concrete categories are always monomorphisms (epimorphisms), and in some cases the converse holds.

We now wish to classify all epimorphisms of groups. To show that all epimorphisms are surjective, for any proper subgroup H \subseteq G we want to construct maps f_1, f_2 \colon G \to G' to some group G' that agree on H but differ on G. In the case of abelian groups this is relatively easy, because we can take G' to be the cokernel G/H, f_1 the quotient map, and f_2 the zero map. But for general groups the quotient G/H is only a group when H is normal, so a different argument is needed.

Lemma. Let f \colon H \to G be a group homomorphism. Then f is an epimorphism if and only if f is surjective.

Proof. We already saw that surjections are epimorphisms. Conversely, let f \colon H \to G be an epimorphism of groups. We may replace H by its image in G, since the map \im(f) \to G is still an epimorphism. Let X = G/H be the coset space, viewed as a pointed set with distinguished element * = H. Let Y = X \amalg_{X\setminus *} X be the set “X with the distinguished point doubled”, and write *_1 and *_2 for these distinguished points.

Let S(Y) be the symmetric group on Y, and define homomorphisms f_i \colon G \to S(Y) by letting G act naturally on the i^{\text{th}} copy of X in Y and trivially on the remaining distinguished point (for i \in \{1,2\}). Since the action of H on X = G/H fixes the trivial coset *, the permutations f_1(h) and f_2(h) for h \in H fix both *_1 and *_2 and agree on X \setminus *, so f_1|_H = f_2|_H. Since f is an epimorphism, this forces f_1 = f_2. But then

    \[H = \Stab_{f_1}(*_1) = \Stab_{f_2}(*_1) = G,\]

showing that f is surjective (and a fortiori X = \{*\}). \qedsymbol
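
To see the construction in action, here is a small Python sketch (the choice G = S_3 with the non-normal subgroup H generated by a transposition, and all names, are mine, purely for illustration): for this proper subgroup the two homomorphisms f_1, f_2 \colon G \to S(Y) agree on H but not on G, witnessing that the inclusion H \to G is not an epimorphism.

    from itertools import permutations

    def compose(p, q):                               # (p . q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(len(q)))

    G = list(permutations(range(3)))                 # the symmetric group S_3
    H = [p for p in G if p[2] == 2]                  # the subgroup generated by the transposition (0 1)

    def coset(g):                                    # the left coset gH
        return frozenset(compose(g, h) for h in H)

    def act(g, c):                                   # left translation of G on X = G/H
        return frozenset(compose(g, x) for x in c)

    X = {coset(g) for g in G}                        # the coset space, 3 points
    star = coset((0, 1, 2))                          # the trivial coset * = H

    def point(c, i):                                 # the point of Y corresponding to the coset c in copy i
        return (c, 0) if c != star else ('*', i)

    Y = [(c, 0) for c in X if c != star] + [('*', 1), ('*', 2)]

    def f(i, g):
        """The permutation f_i(g) of Y: g acts on the i-th copy of X and fixes the other *."""
        perm = {('*', 3 - i): ('*', 3 - i)}
        for c in X:
            perm[point(c, i)] = point(act(g, c), i)
        return perm

    # each f_i is a homomorphism G -> S(Y) ...
    assert all(f(i, compose(g1, g2)) == {y: f(i, g1)[f(i, g2)[y]] for y in Y}
               for i in (1, 2) for g1 in G for g2 in G)
    # ... and f_1, f_2 agree on H but not on all of G:
    assert all(f(1, h) == f(2, h) for h in H)
    assert any(f(1, g) != f(2, g) for g in G)
    print("f_1|_H = f_2|_H, but f_1 != f_2")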

Note however that the result is not true in every algebraic category. For example, the map \mathbf Z \to \mathbf Q is an epimorphism of (commutative) rings that is not surjective. More generally, every localisation R \to R[S^{-1}] is an epimorphism, by the universal property of localisation; these maps are rarely surjective.

Concrete categories and monomorphisms

This post serves to collect some background on concrete categories for my next post.

Concrete categories are categories in which objects have an underlying set:

Definition. A concrete category is a pair (\mathscr C, U) of a category \mathscr C with a faithful functor U \colon \mathscr C \to \mathbf{Set}. In cases where U is understood, we will simply say \mathscr C is a concrete category.

Example. The categories \mathbf{Gp} of groups, \mathbf{Top} of topological spaces, \mathbf{Ring} of rings, and \mathbf{Mod}_R of R-modules are concrete in an obvious way. The category \mathbf{Sh}(X) of sheaves on a site X with enough points is concrete by mapping a sheaf to the disjoint union of its stalks (the same holds for any Grothendieck topos, but a different argument is needed). Similarly, the category \mathbf{Sch} of schemes can be concretised by sending (X,\mathcal O_X) to \coprod_{x \in X} \mathcal P(\mathcal O_{X,x}), where \mathcal P is the contravariant power set functor.

Today we will study the relationship between monomorphisms and injections in \mathscr C:

Lemma. Let (\mathscr C,U) be a concrete category, and let f \colon A \to B be a morphism in \mathscr C. If Uf is a monomorphism (resp. epimorphism), then so is f.

Proof. A morphism f \colon A \to B in \mathscr C is a monomorphism if and only if the induced map \Mor_{\mathscr C}(-,A) \to \Mor_{\mathscr C}(-,B) is injective. Faithfulness implies that the vertical maps in the commutative diagram

    \[\begin{array}{ccc} \Mor_{\mathscr C}(-,A) & \to & \Mor_{\mathscr C}(-,B) \\ \downarrow & & \downarrow \\ \Mor_{\mathbf{Set}}(U-,UA) & \to & \Mor_{\mathbf{Set}}(U-,UB) \end{array}\]

are injective, hence if the bottom map is injective so is the top. The statement about epimorphisms follows dually. \qedsymbol

For example, this says that any injection of groups is a monomorphism, and any surjection of rings is an epimorphism, since the monomorphisms (epimorphisms) in \mathbf{Set} are exactly the injections (surjections).

In some concrete categories, these are the only monomorphisms and epimorphisms. For example:

Lemma. Let (\mathscr C,U) be a concrete category such that the forgetful functor U admits a left (right) adjoint. Then every monomorphism (epimorphism) in \mathscr C is injective (surjective).

Proof. If U is a right adjoint, it preserves limits. But f \colon A \to B is a monomorphism if and only if the square

    \[\begin{array}{ccc} A & \overset{\text{id}}\to & A \\ \!\!\!\!\!{\scriptsize \text{id}}\downarrow & & \downarrow {\scriptsize f}\!\!\!\!\! \\ A & \underset{f}\to & B \end{array}\]

is a pullback. Thus, if U preserves limits, then U preserves monomorphisms; since the monomorphisms in \mathbf{Set} are exactly the injections, this shows that f is injective. The statement about epimorphisms is dual. \qedsymbol

For example, the forgetful functors on algebraic categories like \mathbf{Gp}, \mathbf{Ring}, and \mathbf{Mod}_R have left adjoints (a free functor), so all monomorphisms are injective.

The forgetful functor \mathbf{Top} \to \mathbf{Set} has adjoints on both sides: the left adjoint is given by the discrete topology, and the right adjoint by the indiscrete topology. Thus, monomorphisms and epimorphisms in \mathbf{Top} are exactly injections and surjections, respectively.

On the other hand, in the category \mathbf{Haus} of Hausdorff topological spaces, the inclusion \mathbf Q \hookrightarrow \mathbf R is an epimorphism that is not surjective. Indeed, a continuous map f \colon \mathbf R \to X to a Hausdorff space X is determined by its values on the dense subset \mathbf Q.