Limits in the category of graphs

This is a first post about some categorical properties of graphs (there might be a few more).

Definition. For us, a graph is a pair G = (V, E) where V is a set and E \subseteq \mathcal P(V) is a collection of subsets of V of size 1 or 2. An element \{x,y\} \in E with x \neq y is called an edge from x to y, and a singleton \{x\} \in E is a loop at x (or sometimes an edge from x to itself). If G = (V,E), it is customary to write V(G) = V and E(G) = E.

A morphism of graphs f \colon G \to H is a map f \colon V(G) \to V(H) such that f(e) \in E(H) for all e \in E(G). The category of graphs will be denoted \mathbf{Grph}, and V \colon \mathbf{Grph} \to \mathbf{Set} will be called the forgetful functor.

Example. The complete graph K_n on n vertices is the graph (V,E) where V = \{1,\ldots,n\} and E = {V \choose 2} is the set of 2-element subsets of V. In other words, there is an edge from x to y if and only if x \neq y.

Then a morphism G \to K_n is exactly an n-colouring of G: the condition f(e) \in E(K_n) for e \in E(G) forces f(x) \neq f(y) whenever x and y are adjacent. Conversely, a morphism f \colon K_n \to G to a graph G without loops is exactly an n-clique in G: the condition that G has no loops forces f(x) \neq f(y) for x \neq y.
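As a quick sanity check, here is a small Python script counting morphisms into a complete graph. The choice of the 5-cycle C_5 as test graph and the encoding of edges as frozensets are my own conventions, not anything from the definition above.

```python
from itertools import product

def is_hom(f, G_edges, H_edges):
    """Check that f (a tuple with f[x] the image of vertex x) maps every
    edge of G to an edge or loop of H."""
    return all(frozenset(f[x] for x in e) in H_edges for e in G_edges)

C5 = [frozenset({i, (i + 1) % 5}) for i in range(5)]                   # the 5-cycle
K3 = {frozenset({x, y}) for x in range(3) for y in range(3) if x != y}  # K_3

# Morphisms C_5 -> K_3 are exactly the proper 3-colourings of C_5.
homs = [f for f in product(range(3), repeat=5) if is_hom(f, C5, K3)]
print(len(homs))  # 30
```

The count agrees with the chromatic polynomial of a cycle, (k-1)^n + (-1)^n (k-1), evaluated at n = 5 and k = 3.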

Lemma. The category \mathbf{Grph} has, and the forgetful functor V \colon \mathbf{Grph} \to \mathbf{Set} preserves, all small limits.

Proof. Let D \colon \mathscr J \to \mathbf{Grph} be a functor from a small category \mathscr J, and let V = \lim V \circ D be the limit of the underlying sets, with cone maps f_j \colon V \to V(D(j)). We will equip V with a graph structure G = (V,E) such that the maps G \to D(j) for j \in \mathscr J are morphisms and then show that the constructed G is a limit of D in \mathbf{Grph}.

To equip V with an edge set E, simply let E be the set of subsets e \subseteq V of size 1 or 2 such that f_j(e) \in E(D(j)) for all j \in \mathscr J. This clearly makes G = (V,E) into a graph such that the f_j \colon G \to D(j) are graph morphisms for all j \in \mathscr J. Moreover, these maps make G into the limit cone over D: for any other cone g_j \colon H \to D(j), the underlying maps V(g_j) \colon V(H) \to V(D(j)) factor uniquely through a map g \colon V(H) \to V by the definition of V, and the construction of E(G) shows that g is actually a morphism of graphs g \colon H \to G. \qedsymbol

Remark. Note however that V does not create limits. On top of the construction above, this would mean that there is a unique graph structure G = (V,E) on V such that the maps f_j make G into a cone over D. However, there are many such structures on V, because we can remove edges all we want (on the same vertex set V).

Example. As an example, we explicitly describe the product G \times H of two graphs G and H: by the lemma its vertex set is V(G \times H) = V(G) \times V(H). The ‘largest graph structure’ such that both projections p \colon G \times H \to G and q \colon G \times H \to H are graph morphisms is given by e \in E(G \times H) if and only if |e| \in \{1,2\} and p(e) \in E(G) and q(e) \in E(H). This corresponds to the structure found in the proof of the lemma.

For a very concrete example, note that the product of two intervals/edges G = H = K_2 is a disjoint union of two intervals, corresponding to the diagonals in \{1,2\} \times \{1,2\}. This is the local model to keep in mind.

The literature also contains other types of product graphs, which all have the underlying set V(G) \times V(H). Some authors use the notation G \times H for the categorical product or tensor product we described. The Cartesian product G \square H is defined by E(G \square H) = (E(G) \times V(H)) \cup (V(G) \times E(H)), so that the product of two intervals is a box. The strong product G \boxtimes H is the union of the two, so that the product of two intervals is a box with diagonals. There are numerous other notions of products of graphs.
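For G = H = K_2 the three products can be computed mechanically. The following Python sketch (using the same frozenset encoding of edges as above; the variable names are mine) recovers the local models: two diagonals, a box, and a box with diagonals.

```python
from itertools import combinations

K2 = {frozenset({1, 2})}                      # edge set of K_2 on {1, 2}
V = [(a, u) for a in (1, 2) for u in (1, 2)]  # V(K_2) x V(K_2)

tensor, box = set(), set()
for (a, u), (b, v) in combinations(V, 2):
    e = frozenset({(a, u), (b, v)})
    in_G = frozenset({a, b}) in K2            # first projection is an edge
    in_H = frozenset({u, v}) in K2            # second projection is an edge
    if in_G and in_H:                         # categorical (tensor) product
        tensor.add(e)
    if (a == b and in_H) or (in_G and u == v):  # Cartesian product
        box.add(e)

strong = tensor | box                         # strong product is the union
print(len(tensor), len(box), len(strong))     # 2 4 6
```

So K_2 \times K_2 is two disjoint edges, K_2 \square K_2 is a 4-cycle (a box), and K_2 \boxtimes K_2 is K_4.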

Remark. Analogously, we can also show that \mathbf{Grph} has and V preserves all small colimits: just equip the set-theoretic colimit with the images of the edges from the graphs in the diagram.

Example. For a concrete example of a colimit, let’s carry out an edge contraction. Let G be a graph, and let e = \{x,y\} be an edge. The only way to contract e in our category is to create a loop: let * be the one-point graph without edges, and let f_x, f_y \colon * \to G be the maps sending * to x and y respectively. Then the coequaliser of the parallel pair f_x, f_y \colon * \rightrightarrows G is the graph H whose vertices are V(G)/\sim, where \sim is the equivalence relation a \sim b if and only if a = b or \{a,b\} = \{x,y\}, and whose edges are exactly the images of edges in G. In particular, the edge \{x,y\} gives a loop at the image z = [x] = [y] \in V(H).
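Here is a small Python sketch of this coequaliser on a concrete input, a triangle; the helper name contract_edge is my own.

```python
def contract_edge(V, E, x, y):
    """Coequalise the two maps * -> G picking out x and y: identify x with y,
    and take images of edges; the edge {x, y} becomes a loop."""
    merge = lambda v: x if v == y else v          # [x] = [y] is represented by x
    return {merge(v) for v in V}, {frozenset(merge(v) for v in e) for e in E}

# Contract the edge {1, 2} of a triangle on {1, 2, 3}.
V = {1, 2, 3}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})}
W, F = contract_edge(V, E, 1, 2)
print(sorted(W))             # [1, 3]
print(frozenset({1}) in F)   # True: the contracted edge is now a loop at [1] = [2]
```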

Remark. Note that the preservation of limits also follows since V has a left adjoint: to a set S we can associate the discrete graph S^{\operatorname{disc}} with vertex set S and no edges. Then a morphism S^{\operatorname{disc}} \to G to any graph G is just a set map S \to V(G).

Similarly, the complete graph with loops gives a right adjoint to V, showing that all colimits that exist in \mathbf{Grph} must be preserved by V. However, these considerations do not actually tell us which limits or colimits exist.

An interesting Noether–Lefschetz phenomenon

The classical Noether–Lefschetz theorem is the following:

Theorem. Let X \subseteq \mathbf P^3_{\mathbf C} be a very general smooth surface of degree d \geq 4. Then the natural map \Pic(\mathbf P^3) \to \Pic(X) is an isomorphism.

If \mathscr X \to S is a smooth proper family over some base S (usually of finite type over a field), then a property \mathcal P holds for a very general X = \mathscr X_s if there exists a countable intersection U = \bigcap_i U_i \subseteq S of nonempty Zariski opens U_i such that \mathcal P holds for X_s for all s \in U.

In general, Hilbert scheme arguments show that the locus where the Picard rank is ‘bigger than expected’ is a countable union of closed subvarieties Z_i of S (the Noether–Lefschetz loci), but it could be the case that this actually happens everywhere (i.e. U = \varnothing). The hard part of the Noether–Lefschetz theorem is that the jumping loci Z_i are strict subvarieties of the full space of degree d hypersurfaces.

If \mathscr X \to S is a family of varieties over an uncountable field k, then there always exists a very general member \mathscr X_s with s \in S(k). But over countable fields, very general elements might not exist, because it is possible that \bigcup Z_i(k) = S(k) even when \bigcup Z_i \neq S.

The following interesting phenomenon was brought to my attention by Daniel Bragg (if I recall correctly):

Example. Let k = \bar{\mathbf F}_p (the algebraic closure of the field of p elements, but the bar is not so visible in MathJax), let S = \mathcal A_1 = \mathcal M_{1,1} (or some scheme covering it if that makes you happier) with universal family \mathscr E \to S of elliptic curves, and let \mathscr X = \mathscr E \times_S \mathscr E be the family of product abelian surfaces E \times E. Then the locus

    \[NL(S) = \left\{s \in S\ \big| \ \operatorname{rk} \Pic(\mathscr X_s) > 3\right\}\]

is exactly the set of k-points (so it misses only the generic point).

Indeed, \Pic(E \times E) \cong \Pic(E) \times \Pic(E) \times \End(E), and every elliptic curve E over k has \operatorname{rk} \End(E) \geq 2: it is either ordinary, with CM by an order in an imaginary quadratic field, or supersingular, with \End(E) an order in a quaternion algebra. But the generic elliptic curve only has \End(E) = \mathbf Z. \qedsymbol

We see that the Noether–Lefschetz loci might cover all k-points without covering S, even in very natural situations.

Scales containing every interval

This is a maths/music crossover post, inspired by fidgeting around with diatonic chords containing no thirds. The general lemma is the following (see also the examples below):

Lemma. Let n be a positive integer, and I \subseteq \mathbf Z/n\mathbf Z a subset containing k > \frac{n}{2} elements. Then every a \in \mathbf Z/n\mathbf Z occurs as a difference x - y between two elements x, y \in I.

Proof. Consider the translate I + a = \{x + a\ |\ x \in I\}. Since both I and I + a have size k > \frac{n}{2}, they have an element in common. If x \in I \cap (I + a), then x = y+a for some y \in I, so a = x - y. \qedsymbol
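The lemma is easy to confirm by brute force. The following Python check (the encoding of I as a subset of range(n) is my own) verifies it for all n up to 12.

```python
from itertools import combinations

def all_differences_occur(n, I):
    """Does every a in Z/n occur as a difference x - y with x, y in I?"""
    return {(x - y) % n for x in I for y in I} == set(range(n))

# Exhaustive check for small n: any subset with k > n/2 elements
# realises every difference.
for n in range(1, 13):
    for k in range(n // 2 + 1, n + 1):
        assert all(all_differences_occur(n, I)
                   for I in combinations(range(n), k))

# A subset of size exactly n/2 can fail, e.g. the even residues mod 12:
print(all_differences_occur(12, range(0, 12, 2)))  # False
```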

Here are some applications to music theory:

Example 1 (scales containing every chromatic interval). Any scale consisting of at least 7 out of the 12 available chromatic notes contains every interval. Indeed, 7 > \frac{12}{2}, so the lemma shows that every difference between two elements of the scale occurs.

The above proof in this case can be rephrased as follows: if we want to construct a minor third (which is 3 semitones) in our scale S, we consider the scale S and its transpose S + 3 by a minor third. Because 7 + 7 = 14 > 12, there must be an overlap somewhere, corresponding to an interval of a minor third in our scale.

In fact, this shows that our scale must contain two minor thirds, since you need at least 2 overlaps to get from 14 down to 12. For example, the C major scale contains two minor seconds (B to C and E to F), at least two major thirds (C to E and G to B), and two tritones (B to F and F to B).

The closer the original key is to its transpose, the more overlaps there are between them. For example, there are 6 perfect fifths in C major, since C major and G major overlap at 6 notes. Conversely, if an interval a occurs many times in a key S, that means that the transposition S + a of S by the interval a has many notes in common with the old key S. (Exercise: make precise the relationship between intervals occurring ‘many times’ and transpositions having ‘many notes in common’.)
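The number of (ordered) occurrences of an interval a in a scale S is exactly the overlap |S \cap (S + a)|, which is easy to tabulate; the semitone encoding of C major below is my own convention.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}   # C D E F G A B, as semitones above C

def count_interval(S, a, n=12):
    """Ordered occurrences of the interval a in S: pairs (y, y + a) with both
    notes in S, in bijection with the overlap S ∩ (S + a)."""
    return len(S & {(x + a) % n for x in S})

print(count_interval(C_MAJOR, 1))   # 2: the minor seconds B-C and E-F
print(count_interval(C_MAJOR, 6))   # 2: the tritones F-B and B-F
print(count_interval(C_MAJOR, 7))   # 6: perfect fifths
```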

We see that this argument is insensitive to enharmonic equivalence: it does not distinguish between a diminished fifth and an augmented fourth. Similarly, a harmonic minor scale contains both a minor third and an augmented second, which this argument does not distinguish.

Remark. We note that the result is sharp: the whole-tone scales 2\mathbf Z/12\mathbf Z and (2\mathbf Z + 1)/12\mathbf Z have size 6 = \frac{12}{2}, but only contain the even intervals (major second, major third, tritone, minor sixth, and minor seventh).

Example 2 (harmonies containing every diatonic interval). Any cluster of 4 notes in a major or minor scale contains every diatonic interval. Indeed, modelling the scale as integers modulo 7, we observe that 4 > \frac{7}{2}, so the lemma above shows that every diatonic interval occurs at least once.

For example, a seventh chord contains the notes¹ \{1,3,5,7\} of the key. It contains a second between 7 and 1, a third between 1 and 3, a fourth between 5 and 1, etcetera.

Thus, the largest harmony avoiding all (major or minor) thirds is a triad. In fact, it’s pretty easy to see that such a harmony must be a diatonic transposition of the sus4 (or sus2, which is an inversion) harmony. But these chords may contain a tritone, like the chord B-E-F in C major.
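A computer can confirm that triads are optimal: subsets of \mathbf Z/7\mathbf Z avoiding the difference 2 (a diatonic third, up or down) are exactly the independent sets of a 7-cycle, and a brute-force sketch shows the largest ones have 3 elements.

```python
from itertools import combinations

def avoids_thirds(S, n=7):
    """No two notes of S differ by 2 scale steps (a diatonic third) mod n."""
    return all((x - y) % n != 2 for x in S for y in S if x != y)

# Brute force over all subsets of Z/7: the biggest thirds-free harmonies.
best = max((set(S) for k in range(8) for S in combinations(range(7), k)
            if avoids_thirds(S)), key=len)
print(len(best))   # 3: a triad is optimal
```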

Example 3. If you work with your favourite 19-tone tuning system, then any scale consisting of at least 10 of those notes contains every chromatic interval available in this tuning.

¹ A strange historical artefact of music is that chords start with 1 instead of 0.

Local structure of finite unramified morphisms

It is well known that a finite étale morphism f \colon X \to Y of schemes is étale locally given by a disjoint union of isomorphisms, i.e. there exists an étale cover Y' \to Y such that the pullback X' \to Y' is given by X' = \coprod_{i=1}^n Y' \to Y'. Something similar is true for finite unramified morphisms:

Lemma. Let f \colon X \to Y be a finite unramified¹ morphism of schemes. Then there exists an étale cover Y' \to Y such that the pullback X' \to Y' is given by X' = \coprod_i Z_i \to Y', where Z_i \hookrightarrow Y' are closed immersions of finite presentation.

Proof. Let y \in Y be a point, let A = \mathcal O_{Y,y}^{\operatorname{sh}} be the strict henselisation of Y at y, and let \Spec B \to \Spec A be the base change of X \to Y along \Spec A \to Y. Then A \to B is unramified, so by Tag 04GL it splits as

    \[B = A_1 \times \ldots \times A_r \times C\]

where A \to A_i is surjective for each i and no prime of C lies above \mathfrak m_y \subseteq A. But A \to C is also finite, so by Tag 00GU the map \Spec C \to \Spec A hits the maximal ideal if \Spec C \neq \varnothing. Thus, we conclude that C = 0, hence B is a product of quotients of A.

But A is the colimit of \mathcal O_{Y',y'} for (Y',y') \to (Y,y) an étale neighbourhood inducing a separable extension \kappa(y) \to \kappa(y'). Since f is of finite presentation, each of the ideals \ker(A \to A_i) and the projections B \to A_i are defined over some étale neighbourhood (Y',y') \to (Y,y). Then the pullback X' \to Y' is given by a finite disjoint union of closed immersions in Y'.

Then Y' \to Y might not be a covering, but since y \in Y was arbitrary we can do this for each point separately and take a disjoint union. \qedsymbol

Remark. The number of Z_i needed is locally bounded, but if Y is not quasi-compact it might be infinite. For example, we can take X \cong Y = \coprod_{i \in \mathbf N} \Spec k an infinite disjoint union of points, and f \colon X \to Y such that the fibre over y_i \in Y for i \in \mathbf N has i points.

Remark. In the étale case, we may actually take Y' \to Y finite étale, by taking Y' to be the Galois closure of X \to Y, which exists in reasonable cases². For example, if Y is normal, we may take Y' to be the integral closure of Y in the field extension corresponding to the Galois closure of k(Y) \to k(X). In general, if Y is connected it follows from Tag 0BN2 that a suitable component of the \deg(f)-fold fibre product of X over Y is a Galois closure Y' \to Y of X \to Y. If the connected components of Y are open, apply this construction to each component.

In the unramified case, this is too much to hope for. For example, if Y = \mathbf P^2_{\mathbf C}, then we may take X to be a nontrivial finite étale cover of an elliptic curve E \subseteq Y. This is finite and unramified, but does not split over any finite étale cover of \mathbf P^2 since there aren’t any. In fact, it cannot split over any connected étale cover Y' \to \mathbf P^2 whose image contains E, since that implies the image only misses finitely many points (as E is ample), which is again impossible since \pi_1(\mathbf P^2 \setminus \{p_1,\ldots,p_r\}) = 0.

¹For the purposes of this post, unramified means in the sense of Grothendieck, i.e. including the finite presentation hypothesis. In Raynaud’s work on henselisations, this was weakened to finite type. See Tag 00US for definitions.

²I’m not sure what happens in general.

Epimorphisms of groups

In my previous post, we saw that injections (surjections) in concrete categories are always monomorphisms (epimorphisms), and in some cases the converse holds.

We now wish to classify all epimorphisms of groups. To show that all epimorphisms are surjective, for any strict subgroup H \subseteq G we want to construct maps f_1, f_2 \colon G \to G' to some group G' that differ on G but agree on H. In the case of abelian groups this is relatively easy, because we can take G' to be the cokernel, f_1 the quotient map, and f_2 the zero map. But in general the cokernel only exists if the image is normal, so a different argument is needed.

Lemma. Let f \colon H \to G be a group homomorphism. Then f is an epimorphism if and only if f is surjective.

Proof. We already saw that surjections are epimorphisms. Conversely, let f \colon H \to G be an epimorphism of groups. We may replace H by its image in G, since the map \im(f) \to G is still an epimorphism. Let X = G/H be the coset space, viewed as a pointed set with distinguished element * = H. Let Y = X \amalg_{X\setminus *} X be the set “X with the distinguished point doubled”, and write *_1 and *_2 for these distinguished points.

Let S(Y) be the symmetric group on Y, and define homomorphisms f_i \colon G \to S(Y) by letting G act naturally on the i^{\text{th}} copy of X in Y (for i \in \{1,2\}). Since the action of H on X = G/H fixes the trivial coset *, we see that the maps f_i|_H agree. Since f is an epimorphism, this forces f_1 = f_2. But then

    \[H = \Stab_{f_1}(*_1) = \Stab_{f_2}(*_1) = G,\]

showing that f is surjective (and a fortiori X = \{*\}). \qedsymbol
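For readers who want to see the doubling trick in action, here is a small Python sketch carrying out the construction for G = S_3 and H the stabiliser of a point; the encodings (permutations as tuples, cosets as frozensets) are my own.

```python
from itertools import permutations

# G = S_3: tuples g with g[i] = g(i), acting on {0, 1, 2}.
G = list(permutations(range(3)))
H = [g for g in G if g[0] == 0]              # proper subgroup: the stabiliser of 0

def mult(g, x):
    """Left-multiply the coset x (a frozenset of group elements) by g."""
    return frozenset(tuple(g[t[j]] for j in range(3)) for t in x)

star = frozenset(H)                          # the trivial coset * = H
cosets = {mult(g, star) for g in G}          # X = G/H, three cosets

def f(i, g, p):
    """f_i: g acts on the i-th copy of X inside Y, fixing the other copy's *.
    Points of Y are ('pt', x) with x != *, or ('star', 1), ('star', 2)."""
    if p[0] == 'star' and p[1] != i:
        return p                              # the other copy's * is fixed
    gx = mult(g, star if p[0] == 'star' else p[1])
    return ('star', i) if gx == star else ('pt', gx)

Y = [('star', 1), ('star', 2)] + [('pt', x) for x in cosets - {star}]
assert all(f(1, h, p) == f(2, h, p) for h in H for p in Y)   # agree on H
assert any(f(1, g, p) != f(2, g, p) for g in G for p in Y)   # differ on G
print("f_1|_H = f_2|_H but f_1 != f_2")
```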

Note however that the result is not true in every algebraic category. For example, the map \mathbf Z \to \mathbf Q is an epimorphism of (commutative) rings that is not surjective. More generally, every localisation R \to R[S^{-1}] is an epimorphism, by the universal property of localisation; these maps are rarely surjective.

Concrete categories and monomorphisms

This post serves to collect some background on concrete categories for my next post.

Concrete categories are categories in which objects have an underlying set:

Definition. A concrete category is a pair (\mathscr C, U) of a category \mathscr C with a faithful functor U \colon \mathscr C \to \mathbf{Set}. In cases where U is understood, we will simply say \mathscr C is a concrete category.

Example. The categories \mathbf{Gp} of groups, \mathbf{Top} of topological spaces, \mathbf{Ring} of rings, and \mathbf{Mod}_R of R-modules are concrete in an obvious way. The category \mathbf{Sh}(X) of sheaves on a site X with enough points is concrete by mapping a sheaf to the disjoint union of its stalks (the same holds for any Grothendieck topos, but a different argument is needed). Similarly, the category \mathbf{Sch} of schemes can be concretised by sending (X,\mathcal O_X) to \coprod_{x \in X} \mathcal P(\mathcal O_{X,x}), where \mathcal P is the contravariant power set functor.

Today we will study the relationship between monomorphisms and injections in \mathscr C:

Lemma. Let (\mathscr C,U) be a concrete category, and let f \colon A \to B be a morphism in \mathscr C. If Uf is a monomorphism (resp. epimorphism), then so is f.

Proof. A morphism f \colon A \to B in \mathscr C is a monomorphism if and only if the induced map \Mor_{\mathscr C}(-,A) \to \Mor_{\mathscr C}(-,B) is injective. Faithfulness implies that the vertical maps in the commutative diagram

    \[\begin{array}{ccc} \Mor_{\mathscr C}(-,A) & \to & \Mor_{\mathscr C}(-,B) \\ \downarrow & & \downarrow \\ \Mor_{\mathbf{Set}}(U-,UA) & \to & \Mor_{\mathbf{Set}}(U-,UB) \end{array}\]

are injective, hence if the bottom map is injective so is the top. The statement about epimorphisms follows dually. \qedsymbol

For example, this says that any injection of groups is a monomorphism, and any surjection of rings is an epimorphism, since the monomorphisms (epimorphisms) in \mathbf{Set} are exactly the injections (surjections).

In some concrete categories, these are the only monomorphisms and epimorphisms. For example:

Lemma. Let (\mathscr C,U) be a concrete category such that the forgetful functor U admits a left (right) adjoint. Then every monomorphism (epimorphism) in \mathscr C is injective (surjective).

Proof. If U is a right adjoint, it preserves limits. But f \colon A \to B is a monomorphism if and only if the square

    \[\begin{array}{ccc} A & \overset{\text{id}}\to & A \\ \!\!\!\!\!{\scriptsize \text{id}}\downarrow & & \downarrow {\scriptsize f}\!\!\!\!\! \\ A & \underset{f}\to & B \end{array}\]

is a pullback. Thus, U preserves monomorphisms if it preserves limits. The statement about epimorphisms is dual. \qedsymbol

For example, the forgetful functors on algebraic categories like \mathbf{Gp}, \mathbf{Ring}, and \mathbf{Mod}_R have left adjoints (a free functor), so all monomorphisms are injective.

The forgetful functor \mathbf{Top} \to \mathbf{Set} has adjoints on both sides: the left adjoint is given by the discrete topology, and the right adjoint by the indiscrete topology. Thus, monomorphisms and epimorphisms in \mathbf{Top} are exactly injections and surjections, respectively.

On the other hand, in the category \mathbf{Haus} of Hausdorff topological spaces, the inclusion \mathbf Q \hookrightarrow \mathbf R is an epimorphism that is not surjective. Indeed, a map f \colon \mathbf R \to X to a Hausdorff space X is determined by its values on \mathbf Q.

Rings that are localisations of each other

This is a post about an answer I gave on MathOverflow in 2016. Most people who have ever clicked on my profile will probably have seen it.

Question. If A and B are rings that are localisations of each other, are they necessarily isomorphic?

In other words, does the category of rings whose morphisms are localisations form a partial order?

In my previous post, I explained why k[x] and k[x,x^{-1}] are not isomorphic, even as rings. With this example in mind, it’s tempting to try the following:

Example. Let k be a field, and let K = k(x_1, x_2, \ldots). Let

    \[A = K[x_0,x_{-1},\ldots]\]

be an infinite-dimensional polynomial ring over K, and let

    \[B = A\left[\frac{1}{x_0}\right].\]

Then B is a localisation of A, and we can localise B further to obtain the ring

    \[K(x_0)\left[x_{-1},x_{-2},\ldots\right],\]

isomorphic to A by shifting all the indices by 1. To see that A and B are not isomorphic as rings, note that A^\times \cup \{0\} is closed under addition, and the same is not true in B. \qed

Is there a moral to this story? Not sure. Maybe the lesson is to do mathematics your own stupid way, because the weird arguments you come up with yourself may help you solve other problems in the future. The process is more important than the outcome.

Is the affine line isomorphic to the punctured affine line?

This is the story of Johan Commelin and myself working through the first sections of Hartshorne almost 10 years ago (nothing creates a bond like reading Hartshorne together…). This post is about problem I.1.1(b), which is essentially the following:

Exercise. Let k be a field. Show that k[x] and k[x,x^{-1}] are not isomorphic.

In my next post, I will explain why I’m coming back to exactly this problem. There are many ways to solve it, for example:

Solution 1. The k-algebra k[x] represents the forgetful functor \mathbf{Alg}_k \to \mathbf{Set}, whereas k[x,x^{-1}] represents the unit group functor R \mapsto R^\times. These functors are not isomorphic, for example because the inclusion k \to k[x] induces an isomorphism on unit groups, but not on additive groups. \qed

A less fancy way to say the same thing is that all k-algebra maps k[x,x^{-1}] \to k[x] factor through k, while the same evidently does not hold for k-algebra maps k[x] \to k[x].

However, we didn’t like this because it only shows that k[x] and k[x,x^{-1}] are not isomorphic as k-algebras (rather than as rings). Literal as we were (because we were undergraduates? Lenstra’s influence?), we thought that this does not answer the question. After finishing all unstarred problems from section I.1 and a few days of being unhappy about this particular problem, we finally came up with:

Solution 2. The set k[x]^\times \cup \{0\} is closed under addition, whereas k[x,x^{-1}]^\times \cup \{0\} is not. \qed

This shows more generally that k[x] and \ell[x,x^{-1}] are never isomorphic as rings for any fields k and \ell.
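The unit-group criterion is mechanical enough to check by computer. Below is a rough Python sketch; the dict-of-exponents encoding of (Laurent) polynomials is my own device, not anything from the solution.

```python
from fractions import Fraction

# Laurent polynomials over Q as dicts {exponent: coefficient}; {} represents 0.
def add(p, q):
    r = {e: p.get(e, 0) + q.get(e, 0) for e in set(p) | set(q)}
    return {e: c for e, c in r.items() if c != 0}

def is_unit_in_kx(p):        # units of k[x]: the nonzero constants
    return list(p) == [0]

def is_unit_in_laurent(p):   # units of k[x, 1/x]: nonzero monomials c*x^n
    return len(p) == 1

# In k[x], any two units sum to a unit or to 0, e.g. 2 + (-2) = 0:
s = add({0: Fraction(2)}, {0: Fraction(-2)})
assert s == {} or is_unit_in_kx(s)

# In k[x, 1/x], the units x and 1 sum to x + 1, which is neither a unit nor 0:
t = add({1: Fraction(1)}, {0: Fraction(1)})
assert t != {} and not is_unit_in_laurent(t)
print("closed under addition in k[x], not in k[x, 1/x]")
```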

The charm of chalk

This week, a video about professional mathematicians’ love for chalk went viral, reaching the top 10 trending on YouTube with millions of views within a day of its release.

The video describes the closing down of Hagoromo, the manufacturer of what’s generally considered the best chalk available, and mathematicians’ response to this.

Despite the falling market demand for chalk (quality or otherwise), the closure seems to have been tied as much to personal circumstances and the lack of an interested party to take over the business (although the formulas have since been bought and manufacturing has restarted in Korea).

The two most pressing questions arising from this story are:

Question 1. Why do mathematicians still use chalk?
Question 2. How did this become a top 10 trending video on YouTube?

I have little to say on the second question, except for the observations that the video has a high production value and a light-hearted, comical feel, and that the internet is an unpredictable place. [Insert fatalistic remark about the replacement of editorial journalism by poorly understood algorithms.]

But let me remark that there is totally going to be a Simpsons episode where Bart writes something about selling chalk to teachers or some other reference.

Modern teaching

If you are not a working mathematician (or even if you are), the main thing you might be asking is why modern (e.g. digital) teaching techniques have not yet taken over in mathematics.

The answer lies in the teaching demands specific to mathematics and other exact sciences. The material is typically technical and broken down into a lot of small steps. So it’s convenient to have 4-6 boards to refer back to, for example so that you can keep the statement of a result (as well as a picture) up while proving or applying the result.

The biggest problem with slide talks is exactly this: they tend to present information too quickly (and inorganically), and then it disappears again just as quickly. Mathematicians generally consider it a great challenge to give a good slide talk, which can only be accomplished by leaving out most of the technical details. This may be appropriate for large-audience [non-expert] conference talks, but it is not how you want to be teaching.

Some of the same considerations apply to smartboards. Although you can write in real time (so it’s more organic than slides), the writing surface is small, so little content remains visible at once. Specific technical annoyances with smartboards are latency, not being able to see what you are doing, and general technological failure (which is not how you want to be spending your time).

How about whiteboards?

Whiteboards seem to provide a more reasonable alternative. You can still pave a wall with whiteboards to retain a lot of information at once, and the only difference is the material. I even distinctly remember thinking in high school that whiteboards are superior to blackboards in every way, an opinion that was turned around when I started undergrad.

Some difficulties that whiteboards have and blackboards do not:

  • The surface has too little resistance. Unlike writing on paper, the writing motion on a blackboard comes from the arm and wrist, not the wrist and fingers. Whiteboards sit in a grey area where there is not enough resistance to write accurately from the arm, but writing from the wrist does not produce big enough characters. RSI is a problem too when you need to restrain your motion.
  • Whiteboards do not erase as well: often they have residue left from previous writing, and an occasional wet cleaning (typically with some chemicals) is needed to clean the board properly. On a blackboard, an eraser typically suffices, and if all else fails a wet sponge will do the trick.
  • Whiteboard markers do not indicate their life expectancy. Because there are no exterior signs of a dead marker, they pile up into an unnavigable graveyard of mostly useless markers for you to sort through. With the clock literally ticking, as a teacher you don’t want to waste time figuring out which marker to use (and having to go to an office to pick up a new one). Chalk simply wears down to a little stump, so its remaining life is much easier to read.
  • Whiteboards are shinier, and the reflection negatively affects legibility. (This also applies to lower quality blackboards, which unfortunately I have had to teach on at some point in the past.)
  • Although chalk on your hands (and, to a lesser extent, clothes) is annoying, continued exposure to marker fumes can lead to actual health issues. Plus, marker stains can be hard to wash out of clothes.

Conversely, the main argument for whiteboards over blackboards, as far as I am able to tell, seems to be a dislike of chalk. Admittedly, the feel of chalk on your hands is not great, and if you use a very dusty [low quality] chalk it can get into your mouth as well (which is much more nasty, needless to say). I also found a few people with the opinion that their writing comes out better on a lower friction surface, which is the opposite of what I described above.

Concluding remarks

All in all, mathematicians’ love for chalk on blackboard should not be thought of as an act of conservatism (although mathematicians are rather conservative creatures in some ways; more in a later post). Rather, it is a product of the teaching challenges specific to the area, and common-sense responses to those.

P¹ is simply connected

This is a cute proof of the simple connectedness of \mathbb P^1 that I ran into. It does not use Riemann–Hurwitz or differentials, and instead relies on a purely geometric argument.

Lemma. Let k be an algebraically closed field. Then \mathbb P^1_k is simply connected.

Proof. Let f \colon C \to \mathbb P^1 be a finite étale Galois cover with Galois group G. We have to show that f is an isomorphism. The diagonal \Delta_{\mathbb P^1} \subseteq \mathbb P^1 \times \mathbb P^1 is ample, so the same goes for the pullback D = (f \times f)^* \Delta_{\mathbb P^1} to C \times C [Hart, Exc. III.5.7(d)]. In particular, D is connected [Hart, Cor. III.7.9].

But D \cong C \times_{\mathbb P^1} C is isomorphic to |G| copies of C because the action

    \begin{align*} G \times C &\to C \times_{\mathbb P^1} C\\ (g,c) &\mapsto (gc,c) \end{align*}

is an isomorphism. If D is connected, this forces |G| = 1, so f is an isomorphism. \qed

The proof actually shows that if X is a smooth projective variety such that \Delta_X is a set-theoretic complete intersection of ample divisors, then X is simply connected.

Example. For a smooth projective curve C of genus g \geq 1, the diagonal cannot be ample, as \pi_1(C) \neq 0. We already knew this by computing the self-intersection \Delta_C^2 = 2-2g \leq 0, but the argument above is more elementary.


[Hart] Hartshorne, Algebraic geometry. GTM 52, Springer, 1977.