Introduction to polynomial method

(The following is a blog-friendly version of Chapter 7 of my PhD thesis, which is an introduction to the so-called polynomial method.)

The polynomial method is an umbrella term for different techniques involving polynomials which have been used to solve several problems in finite geometry, discrete geometry, extremal combinatorics and additive number theory. One of the general philosophies behind this method is to associate a set of polynomials (possibly a single polynomial) to the combinatorial object that we want to study, and then use properties of these polynomials to describe the object. For a concrete example, let us go through Koornwinder’s proof of the absolute bound on equiangular lines.

A set of lines in the Euclidean space \mathbb{R}^n through the origin (or any other fixed point) is called equiangular if the angle between every pair of distinct lines in the set is the same. For example, joining the opposite vertices of a regular hexagon in the plane \mathbb{R}^2, we get three equiangular lines.

At most how many equiangular lines can there be in \mathbb{R}^n?

This question was addressed by Gerzon (as reported by Lemmens and Seidel), who proved that there are at most \binom{n+1}{2} equiangular lines in \mathbb{R}^n. Thus, in particular, the regular hexagon example gives us the maximum possible number of equiangular lines in \mathbb{R}^2. But in general this bound is not sharp. In 1976, Koornwinder gave an “almost trivial proof” of Gerzon’s bound by associating to a set of equiangular lines in \mathbb{R}^n a linearly independent set of polynomials lying in an {n+1 \choose 2}-dimensional vector space. This correspondence is as follows. Let L_1, \dots , L_k be k equiangular lines in \mathbb{R}^n and let u_1, \dots, u_k be unit vectors on these lines, chosen arbitrarily. Then we have \langle u_i,u_j \rangle^2 = \alpha^2 for all i \neq j, where \alpha is a fixed real number in the interval [0,1) (the absolute value of the cosine of the common angle). For i \in \{1, \dots, k\} define f_i \in \mathbb{R}[t_1, \dots, t_n] by f_i(t_1, \dots, t_n) = \langle u_i, (t_1, \dots, t_n) \rangle^2 - \alpha^2 (t_1^2 + \dots + t_n^2). Now since f_i(u_j) = 0 for all i \neq j and f_i(u_i) = 1 -\alpha^2 \neq 0, it is easy to see that f_1, \dots, f_k are linearly independent polynomials in the vector space \mathbb{R}[t_1, \dots, t_n]: if \sum_i c_i f_i = 0, then evaluating at u_j gives c_j (1 - \alpha^2) = 0, and hence c_j = 0 for every j. As these polynomials lie in the {n + 1 \choose 2}-dimensional subspace spanned by the monomial set \{t_i t_j : 1 \leq i \leq j \leq n\}, we get k \leq \binom{n + 1}{2}.
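To make the argument concrete, here is a quick numerical check (my own sketch in Python, not part of Koornwinder’s proof) for the three hexagonal lines in \mathbb{R}^2: the polynomials f_1, f_2, f_3, written in the monomial basis (t_1^2, t_1 t_2, t_2^2), give an invertible 3 \times 3 coefficient matrix, certifying linear independence.

```python
from math import cos, sin, pi, isclose

# Unit vectors along the three equiangular lines obtained from a regular
# hexagon in R^2: directions at angles 0, 60 and 120 degrees.
us = [(cos(k * pi / 3), sin(k * pi / 3)) for k in range(3)]

# |<u_i, u_j>| = 1/2 for i != j, so alpha = 1/2.
alpha = 0.5

def f(u, t):
    """The polynomial f_i (for unit vector u) evaluated at t = (t1, t2)."""
    return (u[0] * t[0] + u[1] * t[1]) ** 2 - alpha ** 2 * (t[0] ** 2 + t[1] ** 2)

# f_i(u_j) = 0 for i != j, and f_i(u_i) = 1 - alpha^2 = 3/4.
for i, ui in enumerate(us):
    for j, uj in enumerate(us):
        expected = 1 - alpha ** 2 if i == j else 0.0
        assert isclose(f(ui, uj), expected, abs_tol=1e-12)

# Each f_i is a quadratic form; its coefficients in the monomial basis
# (t1^2, t1*t2, t2^2) of the 3 = C(3, 2)-dimensional space:
rows = [(u[0] ** 2 - alpha ** 2, 2 * u[0] * u[1], u[1] ** 2 - alpha ** 2) for u in us]

# A nonzero 3x3 determinant certifies that f_1, f_2, f_3 are linearly
# independent, matching the bound k <= C(n + 1, 2) = 3 for n = 2.
det = (rows[0][0] * (rows[1][1] * rows[2][2] - rows[1][2] * rows[2][1])
       - rows[0][1] * (rows[1][0] * rows[2][2] - rows[1][2] * rows[2][0])
       + rows[0][2] * (rows[1][0] * rows[2][1] - rows[1][1] * rows[2][0]))
print(abs(det) > 1e-9)  # True: the three polynomials are independent
```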

The argument above can also be seen as an example of the linear algebra method in combinatorics, which has been discussed in much detail in the influential (unfinished) manuscript of Babai and Frankl, and more recently in the beautiful book by Matoušek.

Another important way of using polynomials is to capture the combinatorial object via zeros of polynomials (or, in general, algebraic varieties). One of the earliest examples here is the determination of the minimum size of an affine blocking set by Jamison in 1977. The problem is to find the minimum number of points required to “block” every hyperplane of the affine space \mathbb{F}_q^n. Clearly n(q - 1) + 1 points suffice (take all points that lie on one of the n coordinate axes), but can we do better? Jamison proved that we cannot, and his polynomial method proof can be sketched as follows: (a) first consider the dual problem, which is equivalent to finding the minimum number of hyperplanes required to cover all points of \mathbb{F}_q^n except the origin; (b) then identify \mathbb{F}_q^n with the finite field \mathbb{F}_{q^n}; (c) finally, associate to each hyperplane the minimal polynomial over \mathbb{F}_{q^n} that vanishes on it, and show (using the theory of linearized polynomials) that if the number of hyperplanes is less than n(q - 1), then the polynomial t^{q^n - 1} - 1 = \prod_{\alpha \in \mathbb{F}_{q^n}^\times} (t - \alpha) does not divide the product of the polynomials corresponding to the hyperplanes, so the hyperplanes cannot cover all the nonzero points. This technique of using polynomials over finite fields to solve finite geometrical problems came to be known as the “Jamison method” and it saw several applications (see for example this and this).
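For intuition, the smallest interesting case can be checked exhaustively. The sketch below (my own illustration, not Jamison’s argument) verifies in \mathbb{F}_3^2 that the n(q-1)+1 = 5 points on the coordinate axes block every line, while no set of 4 points does.

```python
from itertools import combinations, product

q, n = 3, 2
points = list(product(range(q), repeat=n))

# The affine hyperplanes (here: lines) of F_3^2 are the solution sets of
# a1*x1 + a2*x2 = c with (a1, a2) != (0, 0).
lines = set()
for a in product(range(q), repeat=n):
    if a == (0,) * n:
        continue
    for c in range(q):
        lines.add(frozenset(p for p in points
                            if sum(ai * pi for ai, pi in zip(a, p)) % q == c))

def blocks_all(pts):
    """True if every line of the affine plane contains a point of pts."""
    return all(line & pts for line in lines)

# The n(q-1)+1 = 5 points lying on the two coordinate axes block every line.
axes = frozenset(p for p in points if p.count(0) >= n - 1)
assert len(axes) == n * (q - 1) + 1
print(blocks_all(axes))  # True

# Exhaustive search: no set of n(q-1) = 4 points blocks every line,
# confirming Jamison's lower bound in this small case.
print(any(blocks_all(frozenset(s)) for s in combinations(points, n * (q - 1))))  # False
```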

Brouwer and Schrijver gave another proof of Jamison’s theorem: they also started by considering the dual problem of hyperplane coverings, but then proceeded by a much simpler argument involving multivariate polynomials over finite fields. Their approach was in fact quite similar to Chevalley’s proof of the famous Chevalley-Warning theorem using reduced polynomials. We will see in Chapters 8 and 9 how both of these results are linked together by the notion of grid reduction, and in particular by the lemma that a polynomial in \mathbb{F}_q[t_1, \dots, t_n] which vanishes on all points of \mathbb{F}_q^n except one must have degree at least n(q-1). The Chevalley-Warning theorem, which is a statement on the zero set of a collection of polynomials over a finite field, has also found several applications in combinatorics. Alon, Friedland and Kalai used it to prove that every 4-regular graph plus an edge contains a 3-regular subgraph. Later, Bailey and Richter used the Chevalley-Warning theorem to give a new proof of the famous Erdős-Ginzburg-Ziv theorem from additive number theory. Recently, Clark, Forrow and Schmitt have shown that the Chevalley-Warning theorem and its combinatorial applications can be derived, and further generalized, using a result of Alon and Füredi from 1993. We will devote Chapter 9 to this Alon-Füredi theorem, where we generalize the result and give a new simple proof. We also show how this result is linked to several other topics like coding theory, finite geometry and polynomial identity testing.
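The lemma on grid reduction can be verified directly in the smallest case q = 2, n = 2 (a brute-force sketch of mine, purely for illustration): every reduced polynomial over \mathbb{F}_2 — degree at most q - 1 = 1 in each variable — that vanishes on exactly three of the four points of \mathbb{F}_2^2 has degree exactly n(q-1) = 2.

```python
from itertools import product

q, n = 2, 2
points = list(product(range(q), repeat=n))

# A reduced polynomial over F_2 in two variables has degree <= q - 1 = 1 in
# each variable: f = c0 + c1*t1 + c2*t2 + c3*t1*t2.
def evaluate(coeffs, p):
    c0, c1, c2, c3 = coeffs
    t1, t2 = p
    return (c0 + c1 * t1 + c2 * t2 + c3 * t1 * t2) % q

def degree(coeffs):
    monomial_degrees = [0, 1, 1, 2]  # degrees of 1, t1, t2, t1*t2
    degs = [d for c, d in zip(coeffs, monomial_degrees) if c]
    return max(degs) if degs else -1  # -1 for the zero polynomial

# Collect the degree of every reduced polynomial that vanishes on all of
# F_2^2 except a single point.
found = []
for coeffs in product(range(q), repeat=4):
    values = [evaluate(coeffs, p) for p in points]
    if values.count(0) == len(points) - 1:
        found.append(degree(coeffs))
print(found)  # [2, 2, 2, 2]: one polynomial per excluded point, all of degree n(q-1)
```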

An important tool in the polynomial method involving zeros of polynomials is a result called the Combinatorial Nullstellensatz. This powerful tool and its generalizations have been used extensively to solve several problems in additive number theory (see Chapter 9 of this for a survey), and more recently in some other areas as well. In 2014, Clark revisited Alon’s Combinatorial Nullstellensatz and showed how its proof can be seen as a “restricted variable analogue” of Chevalley’s proof of the Chevalley-Warning theorem. He further generalized the result to commutative rings (adding certain extra conditions) and made it clear how many of the ideas involved are related to the notion of grid reduction. Ball and Serra proved a related result which they called the Punctured Combinatorial Nullstellensatz. This result was obtained using Alon’s Combinatorial Nullstellensatz, and it has several combinatorial applications of its own. We will give another proof of it in Chapter 8 by directly using the notion of grid reduction, and then use it to prove a new generalization of the Chevalley-Warning theorem, which we call the Punctured Chevalley-Warning theorem. In fact, this result generalizes Brink’s restricted variable Chevalley-Warning theorem.
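To recall the statement: the Combinatorial Nullstellensatz says that if f \in \mathbb{F}[t_1, \dots, t_n] has degree d_1 + \dots + d_n, the coefficient of t_1^{d_1} \cdots t_n^{d_n} in f is nonzero, and A_i \subseteq \mathbb{F} with |A_i| > d_i, then f does not vanish on all of A_1 \times \dots \times A_n. The sketch below (my own illustration, not from Alon’s paper) checks this exhaustively over \mathbb{F}_3 for d_1 = d_2 = 1.

```python
from itertools import combinations, product

p = 3  # work over F_3

# Monomials of degree <= 2 in t1, t2, listed with their exponent pairs.
monomials = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def evaluate(coeffs, t1, t2):
    return sum(c * pow(t1, e1, p) * pow(t2, e2, p)
               for c, (e1, e2) in zip(coeffs, monomials)) % p

# If the coefficient of t1*t2 is nonzero, then deg f = 2 = d1 + d2 with
# d1 = d2 = 1, so the Nullstellensatz predicts that f cannot vanish on
# any grid A1 x A2 with |A1|, |A2| >= 2. Check every such f and grid.
grids = list(combinations(range(p), 2))  # all 2-element subsets of F_3
ok = True
for coeffs in product(range(p), repeat=len(monomials)):
    if coeffs[monomials.index((1, 1))] == 0:
        continue  # the coefficient of t1*t2 must be nonzero
    for A1, A2 in product(grids, grids):
        if all(evaluate(coeffs, a1, a2) == 0 for a1, a2 in product(A1, A2)):
            ok = False  # would be a counterexample to the theorem
print(ok)  # True: no counterexample exists
```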

In recent years, there has been a lot of interest in the polynomial method as a result of Dvir’s two-page proof of the finite field Kakeya problem, which involved an easy polynomial argument, and the developments which followed. Many experts had worked on the finite field Kakeya problem using different techniques involving algebraic geometry and Fourier analysis, but made only partial progress towards a solution. It was thus a great surprise to the mathematical community that such an easy polynomial argument could resolve this famous open problem. Even more recently, the notorious cap set problem has been solved using the polynomial method by Ellenberg and Gijswijt. Ideas originating from Dvir’s work have led to several important advancements in mathematics, including the big breakthrough in the famous Erdős distinct distances problem by Guth and Katz. It is interesting to note that Dvir’s polynomial technique is a bit different from the techniques we have mentioned so far in this introduction, as it involves polynomial interpolation instead of constructing explicit polynomials. For more details on this, we recommend the surveys by Dvir and Tao, and the recent book by Guth. Another example where a combinatorial problem is solved using polynomial interpolation, combined with a geometric argument, is Segre’s classical theorem on ovals in finite projective planes. Interestingly, Segre’s so-called “lemma of tangents” was used in combination with the Jamison/Brouwer-Schrijver bound on affine blocking sets by Blokhuis and Korchmáros to solve the Kakeya problem in 2 dimensions. Segre’s result (and his lemma of tangents) has been generalized further to higher-dimensional finite projective spaces by Ball. For more on the polynomial method in finite geometry, see the survey by Ball.
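For the record, a Kakeya set in \mathbb{F}_q^n is a set containing a full line in every direction, and Dvir’s interpolation argument shows that any such set has at least \binom{q+n-1}{n} points. As a toy illustration (mine, not from Dvir’s paper), one can brute-force the smallest Kakeya set in \mathbb{F}_3^2 and compare it with this bound:

```python
from itertools import product
from math import comb

q, n = 3, 2

# The q + 1 = 4 directions in F_3^2: slopes 0, 1, 2 and the vertical one.
directions = [(1, s) for s in range(q)] + [(0, 1)]

def line(base, direction):
    bx, by = base
    dx, dy = direction
    return frozenset(((bx + t * dx) % q, (by + t * dy) % q) for t in range(q))

# For each direction, the q parallel lines in that direction.
parallel_classes = []
for d in directions:
    cls = {line(p, d) for p in product(range(q), repeat=n)}
    parallel_classes.append(list(cls))

# A minimal Kakeya set is a union of one line per direction; brute-force
# all q^(q+1) = 81 such unions and record the smallest.
smallest = min(len(frozenset.union(*choice))
               for choice in product(*parallel_classes))

# Dvir's bound: every Kakeya set in F_q^n has at least C(q + n - 1, n) points.
print(smallest, comb(q + n - 1, n))  # 7 6
```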

About Anurag Bishnoi

A mathematician working at TU Delft. I am broadly interested in combinatorics and finite geometry.
This entry was posted in Combinatorics, Incidence Geometry, Polynomial Method.

2 Responses to Introduction to polynomial method

  1. Maybe it might be better if you also mentioned the problem instead of just giving its name. I say this because I’m an undergraduate student and had to hop through quite a large number of links to figure stuff out, but for you it might have been easy.

    It’s just a suggestion. I really like your blog and your writing. Cheers.

  2. Thanks for your comment. Can you tell me for which problem it was particularly difficult for you to find the statement? I can then fix that.
