The recent breakthrough of Conlon and Ferber has shown us that algebraic methods can be used in combination with probabilistic methods to improve bounds on multicolour diagonal Ramsey numbers. This was already shown for the off-diagonal Ramsey numbers by Mubayi and Verstraete last year, as discussed here. Sadly, for two colours we still don't have any improvement over the classic probabilistic bound $R(t) \geq 2^{t/2}$ proved by Erdős in 1947 (except for some constant factors in front of the exponential function). More concretely, we haven't been able to prove $R(t) \geq (\sqrt{2} + \epsilon)^t$, for any constant $\epsilon > 0$, since 1947, despite considerable effort by various mathematicians. Therefore, it might sound reasonable to think that $\lim_{t \to \infty} R(t)^{1/t}$ is equal to $\sqrt{2}$, but the exponential improvement over the (purely) probabilistic lower bounds on multicolour Ramsey numbers should cast some doubt on that.

In an end remark Conlon and Ferber mention that the lower bound of $R(t) \geq 2^{t/2}$ can be proved in a new way, using the arguments of their paper, where we have a mix of algebra and probability. This sounds like a very promising approach for further improvements as well. In this post I want to explore that, and attempt to explain why we will need some really new ideas to improve the lower bound. In the process, we will see some basic theory of bilinear forms, which is crucial to the arguments. For a more in-depth exploration of the topic, see these notes of Keith Conrad and Chapter 3 of the book by Simeon Ball.

Given a vector space $V$ over a field $F$, a **bilinear form** is a function $b : V \times V \to F$ such that for any fixed $u, v \in V$, both $b(u, \cdot)$ and $b(\cdot, v)$ are linear functions. This is an abstraction of the dot product, which is so crucial in understanding the geometry of the real space $\mathbb{R}^n$. Inspired by that, we define a vector $u$ to be orthogonal to a vector $v$ if $b(u, v) = 0$. There is a certain lack of symmetry in the definition here, namely, $b(u, v) = 0$ doesn't necessarily imply $b(v, u) = 0$, or in other words, we could have $u$ orthogonal to $v$ but $v$ non-orthogonal to $u$. To fix that we introduce the notion of **reflexivity** and call a bilinear form reflexive if $b(u, v) = 0$ implies $b(v, u) = 0$ for all $u, v \in V$. Once we have a reflexive bilinear form we can talk about orthogonality of two vectors. From now on, $b$ will denote a reflexive bilinear form.

While this orthogonality defined in terms of reflexive bilinear forms might seem like a very general notion, it ends up being restricted to only two cases. A fundamental result, attributed to John von Neumann, states that for any vector space $V$, a reflexive bilinear form on $V$ is of one of the following two types:

- **Symmetric**: $b(u, v) = b(v, u)$ for all $u, v \in V$.
- **Alternating**: $b(v, v) = 0$ for all $v \in V$.

For example, the dot product on $F^n$ defined by $b(x, y) = x_1y_1 + \cdots + x_ny_n$ is a symmetric bilinear form, whereas the function $b(x, y) = (x_1y_2 - x_2y_1) + (x_3y_4 - x_4y_3) + \cdots + (x_{n-1}y_n - x_ny_{n-1})$ (assuming $n$ to be even) is alternating. These examples are in fact examples of what we call **non-degenerate** bilinear forms, i.e., forms with respect to which no non-zero vector is orthogonal to every vector of the vector space.
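These properties are easy to confirm by brute force in a small case. Here is a quick sketch of my own (not from any of the papers) that checks symmetry, the alternating property, and non-degeneracy of both example forms over $\mathbb{F}_2^4$:

```python
from itertools import product

n = 4  # a small even dimension, so everything can be checked exhaustively
V = list(product([0, 1], repeat=n))  # all vectors of F_2^n

def dot(x, y):
    # the symmetric form x_1 y_1 + ... + x_n y_n over F_2
    return sum(a * b for a, b in zip(x, y)) % 2

def alt(x, y):
    # the alternating form (x_1 y_2 - x_2 y_1) + (x_3 y_4 - x_4 y_3) over F_2
    return sum(x[2*i] * y[2*i + 1] - x[2*i + 1] * y[2*i] for i in range(n // 2)) % 2

assert all(dot(u, v) == dot(v, u) for u in V for v in V)  # symmetric
assert all(alt(v, v) == 0 for v in V)                     # alternating
for b in (dot, alt):
    # non-degenerate: no non-zero vector is orthogonal to all of V
    assert not [u for u in V if any(u) and all(b(u, v) == 0 for v in V)]
print("both forms are non-degenerate")
```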

One interesting property of alternating forms is that they are skew-symmetric. Indeed, we have $0 = b(u + v, u + v) = b(u, u) + b(u, v) + b(v, u) + b(v, v) = b(u, v) + b(v, u)$ for any $u, v$, and thus $b(u, v) = -b(v, u)$. If the characteristic of the underlying field is $2$, then this shows that every alternating form is in fact symmetric. The following lemma gives us a relation in the other direction as well.

**Lemma 1**: If $b$ is a symmetric bilinear form on a vector space $V$ over a finite field of characteristic $2$, then $\dim W \geq \dim V - 1$, where $W = \{v \in V : b(v, v) = 0\}$, and the restriction of $b$ to $W$ is an alternating form.

For example, if we take the dot product over $V = \mathbb{F}_2^n$, then we can take $W = \{x \in V : x_1^2 + \cdots + x_n^2 = 0\}$, which is equal to the hyperplane $x_1 + \cdots + x_n = 0$, since $x^2 = x$ for any $x \in \mathbb{F}_2$. The form restricted to $W$ is alternating by the definition of $W$.
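As a sanity check in the smallest interesting case (again just an illustration of mine), one can verify over $\mathbb{F}_2^4$ that the vectors with $b(v, v) = 0$ are exactly the even-weight vectors, a hyperplane as Lemma 1 predicts:

```python
from itertools import product

n = 4
V = list(product([0, 1], repeat=n))

def dot(x, y):
    # the dot product over F_2
    return sum(a * b for a, b in zip(x, y)) % 2

W = [v for v in V if dot(v, v) == 0]   # vectors with b(v, v) = 0
H = [v for v in V if sum(v) % 2 == 0]  # the hyperplane x_1 + ... + x_n = 0
assert set(W) == set(H)
assert len(W) == 2 ** (n - 1)          # codimension 1, as Lemma 1 guarantees
print("W is a hyperplane of size", len(W))
```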

One of the basic notions associated to a bilinear form is that of an **isotropic subspace**, which shows up in the construction of Conlon and Ferber as well, as discussed in the previous post. An isotropic subspace is a subspace $S$ such that $b(u, v) = 0$ for any two, not necessarily distinct, $u, v$ in $S$ (such a subspace is usually called *totally isotropic*, but we'll ignore that distinction here). For an arbitrary subset $S$ of $V$, if we denote the set $\{v \in V : b(u, v) = 0 \text{ for all } u \in S\}$ by $S^\perp$, then $S$ being isotropic is the same as saying $S \subseteq S^\perp$. Note that $S^\perp$ is always a vector subspace. The following is another fundamental result whose corollary is what we used in the previous post.

**Lemma 2**: Let $b$ be a non-degenerate reflexive bilinear form on a finite dimensional vector space $V$. Then for any subspace $S$ of $V$, we have $\dim S + \dim S^\perp = \dim V$.

**Corollary 3**: An isotropic subspace with respect to a non-degenerate (reflexive) bilinear form on an $n$-dimensional vector space has dimension at most $n/2$.

The maximum dimension of an isotropic subspace is known as the **Witt index** of the bilinear form. It can be shown that every non-degenerate alternating form on an $n$-dimensional space has Witt index exactly equal to $n/2$ (see Theorem 3.8 in Simeon Ball's book). This also implies that there are no non-degenerate alternating forms over odd dimensional vector spaces.
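For a concrete check of Corollary 3 and of the Witt index claim, the following brute-force sketch (my own, not from the sources) finds the largest totally isotropic subspace for a standard symplectic form on $\mathbb{F}_2^4$:

```python
from itertools import combinations, product

n = 4
V = list(product([0, 1], repeat=n))

def b(x, y):
    # a standard symplectic (non-degenerate alternating) form on F_2^4
    return (x[0] * y[1] + x[1] * y[0] + x[2] * y[3] + x[3] * y[2]) % 2

def span(vecs):
    # the F_2-span of the given vectors
    S = {(0,) * n}
    for v in vecs:
        S |= {tuple((a + c) % 2 for a, c in zip(u, v)) for u in S}
    return S

witt = 0
nonzero = [v for v in V if any(v)]
for k in range(1, n + 1):
    for basis in combinations(nonzero, k):
        S = span(basis)
        if len(S) == 2 ** k and all(b(u, w) == 0 for u in S for w in S):
            witt = max(witt, k)
assert witt == n // 2  # the Witt index is n/2, so Corollary 3 is tight here
print("Witt index =", witt)
```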

Let us now look at the graph that plays the main role in the construction of Conlon and Ferber. Let $G$ be the orthogonality graph on the set of isotropic vectors in $\mathbb{F}_2^t$ with respect to the bilinear form $b(x, y) = x_1y_1 + \cdots + x_ty_t$, with $t$ even. Then as we saw earlier, the set of isotropic vectors forms a $(t-1)$-dimensional vector subspace $W$ on which $b$ is an alternating form. Moreover, this alternating form is degenerate because the all-ones vector $\mathbf{1}$ is orthogonal to every other vector in $W$, since $t$ is even. As there is no other isotropic vector which is orthogonal to every other isotropic vector, the vector space can be decomposed as $W = \langle \mathbf{1} \rangle \oplus U$ such that the form induced on the $(t-2)$-dimensional subspace $U$ is a non-degenerate alternating form. Therefore, we should be able to understand this graph starting from the non-degenerate alternating form. Let's do that and start over.

Let $n$ be an even integer, and $b$ a non-degenerate alternating form on $V = \mathbb{F}_2^n$. Since the form is alternating, every vector of $V$ is isotropic. Let $G$ be the orthogonality graph on these $2^n$ vectors, that is, $u \sim v$ if $u \neq v$ and $b(u, v) = 0$. We can notice the following property.

**Lemma 4:** *There are no independent sets of size $n + 2$ in $G$.*

Proof. Let $\{v_1, \ldots, v_k\}$ be an independent set with $k = n + 2$, and say $\lambda_1 v_1 + \cdots + \lambda_k v_k = 0$. Then by applying $b(\cdot, v_j)$ on this sum for each $j$ we get that $\lambda = (\lambda_1, \ldots, \lambda_k)$ is in the null space of the Gram matrix $A = (b(v_i, v_j)) = J - I$, where $J$ is the all-ones matrix and $I$ is the identity. But $J - I$ has eigenvalues $k - 1$ and $-1$, which are both non-zero in $\mathbb{F}_2$ (since $k$ is even), and thus $A$ is non-singular. Therefore, $\lambda_i = 0$ for all $i$, and these vectors form a linearly independent set in the vector space. This is a contradiction since the dimension of the vector space is strictly less than $k$.
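Lemma 4 is easy to confirm by brute force in the smallest case $n = 4$ (a sketch of my own): no $6$ vectors of $\mathbb{F}_2^4$ are pairwise non-orthogonal, while $n + 1 = 5$ such vectors do exist, so the lemma is essentially tight.

```python
from itertools import combinations, product

n = 4
V = list(product([0, 1], repeat=n))

def b(x, y):
    # a standard symplectic (non-degenerate alternating) form on F_2^4
    return (x[0] * y[1] + x[1] * y[0] + x[2] * y[3] + x[3] * y[2]) % 2

def independent_in_G(S):
    # an independent set of the orthogonality graph G: no pair is orthogonal
    return all(b(u, v) == 1 for u, v in combinations(S, 2))

no_big = not any(independent_in_G(S) for S in combinations(V, n + 2))  # Lemma 4
has_n_plus_1 = any(independent_in_G(S) for S in combinations(V, n + 1))
assert no_big and has_n_plus_1
print("independence number of G is", n + 1)
```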

While there can't be any large independent sets, $G$ certainly has really large cliques. Any (totally) isotropic subspace of dimension $n/2$ gives rise to a clique of size $2^{n/2}$, and in fact these are the largest possible cliques. So, trivially, $G$ does not give any good lower bound on the Ramsey number $R(n + 2)$. Still, we can apply the basic idea of Conlon and Ferber (which was also the main idea of Mubayi and Verstraete) and take a random induced subgraph of $G$, hoping that it can destroy all large cliques while giving us a graph of reasonable size. (Note that any induced subgraph will also be free of any independent sets of size $n + 2$.) For that, let's first count the total number $\#K_t$ of cliques of size $t$ in $G$.
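In the same toy case $n = 4$ one can confirm the clique number claim by brute force (again my own sketch): the span of two orthogonal basis vectors is a clique of size $2^{n/2} = 4$, and no clique of size $5$ exists.

```python
from itertools import combinations, product

n = 4
V = list(product([0, 1], repeat=n))

def b(x, y):
    # a standard symplectic (non-degenerate alternating) form on F_2^4
    return (x[0] * y[1] + x[1] * y[0] + x[2] * y[3] + x[3] * y[2]) % 2

def clique_in_G(S):
    # a clique of the orthogonality graph G: all pairs orthogonal
    return all(b(u, v) == 0 for u, v in combinations(S, 2))

iso = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 0, 1, 0), (1, 0, 1, 0)]  # span of e_1, e_3
assert clique_in_G(iso)  # a totally isotropic subspace gives a 2^{n/2}-clique
no_bigger = not any(clique_in_G(S) for S in combinations(V, 2 ** (n // 2) + 1))
assert no_bigger
print("clique number of G is", 2 ** (n // 2))
```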

Every clique in $G$ spans an isotropic subspace of $V$, which has a certain rank $r$. The number of cliques of size $t$ and rank $r$ can be bounded from above by $\left(\prod_{i=1}^{r} 2^{n-i+1}\right) \cdot 2^{r(t-r)}$, where the first $r$ terms correspond to choosing $r$ linearly independent members of the clique one at a time (the $i$-th vector must lie in the orthogonal complement of the span of the previous $i - 1$), and then we pick the remaining $t - r$ elements from the $2^r$ elements of this $r$-dimensional subspace. By simplifying, we have

$$2^{nr - \binom{r}{2} + r(t - r)}.$$

This number is increasing in $r$ from $r = 1$ until $r \approx (n + t)/3$, and decreasing after that. But, we know that $r \leq \min(t, n/2)$ from **Corollary 3**, and hence the bound is maximized at $r = t$ for $t \leq n/2$ and at $r = n/2$ for $t > n/2$, i.e., $\#K_t \leq t \cdot 2^{nt - \binom{t}{2}}$ for $t \leq n/2$, and $\#K_t \leq 2^{n^2/8 + nt/2 + O(n)}$ for $t > n/2$. While we are interested in $t = n + 2$, we will soon need the estimates for smaller cliques as well.
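The location of the maximum is easy to verify numerically (a quick sketch of mine):

```python
from math import comb

def log2_bound(n, t, r):
    # log_2 of the bound 2^{nr - C(r,2) + r(t - r)} on cliques of size t and rank r
    return n * r - comb(r, 2) + r * (t - r)

for n in range(4, 60, 2):
    for t in range(2, n + 3):
        r_range = range(1, min(t, n // 2) + 1)
        best = max(r_range, key=lambda r: log2_bound(n, t, r))
        assert best == min(t, n // 2)  # maximized at r = t or r = n/2
print("bound is maximized at r = min(t, n/2)")
```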

Repeating the probabilistic argument of Conlon and Ferber, we see that if we sample each vertex independently with probability $p$, then there exists a subset of vertices of size $p2^n/2$ inducing no clique of size $t = n + 2$ as long as we can ensure that $p^t \#K_t \leq p2^n/4$. Since $\#K_{n+2} \leq 2^{5n^2/8 + O(n)}$, we can take $p = 2^{-5n/8 + O(1)}$, which then gives us the bound $R(n + 2) \geq 2^{3n/8 + O(1)}$ on the Ramsey number. This is worse than the purely probabilistic bound of $2^{t/2}$. Can we do anything better?
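To double-check the exponent arithmetic in the diagonal case, here is a leading-order numerical sketch of mine:

```python
# deletion argument at t = n + 2: #K_t <= 2^{n^2/8 + nt/2 + O(n)} = 2^{5n^2/8 + O(n)},
# and p is chosen so that p^{t-1} * #K_t is about 2^n
for n in [10**3, 10**6, 10**9]:
    t = n + 2
    log_K = n**2 / 8 + n * t / 2          # leading terms of log2(#K_t)
    log_p = (n - log_K) / (t - 1)         # from p^{t-1} * #K_t <= 2^n
    log_m = n + log_p                     # log2 of p * 2^n, the surviving vertices
    assert abs(log_p / n + 5 / 8) < 1e-2  # p is about 2^{-5n/8}
    assert abs(log_m / n - 3 / 8) < 1e-2  # about 2^{3n/8} vertices survive
print("diagonal case gives only about 2^{3n/8} vertices")
```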

Here's an idea that is extracted from the paper of Conlon and Ferber. *We can look at cliques, and independent sets, of size smaller than $n$ in this graph. Perhaps for some $s < n$, we can get a good lower bound on $R(s)$ by taking a random induced subgraph of $G$ that does not contain any cliques or independent sets of size $s$.*

For that, let's count the number $\#I_s$ of independent sets of size $s$ in $G$. For $s$ even, the proof of Lemma 4 tells us that any such set will also be a linearly independent set of vectors (for $s$ odd, it shows that the only possible linear dependency is that all the vectors sum to zero, so any $s - 1$ of them are linearly independent). Picking the vectors one at a time, the $i$-th vector must satisfy the $i - 1$ independent affine conditions $b(v_j, v_i) = 1$ for $j < i$, which leaves at most $2^{n - i + 1}$ choices. Therefore, $\#I_s$ is at most $2^{ns - \binom{s}{2}}$ or $2^{ns - \binom{s-1}{2}}$, depending on whether $s$ is even or odd. In any case, we have $\#I_s \leq 2^{ns - \binom{s-1}{2}}$.
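In a small case we can compare this bound with the truth. The sketch below (mine, for $\mathbb{F}_2^4$ with a standard symplectic form) counts ordered tuples of pairwise non-orthogonal vectors and checks them against $2^{ns - \binom{s-1}{2}}$:

```python
from itertools import combinations, product

n = 4
V = list(product([0, 1], repeat=n))

def b(x, y):
    # a standard symplectic (non-degenerate alternating) form on F_2^4
    return (x[0] * y[1] + x[1] * y[0] + x[2] * y[3] + x[3] * y[2]) % 2

counts = {}
for s in [2, 3, 4]:
    # ordered s-tuples of pairwise non-orthogonal vectors
    counts[s] = sum(
        all(b(u, v) == 1 for u, v in combinations(T, 2))
        for T in product(V, repeat=s)
    )
    assert counts[s] <= 2 ** (n * s - (s - 1) * (s - 2) // 2)
print(counts)
```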

Recall that we had $\#K_s \leq s \cdot 2^{ns - \binom{s}{2}}$ for $s \leq n/2$, and $\#K_s \leq 2^{n^2/8 + ns/2 + O(n)}$ for $s > n/2$. Therefore, we see that asymptotically $\#K_s$ is always at least as big as $\#I_s$. Thus, the probabilistic argument from before gives us a subset of vertices on which the induced graph has no cliques of size $s$ and no independent sets of size $s$ as long as $p^s(\#K_s + \#I_s) \leq p2^n/4$, and since $\#K_s + \#I_s \leq 2^{O(s)} \#K_s$ (for $n$ large enough), it suffices to pick a $p$ for which $p^{s-1} \#K_s \leq 2^{n - O(s)}$. We can achieve this by taking $p = 2^{-c}$, where $c = n - s/2 + O(1)$ for $s \leq n/2$ and $c = n/2 + n^2/8s + O(1)$ for $s > n/2$.
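The trade-off is easiest to see by plugging in the leading-order exponents (a numerical sketch of mine): the surviving graph has about $2^{n - c}$ vertices, and the exponent $n - c$ never exceeds $s/2$.

```python
def log_m(n, s):
    # log2 of p * 2^n, using the leading terms of the clique-count bounds
    if s <= n / 2:
        log_K = n * s - s * (s - 1) / 2  # dominant rank r = s
    else:
        log_K = n**2 / 8 + n * s / 2     # dominant rank r = n/2
    log_p = (n - log_K) / (s - 1)        # from p^{s-1} * #K_s <= 2^n
    return n + log_p

n = 10**6
for s in [n // 10, n // 4, n // 2]:          # s <= n/2: exponent s/2 (Erdos regime)
    assert abs(log_m(n, s) / s - 0.5) < 1e-2
for s in [int(0.6 * n), int(0.75 * n), n]:   # s > n/2: strictly below s/2
    assert log_m(n, s) / s < 0.5 - 1e-3
print("exponent s/2 is attained only for s <= n/2")
```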

Sadly, both of these values of $c$ satisfy $c \geq n - s/2 - O(1)$: in the second case the difference $c - (n - s/2)$ equals $(2s - n)^2/8s + O(1)$, which vanishes only at $s = n/2$. Therefore, the number of vertices $p2^n$ is at most $2^{s/2 + O(1)}$, and we are always worse than the basic probabilistic bound except for the case of $s \leq n/2$, where we can show $R(s) \geq 2^{s/2 - O(1)}$. This asymptotically matches the probabilistic bound but we arrive at it via a different (and much more complicated) construction. For improvements, using this easy method, it looks like we'll have to consider some other graphs. This new approach definitely gives me hope that one can find graphs with a better control on the total number of cliques and independent sets, which can then be used to give an exponential improvement in the lower bound on two-colour diagonal Ramsey numbers.