Ramsey numbers from pseudorandom graphs

One of the foundational results in modern combinatorics is Ramsey's theorem, which states that for all positive integers s, t there exists a constant n_0 such that for every n \geq n_0, every red/blue colouring of the edges of the complete graph K_n contains a red copy of K_s or a blue copy of K_t. The smallest n_0 for which the statement holds is known as the Ramsey number R(s, t); that is, the statement above is true for all n \geq R(s, t), and there exists a colouring of the edges of the complete graph on R(s, t) - 1 vertices that avoids both a red K_s and a blue K_t. We know the exact value of R(s, t) only for a few small values of s and t. A famous example is R(3, 3) = 6, while despite decades of research we only know that 43 \leq R(5, 5) \leq 48.
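
To make the small cases concrete, here is a quick brute-force check of R(3, 3) = 6; this is a toy script of my own (the function names are mine) that exhausts all colourings of K_5 and K_6.

```python
from itertools import combinations, product

def has_mono_triangle(colouring, n):
    """Does some triangle of K_n get all three of its edges the same colour?"""
    return any(len({colouring[frozenset(e)] for e in combinations(tri, 2)}) == 1
               for tri in combinations(range(n), 3))

def every_colouring_has_mono_triangle(n):
    """True iff every red/blue colouring of the edges of K_n has a monochromatic triangle."""
    edges = [frozenset(e) for e in combinations(range(n), 2)]
    return all(has_mono_triangle(dict(zip(edges, colours)), n)
               for colours in product('rb', repeat=len(edges)))

print(every_colouring_has_mono_triangle(5))  # False: colour a 5-cycle red and its complement blue
print(every_colouring_has_mono_triangle(6))  # True, so R(3, 3) = 6
```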

Determining Ramsey numbers for larger values of s and t has been wide open since the 1940s. Determining the asymptotics of R(t, t) as t \rightarrow \infty, and of R(s, t) with s fixed and t \rightarrow \infty, has become one of the central problems of combinatorics. An easy upper bound follows from a nice inductive proof due to Erdős and Szekeres (1935), which shows that R(s, t) \leq \binom{s + t - 2}{s - 1}. In particular, this implies R(t, t) = O(4^t/\sqrt{t}) and R(s, t) = O(t^{s - 1}) (with the constant depending on s). The best upper bounds that we know are not far from these. David Conlon proved in 2009 that there exists a constant c for which

R(t, t) \leq \frac{4^t}{t^{c \log t/ \log \log t}} .
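
To get a feel for how tight the Erdős–Szekeres bound is in small cases, here is a quick comparison with the known values R(3,3) = 6, R(4,4) = 18 and the bound R(5,5) \leq 48 (a throwaway script of mine, just arithmetic with math.comb).

```python
from math import comb

# Erdos-Szekeres: R(s, t) <= C(s + t - 2, s - 1)
for s, t, known in [(3, 3, '= 6'), (4, 4, '= 18'), (5, 5, '<= 48')]:
    print(f'R({s},{t}) {known},  Erdos-Szekeres bound: C({s + t - 2},{s - 1}) = {comb(s + t - 2, s - 1)}')
```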

Ajtai, Komlós and Szemerédi proved in 1980 that for every fixed s there exists a constant c_s such that

R(s, t) \leq c_s \frac{t^{s - 1}}{(\log t)^{s - 2}} .

For lower bounds, Erdős proved using a probabilistic argument that

R(t, t) \geq (1 - o(1)) \frac{t}{\sqrt{2} e} \sqrt{2}^t

and the only improvement (since 1947) in that lower bound has been by a factor of 2.
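
The argument behind this bound is a union bound: if \binom{n}{t} 2^{1 - \binom{t}{2}} < 1, then a uniformly random colouring of the edges of K_n has no monochromatic K_t with positive probability, so R(t, t) > n. As a small illustration (a throwaway script of mine; the function name is made up), here is the threshold this gives for a few small values of t.

```python
from math import comb

def erdos_threshold(t):
    """Largest n with C(n, t) * 2^(1 - C(t, 2)) < 1, so that R(t, t) > n."""
    n = t
    while comb(n + 1, t) * 2.0 ** (1 - comb(t, 2)) < 1:
        n += 1
    return n

for t in (4, 5, 10, 15, 20):
    print(f'R({t},{t}) > {erdos_threshold(t)}')
```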

In the case of R(s, t) with s fixed, we have the following lower bound, due to Bohman and Keevash from 2010, which is again proved probabilistically, by carefully analysing the natural random K_s-free process:

R(s, t) \geq c'_s \frac{t^{(s + 1)/2}}{(\log t)^{(s + 1)/2 - 1/(s - 2)}}.

Ignoring the logarithmic factors in the denominators, we have

c'_s t^{(s + 1)/2 - o(1)} \leq R(s, t) \leq c_s t^{s - 1 - o(1)} .

Any improvement in the powers of t here would be ground-breaking. What I want to talk about in this post is recent work of Dhruv Mubayi and Jacques Verstraete that offers a possible new way of improving the lower bound. Briefly, they show that if the main conjecture on pseudorandom graphs that I mentioned in an earlier post is true, then R(s, t) \geq c''_s t^{s - 1 - o(1)} for some constant c''_s. Therefore, the existence of optimal K_s-free pseudorandom graphs would give us the correct power of t in R(s, t).

Here is how their proof goes. First, observe that to show R(s, t) \geq n + 1 it suffices to construct an n-vertex graph G that contains no K_s as a subgraph and has independence number less than t: the edges of G form one colour class and the non-edges form the other. The main result that they prove is as follows:
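
As a concrete (and classical) instance of this reduction, which is my own addition here: the Paley graph on 17 vertices is K_4-free and has independence number 3, so it witnesses R(4, 4) \geq 18, which happens to be tight. A few lines of brute force confirm both properties.

```python
from itertools import combinations

# Paley graph on Z_17: x ~ y iff x - y is a non-zero square mod 17
squares = {(x * x) % 17 for x in range(1, 17)}

def adjacent(x, y):
    return (x - y) % 17 in squares

def has_clique(k, in_complement=False):
    """Brute force: does the graph (or its complement) contain a clique of size k?"""
    return any(all(adjacent(u, v) != in_complement for u, v in combinations(S, 2))
               for S in combinations(range(17), k))

print(has_clique(4))                      # False: the graph is K_4-free
print(has_clique(4, in_complement=True))  # False: no independent set of size 4
```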

Theorem 1. If there exists a K_s-free (n, d, \lambda)-graph and n is large enough, then for t = \lceil 2n \log^2 n/d \rceil we have

R(s, t) \geq c_s \frac{n}{\lambda} \log^2 n

for some constant c_s.

The proof of Theorem 1 is not too hard if you are comfortable with the probabilistic method and you know that you should use Theorem 2.1 from Alon–Rödl. It can be summarised as: “pick each vertex independently at random with probability \log^2 n/(2e^2 \lambda), and in the induced subgraph remove one vertex from each independent set of size t”.
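
Just to illustrate that one-line summary (and not the actual proof, which needs the Alon–Rödl count of independent sets to control how many vertices the second step deletes), here is a toy, brute-force rendering of the sample-and-delete step; the function name and the adjacency-dictionary input format are my own choices, and it is only feasible for very small graphs.

```python
import random
from itertools import combinations

def sample_and_delete(adj, p, t, seed=0):
    """Keep each vertex independently with probability p, then repeatedly delete
    one vertex from any surviving independent set of size t.  The search over
    t-subsets is brute force, so this is only usable on very small graphs."""
    rng = random.Random(seed)
    kept = {v for v in adj if rng.random() < p}
    while True:
        bad = next((S for S in combinations(sorted(kept), t)
                    if all(u not in adj[v] for u, v in combinations(S, 2))), None)
        if bad is None:
            return kept          # the induced subgraph now has independence number < t
        kept.discard(bad[0])     # destroy this independent set
```

Note that the expected number of sampled vertices is p \cdot n = \frac{n \log^2 n}{2e^2 \lambda}, which up to the constant is exactly the bound in Theorem 1; roughly speaking, the pseudorandomness is what guarantees that the deletions in the second step only remove a negligible fraction of them.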

If the conjecture regarding K_s-free pseudorandom graphs is true, then for every integer s \geq 3 there exists an infinite sequence of K_s-free (n, d, \lambda)-graphs with d = \Omega(n^{1-1/(2s - 3)}) and \lambda = O(\sqrt{d}). Consider such a graph G with n large enough. From t = \lceil 2n \log^2 n/d \rceil and n = O(d^{(2s - 3)/(2s - 4)}), and using \log n = \Theta(\log t), we get d = \Omega(t^{2s - 4}/(\log t)^{2(2s - 4)}). Since \lambda = O(\sqrt{d}), Theorem 1 gives R(s, t) \geq c_s \frac{n}{\lambda} \log^2 n = c_s t \frac{d}{2 \lambda} = \Omega(t \sqrt{d}), and substituting the previous expression for d in terms of t we get

R(s, t) = \Omega \left(\frac{t^{s - 1}}{\log^{2s - 4} t} \right).
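
Since it is easy to slip in this kind of exponent bookkeeping, here is a small symbolic sanity check of the computation above (my own script, using sympy): ignoring logarithmic factors, with d = n^a for a = 1 - 1/(2s - 3), \lambda \approx \sqrt{d} and t \approx n/d, the bound \Omega(t \sqrt{d}) is indeed t^{s - 1}.

```python
from sympy import symbols, simplify

s = symbols('s', positive=True)

# Conjectured K_s-free pseudorandom graph: d = n^a with a = 1 - 1/(2s - 3), lambda ~ sqrt(d)
a = 1 - 1/(2*s - 3)

# Ignoring log factors, t ~ n/d = n^(1 - a), so the bound Omega(t*sqrt(d)) is n^(1 - a/2)
bound_exp = 1 - a/2            # exponent of n in t*sqrt(d)
target_exp = (1 - a)*(s - 1)   # exponent of n in t^(s - 1)

print(simplify(bound_exp - target_exp))  # prints 0: the bound is t^(s - 1) up to log factors
```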

So, all we need to do is construct these K_s-free pseudorandom graphs. The best construction that we have so far (discussed here) has edge density \Theta(n^{-1/(s - 1)}). Sadly, the lower bound on R(s, t) that this construction gives us is c_s t^{s/2}/(\log t)^{s - 2}, which is worse than the probabilistic bound of Bohman and Keevash. In general, if we can construct K_s-free optimally pseudorandom graphs with edge density n^{-1/f(s)}, we will get the bound

R(s, t) \geq c_s \frac{t^{(f(s) + 1)/2}}{\log^{f(s) - 1} t} .

Therefore, even a construction with density n^{-1/(s + 1)} (or in fact n^{-1/(s + \epsilon)} for any \epsilon > 0) would already break the Bohman–Keevash lower bound! That should be doable, right? I leave it as an open problem for the readers.
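
Here is the same exponent bookkeeping for the densities discussed in this paragraph, done with exact fractions (a throwaway script of mine): for a K_s-free optimally pseudorandom graph of edge density n^{-1/f}, the bound above gives t to the power (f + 1)/2, up to log factors.

```python
from fractions import Fraction as F

def t_exponent(f):
    """Power of t in the lower bound from a K_s-free optimally pseudorandom
    graph of edge density n^(-1/f), ignoring log factors: (f + 1)/2."""
    return F(f + 1, 2)

for s in range(4, 9):
    print(f's = {s}:  current construction {t_exponent(s - 1)},  '
          f'Bohman-Keevash {F(s + 1, 2)},  '
          f'density n^(-1/(s + 1)) {t_exponent(s + 1)},  '
          f'optimal density {t_exponent(2*s - 3)}')
```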

UPDATE 15th Oct 2019: Xiaoyu He has shown that the existence of these optimal K_s-free pseudorandom graphs for all s would also determine the multicolour Ramsey numbers R(s_1, \dots, s_k, t), with s_i fixed for all i and t \rightarrow \infty, up to a polylogarithmic factor.
