
Paata Ivanisvili
Professor of Mathematics @ UCI. Exploring what AI can (and can’t) do in math.
Picked an open problem and used Grok Heavy to solve it. After a few prompts (try this, compute that, tweak this, etc.) it discovered a counterexample that settles the question.
The problem (first posed on MathOverflow in 2017) asks for the smallest C > 0 such that for every d ≥ 1 and every polynomial f of degree ≤ d on the Hamming cube {-1,1}^n,
‖f‖₂ ≤ C^d ‖f‖₁.
The author suggests that C = √2 might work, a plausible guess: for d = 1 it coincides with the sharp Khinchin inequality (Szarek's constant √2), and for d = 2 it would imply an old conjecture of Pelczyński that the best constant for 2-homogeneous polynomials on the cube is 2.
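For small n the ratio ‖f‖₂/‖f‖₁ can be checked by brute force over the cube. A minimal sketch (the helper `norm_ratio` is my own, not from the post), using the degree-1 example f(x) = x₁ + x₂ that attains Szarek's constant:

```python
import itertools
import math

def norm_ratio(f, n):
    """||f||_2 / ||f||_1 under the uniform measure on {-1,1}^n,
    computed by enumerating all 2^n points."""
    vals = [f(x) for x in itertools.product((-1, 1), repeat=n)]
    l1 = sum(abs(v) for v in vals) / len(vals)
    l2 = math.sqrt(sum(v * v for v in vals) / len(vals))
    return l2 / l1

# f(x) = x_1 + x_2 takes values -2, 0, 0, 2, so ||f||_1 = 1 and
# ||f||_2 = sqrt(2): the ratio equals Szarek's constant sqrt(2).
r = norm_ratio(lambda x: x[0] + x[1], 2)
print(r)  # 1.4142135623730951
```

The same enumeration can score any candidate counterexample in a few lines, though of course it says nothing about the supremum over all n and d.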
But Grok Heavy found a counterexample showing that the best constant is at least √3. The full chat conversation


Grok 4.20 (Beta) improves the lower bound on the Gaussian perimeter of convex sets by 9.1% in two minutes.
This is something that was pointed out to me by Xinyuan Xie. Back in 1993, Keith Ball showed that the Gaussian perimeter of a convex body in n-dimensional Euclidean space is bounded from above by 4n^{1/4}. As for the lower bound, Ball showed that for a cube (of appropriate size) the perimeter can grow as \sqrt{\log(n)}. So for a while there was a gap as to which bound is sharp, until 2003, when, in a beautiful paper, Fedor Nazarov showed via the example of a random polyhedron (the intersection of many random half-spaces) that the lower bound can grow as C n^{1/4}, with C = \exp(-5/4) = 0.286…. Nazarov also improved the constant 4 in the upper bound (replacing it with 0.64) when n is large. These bounds stood unbeaten until 2019, when Martin Raič improved the upper-bound constant factor from 0.64 to 0.59.
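Ball's cube example is easy to reproduce numerically: each of the 2n faces of the cube [-a, a]^n carries Gaussian surface measure φ(a)(2Φ(a) - 1)^{n-1}, so one can grid-search the half-width a and watch the optimized perimeter grow like \sqrt{\log(n)}. A sketch (the formula and the crude grid search are my own illustration, not taken from any of the papers):

```python
import math

def phi(t):  # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi(t):  # standard normal CDF
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def cube_perimeter(a, n):
    """Gaussian surface measure of [-a, a]^n: 2n faces, each
    contributing phi(a) * P(|x_j| <= a for the other n-1 coordinates)."""
    return 2 * n * phi(a) * (2 * Phi(a) - 1) ** (n - 1)

def best_cube_perimeter(n):
    """Optimize the half-width a over a crude grid (0, 10]."""
    return max(cube_perimeter(i / 100, n) for i in range(1, 1001))

for n in (10, 10**3, 10**6):
    p = best_cube_perimeter(n)
    print(n, p, p / math.sqrt(math.log(n)))
```

The last column stays roughly of constant order as n grows, consistent with the \sqrt{\log(n)} rate, while Nazarov's random polyhedra beat this with n^{1/4} growth.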
Grok 4.20 (Beta), by more carefully optimizing Nazarov’s construction, managed to improve the lower-bound constant from 0.286 to 0.3126. I find this surprising even if it is just playing within the techniques of Nazarov’s paper, because very recently Nadimpalli–Pascale (2025) posted a preprint where, with a different approach, they recovered Nazarov’s lower bound with the same constant factor 0.286….
Grok was very generous in its response: it said that the improvement it provided follows Nazarov’s argument “line-by-line,” whereas the other models I asked to verify Grok’s claim agreed on everything except this part; they said the improvement is not really “line-by-line” :D.
Finally, I would not say that Nazarov missed this improvement. Knowing him for a long time, I am pretty confident it is common for him to sacrifice optimal constants for algebraic elegance.
Why is all this interesting? Control of the Gaussian perimeter gives control of the Fourier tails of the characteristic functions of these sets, which in turn controls the time complexity of PAC-learning and agnostic-learning algorithms for this family (see Klivans–O’Donnell–Servedio).
References:
Chat link with Grok 4.20 (Beta).
Keith Ball. The Reverse Isoperimetric Problem for Gaussian Measure. Discrete and Computational Geometry, 10:411–420, 1993.
Adam Klivans, Ryan O’Donnell, and Rocco A Servedio. Learning geometric concepts via Gaussian surface area. In Proc. 49th IEEE Symposium on Foundations of Computer Science (FOCS), pages 541–550, 2008.
Shivam Nadimpalli and Caleb Pascale. On the Maximal Gaussian Perimeter of Convex Sets, Revisited. Preprint, 2025.
Fedor Nazarov. On the maximal perimeter of a convex set in R^n with respect to a Gaussian measure. In Geometric Aspects of Functional Analysis (2001–2002), pages 169–187. Lecture Notes in Mathematics, Volume 1807, Springer, 2003.
Martin Raič. A multivariate Berry–Esseen theorem with explicit constants. Bernoulli 25(4A), 2019, 2824–2853.

Disclaimer: I had been given early access to an internal beta version of Grok 4.20.
It found a new Bellman function for one of the problems I’d been working on with my student N. Alpay.
The problem reduces to identifying the pointwise maximal function U(p,q) under two constraints and understanding the behavior of U(p,0).
In our paper we proved U(p,0) \geq I(p), where I(p) is the Gaussian isoperimetric profile, I(p) ~ p\sqrt{\log(1/p)} as p → 0.
After ~5 minutes, Grok 4.20 produced an explicit formula U(p,q) = E\sqrt{q^2+\tau}, where \tau is the exit time of Brownian motion from (0,1) started at p. This yields U(p,0) = E\sqrt{\tau} ~ p\log(1/p) as p → 0, a square-root improvement in the logarithmic factor.
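The quantity E\sqrt{\tau} is easy to probe by simulation. A hedged Monte Carlo sketch (my own random-walk discretization of the Brownian motion, not Grok's derivation); by Jensen, the estimate must sit strictly below \sqrt{E\tau} = \sqrt{p(1-p)}:

```python
import math
import random

def mean_sqrt_exit_time(p, n_paths=2000, m=50, seed=0):
    """Monte Carlo estimate of E sqrt(tau), where tau is the exit time
    from (0, 1) of Brownian motion started at p.  The motion is
    approximated by a simple random walk on a grid of m sites, with
    time step dt = 1/m^2 so the walk matches the Brownian variance."""
    rng = random.Random(seed)
    dt = 1.0 / m**2
    start = round(p * m)
    total = 0.0
    for _ in range(n_paths):
        k, steps = start, 0
        while 0 < k < m:
            k += 1 if rng.random() < 0.5 else -1
            steps += 1
        total += math.sqrt(steps * dt)
    return total / n_paths

est = mean_sqrt_exit_time(0.5)
print(est)  # strictly below sqrt(E tau) = sqrt(0.25) = 0.5 by Jensen
```

For p = 1/2 the estimate comes out a bit under 0.5, as Jensen predicts, and for small p it decays much faster, which is where the p log(1/p) behaviour of U(p,0) lives.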
Any significance of this result? It will not tell you how to change the world tomorrow. Rather, it gives a small step toward understanding what is going on with averages of stochastic analogs of derivatives (quadratic variation) of Boolean functions: how small can they be?
More precisely, this gives a sharp lower bound on the L1 norm of the dyadic square function applied to indicator functions 1_A of sets A \subset [0,1].
In my previous tweet about the Takagi function, we saw that the sharp lower bound on ||S_1(1_A)||_1 miraculously coincides with the Takagi function at |A|, which (surprisingly to me) is related to the Riemann hypothesis. Here we obtain a sharp lower bound on ||S_2(1_A)||_1 given by E\sqrt{\tau}, where the Brownian motion starts at |A|. This function belongs to the family of isoperimetric-type profiles, but unlike the fractal Takagi function, it is smooth and does not coincide with the Gaussian isoperimetric profile.
Finally, in harmonic analysis it is known that the square function is not bounded on L^1. The question here was more one of curiosity: how exactly does it blow up when tested on Boolean functions 1_A? Previously, the best known lower bound was |A|(1-|A|) (Burkholder–Davis–Gundy). In our paper we obtained |A|(1-|A|)\sqrt{\log(1/(|A|(1-|A|)))}. Grok’s new Bellman function gives |A|(1-|A|)\log(1/(|A|(1-|A|))), and this bound is actually sharp.
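Reading S_2 as the martingale square function of the dyadic filtration on [0,1], the norm ||S(1_A)||_1 can be computed directly on a grid. A discretized sketch (the function name and the grid resolution are my own choices, not notation from the paper):

```python
import math

def dyadic_square_function_l1(indicator, levels=12):
    """||S(1_A)||_1 for the dyadic (martingale) square function on [0,1],
    on a grid of 2**levels points: S f = (sum_n |d_n|^2)^{1/2}, where
    d_n = E_n f - E_{n-1} f are the dyadic martingale differences."""
    N = 2 ** levels
    f = [indicator((i + 0.5) / N) for i in range(N)]

    def E(n):  # conditional expectation on dyadic intervals of length 2^-n
        block = 2 ** (levels - n)
        out = []
        for b in range(2 ** n):
            avg = sum(f[b * block:(b + 1) * block]) / block
            out.extend([avg] * block)
        return out

    S2 = [0.0] * N
    prev = E(0)
    for n in range(1, levels + 1):
        cur = E(n)
        for i in range(N):
            S2[i] += (cur[i] - prev[i]) ** 2
        prev = cur
    return sum(math.sqrt(s) for s in S2) / N

# A = [0, 1/2): the only nonzero difference is at the top level,
# |d_1| = 1/2, so S(1_A) = 1/2 everywhere and ||S(1_A)||_1 = 1/2.
print(dyadic_square_function_l1(lambda x: 1.0 if x < 0.5 else 0.0))
```

Feeding in thin sets A = [0, 2^{-k}) and comparing against |A|(1-|A|)\log(1/(|A|(1-|A|))) is then a one-line experiment, though the discretization only sees finitely many dyadic levels.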

