You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I recently stumbled upon the paper "Compression is all you need", which proposes a criterion to distinguish "human mathematics" (the mathematics humans discover and value) from "formal mathematics" (the totality of all valid deductions). I was a bit skeptical at first, but one of the authors is Michael H. Freedman.
Here is the abstract.
Human mathematics (HM), the mathematics humans discover and value, is a vanishingly small subset of formal mathematics (FM), the totality of all valid deductions. We argue that HM is distinguished by its compressibility through hierarchically nested definitions, lemmas, and theorems. We model this with monoids. A mathematical deduction is a string of primitive symbols; a definition or theorem is a named substring or macro whose use compresses the string. In the free abelian monoid A_n, a logarithmically sparse macro set achieves exponential expansion of expressivity. In the free non-abelian monoid F_n, even a polynomially-dense macro set only yields linear expansion; superlinear expansion requires near-maximal density. We test these models against MathLib, a large Lean 4 library of mathematics that we take as a proxy for HM. Each element has a depth (layers of definitional nesting), a wrapped length (tokens in its definition), and an unwrapped length (primitive symbols after fully expanding all references). We find unwrapped length grows exponentially with both depth and wrapped length; wrapped length is approximately constant across all depths. These results are consistent with A_n and inconsistent with F_n, supporting the thesis that HM occupies a polynomially-growing subset of the exponentially growing space FM. We discuss how compression, measured on the MathLib dependency graph, and a PageRank-style analysis of that graph can quantify mathematical interest and help direct automated reasoning toward the compressible regions where human mathematics lives.
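To make the abstract's three quantities concrete, here is a minimal illustrative sketch (not from the paper) of how depth, wrapped length, and unwrapped length could be computed over a toy macro library. The macro names and bodies are invented for illustration; a macro body is a list of tokens, where a token is either a primitive symbol or a reference to a previously named macro.

```python
# Toy macro library (hypothetical, for illustration only).
# A token that names another macro is a reference; anything else is primitive.
macros = {
    "pair": ["(", "x", ",", "y", ")"],
    "swap": ["pair", "->", "pair"],
    "involution": ["swap", ";", "swap"],
}

def depth(name):
    """Layers of definitional nesting; primitive symbols have depth 0."""
    if name not in macros:
        return 0
    return 1 + max(depth(tok) for tok in macros[name])

def wrapped_length(name):
    """Tokens in the definition, counting each macro reference as one token."""
    return len(macros[name])

def unwrapped_length(name):
    """Primitive symbols after fully expanding all references."""
    if name not in macros:
        return 1  # a primitive symbol expands to itself
    return sum(unwrapped_length(tok) for tok in macros[name])
```

Even in this tiny example the wrapped lengths stay small (5, 3, 3) while the unwrapped lengths grow with depth (5, 11, 23), which is the qualitative pattern the abstract reports for MathLib: roughly constant wrapped length, exponentially growing unwrapped length.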
This is the first time I've seen an attempt to characterize "human mathematics", so it's very intriguing to me.
Are you aware of similar attempts, possibly from philosophers?
In the proof engineering community we often say that the verification conditions arising from proving code correct are "wide but shallow", and hence easier to automate than propositions arising from "normal math", which are usually "narrow but deep". Certainly many people have noted that formal verification before 2020 was applied more to code than to math; I would argue this is for the deep-vs-wide reason.