What are, in your opinion, the most fascinating theorems (or maybe ideas) in mathematics? This is a serious question: I'm interested in what experienced mathematicians think about that, and I also want to find out what else is out there that I've never heard about.
Do I count as an experienced mathematician? If yes, I am amazed by the correctness of Brzozowski's algorithm for minimizing a DFA. The original proof was a combinatorial mess, but I learned about it from recent papers that prove its correctness via a more general observation: in transition systems, reachability is dual to observability. Here is the paper.
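To make that concrete, here's a minimal sketch of the double-reversal construction in Python (the automaton representation and names are my own assumptions, not the paper's): reverse the automaton, determinize, reverse, determinize again, and what comes out is the minimal DFA.

```python
from itertools import chain

# Hypothetical representation (mine, not the paper's): an automaton is
# (alphabet, delta, starts, accepts), where delta maps
# (state, symbol) -> set of successor states.

def reverse(aut):
    """Flip every transition and swap the start and accept states."""
    alphabet, delta, starts, accepts = aut
    rdelta = {}
    for (q, a), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, a), set()).add(q)
    return alphabet, rdelta, accepts, starts

def determinize(aut):
    """Subset construction, building only *reachable* subsets,
    which is exactly where reachability enters the correctness story."""
    alphabet, delta, starts, accepts = aut
    start = frozenset(starts)
    seen, frontier, ddelta = {start}, [start], {}
    while frontier:
        S = frontier.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            ddelta[(S, a)] = {T}  # singleton target set: a DFA viewed as an NFA
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return alphabet, ddelta, {start}, {S for S in seen if S & accepts}

def brzozowski(aut):
    """Minimal DFA = determinize(reverse(determinize(reverse(aut))))."""
    return determinize(reverse(determinize(reverse(aut))))

# Example: NFA over {a, b} for strings ending in 'a'.
nfa = ({'a', 'b'}, {(0, 'a'): {0, 1}, (0, 'b'): {0}}, {0}, {1})
min_dfa = brzozowski(nfa)  # two subset-states, as expected
```

As I understand the duality slogan: determinizing only the reachable part of a reversed DFA automatically leaves no pair of indistinguishable states, so the second determinization minimizes for free.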
Analyticity of holomorphic functions always felt like magic to me.
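For anyone who hasn't seen where the magic comes from, a quick sketch of the standard argument (notation mine): Cauchy's integral formula plus a geometric series expansion of the kernel turns one complex derivative into a full power series.

```latex
f(z) = \frac{1}{2\pi i} \oint_{|w - z_0| = r} \frac{f(w)}{w - z}\,dw
     = \frac{1}{2\pi i} \oint \sum_{n \ge 0}
         \frac{(z - z_0)^n}{(w - z_0)^{n+1}}\, f(w)\,dw
     = \sum_{n \ge 0} a_n\,(z - z_0)^n,
\qquad
a_n = \frac{1}{2\pi i} \oint \frac{f(w)}{(w - z_0)^{n+1}}\,dw
```

This is valid for |z - z_0| < r; uniform convergence of the geometric series on the circle justifies swapping the sum and the integral.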
My topology lecturer had a thing he called the "golden theorem", namely that a function between metric spaces is continuous in the ε-δ sense if and only if inverse images of open sets are open (where "open" means every point of the set has an open ball around it contained in the set). That's the theorem that says such a thing as a topological space is even possible.
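Spelled out (notation mine), for f : X → Y between metric spaces the two sides being equated are:

```latex
% epsilon-delta continuity at every point:
\forall x \in X \;\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall y \in X:
\quad d_X(x, y) < \delta \;\Rightarrow\; d_Y\big(f(x), f(y)\big) < \varepsilon
% preimage formulation:
\forall U \subseteq Y \text{ open}: \quad f^{-1}(U) \text{ open in } X
```

The second formulation never mentions the metrics, which is exactly what lets you take it as the definition of continuity on a bare topological space.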
I guess I'd want to start by talking about what counts as "fascinating". It's a very subjective term, and I try to analyze things in a more fine-grained way that's hopefully more useful than mere "fascination". I can be fascinated by something for a week and then stop being fascinated by it.
Some theorems are fascinating because the result is surprising. Some are fascinating because the result is really beautiful even if the proof is not very pretty or easy to understand. Some theorems are fascinating because the proofs are really worthwhile - bringing in new concepts that make perfect sense once you get them. And so on.
Also, I don't tend to focus on "theorems" so much as subjects, which are usually bundles of examples, definitions and theorems.
For example, for the last month I've been fascinated by this subject: abelian varieties. Lange and Birkenhake's book Complex Abelian Varieties is the book I want to understand. The Appell-Humbert theorem classifies line bundles on abelian varieties, and this theorem sits near the heart of this subject. But there are a whole bunch of other connected theorems and definitions, and it's probably the examples that make this subject fun for me.
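For the curious, here's the statement as I'd state it (notation mine): write the abelian variety as a complex torus X = V/Λ, with V a complex vector space and Λ ⊂ V a full lattice. Then holomorphic line bundles on X correspond to pairs (H, χ):

```latex
H : V \times V \to \mathbb{C} \text{ Hermitian},\quad
  \operatorname{Im} H(\Lambda, \Lambda) \subseteq \mathbb{Z},
\qquad
\chi : \Lambda \to U(1),\quad
  \chi(\lambda + \mu) = \chi(\lambda)\,\chi(\mu)\,
    e^{\,i\pi \operatorname{Im} H(\lambda, \mu)}
```

Notice that everything in sight is linear algebra on V plus bookkeeping on the lattice Λ.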
Why am I fascinated by this subject? One reason is that at first it seems scary - I'd never studied algebraic geometry very much, and "principally polarized abelian variety" is the kind of buzzword I associated with scary people - yet in the end it turns out to be not so bad. It's a lot like linear algebra, but linear algebra where your vector spaces come with lattices inside them. So it's like bumping into a branch of linear algebra that I'd never gotten around to thinking about!
Another reason it's fascinating is that everyone talks a lot about elliptic curves - in a way they're the simplest really interesting examples of algebraic varieties, so I started by trying to learn a bunch about those, years ago - and elliptic curves are exactly the same as 1-dimensional abelian varieties.
So, it's a lot like having studied 1-variable calculus for years and then suddenly reading about multi-variable calculus. The perspective broadens; some things stay the same but some change.
Yet another reason this subject is fascinating is that I'm thinking a lot about 2-rigs (basically categorified rings), and working on them with @Joe Moeller and @Todd Trimble, and this subject is a source of really interesting examples.
I think the PCP theorem from complexity theory is utterly mind-blowing.
So I'd never heard of the PCP theorem before, but reading the Wikipedia page, it does seem pretty mind-blowing.
"The PCP theorem says that for some universal constant K, for every n, any mathematical proof for a statement of length n can be rewritten as a different proof of length poly(n) that is formally verifiable with 99% accuracy by a randomized algorithm that inspects only K letters of that proof."
But then in the next section they give an equivalent formulation: "An alternative formulation of the PCP theorem states that the maximum fraction of satisfiable constraints of a constraint satisfaction problem is NP-hard to approximate within some constant factor."
I don't see how these two are connected. Can you give some intuitive reason for why this equivalence holds?
The problem "Does there exist a proof of the statement X of length at most N?" (where N is coded in unary) is NP-complete, and that gives the link between mathematical proofs and NP. For the constraint satisfaction side, I think the idea of the proof of the PCP theorem is that we transform an NP problem into a constraint satisfaction problem where either all the constraints are satisfiable or only a very small proportion of them are. (Then we can randomly sample constraints and check whether they are satisfied.) Since this transformation is possible, if we could approximate the maximum fraction of satisfiable constraints to within a constant factor in polynomial time, we could also solve the original problem in polynomial time.
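To make the sampling step concrete, here's a toy sketch in Python (the representation is my own invention, not the actual PCP construction): if every assignment violates at least a `gap` fraction of the constraints, each random probe catches a violation with probability at least `gap`, so a constant number of probes drives the error down.

```python
import math
import random

def sampling_verifier(constraints, assignment, gap=0.01, error=0.01, rng=random):
    """Accept iff k randomly sampled constraints all hold under `assignment`.

    Toy model: `constraints` is a list of predicates over the assignment
    (a stand-in for the rewritten proof). In the "yes" case all of them
    hold and we always accept; in the "no" case at least a `gap` fraction
    fail, so each probe rejects with probability >= gap and we accept
    with probability at most (1 - gap)**k <= error.
    """
    k = math.ceil(math.log(error) / math.log(1 - gap))  # k = 459 for gap = error = 1%
    return all(c(assignment) for c in rng.choices(constraints, k=k))
```

Here `k` depends only on the gap and the error, not on the instance size, and since each constraint reads only a few symbols of the assignment, that's where "inspects only K letters of that proof" comes from.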