Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.


Stream: community: general

Topic: AI-generated papers


view this post on Zulip David Michael Roberts (Mar 25 2025 at 00:33):

I hate to do this, but the four arXiv papers by this author, uploaded in the last week, all look wholly AI-generated to me: https://arxiv.org/search/?query=Reizi&searchtype=author

view this post on Zulip David Michael Roberts (Mar 25 2025 at 00:35):

In particular, the appendix of https://arxiv.org/abs/2503.16555 promises proofs, examples, and so on, but the entire text of the appendix is this:

In this appendix, we present additional proofs, detailed calculations, and further examples
that complement the results in the main text. In particular, the appendix includes:
* A complete proof of the back-and-forth construction used in Lemma 5.8.
* Detailed verifications of the functoriality of the Henkin and compactness-based model constructions.
* Concrete examples illustrating the construction of models for specific theories.

These supplementary materials are provided to offer deeper insight into the technical details and to demonstrate how our unified framework can be applied to various logical systems.

All that follows is the bibliography, and that's it. The content is also extremely banal.

view this post on Zulip Chad Nester (Mar 25 2025 at 06:48):

After a cursory inspection of https://arxiv.org/abs/2503.16570, I agree.

view this post on Zulip Kevin Carlson (Mar 25 2025 at 07:33):

I can't find any information about this supposed person online except an affiliation via their email, but I've made a report to the arXiv.

view this post on Zulip fosco (Mar 25 2025 at 08:33):

image.png

yep, no way a human wrote this

view this post on Zulip David Michael Roberts (Mar 25 2025 at 09:17):

Stupid LLM forgetting the syntax for bold in TeX and falling back on Markdown...

view this post on Zulip Matteo Capucci (he/him) (Mar 25 2025 at 15:39):

I'm proud to say I called bullshit from the titles alone in my feed lol glad I wasn't wrong

view this post on Zulip Ryan Wisnesky (Mar 25 2025 at 16:10):

Heh, we did an experiment on LLMs that produce SQL code, and for many of them, no matter how much you tell them not to format the output, they still do it. Stripping extra comments and markdown/html out of responses turned out to be the hardest part of interacting with the LLM in an automated flow.
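The kind of post-processing Ryan describes can be sketched as follows. This is a hypothetical illustration, not code from his actual experiment, and it assumes the model wraps its answer in a markdown fence and/or adds SQL line comments:

```python
import re

def extract_sql(response: str) -> str:
    """Pull the bare SQL out of an LLM response that may wrap it
    in markdown fences, prose, or comments."""
    # If the model wrapped the query in a fenced code block,
    # keep only the body of the first fence.
    fences = re.findall(r"```(?:sql)?\s*\n(.*?)```", response,
                        flags=re.DOTALL | re.IGNORECASE)
    text = fences[0] if fences else response
    # Drop SQL line comments the model likes to add.
    kept = [line for line in text.splitlines()
            if not line.strip().startswith("--")]
    return "\n".join(kept).strip()
```

For example, `extract_sql("Here is the query:\n```sql\n-- fetch users\nSELECT * FROM users;\n```\nHope that helps!")` yields just `SELECT * FROM users;`. In practice models also emit HTML or nested fences, which is why Ryan found this the hardest part of the pipeline.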

view this post on Zulip Joe Moeller (Mar 25 2025 at 22:06):

Matteo Capucci (he/him) said:

I'm proud to say I called bullshit from the titles alone in my feed lol glad I wasn't wrong

Right, natural transformations between theorems.

view this post on Zulip Joe Moeller (Mar 25 2025 at 22:07):

I noticed there are two orders of the names used. Two of the papers are JRB, and two are BJR. What could be the point of that?

view this post on Zulip David Michael Roberts (Mar 26 2025 at 00:27):

The email address seems to be attached to Open University Japan, so name-order may have been auto-generated differently for the different papers?

view this post on Zulip Noah Chrein (Mar 28 2025 at 15:44):

fosco said:

yep, no way a human wrote this

To be fair, I have seen researchers who just learned about category theory writing this way.

Anyway, the AI-generated slop CT papers are coming. I've noticed that Qwen 2.5 is trained on a lot of higher/formal category theory. It's fun to play with and it can produce approximately accurate references to results, which can sometimes cut down on search time. It's not yet good enough to generate any meaningfully creative results, and is not enough to fool a half-keen eye, but I can imagine an undergrad using qwen to write a undergrad thesis that nobody reads.

view this post on Zulip Ivan Di Liberti (Mar 28 2025 at 15:51):

Noah Chrein said:

fosco said:

yep, no way a human wrote this

To be fair, I have seen researchers who just learned about category theory writing this way.

Anyway, the AI-generated slop CT papers are coming. I've noticed that Qwen 2.5 is trained on a lot of higher/formal category theory. It's fun to play with and it can produce approximately accurate references to results, which can sometimes cut down on search time. It's not yet good enough to generate any meaningfully creative results, and not good enough to fool a half-keen eye, but I can imagine an undergrad using Qwen to write an undergrad thesis that nobody reads.

What is Qwen, and how did it come to be trained on so much category theory?

view this post on Zulip Kevin Carlson (Mar 28 2025 at 16:07):

Qwen appears to be Alibaba's language model. I hadn't heard of it till now.

view this post on Zulip Noah Chrein (Mar 28 2025 at 16:08):

Perhaps the Chinese understand the importance of category theory to mathematics and hence to generalized cognition

view this post on Zulip Kevin Carlson (May 29 2025 at 22:18):

There’s an interesting fake paper on the ArXiv today. I can’t really tell if it’s AI crankery or just the old fashioned kind. Did anybody glance at it? https://arxiv.org/abs/2505.22558

view this post on Zulip Cole Comfort (May 29 2025 at 22:22):

Kevin Carlson said:

There’s an interesting fake paper on the ArXiv today. I can’t really tell if it’s AI crankery or just the old fashioned kind. Did anybody glance at it? https://arxiv.org/abs/2505.22558

The excessive use of lists suggests AI

view this post on Zulip Kevin Carlson (May 29 2025 at 22:47):

Right, that makes sense. It was harder to find obvious local absurdities than in papers further up this thread, which is disappointing.

view this post on Zulip David Michael Roberts (May 29 2025 at 23:32):

There's a whole bunch recently that I have been pointing out elsewhere. The author is uploading a new paper every couple of days, and the titles name something after himself. I'm happy to see today that they've been moved to math.GM! (as I suggested)

https://export.arxiv.org/find/math/1/au:+Alpay_F/0/1/0/all/0/1

view this post on Zulip David Michael Roberts (May 29 2025 at 23:35):

And in the case at the top of the thread, namely https://arxiv.org/search/?query=Reizi&searchtype=author all these are also math.GM classified now, not math.CT.

view this post on Zulip Ryan Wisnesky (May 29 2025 at 23:55):

Seems like in theory the arXiv "endorsement system" should deal with AI generated papers just like any other spam, but I guess it doesn't work in practice? https://info.arxiv.org/help/endorsement.html

view this post on Zulip Kevin Carlson (May 30 2025 at 00:35):

Yes, I'm a bit confused how all these people are getting endorsements.

view this post on Zulip Mike Shulman (May 30 2025 at 00:36):

At the very least it should be possible to "un-endorse" them after they've demonstrated their crankiness.

view this post on Zulip David Michael Roberts (May 30 2025 at 04:09):

Another one! https://arxiv.org/abs/2505.22931

view this post on Zulip David Michael Roberts (May 30 2025 at 04:18):

Maybe the arXiv needs to appoint a category theorist to the team of moderators...

view this post on Zulip Nathanael Arkor (May 30 2025 at 07:02):

I thought arXiv had a strong stance against crackpottery, so why are these papers allowed to remain under math.GM, rather than being removed entirely?

view this post on Zulip John Baez (May 30 2025 at 07:10):

Kevin Carlson said:

Right, that makes sense. It was harder to find obvious local absurdities than in papers further up this thread, which is disappointing.

The phrase "discrete conformal field theory" in the abstract made me raise my eyebrows. As if that were a known thing. Given how much people try everything, there probably is some work on something called discrete conformal field theory, but....

Yeah, there's a paper Conformal Field Theory at the Lattice Level: Discrete Complex Analysis and Virasoro Structure trying to understand how conformal field theory is related to field theory on a lattice. But most conformal transformations don't map a lattice to itself, so this is bound to be rough, and the idea that "Recursive Difference Categories and Topos-Theoretic Universality" would have something to say about it is, umm, questionable.

view this post on Zulip John Baez (May 30 2025 at 07:21):

Nathanael Arkor said:

I thought arXiv had a strong stance against crackpottery, so why are these papers allowed to remain under math.GM, rather than being removed entirely?

It can be hard to tell whether a math paper is crazy, and people whose papers are rejected entirely complain a lot, so it seems the arXiv folks find it convenient to put borderline papers into math.GM, expecting people 'in the know' to beware of such papers. That's my impression anyway.

view this post on Zulip John Baez (May 30 2025 at 07:24):

It's more diplomatic than having math.CP for crackpot math.

view this post on Zulip fosco (May 30 2025 at 08:16):

This is a truly beautiful era to witness first-hand.

view this post on Zulip Areeb SM (May 30 2025 at 08:31):

ViXra appears to be embracing the future...

view this post on Zulip David Michael Roberts (May 30 2025 at 08:42):

But not unreservedly:

viXra.org only accept scholarly articles written without AI assistance. Please go to ai.viXra.org to submit new scholarly article written with AI assistance.

view this post on Zulip Joe Moeller (May 30 2025 at 23:38):

arxiv could use the exact same disclaimer, only changing the first instance of "vixra".

view this post on Zulip John Baez (May 31 2025 at 08:49):

ai.viXra.org sounds like a fascinating crackpot sociology experiment. They have 343 papers so far. Within the subject of physics, most of the papers are on "relativity and cosmology", so we can guess that part of physics attracts crackpots the most. Within mathematics, 75% of the papers are on number theory.

view this post on Zulip John Baez (May 31 2025 at 08:52):

Yesterday's first submitted paper on general relativity and cosmology:

The Pi-Periodic 22/7ths Dimension: A Quantum Gravity Framework for Dark Energy

We propose a novel 4+1-dimensional quantum gravity framework incorporating a compactified extra dimension, τ , with a periodicity of π (to 22 decimal places), symbolically tied to the rational approximation 22/7.

Someone is taking this 22/7 stuff very seriously! I believe Archimedes came up with this approximation to pi, and it was good enough that by the Middle Ages a bunch of mathematicians believed π = 22/7.

view this post on Zulip fosco (May 31 2025 at 09:13):

by the Middle Ages a bunch of mathematicians believed π = 22/7.

:surprise: Wait, is it not? /s

view this post on Zulip James Deikun (May 31 2025 at 09:21):

Archimedes squared the circle with this ONE WEIRD TRICK! Geometers hate him!

view this post on Zulip John Baez (May 31 2025 at 09:26):

Actually I learned this when reading about the mathematician Franco of Liège. In 1020 he got interested in the ancient Greek problem of squaring the circle. But since he believed that pi is 22/7, he started studying the square root of 22/7. I don't know if he figured out how to construct the square root of 22/7 with straightedge and compass. But he did manage to prove that the square root of 22/7 is irrational!

Now, this is better than it sounds, because I believe the old Greek proof that √2 is irrational had been lost in western Europe at this time. So it took some serious ingenuity.

Still, it's a sad reflection on the sorry state of mathematical knowledge in western Europe from around 500 AD to 1000 AD. It was better elsewhere at that time. I find this local collapse of civilization, and how people recovered, quite fascinating.
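Franco's result can be reconstructed in modern notation. This is the standard prime-divisibility argument, not his original one, which has a quite different medieval flavor:

```latex
Suppose $\sqrt{22/7} = p/q$ with $\gcd(p, q) = 1$. Squaring gives
\[
  22\,q^2 = 7\,p^2 .
\]
Since $7$ is prime and $7 \nmid 22$, from $7 \mid 22\,q^2$ we get $7 \mid q$.
Writing $q = 7r$ and cancelling yields $22 \cdot 7\,r^2 = p^2$, so $7 \mid p$
as well, contradicting $\gcd(p, q) = 1$. Hence $\sqrt{22/7}$ is irrational.
```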

view this post on Zulip John Baez (May 31 2025 at 09:30):

Could AI slop prompt some loss of collective intelligence now?

view this post on Zulip Fabrizio Romano Genovese (May 31 2025 at 10:15):

John Baez said:

Could AI slop prompt some loss of collective intelligence now?

In general any tool that helps you think makes you sloppier in some respect. So yes. For instance, ancient languages are often way more complicated grammatically than new languages. One reason for this is that being able to say "Go around the mammoth, without being heard, by exactly half of a circle" in fewer words may have been a big advantage when we were hunter-gatherers, so languages tended to be more expressive. With civilization, the advent of written records, etc., we lost the need to formulate such complicated statements in a compact way, languages became less expressive, and we probably lost some of our cognitive ability in the process as well. It's always a tradeoff.

view this post on Zulip John Baez (May 31 2025 at 11:30):

I'm thinking more about how successive generations of Roman summaries of Greek scientific texts watered them down to a homeopathic dilution of their original strength. Then many of the originals were lost, at least in western Europe.

view this post on Zulip Nathanael Arkor (May 31 2025 at 11:32):

@Fabrizio Romano Genovese: could you share a reference for the claim that older languages have higher entropy than modern languages?

view this post on Zulip Notification Bot (Jun 01 2025 at 08:05):

13 messages were moved from this topic to #meta: off-topic > language: the rise and fall of complex grammars by John Baez.

view this post on Zulip Matteo Capucci (he/him) (Jun 02 2025 at 17:32):

Fabrizio Romano Genovese said:

John Baez said:

Could AI slop prompt some loss of collective intelligence now?

In general any tool that helps you think makes you sloppier in some respect. So yes. For instance, ancient languages are often way more complicated grammatically than new languages. One reason for this is that being able to say "Go around the mammoth, without being heard, by exactly half of a circle" in fewer words may have been a big advantage when we were hunter-gatherers, so languages tended to be more expressive. With civilization, the advent of written records, etc., we lost the need to formulate such complicated statements in a compact way, languages became less expressive, and we probably lost some of our cognitive ability in the process as well. It's always a tradeoff.

uuuhmm what's a reference for this? smells really funny to me...

view this post on Zulip David Michael Roberts (Jun 02 2025 at 19:44):

https://categorytheory.zulipchat.com/#narrow/channel/229451-meta.3A-off-topic/topic/language.3A.20the.20rise.20and.20fall.20of.20complex.20grammars/with/521434597