You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.
I hate to do this, but the four arXiv papers by this author, uploaded in the last week, all look wholly AI-generated to me: https://arxiv.org/search/?query=Reizi&searchtype=author
In particular, in the appendix of https://arxiv.org/abs/2503.16555 there is a promise of proofs, examples, and so on, but the entire text of the appendix is this:
In this appendix, we present additional proofs, detailed calculations, and further examples
that complement the results in the main text. In particular, the appendix includes:
* A complete proof of the back-and-forth construction used in Lemma 5.8.
* Detailed verifications of the functoriality of the Henkin and compactness-based model constructions.
* Concrete examples illustrating the construction of models for specific theories.
These supplementary materials are provided to offer deeper insight into the technical details and to demonstrate how our unified framework can be applied to various logical systems.
After that comes the bibliography, and that's it. The content is also extremely banal.
After a cursory inspection of https://arxiv.org/abs/2503.16570, I agree.
I can't find any information about this supposed person online except an affiliation via their email, but I've made a report to arXiv.
yep, no way a human wrote this
Stupid LLM forgetting the syntax for bold in TeX and falling back on Markdown...
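(For anyone who hasn't seen that tell: TeX bolds with \textbf{...}, so Markdown asterisks just survive as literal asterisks in the PDF. A minimal made-up illustration:)

```latex
% What a human would write:
\textbf{Theorem 1.} Every left adjoint preserves colimits.

% What a Markdown-trained model falls back to; TeX renders the
% asterisks literally instead of bolding anything:
**Theorem 1.** Every left adjoint preserves colimits.
```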
I'm proud to say I called bullshit from the titles alone in my feed lol glad I wasn't wrong
Heh, we did an experiment on LLMs that produce SQL code, and for many of them, no matter how much you tell them not to format the output, they still do it. Stripping extra comments and markdown/html out of responses turned out to be the hardest part of interacting with the LLM in an automated flow.
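For the curious, the cleanup ended up looking something like this (a minimal Python sketch, not our production code; the fence, tag, and comment patterns here are illustrative assumptions):

```python
import re

def extract_sql(response: str) -> str:
    """Best-effort reduction of an LLM response to a bare SQL statement."""
    text = response.strip()

    # If the model wrapped the query in a fenced block (```sql ... ``` or
    # a bare ``` ... ```), keep only the contents of the first fence.
    fence = re.search(r"```(?:sql)?\s*(.*?)```", text, re.DOTALL | re.IGNORECASE)
    if fence:
        text = fence.group(1)

    # Strip a few common HTML tags some models emit around the code.
    # (Deliberately conservative: a blanket <...> regex would mangle
    # SQL comparisons like "a < b AND b > c".)
    text = re.sub(r"</?(?:p|pre|code|br)\s*/?>", "", text, flags=re.IGNORECASE)

    # Drop "--" line comments the model adds as unsolicited explanation.
    lines = [ln for ln in text.splitlines() if not ln.lstrip().startswith("--")]
    return "\n".join(lines).strip()

print(extract_sql("Sure! Here's the query:\n```sql\n-- count rows\nSELECT COUNT(*) FROM users;\n```"))
# prints: SELECT COUNT(*) FROM users;
```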
Matteo Capucci (he/him) said:
I'm proud to say I called bullshit from the titles alone in my feed lol glad I wasn't wrong
Right, natural transformations between theorems.
I noticed the names are used in two different orders: two of the papers have JRB, and two have BJR. What could be the point of that?
The email address seems to be attached to Open University Japan, so the name order may have been auto-generated differently for the different papers? (Japanese convention puts the surname first, so automated metadata could plausibly emit either order.)
fosco said:
yep, no way a human wrote this
To be fair, I have seen researchers who just learned about category theory writing this way.
Anyway, the AI-generated slop CT papers are coming. I've noticed that Qwen 2.5 is trained on a lot of higher/formal category theory. It's fun to play with, and it can produce approximately accurate references to results, which can sometimes cut down on search time. It's not yet good enough to generate any meaningfully creative results, nor to fool a half-keen eye, but I can imagine an undergrad using Qwen to write an undergrad thesis that nobody reads.
Noah Chrein said:
fosco said:
yep, no way a human wrote this
To be fair, I have seen researchers who just learned about category theory writing this way.
Anyway, the AI-generated slop CT papers are coming. I've noticed that Qwen 2.5 is trained on a lot of higher/formal category theory. It's fun to play with, and it can produce approximately accurate references to results, which can sometimes cut down on search time. It's not yet good enough to generate any meaningfully creative results, nor to fool a half-keen eye, but I can imagine an undergrad using Qwen to write an undergrad thesis that nobody reads.
What is Qwen, and how did it come to be trained on so much category theory?
Qwen appears to be Alibaba's language model. I hadn't heard of it till now.
Perhaps the Chinese understand the importance of category theory to mathematics, and hence to generalized cognition.