You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For anything related to this archive, refer to the same person.
Today I learned that Trump's vice-presidential pick has unlocked a lot of financial support from Silicon Valley billionaires, because Vance is deeply connected to folks like Peter Thiel and Marc Andreessen, who are afraid that Biden will rein in AI and crypto. I hope everyone using applied category theory for these technologies, or tech in general, keeps paying attention to these developments.
Oof where are the billionaires funding AI legislation?
fwiw, we try to compete against Thiel and Palantir not just on tech, but on ethics. The open-source ACT community clip art at http://silmarils.tech, which associates sigma/delta/pi with the Silmarils (the jewels of Tolkien's legendarium), actually has an anti-Palantir bent: in Tolkien's lore, the creators of the palantíri had a great foe who wore a crown set with three jewels. So I can offer this artwork as a symbol of resistance.
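(For readers outside category theory: the sigma/delta/pi in that artwork presumably refers to the standard adjoint triple of categorical logic. Here is a minimal sketch of the usual statement, with Set slices as an assumed illustrative setting, not anything specific to the artwork:)

```latex
% The three "jewels": for any function f : A -> B, the substitution
% (pullback) functor \Delta_f between slice categories has adjoints
% on both sides: dependent sum on the left, dependent product on
% the right.
\[
  \Sigma_f \;\dashv\; \Delta_f \;\dashv\; \Pi_f,
  \qquad
  \Sigma_f,\ \Pi_f : \mathbf{Set}/A \to \mathbf{Set}/B,
  \qquad
  \Delta_f : \mathbf{Set}/B \to \mathbf{Set}/A
\]
```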
Kind of odd to imagine identifying with Team Morgoth, but I see where you're coming from!
the onion's thoughts on the matter: https://www.theonion.com/j-d-vance-vows-to-fight-for-forgotten-communities-in-s-1851602605
Cryptocommodity is a good thing!
Bye bye dollar ;)
I'm pretty disappointed by Robert Ghrist's recent writing on AI and mathematics. A lot of writing on AI seems to carry a note of bitter vindictiveness against mathematicians who are interested in logical rigor and correct arguments, as if caring about these things were a form of Luddism in the age of AI.
3A// academics : your life is more complicated still. you have to teach the courses, write the grants, manage your phd students. the students are using AI to write their theses. they are either better at it than you or not -- either way, you are frustrated. your vaunted network effects (conferences, journal editors you know personally, your cadre of students) have diminishing returns as AI advances. what good is knowing terry tao personally when anyone can hit up the TT API from DeepSeek? plus -- and this is crucial -- you're a research mathematician, which means you are very conservative and probably don't believe in all that AI crap since it's just linear algebra and probability. can't fool you, no. you'll wake up too late and be too behind. HFSP (have fun staying pure)
and as long as you have some taste and management skill, you can rival a top mathematician in terms of generating an army of phdai students to work with you / for you. oh, sure, it will be infinitely easier to write crank 500-page nonsense proofs of the trisection of an angle, but that's because AI is a force multiplier.
A lab staffed by an army of "PhD" AI models is an interesting concept. I mostly think this is just fantasy though.
7/ end, for now. we'll see how this holds up in a few years. in the meantime, i've got some research to work on in fields that i've never published in before (neural networks architecture, financial network models, ...)
I look forward to seeing his AI assisted research in new fields. I guess there's really nothing stopping us from publishing in several adjacent fields which are entirely new to us now that we can learn the basics in two weeks with AI. After all, medical science is expected to advance very quickly - https://observer.com/2025/01/anthropic-dario-amodei-ai-advances-double-human-lifespans/
2/ what does AI change? not today's AI -- the AI that is coming, that reasons better than a crack phd student, writes clearly, has creative ideas (but maybe not a sense of good taste), can upload to the ArXiV, and, most importantly, can be manifested in as large a group of agents as you (or a manager-AI) can handle...
Wow, not just a lab staff of AI language models, but a manager too, to check their work and identify hallucinations. Like an ant farm of mathematicians. Lots to be optimistic about here.
Wow I'm more than disappointed. That's deliberately irritating writing.
Where did you get those quotes?
Matteo Capucci (he/him) said:
Where did you get those quotes?
nitter doesn't work with articles, so here's a direct link to that other fascist's website https://x.com/robertghrist/article/1883646365777236306
I guess he's looking to jump ship into tech?
By coincidence someone at our Lunar New Year party last night said that Ghrist was involved in some kind of "accelerationism" - I forget the adjective they stuck in front of that noun, but the idea was something like: the only way to save civilization is to accelerate the development of AI. This person seemed to be saying Ghrist had written a kind of accelerationist manifesto. Does anyone here know about that?
The article just mentioned doesn't quite seem to be that manifesto.
I think it was "effective accelerationism".
Oh, yeah, "effective accelerationism" is like a religion in Silicon Valley https://en.wikipedia.org/wiki/Effective_accelerationism
For context on this awful Internet nonsense, effective accelerationism is a movement named in parody of effective altruism that supposes we just need to go full speed ahead to let the AIs replace us with transhuman interdimensional beings or whatever. The leader of the movement, before being deanonymized, was known as Based Beff Jezos. It's rather grim to my mind that Ghrist is into this.
Thanks! I don't actually know that Ghrist is into effective accelerationism, but the person I'm alluding to said he was.
I'm not sure if that explains Ghrist calling presheaves 'sheaves', but I suppose doing so speeds things up a bit. :upside_down:
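(For context, a sketch of the standard distinction the joke turns on, stated for presheaves on a topological space; nothing here reflects Ghrist's own usage:)

```latex
% A presheaf on a space X is just a contravariant functor to Set:
\[
  F : \mathcal{O}(X)^{\mathrm{op}} \to \mathbf{Set}
\]
% A sheaf is a presheaf satisfying the gluing condition: for every
% open cover \{U_i\} of an open set U, the diagram
\[
  F(U) \to \prod_i F(U_i) \rightrightarrows \prod_{i,j} F(U_i \cap U_j)
\]
% is an equalizer, i.e. compatible local sections glue uniquely.
```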
"e/acc" is dangerous pseudo-mystical claptrap that doesn't see any importance for the survival of humanity or human values in the future:
The founders of the movement see it as rooted in Jeremy England's theory on the origin of life, which is focused on entropy and thermodynamics.[11] According to them, the universe aims to increase entropy, and life is a way of increasing it. By spreading life throughout the universe and making life use up ever increasing amounts of energy, the universe's purpose would thus be fulfilled.[11]
In other words, it means caring about life only insofar as life produces entropy. Then, as with libertarianism, you get "(im)moderate optimists" who convince themselves that following the core principle off a cliff won't actually lead to a cliff, and that whatever maximizes the production of entropy will be "sufficiently like humans" in some sense that satisfies them personally, thus having their cake and eating it too at the expense of intellectual honesty.
I find it pretty disturbing that people like that are allowed to be involved in the development of AI.
"e/acc" more like "eek" am I right
Well, the thing is, there's no entity with the affordance to "allow" or "not allow", and having a mindset like that is very positively correlated with wanting to work on AI…
That's true in the strict sense of legality, but there are people making decisions about whether to enable such activity, such as those who give funding to these people, right?
From what I've read of Ghrist, I'd guess that he aligns with e/acc on aesthetic grounds before ethical ones. To a certain extent a lot of this seems like a re-edition of Italian futurism: it stems from a pre-rational excitement about new technology and a wish to be a "person of the new century" rather than one of those left behind. As in the case of the futurists, who largely embraced fascism, this often goes hand in hand with an "adapt or die" ethos that's easy for reactionaries to exploit (they get to choose who exactly is "standing in the way of progress").
I mean, from what I see in his social media presence, Ghrist […] all of which make me think of futurist intellectuals, and also make me doubt that he would genuinely embrace some form of sci-fi utopianism (but not that he would claim, ironically or provocatively, to be on the side of those who do).
Mike Shulman said:
That's true in the strict sense of legality, but there are people making decisions about whether to enable such activity, such as those who give funding to these people, right?
Mmm, yes, but those people are still an extremely distributed entity, including every venture capitalist in the country, various managers at every major tech company, and every government research funding agency, not to mention analogous structures in Europe, China, etc... I don't think "allowed to work on AI" is a very meaningful frame as a unified property of a person. You could instead say "I think funders of AI should be discouraged from supporting such people working on AI", and at least that makes the problem clearer: e.g., convincing every tech billionaire and everyone with budgetary authority at Microsoft, Google, or Meta, independently, not to fund such people.
I like the analogy with futurism; everything obsessed with the new is old again, I suppose.
The US government tried to slow Chinese and Russian work on AI in various ways, including the CHIPS and Science Act:
Companies are subjected to a ten-year ban prohibiting them from producing chips more advanced than 28 nanometers in China and Russia if they are awarded subsidies under the law.
But, it seems the Chinese team behind DeepSeek figured out how to develop LLMs more efficiently:
Deepseek says it only needed 2,000 specialized chips from Nvidia to train its V3. This is in comparison to a reported 16,000 or more required to train leading models, according to The New York Times.
So, if you're going to try to stop someone from working on AI, you have to have a lot of power over them.
Amar Hadzihasanovic said:
To a certain extent a lot of this seems like a re-edition of Italian futurism: it stems from a pre-rational excitement about new technology and a wish to be a "person of the new century" rather than one of those left behind. As in the case of the futurists, who largely embraced fascism, this often goes hand in hand with an "adapt or die" ethos that's easy for reactionaries to exploit (they get to choose who exactly is "standing in the way of progress").
I love this comparison!
Amar Hadzihasanovic said:
and also make me doubt that he would genuinely embrace some form of sci-fi utopianism
judging his personal intent here is completely irrelevant. i don't think we should extend goodwill and fall into the plausible deniability trap. the far right understands this very well, and actively exploits the inability of "nicer" actors to take a decisive stance. see the more extreme example of elon's fascist salutes.
if anyone, by their actions, actively promotes effective accelerationism, we should take them at face value. even if they were doing it solely because they're fascinated with the aesthetics, they need to understand that what they're putting out there is a problem and be called out for it.
i keep seeing far right propaganda, both IRL and on social media, that uses aesthetics as the primary hook and avoids mentioning any political stances it might have. this definitely works very well, and lots of people end up being radicalized in this way.
I think it is both interesting and very relevant, with respect to political action, to be curious about people's motivations. The far right, for example, has historically succeeded by building heterogeneous coalitions whose actors often had incompatible motivations; to dismantle these coalitions, it may be necessary to target different actors with different strategies.
The example of Elon's salute is relevant: it is, I believe, part of a trollish right-wing culture which has made itself impermeable to being "called out" (winding up its political enemies is one of the things it craves), so that tactic is not going to work, unlike it would with, e.g., a more traditional far-right politician who has tried to style themselves as "respectable" and "reassuring".