Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: practice: communication

Topic: To AI or not to AI and related questions


view this post on Zulip Mike Shulman (May 26 2025 at 15:09):

On the pedagogical issues with AI I appreciated the essay We, Robots by Kate Epstein.

AI is not going to make human intelligence obsolete; on the contrary, it is going to make human intelligence more necessary than ever. But human beings might well let AI make them stupid enough to believe that their own intelligence is obsolete.

view this post on Zulip Eric M Downes (May 27 2025 at 09:14):

Mike Shulman said:

Eric M Downes said:

Understanding preceding technical details is a sign of mastery

It took me a while to parse that correctly; I think you mean "it is a sign of mastery to be able to understand something at a high level before getting into the technical details". Right?

I had to reflect on what I meant here as well. I read your statements as saying that

  1. your intuitive / high level understanding comes first, then
  2. the technical details get worked out, and finally
  3. your post-details understanding then emerges, being at least functorially related to the pre-details understanding.

I realize you did not actually say (3) though, and I realize that's what I think of as "mastery"; it implies a fast-converging sequence.

I also proceed from a high-level / intuitive understanding first. But, kind of like what James describes, I go through many, many iterations of big picture --> small picture before I finally have the details correct, if indeed I succeed. I make a lot of mistakes, though. Sometimes the process never converges.

So my experience of what constitutes "understanding" is more an active process of constantly applying error correction algorithms / processes to catch these inevitable mistakes. "Does the answer, obtained via composed details, match my intuition? Which one needs updating?" etc.

Where LLMs help, at least in how I use them for programming, is that they greatly increase the rate at which I can apply the high-level error-correction steps.

AI, whether LLMs or otherwise, is having a negative effect on craftspeople: those who are very skilled in the execution of a craft and do not want to disengage from it to work at a level that engages those skills less.

And I think they are having a positive impact on people who have (a) more ideas than (b) time*skill to implement them.

Understanding for the person engaging in the latter modality is supported by retaining the independent capacity for (c) critical evaluation of the implementation, and (d) honing core skills necessary for independent verification. (How to determine what constitutes core skills is a much larger discussion.)

Perhaps what Morgan is observing in the effects of AI on his students is at least partly that they have not previously been taught, or practiced, (c)? At least I've often found that lacking in common education.

view this post on Zulip Eric M Downes (May 27 2025 at 09:25):

Regarding @Josselin Poiret's and @John Baez's concerns about environmental impacts... you're very right to acknowledge this, and I too care about this. I must ask, though: do you use buildings and sidewalks built with concrete?

Likely, you use infrastructure with embodied carbon costs on a daily basis. The training of LLMs increases atmospheric pCO2 because the training uses energy, and that energy mix is dominated by fossil fuels, which are almost never burned with carbon capture-at-source technology applied (despite the latter being ready to deploy). @Peva Blanchard probably has more intelligent things to say here and might be able to help refine aspects of the lackluster analysis John noted above.

I am not at all calling you hypocrites, please understand; I am highlighting the difference between marginal carbon emissions and embodied carbon costs. It seems... odd to me to blame the energy demand, rather than the essential steps along the path linking demand to carbon emission.

I will also point out that if you already use a computer, you can use an already-trained, self-hosted version of DeepSeek without much increasing your marginal carbon emissions or aiding the major LLM-producing corporations in any way. Certainly don't if you have deep concerns re: understanding, as above! But IMO there are absolutely ways to engage with this technology without becoming a pawn of forces you despise.
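
For concreteness, here is a minimal sketch of what "already-trained, self-hosted" can look like in practice. It assumes a local model runner such as Ollama listening on its default port (11434) with a DeepSeek model already pulled; the model name and endpoint below are illustrative assumptions, not anything prescribed in the message above.

```python
# Minimal sketch, assuming Ollama is running locally on its default port and a
# DeepSeek model has already been pulled (e.g. `ollama pull deepseek-r1`).
# Everything stays on your own machine: the only marginal cost is local compute.
import requests

def ask_local_model(prompt: str, model: str = "deepseek-r1") -> str:
    """Send a single prompt to a locally hosted model and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain the difference between marginal and embodied carbon costs."))
```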

Quite generally, people resisting the abuse of power are constantly repurposing the tools of those they oppose, or else they have too few tools to work with.

view this post on Zulip John Baez (May 27 2025 at 09:45):

Eric M Downes said:

I must ask, though: do you use buildings and sidewalks built with concrete?

No, since 1) I don't think my avoiding buildings made of concrete would convince anyone to stop using them or building them, and 2) it would be extremely inconvenient in ways that would make me very unhappy.

As for language models, 1) I think that loudly announcing why I avoid them, especially on forums where a bunch of people listen to me, may have some effect, and 2) it's quite easy for me to avoid using them, and it makes me happier.

Further, carbon emissions are far from the only reason I dislike LLMs. (I explained that earlier.)

view this post on Zulip Josselin Poiret (May 27 2025 at 10:01):

Yes, I agree with @John Baez that LLMs should be easier to avoid than buildings. Also, I never said I am a proponent of rampant urbanization :) I do know that concrete (and construction in general) is a huge contributor to climate change, and if you asked me the same question about whether we should continue expanding cities, I would definitely have a nuanced response (you do still need to account for actual basic needs like housing and teaching, though, which LLMs do not fulfill).

view this post on Zulip Eric M Downes (May 27 2025 at 10:09):

Mike Shulman said:

On the pedagogical issues with AI I appreciated the essay We, Robots by Kate Epstein.

AI is not going to make human intelligence obsolete; on the contrary, it is going to make human intelligence more necessary than ever. But human beings might well let AI make them stupid enough to believe that their own intelligence is obsolete.

I think she makes many good points. And I respectfully disagree with one of her statements.

"""
[AI] can calculate and process superbly well, but it cannot reason, judge, or empathize at all.
"""

I think this is a category error? AI can definitely "act as if judging": it can seemingly choose which features to ignore and which to focus on when implementing test suites / QA for software. It can even suggest an implementation path for a proof or a software project. I might disagree with those assessments, but sometimes they are correct, and either way they demonstrate the capacity.

I think we should never make the mistake of confusing our subjective experience of a capability (reason / empathy / whatever) with the external observation of another system which may (or may not) be engaging that capability.

That is, pulled back to qualia, we should not confuse sympathy with empathy. This is commonly done to autistic people by those more neurotypical: "oh, they cannot empathize"... yet Temple Grandin (a quite successful autist) has displayed perhaps the greatest level of empathy I have seen documented: she was able to understand from a cow's perspective what the experience of going through a slaughterhouse was like, and what aspects made it stressful / not stressful, and then reduce those aspects. It's a macabre example, but I hope it illustrates the point.

For me, mathematics is very empathic, but it involves no humans on the other side at all... it is more like feeling how the mathematical concepts must experience things on their own terms. I am no great mathematician, so maybe others would regard this as a poor substitute for however they process, but I still suggest our common understanding of empathy is ill-posed and probably wrong.

I think an LLM, when reacting to a misunderstanding of a concept by its human interlocutor and adopting the terminology and values of the human in order to explain this misunderstanding, might well be engaging in empathy. Whether this is "actually" so requires solving problems Turing himself felt it better to leave aside, that is, figuring out exactly how our qualia (our subjective experience) map to external observations of consciousness / intelligence / empathy.

I do agree with her that we expose children to screens and technology at too young an age, generally, and I love her description of history as empathy. But I am not so sure that an AI is incapable of these things at all, though OpenAI et al. may be penalizing those responses to avoid self-involved loops.

None of this touches directly on whether AI is good or bad for math or human understanding, but I do think there is an important difference between the felt experience and the observation of another's felt experience. Mirror neurons, sympathy, theory of mind, etc. lessen the distance between those two things, and presumably were reinforced by evolution because they allowed us to cooperate, but we shouldn't make the error of presuming, for instance, that plants cannot feel pain just because they do not express it as we do. (They do react to stimuli! Just more slowly.)

Similarly, I wonder if there is as much distance here between AI and human regarding empathy and reason... perhaps the enemies of reason and empathy (trauma, chauvinism/contempt, unawareness of subtlety, and lack of care), understood categorically, will play a great role in whether AI is a tool for good or evil, as the uncategorified versions have in humans' own experiences and history. (Which is perhaps why I'm trying to get the people here not to dismiss this technology outright, because then surely the outcomes will be worse.)

view this post on Zulip Morgan Rogers (he/him) (May 27 2025 at 11:32):

I feel that you didn't put sufficient thought into your comparison between people with autism and AI systems. The notion that autism implies a lack of empathy is a harmful stereotype at best; pointing out that it is false does not strengthen your argument about AI systems which are de facto unrelated.

view this post on Zulip Mike Shulman (May 27 2025 at 15:04):

Eric M Downes said:

I think this is a category error? AI can definitely "act as if judging"

I think the distinction between "judging" and "acting as if judging" is exactly the point. A zombie cannot judge no matter how much it appears to be judging; neither can AI.

view this post on Zulip Eric M Downes (May 27 2025 at 15:51):

Morgan Rogers (he/him) said:

I feel that you didn't put sufficient thought into your comparison between people with autism and AI systems. The notion that autism implies a lack of empathy is a harmful stereotype at best; pointing out that it is false does not strengthen your argument about AI systems which are de facto unrelated.

I have to admit I don’t understand your argument. How is one to determine when something is “de facto” unrelated vs ?

It seems to me that people are making judgements about the subjective experience, or lack thereof, in others, based either on "how similar is it to me" or on backing out the values they have already decided. I am quite used to people doing this to other people (hence why I mentioned the prejudices / assumptions around autism), and I'm just observing that the same lack of framework which allows this is now being applied more broadly.

Now on some level that’s fine, but I think these issues are difficult and shouldn’t be glossed over.

view this post on Zulip Eric M Downes (May 27 2025 at 15:59):

Mike Shulman said:

Eric M Downes said:

I think this is a category error? AI can definitely "act as if judging"

I think the distinction between "judging" and "acting as if judging" is exactly the point. A zombie cannot judge no matter how much it appears to be judging; neither can AI.

You’re not wrong about zombies by definition. And yet zombies are (outside of cordyceps infection!) mostly a philosophical construction? LLMs have more reality.

The power, and one of the dangers, of this technology is that it is treading on ground where we don't actually know how to relate internal experience to external observations.

I’m not claiming Claude is conscious. I’m claiming I don’t know, and I don’t think anyone does, because the access we have to judgements of consciousness consists of rules of thumb that depend on similarity or on (to me) arbitrary definitions. Or at least those with which I am familiar are like this.

view this post on Zulip Eric M Downes (May 27 2025 at 16:15):

Put more glibly, maybe I’m just rehashing the “Does a dog have Buddha nature?” koan, which was not really addressable by solely rational means then, any more than it is now.

view this post on Zulip Morgan Rogers (he/him) (May 27 2025 at 16:17):

Eric M Downes said:

Morgan Rogers (he/him) said:

I feel that you didn't put sufficient thought into your comparison between people with autism and AI systems. The notion that autism implies a lack of empathy is a harmful stereotype at best; pointing out that it is false does not strengthen your argument about AI systems which are de facto unrelated.

I have to admit I don’t understand your argument. How is one to determine when something is “de facto” unrelated vs ?

I was pointing out that people with autism are, in particular, people, who are thus affected by the things that people choose to say about them.
LLMs are not people. Nothing under the umbrella of AI that exists today is a person.

view this post on Zulip Ryan Wisnesky (May 27 2025 at 16:38):

heh, except in a legal sense, with corporations being legally defined as "persons" in the US

view this post on Zulip Eric M Downes (May 27 2025 at 17:26):

Morgan Rogers (he/him) said:

I was pointing out that people with autism are, in particular, people, who are thus affected by the things that people choose to say about them.
LLMs are not people. Nothing under the umbrella of AI that exists today is a person.

This sounds like a judgemental (in)equality rather than a propositional one.

A chap named Wittgenstein Paraphrased has asked to convey a question: “How would the observable universe be any different if AI did get offended by the things we said about it?”

If you can answer that better than Turing has, you might write a paper on it!

view this post on Zulip Madeleine Birchfield (May 27 2025 at 19:16):

I personally think that LLMs are simply an unprofitable fad being funded by American and Chinese venture capitalists and that sooner or later, they will pull their funding from LLM companies and the AI market will crash like the dot com bubble. 50 years in the future people will be talking about creating AI in the same way that people today talk about humans landing on the moon - we used to be able to do it but our society isn't willing to fund it anymore.

view this post on Zulip Morgan Rogers (he/him) (May 27 2025 at 20:39):

To be perfectly clear: @Eric M Downes I was deliberately not addressing your actual argument. I was pushing back on your decision to drop harmful stereotypes about actual people. There are definitely people with autism reading this discussion who may not have appreciated you saying what you did, quite independently of whatever reaction a LLM would have.

view this post on Zulip Spencer Breiner (May 27 2025 at 23:45):

@Morgan Rogers (he/him) What is the problem here? It seems like you acknowledge the existence of a "harmful stereotype". If people are/can be bad at ascribing empathy to other people, isn't that germane to the discussion of whether we ascribe it to machines?

I'm not trying to make an object-level claim here, but I'm suspicious of attempts to conflate discussing an offensive thing with endorsing an offensive thing.

view this post on Zulip Eric M Downes (May 28 2025 at 05:18):

If anyone anywhere on the autism spectrum or within an adjacent neurodivergent space was offended by what I have said, I am sorry, and please speak up here or in DMs, I will listen. I care about you, I adamantly don’t believe you lack empathy or any other such odious thing, and I value you feeling welcome in this space.

All that being said, I am a little suspicious of arguments generally that one is fighting on behalf of other adults to protect them from even the mention of harmful prejudices, especially those they have no doubt themselves experienced… are the aggrieved present? Do they have agency to speak? If so, it seems a little close to treating them like children to claim the overriding need to protect them from speech which is not hate speech, nor a personal attack, etc. I don’t think this is Morgan’s intention, I believe he genuinely cares about people here and keeping this a good place to discuss, but it is something I have noticed erode the quality of other spaces.

If the concern is that there are children here, and indeed there may be, let’s discuss that directly. But adults deserve the respect of being able to speak for themselves and raise their own concerns. IMO this is especially true for adults whom society has at times decided to treat like children or disrespect the agency of.

view this post on Zulip Alex Kreitzberg (May 28 2025 at 17:11):

Morgan is a moderator, and is a very intelligent and reasonable person. When you or someone else is moderated, in my view you should spend ten times longer thinking about why and sleeping on it. If, still, you truly disagree, bring it up with them rather than argue about their moderation publicly.

I have a complicated relationship with the "Autism/Aspergers" labels. I suspect most people wouldn't confuse me with somebody who can't empathize. If I had read your argument when I was just a bit younger, I might have believed that meant I mustn't have autism, because I didn't exhibit the "symptom" you were taking for granted.

Specifically, the premise "if it looks like they don't have empathy, it doesn't mean they don't" isn't useful for understanding autistic people, who are obviously and very visibly empathetic in my experience.

And a person who is autistic, but doesn't know, might take the premise as a way to feel better about how they aren't, and thereby avoid getting the help they need.

(It's funny, rather than writing "I'm offended!" ((Communicating simple offense is of course important)) I'm working very very hard to communicate how this discordance with "the truth" makes life harder for folks who might be vulnerable to taking the premise too seriously/literally XD.)

view this post on Zulip Eric M Downes (May 29 2025 at 08:53):

Thank you for your explanation @Alex Kreitzberg. Certainly I never meant to take for granted your or anyone else’s experience or relationship to autism.

My experiences are in line with your own; presentations vary, and autism is a very diverse label, but autistic people are generally very empathic. I find the prejudice in question (even worse, it used to be a professional diagnostic assertion) to be frustrating, lazy, and odious, which is exactly why I brought it up. I will have to take greater care in discussing this in the future, as it’s clear that whatever I wrote has not reflected my own feelings about it.

view this post on Zulip Eric M Downes (May 29 2025 at 08:56):

I’m not sure where these conversations are supposed to take place if not publicly, though. If I merely DM’ed Morgan I’d have missed out on your perspective.

view this post on Zulip Eric M Downes (May 29 2025 at 09:29):

Also, I don’t know if I said “symptom”? If I did, it should also have had quotes around it. I certainly don’t like when people describe my immutable traits as symptoms. It’s dehumanizing.

FWIW, my argument was not “if it looks like they don’t have empathy it doesn’t mean they don’t”; it was, and is, as Spencer put it: many people, among them medical professionals, have been (sometimes catastrophically) bad at ascribing empathy, even to other people.

Why exactly this is, I don’t know… I expect it’s because they mistake empathy for sympathy. More broadly, there seems to be a requirement of similarity in presentation.

Regardless, I think this is a huge blind spot for a lot of people, and it concerns a gap between the external and internal view, possibly including the categorical concept of evil.

How to overcome this seems to involve communication, which seems to be some kind of process of mutually adjusting mappings between my concepts and your concepts until those maps are functorial, then natural, etc.
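
To unpack that metaphor with the standard definitions (nothing here beyond the usual textbook notions; the "concept categories" C and D below are informal stand-ins for whatever structure my concepts and yours carry):

```latex
% A translation F from my concepts \mathcal{C} to yours \mathcal{D} is
% functorial when it respects how concepts compose:
\[
  F(g \circ f) = F(g) \circ F(f), \qquad F(\mathrm{id}_X) = \mathrm{id}_{F(X)}.
\]
% Two such translations F, G : \mathcal{C} \to \mathcal{D} agree naturally when
% there are comparison maps \eta_X : F(X) \to G(X), one per concept X,
% compatible with every f : X \to Y:
\[
  \eta_Y \circ F(f) = G(f) \circ \eta_X .
\]
```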

view this post on Zulip Eric M Downes (May 29 2025 at 09:50):

Alex Kreitzberg said:

(It's funny, rather than writing "I'm offended!" ((Communicating simple offense is of course important)) I'm working very very hard to communicate how this discordance with "the truth" makes life harder for folks who might be vulnerable to taking the premise too seriously/literally XD.)

I think that’s very responsible though, and I appreciate it.

In particular I found this valuable.

And a person who is autistic, but doesn't know, might take the premise as a way to feel better about how they aren't, and thereby avoid getting the help they need.

It addresses why bringing up stereotypes with insufficient care is dangerous: they can contribute to an informal perception that “X is informally true and we’re just kind of accepting X because we are not explicitly commenting on it,” even when (as in this case) endorsement was very much not intended.

I don’t think I would have got that just from Morgan’s messages, and it’s important, so thank you.

view this post on Zulip Morgan Rogers (he/him) (May 29 2025 at 15:19):

I had better catch up a bit!

As @Alex Kreitzberg pointed out, I am a moderator: ideally, I would like to avoid a situation where people are hurt, by pointing out when someone says something that could be harmful. It shouldn't be the sole responsibility of any social group to stand up for themselves.

I didn't think you (@Eric M Downes ) had ill intent, and I appreciate your efforts to make it even clearer that you did not. Rather, it seemed to me you didn't think through the implications of what you were saying in the original message. Since I didn't spell those out (in spite of claiming to be "perfectly clear"), allow me to do so now.

A "harmful stereotype" is not called such merely because it risks direct insult to its target, but rather because it reinforces an association which typically has no inherent basis in reality. Even when disavowed, the statement of a harmful stereotype builds acceptance of the premise that there is a possibility of the association being justified; this is largely a psychological effect, since the statement in isolation doesn't explicitly imply this premise(*). The idea that an individual can serve as a representative of a group in either illustrating or countering a stereotype, as in your example of an individual with autism displaying empathy, is reasoning that rests on this acceptance of the premise and is consequently flawed. Unfortunately, this particular stereotype is prevalent enough that people with autism regularly feel under pressure to disprove it (the first reaction when I mentioned this conversation to a neurodivergent friend was to point me to a video by a creator with autism explaining how they express empathy...). The exchange above about how people with autism perceive themselves is another example of potential harm.

It is difficult to sensitively address and counteract these nuances when bringing up harmful stereotypes, especially in the context of a discussion whose focus is on something else. Let's consider that context: you (@Eric M Downes ) were discussing the subjective experiences of agents (term intended to include humans and AI) and in particular their capacity for empathy, a trait with a strong association with humanity. In this context, you were tracing the philosophical boundary between those who do/can experience empathy and those who do not. You evoked the stereotype that autistic people cannot empathise as an instance of there being some doubt cast on which side of the boundary some actual people might lie. It seems to me that the relevance of this stereotype to the discussion directly rests on the validity of this doubt. Since you don't seem to think that these doubts are valid, bringing up the stereotype therefore adds little to the discussion, at the expense of the side-effects I describe above.

Summarising your argument in broader terms, Spencer Breiner said:

It seems like you acknowledge the existence of a "harmful stereotype". If people are/can be bad at ascribing empathy to other people, isn't that germane to the discussion of whether we ascribe it to machines?

I think this is a good point in isolation, and I tend to believe that there are plenty of examples of social norms which mischaracterize the emotional and intellectual experiences of non-humans. However, it is again the context that troubled me: Eric was communicating a belief that there is evidence that machines are capable of empathy. In parallel, he presented an example of one individual displaying empathy as evidence against a stereotype. The notion that people with autism would need to provide evidence of their capacity for empathy is the very thing that is dehumanizing about the stereotype. If it were me, I would also be insulted to be compared so directly to machines, that kind of comparison also having dehumanizing implications.

Taken individually, the harm done by the mere mention of a stereotype is negligible, but harm is cumulative (cf. the more overt notion of microaggression).

(*) This is an effect which is often exploited by those skilled at rhetoric to give themselves plausible deniability. A similar effect is that of "just asking questions".