My friend William Waites, a computer scientist / software engineer / epidemiologist at the University of Southampton, has developed a simple category-based language for constructing "agent networks": networks of LLMs, for example, that can be played off against each other to achieve more robust results.
The follow-on post is out now, in which we explain something about agent self-knowledge and memory management, and show how to do session types.
Reading this:
The agent constructed a plausible, confident, and completely wrong explanation: the summary was "provided to me by the system at the start of this conversation" as a "briefing or recap." When pressed, it doubled down...
It's so very like the left-hemisphere style of thinking as described by Iain McGilchrist in The Master and His Emissary. I was reading a nice example of this yesterday. Someone with a right-hemisphere stroke may have paralysis of the left arm. The functioning left hemisphere then goes through a repertoire of denial, followed by a confabulated explanation of why the arm can't move.
Curiously, if you supply a potential external cause, e.g., by pretending to induce the paralysis with an injection, then the patient accepts this account.
The way to deal with this is not to tell the language model to be more considerate. The way to deal with it is to make sure the agent has enough information to give good advice, and that the information does not get lost.
Translated to the brain, for McGilchrist, this would derive from access, via the right hemisphere, to a fuller flow of information from the body.
I suppose it wouldn't be so surprising if there were something for AI to learn from the solution evolution found for the human brain.
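On the agent side, one concrete way to make sure the information doesn't get lost is to keep explicit provenance attached to every piece of context the agent is given, so a question like "where did this summary come from?" can be answered from the record instead of confabulated. Here's a minimal sketch in Python; all the names are hypothetical illustrations, not taken from Waites's language:

```python
from dataclasses import dataclass, field

@dataclass
class ContextItem:
    text: str
    source: str  # where this piece of context actually came from

@dataclass
class AgentMemory:
    items: list[ContextItem] = field(default_factory=list)

    def add(self, text: str, source: str) -> None:
        self.items.append(ContextItem(text, source))

    def render(self) -> str:
        # Keep the provenance attached to the content in the prompt itself,
        # so the model never has to guess where a summary came from.
        return "\n\n".join(f"[source: {item.source}]\n{item.text}"
                           for item in self.items)

memory = AgentMemory()
memory.add("User prefers concise answers; previous session covered session types.",
           source="summary written by the summarizer agent at the end of the last session")
memory.add("How do I resume where we left off?",
           source="user message, current session")
print(memory.render())
```

This mirrors the injection example above: given an explicit external account of where the context came from, the model has no gap left to fill with a plausible story.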