You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
I have been spending a lot of time lately working with pen and paper, and with books.
I feel like sharing a little bit of what is a current work-in-progress.
Maybe someone might point out a few things to give me new ideas.
I have been considering a board game similar to Go.
The board is an $n \times n$ array of cells, where $n \in \mathbb{N}$.
There are 2 players.
They alternate taking turns.
A turn consists of placing a piece on the board.
Player 1 places black pieces, and player 2 places white pieces.
Define $[n]$ as the smallest set such that $1 \in [n]$ and, for $k < n$, $k \in [n] \implies k + 1 \in [n]$.
The set of positions on the board is $P = [n] \times [n]$.
For example, if $n = 2$, $P = \{(1,1), (1,2), (2,1), (2,2)\}$.
A cell is either empty, has a black piece, or a white piece. Let $C$ be $\{\mathrm{empty}, \mathrm{black}, \mathrm{white}\}$.
The board at a given moment is a function $b : P \to C$.
The set of all possible board configurations is $S = C^P$, equivalently the set of functions $P \to C$; there are $3^{n^2}$ of them. This is the state space of the board.
There is a relation $R \subseteq S \times S$, where $s \mathrel{R} s'$ iff there is a valid move at board state $s$ which results in board state $s'$.
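If it helps to experiment, here is a minimal sketch of this state space in Python. The encoding (0/1/2 for empty/black/white, row-major tuples standing in for functions $P \to C$) is my own, and only the "place on an empty cell" rule is modelled, not captures:

```python
from itertools import product

n = 2
EMPTY, BLACK, WHITE = 0, 1, 2  # a hypothetical encoding of C

# The full state space C^P has 3^(n^2) elements.
states = list(product((EMPTY, BLACK, WHITE), repeat=n * n))
assert len(states) == 3 ** (n * n)  # 81 for n = 2

def successors(state, colour):
    """States reachable by `colour` placing one piece on an empty cell.
    (Only the placement rule is modelled; captures are ignored here.)"""
    out = []
    for i, cell in enumerate(state):
        if cell == EMPTY:
            out.append(state[:i] + (colour,) + state[i + 1:])
    return out

# The empty 2x2 board has 4 successors for black.
empty = (EMPTY,) * (n * n)
print(len(successors(empty, BLACK)))  # 4
```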
A set $X$ has bijections $f : X \to X$, also known as permutations or automorphisms.
Some bijections are symmetries, and some are not.
For example, if $X = \{1, 2, 3\}$ with $1$ adjacent to $2$ and $2$ adjacent to $3$, the reversal $1 \mapsto 3$, $2 \mapsto 2$, $3 \mapsto 1$ is a symmetry.
One way we might define symmetry is via an adjacency relation. If $X = \{a, b, c\}$, there is no apparent order for the elements of $X$, and the same is true for $\{1, 2, 3\}$ regarded as a bare set. If we state $a \mathrel{A} b$ and $b \mathrel{A} c$, then $X$ can inherit an order (up to reversal) in which $x \mathrel{A} y$ iff $y$ immediately follows $x$ exclusive-or $x$ immediately follows $y$.
An adjacency relation is symmetric, irreflexive, and antitransitive.
After forming the product $P = [n] \times [n]$, we define an adjacency relation on $P$ to define the grid-structure of the board.
An automorphism $f$ on $P$ is a symmetry if it does not violate the adjacency relation, i.e. $p \mathrel{A} q$ iff $f(p) \mathrel{A} f(q)$.
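A sketch of the two adjacency relations, assuming the product rule "equal in one coordinate, adjacent in the other" (the usual 4-neighbour grid; the post leaves the product rule implicit):

```python
def adjacent_1d(x, y):
    """Adjacency on [n]: x A y iff y = x + 1 exclusive-or x = y + 1.
    This is symmetric, irreflexive, and antitransitive."""
    return (y == x + 1) != (x == y + 1)

def adjacent_grid(p, q):
    """Grid adjacency on [n] x [n]: equal in one coordinate and
    adjacent in the other (an assumption: the 4-neighbour grid)."""
    (x1, y1), (x2, y2) = p, q
    return (x1 == x2 and adjacent_1d(y1, y2)) or \
           (y1 == y2 and adjacent_1d(x1, x2))

print(adjacent_grid((1, 1), (1, 2)))  # True
print(adjacent_grid((1, 1), (2, 2)))  # False (diagonal cells are not adjacent)
```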
In other words, for any board state , there may be other board states which are actually the same board state, with the board simply rotated or reflected.
In our version, the game of Go has no "memory", in the sense that the current board state and whose turn it is are not affected by which previous board states led to the current one. Multiple gameplay paths can lead to the same board state.
What we want to consider is "true games", which are equivalence classes of board states under symmetries.
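One concrete way to compute these equivalence classes is to pick a canonical representative under the 8 symmetries of the square. A sketch, using my own encoding (row-major tuples, 0-based coordinates):

```python
def transforms(n):
    """The 8 symmetries of an n x n grid (rotations and reflections),
    each as a map on (row, col) with 0-based coordinates."""
    return [
        lambda r, c: (r, c),                  # identity
        lambda r, c: (c, n - 1 - r),          # rotate 90
        lambda r, c: (n - 1 - r, n - 1 - c),  # rotate 180
        lambda r, c: (n - 1 - c, r),          # rotate 270
        lambda r, c: (r, n - 1 - c),          # flip columns
        lambda r, c: (n - 1 - r, c),          # flip rows
        lambda r, c: (c, r),                  # transpose
        lambda r, c: (n - 1 - c, n - 1 - r),  # anti-transpose
    ]

def canonical(state, n):
    """The lexicographically least image of `state` under the 8
    symmetries; two states are the same "true" state iff their
    canonical forms agree.  States are row-major tuples."""
    images = []
    for t in transforms(n):
        img = [None] * (n * n)
        for r in range(n):
            for c in range(n):
                r2, c2 = t(r, c)
                img[r2 * n + c2] = state[r * n + c]
        images.append(tuple(img))
    return min(images)

# On a 2x2 board the 4 one-black-stone states are a single true state:
one_stone = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
print(len({canonical(s, 2) for s in one_stone}))  # 1
```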
In a game, player 1 has 4 initial moves, but only 1 "true move" (playing a corner). Player 2 then has 2 true moves: adjacent or opposite. After that, the board has no remaining symmetries. If player 2 played adjacent, player 1 has 2 true moves: capture or do not capture. If player 1 captures, player 1 wins; if not, player 2 has 1 valid move and the game is a tie. If player 2 played opposite, the game is a tie. The game is determined: player 1 has 1 true initial move, and then player 2 decides whether the game is a certain tie (playing opposite) or a gamble between tying and losing (playing adjacent). Assuming perfect knowledge, player 2 chooses to tie.
I have drawn out possible gameplay paths for a board up to 3 moves so far.
I will keep working on this. I want to define moves as a partial function $\mathrm{move} : M \times S \rightharpoonup S$.
In theory, the set of possible moves is $M = \{\mathrm{black}, \mathrm{white}\} \times P$ - place a black or white piece at a certain board position.
The function is partial because not every move is valid in every state.
We can augment the set of states with a value $\bot$, where if move $m$ is invalid at state $s$, $\mathrm{move}(m, s) = \bot$.
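A sketch of totalising the move function, with Python's `None` playing the role of the bottom value (only "the cell must be empty" is checked here; suicide and ko legality are out of scope):

```python
EMPTY, BLACK, WHITE = 0, 1, 2  # hypothetical encoding of cell values

def move(piece, pos, state, n):
    """Totalised move function: returns the new state, or None (the
    bottom value) when the move is invalid at this state."""
    r, c = pos
    i = r * n + c
    if state[i] != EMPTY:
        return None  # invalid move: the cell is occupied
    return state[:i] + (piece,) + state[i + 1:]

s0 = (EMPTY,) * 4                  # empty 2x2 board
s1 = move(BLACK, (0, 0), s0, 2)
print(s1)                          # (1, 0, 0, 0)
print(move(WHITE, (0, 0), s1, 2))  # None: the cell is already occupied
```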
(attached images: 820C863F-7B72-4709-8D01-48865807D206.jpg, 7D36B960-1A30-481E-BB4B-C67AAA657859.jpg)
It sounds like you might be interested in counting things in the presence of symmetries. You might enjoy experimenting with the "orbit-counting theorem" if you haven't tried it out before.
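As a concrete instance of that suggestion: the orbit-counting theorem says the number of "true" configurations is the average number of states fixed by each symmetry. A sketch for the 2x2 board, where I worked out the 8 symmetries by hand as permutations of the row-major cell indices (grid 0 1 / 2 3):

```python
from itertools import product

# The 8 symmetries of the 2x2 grid as index permutations p,
# where the transformed state s' satisfies s'[i] = s[p[i]].
perms = [
    (0, 1, 2, 3),  # identity
    (2, 0, 3, 1),  # rotate 90
    (3, 2, 1, 0),  # rotate 180
    (1, 3, 0, 2),  # rotate 270
    (1, 0, 3, 2),  # flip columns
    (2, 3, 0, 1),  # flip rows
    (0, 2, 1, 3),  # transpose
    (3, 1, 2, 0),  # anti-transpose
]

states = list(product(range(3), repeat=4))  # all 3^4 = 81 states

# Orbit-counting theorem: #orbits = (1/|G|) * sum over g of |Fix(g)|.
fixed = sum(1 for p in perms
              for s in states
              if all(s[p[i]] == s[i] for i in range(4)))
print(fixed // 8)  # 21 "true" 2x2 configurations
```

The per-symmetry fixed-point counts (81, 3, 9, 3, 9, 9, 27, 27) sum to 168, and 168 / 8 = 21.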
It might be interesting to investigate how board states in Go are modelled in simulated games.
In case anyone is unaware, it's worthwhile to mention a book by Berlekamp & Wolfe (Mathematical Go: Chilling Gets the Last Point), which develops and applies combinatorial game theory to Go endgames. This is the furthest such techniques have progressed, and it preceded the current approach to the game, whose success (AlphaGo and AlphaZero) heralded the ongoing AI revolution. The original AlphaGo used Monte Carlo methods and very cool probability / optimization theory via the multi-armed bandit problem; a decent intro for programmers I once used for a class was this book. The way they represent board states, iirc, is a 2d array for the actual game state, plus a hash table for constant-time lookup of previous states, which does not respect the symmetry.
Monte Carlo tree search with machine learning to shape the distribution seems like such an elegant way to solve these kinds of problems. On grounds of aesthetic preference and elegance, I have some hope that the future of machine learning does lie in a harmonious coupling of symbolic/structured methods with LLM/shoggoth methods, with the shoggoth guiding the symbolic search, rather than the LLM just point-blank answering the question by spitting out the next move. (Here I'm thinking of theorem proving as well, where there is a choice between ML-guided tree search through the space of proofs and just guessing the next tactic directly via an LLM.)
Thank you for that book recommendation, Eric.