I meant that for 3 independent binary decisions, there are 2^3 = 8 outcomes. Call the set of outcomes the sample space. Elements of the sample space are strings like 010, for example.
I then asked about the permutations of the sample space. This is where I got the number 8! = 40320 from.
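For concreteness, these counts are easy to check (a minimal Python sketch, using nothing beyond the standard library):

```python
import math
from itertools import product

outcomes = list(product([0, 1], repeat=3))   # the 3 independent binary decisions
print(len(outcomes))                         # 2**3 = 8 outcomes
print(math.factorial(len(outcomes)))         # 8! = 40320 ways to order them
```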
One way to list the elements of the sample space is viewing the choices as a tree. Let the initial state be the root. Assume 0 is written left of 1. At a given node, choose the leftmost option that hasn’t been visited yet. You will make these choices, in this order:
000, 001, 010, 011, 100, 101, 110, 111.
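Here is a minimal Python sketch of that tree walk (the function name and the recursive presentation are mine, just for illustration):

```python
def enumerate_choices(depth=3, options=(0, 1)):
    """Depth-first walk of the decision tree: at every node take the
    leftmost unvisited option first (0 is written left of 1)."""
    if depth == 0:
        yield ()
        return
    for option in options:                       # leftmost option first
        for rest in enumerate_choices(depth - 1, options):
            yield (option,) + rest

for leaf in enumerate_choices():
    print("".join(map(str, leaf)))               # 000, 001, 010, ..., 111
```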
I wonder if a way to understand which permutations are rule-based rather than random is by interpreting the binary strings as base-2 integers. The above sequence becomes 0, 1, 2, 3, 4, 5, 6, 7.
If instead we started at some number and added a fixed odd step mod 8 each time, we would again get each of the eight integers exactly once, just in a different order. Translated into binary, this gives another listing of the sample space.
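A small sketch of both readings; the start and step below are illustrative values of my own (any odd step visits all eight elements):

```python
N = 8  # size of the sample space, 2**3

def to_bits(n, width=3):
    """Write n as a binary string of the given width."""
    return format(n, f"0{width}b")

# The tree-walk order, read back from the base-2 integers 0, 1, ..., 7.
lexicographic = [to_bits(n) for n in range(N)]

# Starting anywhere and repeatedly adding a fixed odd step mod 8
# also visits every element exactly once.
start, step = 0, 3
walk = [to_bits((start + k * step) % N) for k in range(N)]

print(lexicographic)   # ['000', '001', '010', '011', '100', '101', '110', '111']
print(walk)            # ['000', '011', '110', '001', '100', '111', '010', '101']
```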
So we can say that such a sequence isn’t random. When we translate it back into “3 independent binary choices”, it is harder to see what rule tells us how to make choice n+1 if we are given choice n. But it turns out to be possible to do so.
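To illustrate that last point, here is one such rule stated purely on the bits; the step k = 3 and the function name are my own illustrative choices. It realises “add k mod 8” as ripple-carry addition on the bits, so the next choice is determined by the previous one without ever mentioning the integers:

```python
def step_bits(bits, k=3):
    """Advance one outcome of the 3 binary choices to the next one under the
    'add k mod 8' enumeration, working only with the bits themselves."""
    a = [int(c) for c in bits]              # e.g. '011' -> [0, 1, 1]
    b = [int(c) for c in format(k, "03b")]  # the step, also as 3 bits
    out, carry = [0, 0, 0], 0
    for i in reversed(range(3)):            # least significant bit on the right
        s = a[i] + b[i] + carry
        out[i], carry = s % 2, s // 2
    # any carry out of the leftmost bit is dropped: that is the mod 8
    return "".join(map(str, out))

s = "000"
for _ in range(8):
    print(s)
    s = step_bits(s)    # 000, 011, 110, 001, 100, 111, 010, 101, then it repeats
```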
I’m basically investigating: even when we know that a set is enumerable because we found one way to enumerate it, how many other ways are there to enumerate it?
And some aspects I am thinking about are:
Every enumeration can be written as a function from [8] to the sample space. These are the trivial rules, which list the order of the elements directly.
Some of the enumerations can be written as inductive rules, for example “add a fixed number mod 8” as above (see the sketch after this list).
There are some rules which require memory, in that we have to be able to check if a sequence has been produced before.
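Here is a small sketch contrasting these three kinds of descriptions on the same 8-element sample space; the concrete table and the “+3 mod 8” rule are illustrative choices of mine:

```python
import random

# (1) A trivial rule: a function from [8] that simply lists the elements in order.
table = ['000', '011', '110', '001', '100', '111', '010', '101']
def nth_element(i):
    return table[i]

# (2) An inductive rule: each element is computed from the previous one
#     (here: add 3 mod 8 and read the result back in binary).
def next_element(prev):
    return format((int(prev, 2) + 3) % 8, "03b")

# (3) A rule that needs memory: "emit any element not produced before";
#     it only makes sense if we can check what has already appeared.
def fresh_element(seen):
    remaining = [format(n, "03b") for n in range(8) if format(n, "03b") not in seen]
    return random.choice(remaining)

# The inductive rule reproduces the table:
xs, x = [], '000'
for _ in range(8):
    xs.append(x)
    x = next_element(x)
print(xs == table)   # True
```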
The subtlety in this is that the complexity of a description is heavily dependent on which operations/rules are allowed and which are considered basic. For example, you have to exploit some structure on the set to get any rule at all (otherwise no ordering is inherently more natural than any other). Here you're using orderings on 2 and 3 plus the lexicographic ordering on one hand, and a total ordering on [8] plus the translation to base 2 on the other. Presenting the problem as a bijection, as you do, you can compose with a bijection on the domain or on the codomain, whose complexity must then be measured as well. If you treat [8] as the ring of integers mod 8, you might allow addition as a basic operation on the domain. Or you might take permutations of 2 and 3 as basic on the codomain side. Or you might measure the complexity of permutations ignoring all that structure, taking transpositions as the basic ones... there is no a priori way to compare these different ways of constructing orderings from one another.
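To illustrate that dependence on the basic operations, here is a small sketch (the “+3” permutation and all names are my own illustrative choices): the same permutation of [8] that is a single addition when addition mod 8 is basic costs seven transpositions when transpositions are the basic moves.

```python
def cycles(perm):
    """Cycle decomposition of a permutation of range(len(perm)),
    given as a list where perm[i] is the image of i."""
    seen, result = set(), []
    for i in range(len(perm)):
        if i in seen:
            continue
        cycle, j = [], i
        while j not in seen:
            seen.add(j)
            cycle.append(j)
            j = perm[j]
        if len(cycle) > 1:
            result.append(cycle)
    return result

def transpositions(perm):
    """One way to write the permutation as a product of transpositions
    (rightmost factor applied first)."""
    return [(c[0], c[k]) for c in cycles(perm) for k in range(len(c) - 1, 0, -1)]

def apply(ts, i):
    """Apply a list of transpositions to a point, rightmost first."""
    for a, b in reversed(ts):
        i = b if i == a else a if i == b else i
    return i

add3 = [(i + 3) % 8 for i in range(8)]      # one addition, if addition mod 8 is basic
print(cycles(add3))                          # a single 8-cycle
print(len(transpositions(add3)))             # 7 transpositions, if those are basic
assert all(apply(transpositions(add3), i) == add3[i] for i in range(8))
```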