You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Scenario:
- We have a collection of elements called Nodes.
- We wish to represent “edges” between them.
- One way is to capture, for each edge, the information of which nodes it lies between.
- So far this is a multigraph, but undirected.
- We allow a collection of elements called Edges, and we can map each edge to the (unordered) pair of nodes it lies between.
- But we want to be able to associate nodes and edges too.
- We could create another collection, “Node-Edge Hyperedges”, where each hyperedge is mapped to a pair, an edge and a node, that it lies between.
- And another collection, “Edge-Edge Hyperedges”, where those hyperedges are mapped to the pair of edges they lie between.
- This is intractable: we keep having to define yet another collection to get hyperedges between elements of any of the previous collections (like hyperedges between Edge-Edge Hyperedges and Nodes).
Is there some principle or pattern here that makes it evident “what the problem is”?
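One way to make the pattern explicit (a sketch, with names of my own choosing): the whole tower of collections collapses into a single recursive type, where an element is either a node or an edge between two previously-formed elements.

```python
from dataclasses import dataclass
from typing import Union

# A sketch: one recursive type replaces the infinite tower of collections.
# An element is either a plain node, or an edge between any two
# previously-formed elements (node-node, node-edge, edge-edge, ...).

@dataclass(frozen=True)
class Node:
    label: str

@dataclass(frozen=True)
class Edge:
    left: "Union[Node, Edge]"
    right: "Union[Node, Edge]"

a, b = Node("a"), Node("b")
e = Edge(a, b)    # an ordinary edge between two nodes
h = Edge(e, a)    # a "Node-Edge Hyperedge": no new collection needed
g = Edge(h, e)    # hyperedges between hyperedges, to any depth
```

Every one of the separate collections above is just a subset of this one type, picked out by the shape of an element.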
Is the issue that we actually are dealing with recursion? Let’s say at stage 1, we have a collection of elements $X_1$. We can form edges between any of them. Then, we take the union of the edges we formed and the collection $X_1$. Call that $X_2$. Now, at stage 2, we can repeat the process of forming edges on elements of $X_2$.
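The stage-by-stage process can be sketched like this (a toy illustration, modelling an unordered “edge” as a frozenset pair of existing elements):

```python
from itertools import combinations

def next_stage(X):
    """Return X together with all unordered pairs of elements of X,
    i.e. every possible new 'edge' between existing elements."""
    return X | {frozenset(p) for p in combinations(X, 2)}

X1 = {"a", "b"}          # stage 1: just the nodes
X2 = next_stage(X1)      # nodes plus the one edge {a, b}
X3 = next_stage(X2)      # now also edges touching that edge, and so on
```

Each stage contains the previous one, so the “total set” is the union of all stages.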
It’s kind of similar to the von Neumann cumulative hierarchy.
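For comparison, the finite levels of the cumulative hierarchy, $V_0 = \emptyset$ and $V_{n+1} = \mathcal{P}(V_n)$, can be computed by the same kind of iteration (a small illustrative sketch):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    return {frozenset(c) for r in range(len(s) + 1)
                         for c in combinations(s, r)}

V = frozenset()            # V_0 = the empty set
levels = [V]
for _ in range(3):         # compute V_1, V_2, V_3
    V = powerset(V)
    levels.append(V)
```

The analogy: in both cases a single operation (forming pairs, or forming all subsets) is iterated, and the structure of interest lives in the union of the stages.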
It seems like there are some recursive constructions where we can characterize the “total set” they generate without recursion.
How do we know whether a set, equipped with a recursive function that generates new elements in stages, generates a total set of elements that can be defined without recursion?
Would this help us find a non-recursive definition of hypergraphs?
Thank you very much.
Someone told me we can use a type constructor to define a hierarchy of types inductively, and then we identify the type containing them all as the transfinite limit of them.
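One way to read that suggestion (a sketch; the constructor $F$ and its naming are my own): take $F(X) = \mathrm{Nodes} + X \times X$ and iterate it starting from the empty type. The type “containing them all” is the union of the stages, i.e. the least fixed point of $F$.

```python
def F(nodes, X):
    """One application of the constructor F(X) = Nodes + X x X:
    an element is a node, or an ordered pair of prior elements."""
    return ({("node", n) for n in nodes}
            | {("pair", a, b) for a in X for b in X})

nodes = {"a"}
X = set()                  # stage 0: the empty type
for _ in range(3):         # stages 1, 2, 3
    X = F(nodes, X)
```

Since $F$ is monotone, the stages are nested, and in the transfinite limit one reaches a set with $F(X) = X$, which is exactly the non-recursive characterization: a set closed under the constructor.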
But I’m also wondering how this relates to $L$ and the axiom of constructibility.
I feel like the von Neumann hierarchy is a collection of sets defined inductively by a single, simple formula ($V_{\alpha+1} = \mathcal{P}(V_\alpha)$); but I think the point is that if $V = L$, we can define the same collection of sets without induction; we have formulae, the axioms of a set theory, which determine the same model of sets. Is that right?
My idea of a certain hypergraph has a similar inductive formula. At stage 0, there’s a collection of nodes, $X_0$. At stage 1, we form (unordered) pairs of elements from $X_0$, introduce a second type for edges, and map edges to pairs, like $e \mapsto \{a, b\}$. Then we take the union of those elements with $X_0$, and this becomes $X_1$.
This definition does not seem good. There are a lot of topics here that are new to me; maybe someone can help me.
I think there must be some paradigm capturing the dichotomy between defining a set by building it up in stages, versus assuming the total set of elements that would be generated already exists and defining its structure in terms of some endomorphisms.
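As a concrete sketch of the second horn of that dichotomy (all names hypothetical): instead of building elements in stages, posit one set $H$ of hyperedges up front, together with a boundary map sending each hyperedge to a pair of endpoints drawn from $\mathrm{Nodes} \sqcup H$. No recursion appears in the definition, only a closure condition.

```python
# The "assume the total set exists" view: a set of nodes, a set of
# hyperedge names, and a boundary map into (nodes + hyperedges).
nodes = {"a", "b"}
boundary = {
    "e1": ("a", "b"),     # an ordinary edge between two nodes
    "h1": ("e1", "a"),    # a hyperedge between an edge and a node
    "h2": ("h1", "e1"),   # a hyperedge between two previous (hyper)edges
}

# Well-formedness: every endpoint is a node or a named hyperedge.
universe = nodes | boundary.keys()
assert all(x in universe and y in universe
           for x, y in boundary.values())
```

The staged picture is then recovered, if wanted, by assigning each hyperedge a rank: the depth of nesting needed before both of its endpoints exist.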