You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.
Hello all! This is the official channel for discussion about Bob's talk.
Title: Quantum Natural Language Processing
Zoom Meeting:
https://mit.zoom.us/j/280120646
Meeting ID: 280 120 646
YouTube live stream:
https://youtu.be/YV6zcQCiRjo
Hello. In 5 minutes we start!
starting now!
I feel like "DisCo(Cat)" arose from someone wanting "disco cat" to be a formal math term.
I'm kind of curious about things like movement in a grammar like this
@Rongmin Lu so, basically just the 2-dimensional version of "disco ball"?
Here is the repo, as shared by Alexis: https://github.com/oxford-quantum-group/discopy
You can play with it if you want!
I'd like some nice machinery for translating traditional semantic denotations of words into this framework. I can see it for easy functions like transitive verbs, but I don't see how you'd process something like "the" in this framework.
Logically speaking, pregroup grammars (that's what these things are called) have the same "expressive power" as context-free grammars: there are translations back and forth. Lambek, Anne Preller, and probably some other people I don't remember wrote papers on the grammar of various real-world languages using pregroups
Some of the grammatical types can get pretty complicated, but, as is often the case with formal grammar, it gets easier with practice
Doing it for "fiddly" words like "the" will depend a lot on the grammatical specifics of the language, so English in this case. My guess (and it's just a guess; maybe Lambek did something totally different) is that you'd have a primitive type for "undetermined noun phrase" and another for "determined noun phrase", and "the" takes the former on the right and outputs the latter
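Here's a minimal sketch of how that guess would play out, as a toy Python pregroup reducer (the primitive types `u` for "undetermined noun phrase" and `d` for "determined noun phrase" are just my hypothetical names from above, not Lambek's):

```python
# Toy pregroup reduction: repeatedly cancel adjacent pairs x^l x -> 1 and x x^r -> 1.
def reduces(types):
    """Cancel adjacent adjoint pairs until no reduction applies."""
    types = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(types) - 1):
            a, b = types[i], types[i + 1]
            if a == b + '^l' or b == a + '^r':
                del types[i:i + 2]  # x^l x or x x^r cancels to the empty type
                changed = True
                break
    return types

# "the cat": the : d u^l, cat : u; the adjacent pair u^l u cancels.
print(reduces(['d', 'u^l', 'u']))  # -> ['d'], a determined noun phrase
```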
So, if I give you a non-intersective adjective like "favorite", would that just be a state with 2 lines coming in, and the agent gets hooked up to the person it needs to be hooked up to somehow?
Explaining neural nets is a big problem, both in theory and in applications. If NNs can be understood as a lower-level approximation sitting below the quantum circuits, that could be quite helpful in settings where it's important for humans to understand/interpret the model.
The ultimate NLP problem is: "one is a baby receiving example utterances paired with vision, and one wants to infer syntax and semantics" --- which parts of this does QNLP help with, and which are left for future work?
Repeating my question from the other chat. Is there a way to translate these circuits to classical neural nets?
The language seems more expressive and coherent. It would be a nice "higher level language" if it sat atop NNs.
@Tim Sears I think the converse question is interesting, too: can we interpret NNs as approximating a lower-level QNLP substrate?
@Tim Sears via the formalism of "Tensor Networks" (just Penrose string diagrams, where the connotation is that one thinks about them in a certain computational way), I think this can be modeled neurally in a straightforward fashion. If one is on a classical computer, though, the tensors can become unwieldy to represent. In deep learning, one often approximates huge tensors by a low-rank approximation. Perhaps this is useful both as an optimization and also as an Occam prior on the nature of language?
as long as the syntax tree of the sentence under consideration is given. This is a limitation shared by the specific QNLP approach Bob presented, I think. With LSTMs, though, as long as one commits to a certain dimensionality shared by all types, it is straightforward to emulate the QNLP networks whenever one's syntax trees are generated by a regular expression (so imagine CFG-style pushdown automata with bounded but potentially large stacks).
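To make the low-rank remark above concrete, here's a small NumPy sketch (my own illustration, not code from the talk): matricize a toy order-3 "word tensor" and truncate its SVD, which gives the best rank-k approximation in Frobenius norm (Eckart-Young):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((16, 16, 16))     # toy order-3 word tensor (e.g. a transitive verb)
M = T.reshape(16, 16 * 16)                # matricize: fuse two of the three legs

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 4                                     # rank of the approximation
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]  # best rank-k approximation of M

print(f"relative error at rank {k}:",
      np.linalg.norm(M - M_k) / np.linalg.norm(M))
```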
Question re: "Meaning in Use": can you comment on meaning as dynamic significance (or influence) in some dynamic context of evolving, shared discourse + belief + action? (This was addressed nicely.)
Tim Sears said:
Repeating my question from the other chat. Is there a way to translate these circuits to classical neural nets?
The language seems more expressive and coherent. It would be a nice "higher level language" if it sat atop NNs.
I think @Martha Lewis worked on this
Paolo Perrone said:
Here is the repo, as shared by Alexis: https://github.com/oxford-quantum-group/discopy
You can play with it if you want!
Woo, I am very happy to see the @ symbol is used for tensor product. (Using it for matrix product is an abomination, imo.)
I'm not so sure about the ">>" operator... as long as we never need to use "+" in these expressions it should be ok (>> has lower precedence than +).
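To illustrate the precedence point, here's a toy `Diagram` class (hypothetical, just overloading the operators; not DisCoPy's actual class):

```python
class Diagram:
    """Toy diagrams that just record how they were combined."""
    def __init__(self, name):
        self.name = name
    def __matmul__(self, other):   # f @ g : tensor product
        return Diagram(f"({self.name} ⊗ {other.name})")
    def __rshift__(self, other):   # f >> g : composition, f then g
        return Diagram(f"({self.name} ; {other.name})")
    def __add__(self, other):      # f + g : formal sum
        return Diagram(f"({self.name} + {other.name})")
    def __repr__(self):
        return self.name

f, g, h = Diagram("f"), Diagram("g"), Diagram("h")
print(f @ g >> h)   # ((f ⊗ g) ; h) -- '@' binds tighter than '>>'
print(f >> g + h)   # (f ; (g + h)) -- '+' also binds tighter than '>>'!
```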
This is my own attempt at "verifying" that the zig-zag equation holds: https://github.com/punkdit/bruhat/blob/master/bruhat/vec.py (see line 518).
I just use "*" for vertical composition.
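In the same spirit, here's a library-free NumPy check of the zig-zag equation (a toy sketch of mine, not the code in vec.py): take cup = Σ_i e_i ⊗ e_i as the unit and its transpose as the counit, and check that the snake composite is the identity.

```python
import numpy as np

d = 3
I = np.eye(d)
cup = np.eye(d).reshape(d * d, 1)  # unit C -> V ⊗ V: sum_i e_i ⊗ e_i, as a column
cap = cup.T                        # counit V ⊗ V -> C: its transpose

# zig-zag: (1_V ⊗ cap) ∘ (cup ⊗ 1_V) = 1_V
snake = np.kron(I, cap) @ np.kron(cup, I)
assert np.allclose(snake, I)
print("zig-zag equation verified for d =", d)
```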
Here's the video for all those who missed the talk!
https://youtu.be/mL-hWbwVphk
@Tim Sears @Sam Tenka FYI, there has been some work on finding a correspondence between certain NNs and tensor networks, e.g. see the work of Levine et al.: https://arxiv.org/pdf/1803.09780.pdf and https://arxiv.org/pdf/1704.01552.pdf
Rongmin Lu said:
Tai-Danae Bradley said:
Tim Sears Sam Tenka FYI, there has been some work on finding a correspondence between certain NNs and tensor networks, e.g. see the work of Levine et al.: https://arxiv.org/pdf/1803.09780.pdf and https://arxiv.org/pdf/1704.01552.pdf
Oh wow, thank you! These do appear to confirm the intuition that some of us had.
In a way, Penrose might have been morally correct to say that "consciousness is quantum"; it seems we're now slowly figuring out the technically correct way to say the same thing.
FWIW here are the published versions:
- 1704.01552: https://openreview.net/forum?id=SywXXwJAb (ICLR 2018 conference with open peer review)
- 1803.09780: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.065301
This is interesting: I was not aware of journals that publicly display reviews like this!