Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.


Stream: event: MIT Categories Seminar

Topic: May 7 - Bob Coecke's talk


view this post on Zulip Paolo Perrone (May 05 2020 at 17:09):

Hello all! This is the official channel for discussion about Bob's talk.
Title: Quantum Natural Language Processing

Zoom Meeting:
https://mit.zoom.us/j/280120646
Meeting ID: 280 120 646

Youtube live stream:
https://youtu.be/YV6zcQCiRjo

view this post on Zulip Paolo Perrone (May 07 2020 at 15:56):

Hello. In 5 minutes we start!

view this post on Zulip Paolo Perrone (May 07 2020 at 16:00):

starting now!

view this post on Zulip Brian Pinsky (May 07 2020 at 16:23):

I feel like "DisCo(Cat)" arose from someone wanting "disco cat" to be a formal math term.

view this post on Zulip Brian Pinsky (May 07 2020 at 16:32):

I'm kind of curious about things like movement in a grammar like this

view this post on Zulip Brian Pinsky (May 07 2020 at 16:38):

@Rongmin Lu So, basically just the 2-dimensional version of "disco ball"?

view this post on Zulip Paolo Perrone (May 07 2020 at 17:01):

Here is the repo, as shared by Alexis: https://github.com/oxford-quantum-group/discopy
You can play with it if you want!
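For anyone who wants to try it, here's a minimal sketch in the style of the repo's README at the time (the exact import paths are an assumption and may differ between discopy versions):

```python
# "Alice loves Bob" as a DisCoCat pregroup diagram (sketch; check the
# discopy README for the exact API of your version).
from discopy import Ty, Word, Cup, Id

s, n = Ty('s'), Ty('n')                  # sentence and noun types
Alice = Word('Alice', n)
loves = Word('loves', n.r @ s @ n.l)     # transitive verb type
Bob = Word('Bob', n)

# Cups contract n with n.r on the left and n.l with n on the right,
# leaving the sentence type s.
sentence = Alice @ loves @ Bob >> Cup(n, n.r) @ Id(s) @ Cup(n.l, n)
print(sentence.cod)                      # Ty('s')
```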

view this post on Zulip Brian Pinsky (May 07 2020 at 17:02):

I'd like nice machinery for translating traditional semantic denotations of words into this framework. I can see it for easy cases like transitive verbs ($e \to (e \to t)$), but I don't see how you'd process something like "the" in this framework.

view this post on Zulip Jules Hedges (May 07 2020 at 17:04):

Logically speaking, pregroup grammars (that's what these things are called) have the same "expressive power" as context-free grammars; there are translations back and forth. Lambek, Anne Preller, and probably some other people I don't remember wrote papers on the grammar of various real-world languages using pregroups
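For concreteness, the standard toy derivation (a sketch of the usual example from the pregroup literature, not from the talk): with noun type $n$ and sentence type $s$, a transitive verb gets type $n^r s n^l$, and "Alice loves Bob" type-checks via the contractions $n \cdot n^r \le 1$ and $n^l \cdot n \le 1$:

$$n \cdot (n^r s n^l) \cdot n \;\le\; 1 \cdot s \cdot 1 \;=\; s.$$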

view this post on Zulip Jules Hedges (May 07 2020 at 17:07):

Some of the grammatical types can get pretty complicated, but, as is often the case with formal grammar, it gets easier with practice

view this post on Zulip Jules Hedges (May 07 2020 at 17:09):

Doing it for "fiddly" words like "the" will depend a lot on the grammatical specifics of the language, so English in this case. My guess (and it's just a guess; maybe Lambek did something totally different) is that you'd have a primitive type for "undetermined noun phrase" and another for "determined noun phrase", and "the" takes the former on the right and outputs the latter
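Spelling that guess out in pregroup notation (hypothetical types, purely for illustration): write $\bar{n}$ for an undetermined noun phrase and $n$ for a determined one, give "the" the type $n \bar{n}^l$, and then "the cat" reduces as

$$(n \bar{n}^l) \cdot \bar{n} \;\le\; n \cdot 1 \;=\; n.$$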

view this post on Zulip Brian Pinsky (May 07 2020 at 17:15):

So, if I give you a non-intersective adjective like "favorite", would that just be a state with 2 lines coming in, with the agent getting hooked up to the person it needs to be hooked up to somehow?

view this post on Zulip Tim Sears (May 07 2020 at 17:19):

Explaining neural nets is a big problem in theory and in applications. If NNs can be turned into a lower level approximation below the quantum circuits, it could be quite helpful in settings where it's important for humans to understand/interpret the model.

view this post on Zulip Sam Tenka (May 07 2020 at 17:22):

The ultimate NLP problem is: "one is a baby receiving example utterances paired with vision, and one wants to infer syntax and semantics" --- which parts of this does QNLP help with, and which are left for future work?

view this post on Zulip Tim Sears (May 07 2020 at 17:24):

Repeating my question from the other chat. Is there a way to translate these circuits to classical neural nets?
The language seems more expressive and coherent. It would be a nice "higher level language" if it sat atop NNs.

view this post on Zulip Sam Tenka (May 07 2020 at 17:25):

@Tim Sears I think the converse question is interesting, too: can we interpret NN's as approximating a lower level QNLP substrate?

view this post on Zulip Sam Tenka (May 07 2020 at 17:27):

@Tim Sears Via the formalism of "Tensor Networks" (just Penrose string diagrams, with the connotation that one thinks about them in a certain computational way), I think this can be modeled neurally in a straightforward fashion$^\star$. If one is on a classical computer, though, the tensors can become unwieldy to represent. In deep learning, one often approximates huge tensors by a low-rank approximation. Perhaps this is useful both as an optimization and also as an Occam prior on the nature of language?

$^\star$ as long as the syntax tree of the sentence under consideration is given. This is a limitation shared by the specific QNLP approach Bob presented, I think. With LSTMs, though, as long as one commits to a certain dimensionality shared by all types, it is straightforward to emulate the QNLP networks, provided one's syntax trees are generated by a regular expression (so imagine CFG-style pushdown automata with bounded but potentially large stacks).
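To make the low-rank remark concrete, here is a small numpy sketch of the standard truncated-SVD (Eckart-Young) approximation; the rank r is the knob that plays the role of the Occam prior:

```python
import numpy as np

def low_rank(A, r):
    """Best rank-r approximation of A in Frobenius norm, via truncated SVD."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] * S[:r] @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512))      # stand-in for a huge semantics tensor
A10 = low_rank(A, 10)                    # compress to rank 10
print(np.linalg.norm(A - A10) / np.linalg.norm(A))  # relative error
```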

view this post on Zulip Lee Mondshein (May 07 2020 at 17:27):

Question re: "Meaning in Use": can you comment on meaning as dynamic significance (or influence) in some dynamic context of evolving, shared discourse + belief + action? ← this was addressed nicely

view this post on Zulip Jules Hedges (May 07 2020 at 18:35):

Tim Sears said:

Repeating my question from the other chat. Is there a way to translate these circuits to classical neural nets?
The language seems more expressive and coherent. It would be a nice "higher level language" if it sat atop NNs.

I think @Martha Lewis worked on this

view this post on Zulip Simon Burton (May 07 2020 at 21:00):

Paolo Perrone said:

Here is the repo, as shared by Alexis: https://github.com/oxford-quantum-group/discopy
You can play with it if you want!

Woo, I am very happy to see the @ symbol used for the tensor product. (Using it for matrix product is an abomination, imo.)
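For contrast, the numpy convention being objected to, next to the Kronecker (tensor) product (plain numpy, nothing discopy-specific):

```python
import numpy as np

A, B = np.eye(2), np.ones((2, 2))
print(A @ B)          # PEP 465: numpy's '@' is the matrix product
print(np.kron(A, B))  # the tensor (Kronecker) product is a separate function
```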

view this post on Zulip Simon Burton (May 07 2020 at 21:07):

I'm not so sure about the ">>" operator... as long as we never need to use "+" in these expressions it should be OK (">>" has lower precedence than "+").
This is my own attempt at "verifying" that the zig-zag equation works: https://github.com/punkdit/bruhat/blob/master/bruhat/vec.py (see line 518).
I just use "*" for vertical composition.
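The precedence point is easy to check on plain integers, where ">>" is a bit shift (this only illustrates Python's parsing, nothing discopy-specific):

```python
# '>>' binds more loosely than '+', so an unparenthesized mix groups '+' first:
assert 1 + 2 >> 1 == (1 + 2) >> 1 == 1   # not 1 + (2 >> 1), which is 2
```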

view this post on Zulip Paolo Perrone (May 07 2020 at 21:18):

Here's the video for all those that missed the talk!
https://youtu.be/mL-hWbwVphk

view this post on Zulip Tai-Danae Bradley (May 07 2020 at 22:08):

@Tim Sears @Sam Tenka FYI, there has been some work on finding a correspondence between certain NNs and tensor networks; see e.g. the work of Levine et al.: https://arxiv.org/pdf/1803.09780.pdf and https://arxiv.org/pdf/1704.01552.pdf

view this post on Zulip Arthur Parzygnat (May 08 2020 at 09:38):

Rongmin Lu said:

Tai-Danae Bradley said:

Tim Sears Sam Tenka FYI, there has been some work on finding a correspondence between certain NNs and tensor networks; see e.g. the work of Levine et al.: https://arxiv.org/pdf/1803.09780.pdf and https://arxiv.org/pdf/1704.01552.pdf

Oh wow, thank you! These do appear to confirm the intuition that some of us had.

In a way, Penrose might have been morally correct to say that "consciousness is quantum"; it seems we're now slowly figuring out the technically correct way to say the same thing.

FWIW here are the published versions:

This is interesting; I was not aware of journals that publicly display reviews like this!