Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive, refer to the same person.


Stream: deprecated: programming

Topic: type classes


view this post on Zulip Henri Tuhola (Apr 24 2020 at 16:00):

Some motivation for asking this question: there's a fairly simple way to typecheck programs expressed as natural transformations, and doing that seems to clarify some things about bidirectional type checking. Focused proof search also becomes a simple concept, and because of identity natural transformations you can relate it to type inference. Continue this route a little bit further and the CAM can be thought of as an NbE for categorical combinators.

view this post on Zulip Eduardo Ochs (Apr 24 2020 at 16:59):

Henri Tuhola said:

There's a fairly simple way to typecheck programs expressed as natural transformations [...]

Where can I read more about that?
(Thanks in advance =))

view this post on Zulip Dan Doel (Apr 24 2020 at 17:10):

You might need to specify what type classes you're talking about. Most things that have 'type classes' don't implement exactly the same thing as the original, and that might affect how you view them.

view this post on Zulip Dan Doel (Apr 24 2020 at 17:13):

In Haskell, there is a requirement that if there is an instance for a type, it is uniquely chosen. But quite a lot of implementations that are called 'type classes' do not have that requirement.

view this post on Zulip Luc Chabassier (Apr 24 2020 at 17:39):

I'm only speaking for Haskell, but I've always considered type classes to be convenient syntactic sugar: a type class is simply a tuple of functions, and you could implement it by adding those functions as extra arguments. The constraint that instances are uniquely chosen is there so the compiler can know which one to use (and most of the other constraints on type classes are there to make sure the algorithm that finds them is decidable). So if you're not studying how the type classes themselves are resolved, you can rewrite them into code that does not use them.
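
To make the "tuple of functions" view concrete, here is a minimal sketch of that rewriting (PrettyDict and the other names are made up for illustration; this is not GHC's actual dictionary representation):

```haskell
-- A class with one method...
class Pretty a where
  pretty :: a -> String

-- ...viewed as a record ("tuple") of functions.
newtype PrettyDict a = PrettyDict { prettyD :: a -> String }

-- An instance is just a value of that record type.
prettyBool :: PrettyDict Bool
prettyBool = PrettyDict (\b -> if b then "yes" else "no")

-- With the class, the compiler picks the implementation from the context:
renderC :: Pretty a => [a] -> String
renderC = unwords . map pretty

-- Without it, you simply pass the functions as an ordinary argument:
render :: PrettyDict a -> [a] -> String
render d = unwords . map (prettyD d)
```

Calling render prettyBool [True, False] by hand is exactly the "add the functions as arguments" reading of the class.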

view this post on Zulip Henri Tuhola (Apr 24 2020 at 17:43):

When I say "type classes" I'd think people immediately think of Haskell's type classes. There are quite few systems that are similar. Miranda has "abstract datatypes" and ML has modules. In C# traits perhaps are close as an idea. In general: Looking for a concept that abstracts things such as operations of addition/subtraction such that you can use the same thing to refer to addition in natural numbers, real numbers, vectors, etc..

If I think of what type classes denote, that'd be dictionary passing in Haskell. You define addition as (+) : a → a → a, and the 'a' is found from the context; (Num a) => a becomes (Num a) -> a. I know there are more details about how Haskell handles these, e.g. the restrictions around orphan instances, but perhaps that's not important for finding a match for the concept in category theory.
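
Roughly, that reading looks like this (a hand-written sketch with a made-up NumDict record, not GHC's actual desugaring):

```haskell
-- An illustrative dictionary type standing in for the Num constraint.
data NumDict a = NumDict
  { plusD  :: a -> a -> a
  , timesD :: a -> a -> a
  }

numDictInt :: NumDict Int
numDictInt = NumDict { plusD = (+), timesD = (*) }

-- "(Num a) => a -> a" ...
double :: Num a => a -> a
double x = x + x

-- ... is read as "(NumDict a) -> a -> a": the 'a' is resolved by
-- whichever dictionary the context supplies.
double' :: NumDict a -> a -> a
double' d x = plusD d x x
```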

view this post on Zulip Dan Doel (Apr 24 2020 at 18:00):

There are implementations of type classes that don't pass dictionaries, though. For instance, JHC just used representations of types at runtime, because you can instead imagine that type classes are doing case analysis on the particular type. And that might influence how you think about them for this, too. Like, you can view them as defining a subkind of all types with chosen extra structure.
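
A toy version of that "case analysis on the type" view might look like this (Rep is made up for illustration; it is not JHC's actual mechanism):

```haskell
{-# LANGUAGE GADTs #-}

-- A runtime representation of a closed collection of types.
data Rep a where
  RepInt    :: Rep Int
  RepDouble :: Rep Double

-- Overloading by matching on the type representation instead of
-- passing a dictionary of methods.
add :: Rep a -> a -> a -> a
add RepInt    x y = x + y
add RepDouble x y = x + y
```

Each Rep constructor picks out a member of the "subkind" of types that carry the chosen structure.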

view this post on Zulip Dan Doel (Apr 24 2020 at 18:01):

Although that view becomes less tenable with multi-parameter classes.

view this post on Zulip Gershom (Apr 24 2020 at 18:01):

The constructions of "abstract datatypes", "modules", and "type classes" (with and without uniqueness) are all very different, even though in practice they are sometimes used for similar things, such as overloading syntax to take different meanings over different classes of data.

Because they are different, if you try to look at them through categorical semantics you will get different things. On modules, Moggi's "Category-Theoretic Account of Program Modules" is a good place to start: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.3812&rank=3

On abstract datatypes, see the classic "Abstract Types have Existential type" http://homepages.inf.ed.ac.uk/gdp/publications/Abstract_existential.pdf from which one can then proceed to standard accounts of the semantics of existential quantification (which looks sort of like a coend).
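
For a concrete taste of that encoding, here is a minimal Haskell sketch (the Counter example is made up; see the paper for the real development):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- "Abstract types have existential type": a counter is SOME state type s,
-- packaged with an initial state, a step function, and an observation.
-- Clients never learn what s is.
data Counter = forall s. Counter s (s -> s) (s -> Int)

newCounter :: Counter
newCounter = Counter (0 :: Int) (+ 1) id

-- Clients can only use the packaged operations.
runSteps :: Int -> Counter -> Int
runSteps n (Counter s0 step out) = out (iterate step s0 !! n)
```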

As for the question of what abstracts a typeclass _with laws_ (i.e. when one is trying to get at the essence of an algebraic structure): there we can ignore the mechanics of implementation and say we are interested in the algebraic structures being presented, which leads to the study of Lawvere theories.

Dictionary passing is an implementation detail, and taking it into account will lend more confusion than clarity.

view this post on Zulip Henri Tuhola (Apr 24 2020 at 18:03):

Eduardo Ochs said:

Where can I read more about that?

This is something I figured out, but I don't know if it's a new idea. You can carve holes into natural transformations, so that they start looking like functors that take morphisms in and produce another morphism. E.g. pair(f) would stand for both (f x f).pair and pair.f. Likewise if you take the product counit, fst and snd: they give fst(f,g) = f.fst = fst.(f x g), and similarly for snd. Now typechecking these is trivial: you just construct "identity natural transformations" at each end. They're simple enough that you can match them up to "typecheck" the expression.

It's fairly cool because the categorical abstract machine already exists: it consists of 7 rules that define "evaluation" in cartesian closed categories. What I described is a matching "typechecking" algorithm that is equally simple and encodes the type entirely in the expression it types.

A natural transformation in category theory encodes a family of morphisms that other morphisms can "leap" over. This is related to Wadler's "Theorems for free!" paper. And every adjoint functor pair gives a pair of natural transformations together with "reduction rules" for them.
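
For reference, the naturality equations above can be written against the product combinators in Control.Arrow (a sketch; the property is stated for ordinary functions only, and the names are made up):

```haskell
import Control.Arrow ((&&&), (***))

-- Naturality of the product structure, in Haskell notation:
--   f . fst         =  fst . (f *** g)   -- "fst(f,g)"
--   g . snd         =  snd . (f *** g)   -- "snd(f,g)"
--   (f *** f) . dup =  dup . f           -- "pair(f)", where dup = id &&& id

dup :: a -> (a, a)
dup = id &&& id

-- One of the equations, checkable pointwise (e.g. with QuickCheck):
prop_fstNatural :: Eq c => (a -> c) -> (b -> d) -> (a, b) -> Bool
prop_fstNatural f g p = f (fst p) == fst ((f *** g) p)
```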

view this post on Zulip Henri Tuhola (Apr 24 2020 at 18:42):

Gershom said:

Dictionary passing is an implementation detail, and taking it into account will lend more confusion than clarity.

Dictionary passing resembles the use of de Bruijn indices to give denotational semantics to the lambda calculus. It should not be disregarded. But you're right that it could be worthwhile to put it aside.

I've already been examining final tagless encodings and Conal Elliott's "Compiling to categories". If you stick to dictionary passing as a denotational semantics for type classes, it seems to reveal that the categorical combinators such as fst, snd, and pair can themselves be thought of as parameters to the expression. Though it gets quite weird: since they're defined as units/counits of adjunctions, they would still have reduction rules even when treated as parameters that you can fill in.
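
In that style, the combinators literally are the dictionary being passed. A rough sketch along the lines of Elliott's paper (class and method names are made up here):

```haskell
import Control.Category (Category)

-- The categorical combinators as a type class: this dictionary is
-- exactly the "parameters" a categorical expression is abstracted over.
class Category k => Cartesian k where
  exl  :: k (a, b) a                    -- fst
  exr  :: k (a, b) b                    -- snd
  fork :: k a b -> k a c -> k a (b, c)  -- pair

-- Ordinary Haskell functions form one instance...
instance Cartesian (->) where
  exl        = fst
  exr        = snd
  fork f g x = (f x, g x)

-- ...and a term written against the class can be reinterpreted in any
-- cartesian category by supplying a different dictionary.
swap :: Cartesian k => k (a, b) (b, a)
swap = fork exr exl
```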

Following your advice, I'll examine those papers and Lawvere theories next.

view this post on Zulip Exequiel Rivas (Apr 24 2020 at 18:51):

I think that here there was some discussion about type classes and a possible categorical interpretation.

view this post on Zulip Eduardo Ochs (Apr 24 2020 at 19:37):

Henri Tuhola said:

This is something I figured out, but I don't know if it's a new idea. [...]

Nifty! Thanks! =)

view this post on Zulip Henri Tuhola (Apr 26 2020 at 18:47):

Eduardo Ochs said:

Nifty! Thanks! =)

I just published a blog post on this.