Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: theory: applied category theory

Topic: observing structured objects


David Egolf (Aug 11 2021 at 19:25):

(background: I'm an engineering student who would like to understand medical imaging better)
I was thinking more about the nature of imaging, and thought I would share here in the hope of exploring these thoughts a bit further.

Historically, imaging techniques have not directly incorporated knowledge about the structure of the object being imaged. This results in images that are obviously wrong in some way. For example, consider this image:
image.png
(image from: https://ecgwaves.com/)

The circular object in the middle-top of the image appears to be casting a shadow below it. This is called "acoustic shadowing" and happens when a large fraction of the probing ultrasound wave is reflected by an object, resulting in weaker signal from the objects below.
Now, we know that biological tissue doesn't actually look like this. Nonetheless, ultrasound imaging systems will happily produce images like these, leaving interpretation to human experts.

We can understand how these artifacts arise by noting that a common approach to reconstruction is to form an image from a collection of pixels. The object is estimated at each pixel, with the value at a given pixel often estimated without reference to the estimates at other pixels. The pixel values can then be thought of as the image of a function acting on a set, without any additional structure constraining them. Consequently, the resulting imaging system is able to generate unrealistic images.
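A toy numerical illustration of this (a made-up one-dimensional "object", not any particular imaging system): when each pixel is estimated only from its own observation, nothing couples neighbouring estimates, so the reconstruction is free to fluctuate in ways the true object never does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D "object": two homogeneous tissue regions.
true_object = np.concatenate([np.ones(50), 2 * np.ones(50)])

# Each pixel is observed independently, with noise.
observations = true_object + rng.normal(0, 0.3, size=100)

# Pixel-by-pixel reconstruction: the estimate at each pixel uses only
# that pixel's own observation -- nothing couples neighbouring pixels.
pixelwise_estimate = observations.copy()

# The true object is constant within each region, but the unconstrained
# estimate fluctuates freely there:
print(np.std(np.diff(true_object[:50])))               # 0.0
print(bool(np.std(np.diff(pixelwise_estimate[:50])) > 0.1))  # True
```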

I am wondering if modeling the unknown object as a groupoid - instead of a set - would help lead to better reconstruction algorithms with fewer artifacts (or possibly just provide another way of understanding fancy reconstruction approaches that do try to reject artifacts, like sparsity-based approaches). The idea is to split the object to be imaged into a number of parts, and to describe the object in terms of the symmetry groups of each part and also invertible linear transformations that relate the parts (like translation, scaling, and brightening/dimming). For example, we could model a blood vessel as a large number of thin rectangles stitched together to form the vessel. The groupoid would then describe how to (invertibly) transform a single reference segment of the blood vessel into all the other segments of the blood vessel.

I am wondering how to go from a collection of observations to an estimate of a groupoid, as opposed to the more traditional estimate of a function acting on a set. This seemed to me like something category theory might be useful for, as it is able to help translate structure between different contexts, even when the contexts have different levels of structure or complexity.

As a simple example, imagine we have a wheel with four equally-spaced spokes that is spinning slowly. Say we take a picture of the top-left quarter of the wheel, and then a few moments later a picture of the top-right quarter of the wheel. Say each picture captures an image of a single spoke. The two images, if interpreted naively, would seem to indicate that the two spokes have an angle between them that is not a multiple of 90 degrees. However, if we knew the symmetry group of the spokes of the wheel, we would be able to correct this rough image so that the two spokes lie at exactly some multiple of 90 degrees to each other. This correction map goes from a more complicated domain (where all relative angles of spokes are possible) to a simpler domain (where only certain relative angles are allowed).
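A minimal sketch of the correction step described above, assuming the symmetry group of the spokes (cyclic of order 4) is known in advance; `correct_relative_angle` is a hypothetical helper written for this example, not part of any existing library:

```python
# correct_relative_angle is a hypothetical helper: it snaps a measured
# relative angle onto the nearest angle allowed by an order-n rotational
# symmetry (n = 4 for the four-spoke wheel, i.e. multiples of 90 degrees).
def correct_relative_angle(theta_deg: float, symmetry_order: int = 4) -> float:
    step = 360.0 / symmetry_order
    return (step * round(theta_deg / step)) % 360.0

# Naive interpretation of the two photos: the spokes appear 81 degrees
# apart (the wheel rotated slightly between shots).
print(correct_relative_angle(81.0))   # 90.0
print(correct_relative_angle(268.0))  # 270.0
```

The correction maps the "raw" space of all measured angles onto the smaller space of angles the symmetry group actually allows, as described above.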

I have no amazing conclusion as to how to actually do this in practice, but I was hoping this might start some conversation on the topic of estimating groupoids (or objects modelled by groupoids) from observations of them that don't have as much symmetry.

fosco (Aug 11 2021 at 21:13):

Have you heard of something called "computational homology"?

David Egolf (Aug 11 2021 at 21:55):

I had not heard of it! However, looking at the introduction of "Computational Homology" (by Kaczynski et al.) has certainly piqued my interest. Maybe I can explore some of these ideas in part by learning some things from that book.
I would also be interested in hearing your perspective @fosco on how computational homology relates to this kind of problem.

Morgan Rogers (he/him) (Aug 12 2021 at 13:25):

David Egolf said:

I am wondering if modeling the unknown object as a groupoid - instead of a set - would help lead to better reconstruction algorithms with fewer artifacts (or possibly just provide another way of understanding fancy reconstruction approaches that do try to reject artifacts, like sparsity-based approaches). The idea is to split the object to be imaged into a number of parts, and to describe the object in terms of the symmetry groups of each part and also invertible linear transformations that relate the parts (like translation, scaling, and brightening/dimming). For example, we could model a blood vessel as a large number of thin rectangles stitched together to form the vessel. The groupoid would then describe how to (invertibly) transform a single reference segment of the blood vessel into all the other segments of the blood vessel.

I like the idea, but should the transformations necessarily form a groupoid? In your blood vessel example, a machine wouldn't necessarily know that the blood vessel it's observing is the largest branch size, so it would surely need to extend in both directions (if the transformations are all invertible, where would it stop? :stuck_out_tongue_wink: ).
Also, the symmetry in biological situations is rarely very precise, and we surely want an image that is accurate over one that is generated from symmetries, so how will you account for that?

David Egolf (Aug 12 2021 at 14:16):

Morgan Rogers (he/him) said:

I like the idea, but should the transformations necessarily form a groupoid? In your blood vessel example, a machine wouldn't necessarily know that the blood vessel it's observing is the largest branch size, so it would surely need to extend in both directions (if the transformations are all invertible, where would it stop? :stuck_out_tongue_wink: ).
Also, the symmetry in biological situations is rarely very precise, and we surely want an image that is accurate over one that is generated from symmetries, so how will you account for that?

Hi Morgan!
I don't think the transformations relating the different parts of an object would necessarily have to form a groupoid. I am wondering if this could give a good-enough approximation, though, while allowing us to work with a model that is still relatively simple.

When you say "surely need to extend in both directions" - do you mean that the machine would need to consider scaling transformations of all levels of scaling? That is, corresponding to arbitrarily large or small amounts of scaling? I'm not sure this would necessarily be a real problem. If we are reconstructing an object limited to some bounded area of space with some resolution limit, then I think this will naturally limit the largest relevant scaling transformation.

I agree that the symmetry in biological situations is not very precise, although I think the precision of description by symmetries will increase as one makes the "building blocks" used to build up the rest of the object through invertible transformations either smaller, more detailed, or more numerous.

Further, many current imaging methods actually use a very crude description of objects: objects are modelled as a rectangular grid of squares (or cubes), called pixels (or voxels). I suppose this array of squares does fit into this groupoid modelling scheme (you can invertibly transform any pixel into any other by translation and brightening/dimming), but the structure of this groupoid is very regular and does not capture anything in particular about the object being imaged.

More recently, sparsity-based reconstruction techniques have emerged where the object is approximated as a weighted sum of one or more simple shapes (like points, sinusoids, or wavelets). So, approximation of complex biological objects from crude building blocks is very common in state-of-the-art imaging techniques.
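A small sketch of the "weighted sum of simple shapes" idea, with an invented 1D object and a dictionary of Gaussian bumps standing in for the building blocks. (Real sparsity-based methods add a sparsity penalty; plain least squares is used here just to show the decomposition.)

```python
import numpy as np

# Invented 1D "object" built from two Gaussian bumps.
x = np.linspace(0.0, 1.0, 200)

def bump(center, width=0.05):
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

true_object = 1.0 * bump(0.30) + 0.5 * bump(0.70)

# Dictionary of candidate building blocks: bumps on a grid of centers
# (which happens to include 0.30 and 0.70).
centers = np.linspace(0.05, 0.95, 19)
D = np.stack([bump(c) for c in centers], axis=1)  # shape (200, 19)

# Plain least squares stands in for a real sparse solver: the object is
# recovered as a weighted sum of the simple shapes in the dictionary.
w, *_ = np.linalg.lstsq(D, true_object, rcond=None)
reconstruction = D @ w
print(np.max(np.abs(reconstruction - true_object)) < 1e-6)  # True
```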

David Egolf (Dec 17 2021 at 20:32):

I learned today about the "fundamental groupoid", which made me remember this thread. (The fundamental groupoid of a topological space $X$ is a category where the objects are the points of $X$, and the morphisms are the homotopy equivalence classes of paths between pairs of points. So, we put two paths in the same equivalence class if they have the same endpoints and one can be continuously deformed into the other.)

Sometimes medical imaging is used to detect the presence or absence of things. For example, we might want to know if there is a kidney stone present or not. In this case, the presence or absence of something - apart from its precise shape - is of interest. This is in tension with the usual method of image reconstruction, which is pixel-by-pixel. I am wondering if the fundamental groupoid - or a related topological concept - could be helpful for an image reconstruction approach that is designed to answer these more topological questions.

As an idealized example, assume we are imaging a 2D slice of tissue that is homogeneous except for (possibly) the presence of a cyst (a pocket of fluid). Then, I think the fundamental groupoid of the non-cyst tissue holds relevant information as to whether the cyst is present or not.

cyst

I think there is no continuous way to deform path 1 into path 2 while keeping the endpoints fixed. If I'm getting the terminology right, that means that path 1 and path 2 are in different homotopy equivalence classes. That means that there are at least two morphisms from $x$ to $y$ in the fundamental groupoid of the tissue. If the cyst wasn't present, then path 1 and path 2 would be in the same homotopy equivalence class, and so there should be fewer morphisms from $x$ to $y$.
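One way to check this numerically in the 2D slice: compose path 1 with the reverse of path 2 to get a closed loop, and compute its winding number around the cyst. A nonzero winding number means the two paths are not homotopic in the punctured slice. This is a toy sketch with made-up coordinates (cyst at the origin, endpoints $x = (-1, 0)$ and $y = (1, 0)$):

```python
import numpy as np

def winding_number(loop_pts, point):
    """Winding number of a closed polygonal loop around `point`:
    sum of angle increments, each wrapped into [-pi, pi)."""
    v = loop_pts - np.asarray(point)
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(np.append(ang, ang[0]))    # include the closing edge
    d = (d + np.pi) % (2 * np.pi) - np.pi  # wrap each increment
    return int(round(d.sum() / (2 * np.pi)))

# Endpoints x = (-1, 0) and y = (1, 0); the "cyst" sits at the origin.
t = np.linspace(0.0, np.pi, 100)
path1 = np.stack([-np.cos(t),  np.sin(t)], axis=1)  # passes over the cyst
path2 = np.stack([-np.cos(t), -np.sin(t)], axis=1)  # passes under the cyst

# Path 1 followed by the reverse of path 2 is a closed loop based at x.
loop = np.vstack([path1, path2[::-1]])
print(abs(winding_number(loop, (0.0, 0.0))))  # 1: the paths are not homotopic
print(winding_number(loop, (0.0, 5.0)))       # 0: a point outside the loop
```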

The example above suggests that the fundamental groupoid of the tissue is detecting information that we care about in an image reconstruction setting. It could be interesting to consider an image reconstruction approach that tries to directly reconstruct a fundamental groupoid. For example, let $F$ be the "observation" map that sends the tissue $T$ to some observations made of it - perhaps the observations are ultrasound echoes observed at different locations and times. Then, it seems like it could be possible that the topological information of $T$ could be preserved in $F(T)$, provided $F$ is designed properly. Probing the topological properties of $F(T)$ - working directly in observation space (or "echo space" in this case) - might be a direct approach to learn something about the topological properties of $T$.

Morgan Rogers (he/him) (Dec 18 2021 at 11:19):

Topological data analysis aims to do something similar to this, but using the derived information of (co)homology of the space. The idea is a good one as long as you're careful (in a full 3D reconstruction of the tissue, the two paths you draw would become homotopic again, so you have to focus on slices for what you described to work as stated)

David Egolf (Dec 21 2021 at 01:06):

I seem to have run into a problem with this idea.
Assume we are trying to image a target $T$, which I want to model as a topological space (so that I can consider its fundamental groupoid or other topological invariants). I was wanting to introduce an observation map $f: T \to O$, which maps into the topological space of observations $O$. This map associates a single point $t \in T$ with a single observation $o = f(t) \in O$. However, this does not model how observation actually works in the case I am interested in, unfortunately. In ultrasound imaging, the response actually observed is associated with the entire target, and is given by $\sum_i f(t_i)$, where we sum the responses from each individual piece of the target.

To me, this seems like a significant problem for the idea described above - which is to study the topology of the target $T$ by the topology of the observations made from it. Any thoughts are appreciated!
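A toy model of the problem: two spatially different targets whose scatterers have the same multiset of distances to a single receiver produce identical summed echoes, so the summed observation cannot distinguish them. (The Gaussian pulse and the geometry here are invented for illustration.)

```python
import numpy as np

def echo(scatterers, times):
    """Toy pulse-echo model: a single receiver at the origin records the
    SUM of Gaussian echoes, one per scatterer, delayed by its distance."""
    total = np.zeros_like(times)
    for (sx, sy) in scatterers:
        r = np.hypot(sx, sy)
        total += np.exp(-((times - r) ** 2) / 0.005)
    return total

times = np.linspace(0.0, 2.0, 500)

# Two spatially different targets with the same multiset of distances
# to the receiver (all scatterers at distance 1):
target_a = [(1.0, 0.0), (0.0, 1.0)]
target_b = [(0.6, 0.8), (-0.8, 0.6)]

# Their summed observations are identical, so "target -> summed echo"
# cannot be inverted point by point.
print(np.allclose(echo(target_a, times), echo(target_b, times)))  # True
```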

Spencer Breiner (Dec 21 2021 at 14:08):

Hi David. It sounds to me like you might want to look at function spaces as the carriers of your observation map, rather than the underlying spaces themselves.

I find it easier to think through things with a concrete example, so let's suppose that your $T$ is a three-dimensional block of material, and $O$ is a two-dimensional projection of it. If the data that comes out is intensity, that will have the form of a function $i: O \to \mathbb{R}^+$. Similarly, if the relevant occlusions are determined by something like density, that will also have the form of a function $d: T \to \mathbb{R}^+$.

The main point is that you can then describe your measurement process as a function $(\mathbb{R}^+)^T \to (\mathbb{R}^+)^O$. For example, in a very simple model of occlusion with no scattering, we might have something like
$i(x,y) = \exp\left(-\int d(x,y,z)\, dz\right)$
where the light is projected along the $z$-axis. Of course, you are interested in the inverse problem, mapping observations into target structures, but that should be a section of this map (i.e., the composite $(\mathbb{R}^+)^O \to (\mathbb{R}^+)^T \to (\mathbb{R}^+)^O$ should be the identity).
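This occlusion model is easy to sketch numerically (a made-up $32 \times 32 \times 32$ density block; the discretized integral is just a sum along the $z$-axis):

```python
import numpy as np

# Invented 3D density block d: T -> R+, zero except for a dense cube.
d = np.zeros((32, 32, 32))
d[10:20, 10:20, 10:20] = 0.5

# Forward map (R+)^T -> (R+)^O: discretize the integral along z as a
# sum, then attenuate.
dz = 1.0 / 32
i = np.exp(-d.sum(axis=2) * dz)

# Lines of sight through the dense cube come out darker:
print(i[0, 0])                # 1.0 (no material along this line)
print(bool(i[15, 15] < 1.0))  # True (attenuated by the cube)
```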

You can add in topology by putting norms on the spaces of functions. There are lots of choices appropriate for different purposes. Then I would expect that a small change to the density $d$ will lead to a small change in the intensity $i$.

While we're here, it's also worth noting that you could use something other than $\mathbb{R}^+$ as the values for your function spaces. This is probably relevant if the material you are imaging isn't isotropic (the same in all directions), in which case you could replace your density by a field of vectors or bivectors.

David Egolf (Dec 21 2021 at 19:11):

Hi Spencer,
This is a neat language for talking about observations! Your example reminds me of x-ray imaging, and it also seems to work for describing some modes of ultrasound imaging. For example:

However, I don't immediately see how this perspective helps save the broad idea of trying to extract topological properties of a target from topological properties of the observation of that target. In this language, a particular (unknown) target of interest is just a point in a set (or potentially in a topological space). The information about topology carried by a continuous measurement process $f$ is, I think, actually information about the topology of $(\mathbb{R}^+)^T$, the space of different targets. I was hoping the measurement process could carry topological information about a particular target of interest, instead.

David Egolf (Dec 21 2021 at 19:41):

Some thoughts on another direction for modelling imaging, trying to model that we generally are unable to image each tiny part of the target in isolation:

Earlier, we saw that a map $f$ from a topological space $T$ ("the target") to a topological space $O$ ("the observations") did not model some kinds of medical imaging well. In particular, the problem is that we usually do not actually observe $f(t)$ for any single $t \in T$, but instead we observe larger parts of the target all at once.

To describe observations that capture large parts of a target all at once, it seems reasonable to start by modelling the target in a way that lets us easily talk about (and hopefully define maps from) its different parts. Let's start with a topological space $(T, \tau_T)$ as before, but instead of focusing on the set $T$, let us focus on the open sets $\tau_T$. We now think of each open set in $\tau_T$ as describing a different part of the thing we want to image. To focus on this perspective, perhaps it is a good idea to consider the partial order category induced by $\tau_T$: the objects are the open sets from $\tau_T$, and there is a morphism from $A$ to $B$ if $A \subseteq B$.

If we now consider an observation function on topological spaces $f: T \to O$, we can extend it into a corresponding function on "parts" of the target. Let $F(T)$ be the partial order of open sets of $T$. For each open set $U$ in this partial order, we want to associate an observation corresponding to observing that entire part of $T$ at once. Call this $f_{ext}(U)$. (I suspect this is sometimes given by $f_{ext}(U) = \sum_i f(p_i)$, where the $p_i$ are the points in $U$.) I think we can then organize the set $\{f_{ext}(U) \mid U \in \tau_T\}$ into a partial order by copying the partial order structure from $F(T)$. I think this gives a structure that describes the observations obtained by observing different parts of the target, which is hopefully useful.
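A finite toy version of this construction, with a three-point "target", a chain of parts, and $f_{ext}$ taken to be the sum of per-point responses (all names and values invented for illustration). One small observation: with nonnegative $f$, the summed observation automatically respects the inclusion order.

```python
from itertools import combinations

# Finite toy "target": three points, each with a nonnegative response f.
f = {"p1": 0.3, "p2": 1.2, "p3": 0.7}

# A chain of "parts" (standing in for open sets), ordered by inclusion.
parts = [frozenset(),
         frozenset({"p1"}),
         frozenset({"p1", "p2"}),
         frozenset({"p1", "p2", "p3"})]

def f_ext(U):
    """Observation of a whole part at once: the sum of per-point
    responses, one plausible choice of f_ext as suggested above."""
    return sum(f[p] for p in U)

# Inclusions of parts give the morphisms; with nonnegative f, the summed
# observation respects them (a larger part gives a larger observation).
for A, B in combinations(parts, 2):
    if A <= B:
        assert f_ext(A) <= f_ext(B)
print(round(f_ext(parts[-1]), 6))  # 2.2
```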