You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
(background: I'm an engineering student who would like to understand medical imaging better)
I was thinking some more about the nature of imaging, and thought I would share here in the hope of exploring these thoughts a bit further.
Historically, imaging techniques have not directly incorporated knowledge about the structure of the object being imaged. This results in images that are obviously wrong in some way. For example, consider this image:
[image: ultrasound scan showing a circular object casting an acoustic shadow]
(image from: https://ecgwaves.com/)
The circular object in the middle-top of the image appears to be casting a shadow below it. This is called "acoustic shadowing" and happens when a large fraction of the probing ultrasound wave is reflected by an object, resulting in weaker signal from the objects below.
Now, we know that biological tissue doesn't actually look like this. Nonetheless, ultrasound imaging systems will happily produce images like these, leaving interpretation to human experts.
We can understand how these artifacts arise by noting that a common approach to reconstruction is to form an image from a collection of pixels. The object is estimated at each pixel, often without considering the estimates at other pixels. The pixel values can be thought of as the image of a function acting on a set, without any additional structure constraining them. Consequently, the resulting imaging system is able to generate unrealistic images.
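To make this concrete, here is a toy sketch I put together (a made-up one-dimensional example, not real ultrasound physics): a strong reflector absorbs most of the pulse energy, and a naive per-pixel estimate of reflectivity from echo amplitude reproduces the acoustic shadow.

```python
import numpy as np

# Hypothetical 1D column of tissue: a strong reflector at depth 3
# attenuates the ultrasound pulse, weakening echoes from deeper tissue.
true_reflectivity = np.array([0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1])

# Toy forward model: echo amplitude = reflectivity * remaining pulse energy.
pulse = 1.0
echoes = []
for r in true_reflectivity:
    echoes.append(r * pulse)
    pulse *= (1.0 - r)  # energy lost to reflection
echoes = np.array(echoes)

# Naive per-pixel reconstruction: treat each echo amplitude as the
# reflectivity at that depth, ignoring all other pixels.
naive_estimate = echoes
```

The estimate below the strong reflector comes out much darker than the identical tissue above it, purely because each pixel was estimated in isolation.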
I am wondering if modeling the unknown object as a groupoid - instead of a set - would help lead to better reconstruction algorithms with fewer artifacts (or possibly just provide another way of understanding fancy reconstruction approaches that do try to reject artifacts, like sparsity-based approaches). The idea is to split the object to be imaged into a number of parts, and to describe the object in terms of the symmetry groups of each part and also invertible linear transformations that relate the parts (like translation, scaling, and brightening/dimming). For example, we could model a blood vessel as a large number of thin rectangles stitched together to form the vessel. The groupoid would then describe how to (invertibly) transform a single reference segment of the blood vessel into all the other segments of the blood vessel.
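As a rough sketch of what I mean (purely hypothetical code, with made-up transform parameters): each segment could be recorded as an invertible transform of a reference segment, and these transforms compose and invert like the morphisms of a groupoid.

```python
import numpy as np

# Hypothetical sketch: model each vessel segment as an invertible
# transform (scaling, translation, brightening/dimming) applied to a
# single reference segment.
class Transform:
    def __init__(self, scale=1.0, shift=0.0, gain=1.0):
        self.scale, self.shift, self.gain = scale, shift, gain

    def apply(self, segment):
        # segment: array of (position, brightness) pairs
        pos, bright = segment[:, 0], segment[:, 1]
        return np.column_stack([pos * self.scale + self.shift,
                                bright * self.gain])

    def inverse(self):
        return Transform(1 / self.scale, -self.shift / self.scale,
                         1 / self.gain)

    def compose(self, other):
        # self applied after other
        return Transform(self.scale * other.scale,
                         self.scale * other.shift + self.shift,
                         self.gain * other.gain)

reference = np.array([[0.0, 1.0], [1.0, 1.0]])  # a thin reference segment
t = Transform(scale=2.0, shift=5.0, gain=0.5)   # a second, dimmer segment
segment2 = t.apply(reference)
# Invertibility: mapping back recovers the reference segment exactly.
recovered = t.inverse().apply(segment2)
```

Composing a transform with its inverse gives the identity transform, which is the groupoid-like behaviour I have in mind.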
I am wondering how to go from a collection of observations to an estimate of a groupoid, as opposed to the more traditional estimate of a function acting on a set. This seemed to me like something category theory might be useful for, as it is able to help translate structure between different contexts, even when the contexts have different levels of structure or complexity.
As a simple example, imagine we have a wheel with four equally-spaced spokes that is spinning slowly. Say we take a picture of the top-left quarter of the wheel, and then a few moments later a picture of the top-right quarter of the wheel. Say each picture captures an image of a single spoke. The two images, if interpreted naively, would seem to indicate that the two spokes have an angle between them that is not a multiple of 90 degrees. However, if we knew the symmetry group of the spokes of the wheel, we would be able to correct this rough image so that the two spokes lie at exactly some multiple of 90 degrees to each other. This correction map goes from a more complicated domain (where all relative angles of spokes are possible) to a simpler domain (where only certain relative angles are allowed).
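A minimal sketch of that correction step (hypothetical, with made-up angle measurements): snap the measured relative angle to the nearest element of the wheel's symmetry group.

```python
# Hypothetical sketch of the wheel example: given two noisy spoke-angle
# measurements, snap their relative angle to the nearest multiple of
# 90 degrees, i.e. project onto the symmetry group of a 4-spoke wheel.
def correct_relative_angle(angle1_deg, angle2_deg, n_spokes=4):
    step = 360.0 / n_spokes
    raw = (angle2_deg - angle1_deg) % 360.0
    snapped = round(raw / step) * step
    return snapped % 360.0

# Two pictures taken while the wheel spins: naive comparison suggests
# an 83-degree separation, but the symmetry group forces 90 degrees.
corrected = correct_relative_angle(10.0, 93.0)
```

The correction maps from the "complicated" domain (all relative angles) onto the "simple" one (multiples of 90 degrees), which is exactly the direction of the map described above.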
I have no amazing conclusion as to how to actually do this in practice, but was hoping this might start some conversation on the topic of estimating groupoids (or objects modelled by groupoids) from observations of them that don't have as much symmetry.
Have you heard of something called "computational homology"?
I had not heard of it! However, looking at the introduction of "Computational Homology" (by Kaczynski et al.) has certainly piqued my interest. Maybe I can explore some of these ideas in part by learning some things from that book.
I would also be interested in hearing your perspective @fosco on how computational homology relates to this kind of problem.
David Egolf said:
I am wondering if modeling the unknown object as a groupoid - instead of a set - would help lead to better reconstruction algorithms with fewer artifacts (or possibly just provide another way of understanding fancy reconstruction approaches that do try to reject artifacts, like sparsity-based approaches). The idea is to split the object to be imaged into a number of parts, and to describe the object in terms of the symmetry groups of each part and also invertible linear transformations that relate the parts (like translation, scaling, and brightening/dimming). For example, we could model a blood vessel as a large number of thin rectangles stitched together to form the vessel. The groupoid would then describe how to (invertibly) transform a single reference segment of the blood vessel into all the other segments of the blood vessel.
I like the idea, but should the transformations necessarily form a groupoid? In your blood vessel example, a machine wouldn't necessarily know that the blood vessel it's observing is the largest branch size, so it would surely need to extend in both directions (if the transformations are all invertible, where would it stop? :stuck_out_tongue_wink: ).
Also, the symmetry in biological situations is rarely very precise, and we surely want an image that is accurate over one that is generated from symmetries, so how will you account for that?
Morgan Rogers (he/him) said:
I like the idea, but should the transformations necessarily form a groupoid? In your blood vessel example, a machine wouldn't necessarily know that the blood vessel it's observing is the largest branch size, so it would surely need to extend in both directions (if the transformations are all invertible, where would it stop? :stuck_out_tongue_wink: ).
Also, the symmetry in biological situations is rarely very precise, and we surely want an image that is accurate over one that is generated from symmetries, so how will you account for that?
Hi Morgan!
I don't think the transformations relating the different parts of an object would necessarily have to form a groupoid. I am wondering if this could give a good-enough approximation, though, while allowing us to work with a model that is still relatively simple.
When you say "surely need to extend in both directions" - do you mean that the machine would need to consider scaling transformations of all levels of scaling? That is, corresponding to arbitrarily large or small amounts of scaling? I'm not sure this would necessarily be a real problem. If we are reconstructing an object limited to some bounded area of space with some resolution limit, then I think this will naturally limit the largest relevant scaling transformation.
I agree that the symmetry in biological situations is not very precise, although I think the precision of description by symmetries will increase as one makes the "building blocks" used to build up the rest of the object through invertible transformations either smaller, more detailed, or more numerous. Further, many current imaging methods actually use a very crude description of objects: objects are modelled as a rectangular grid of squares (or cubes), called pixels (or voxels). I suppose this array of squares does fit into this groupoid modelling scheme (you can invertibly transform any pixel into any other by translation and brightening/dimming), but the structure of this groupoid is very regular and does not capture anything in particular about the object being imaged. More recently, sparsity-based reconstruction techniques have emerged where the object is approximated as a weighted sum of one or more simple shapes (like points, sinusoids, or wavelets). So, approximation of complex biological objects from crude building blocks is very common in state-of-the-art imaging techniques.
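To illustrate the sparsity point with a toy example (an invented signal, using sinusoids as the building blocks): keeping only the largest Fourier coefficients describes the whole signal with just two "atoms".

```python
import numpy as np

# Hypothetical sketch of a sparsity-based description: approximate a
# signal as a weighted sum of a few sinusoids by keeping only the
# largest Fourier coefficients and zeroing the rest.
n = 128
t = np.arange(n)
signal = 2.0 * np.sin(2 * np.pi * 3 * t / n) + \
         0.5 * np.sin(2 * np.pi * 10 * t / n)

coeffs = np.fft.rfft(signal)
k = 2  # assume the object is built from just two sinusoidal atoms
keep = np.argsort(np.abs(coeffs))[-k:]
sparse = np.zeros_like(coeffs)
sparse[keep] = coeffs[keep]
approx = np.fft.irfft(sparse, n)

error = np.max(np.abs(approx - signal))  # tiny: two atoms suffice here
```

The signal really is built from two sinusoids, so the two-atom description is essentially exact; real tissue would only be approximated, but the principle of describing a complex object by a few structured building blocks is the same.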
I learned today about the "fundamental groupoid", which made me remember this thread. (The fundamental groupoid of a topological space X is a category where the objects are the points of X, and the morphisms are the homotopy equivalence classes of paths between pairs of points. So, we put two paths in the same equivalence class if they have the same endpoints and one can be continuously deformed into the other.)
Sometimes medical imaging is used to detect the presence or absence of things. For example, we might want to know if there is a kidney stone present or not. In this case, the presence or absence of something - apart from its precise shape - is of interest. This is in tension with the usual method of image reconstruction, which is pixel-by-pixel. I am wondering if the fundamental groupoid - or a related topological concept - could be helpful for an image reconstruction approach that is designed to answer these more topological questions.
As an idealized example, assume we are imaging a 2D slice of tissue that is homogeneous except for (possibly) the presence of a cyst (a pocket of fluid). Then, I think the fundamental groupoid of the non-cyst tissue holds relevant information as to whether the cyst is present or not.
I think there is no continuous way to deform path 1 into path 2 while keeping the endpoints fixed. If I'm getting the terminology right, that means that path 1 and path 2 are in different homotopy equivalence classes. That means that there are at least two morphisms between their shared endpoints in the fundamental groupoid of the tissue. If the cyst wasn't present, then path 1 and path 2 would be in the same homotopy equivalence class, and so there would be fewer morphisms between those endpoints.
The example above suggests that the fundamental groupoid of the tissue is detecting information that we care about in an image reconstruction setting. It could be interesting to consider an image reconstruction approach that tries to directly reconstruct a fundamental groupoid. For example, let f be the "observation" map that sends the tissue X to some observations made of it - perhaps the observations are ultrasound echoes observed at different locations and times. Then, it seems like it could be possible that the topological information of X could be preserved in f(X), provided f is designed properly. Probing the topological properties of f(X) - working directly in observation space (or "echo space" in this case) - might be a direct approach to learn something about the topological properties of X.
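As a crude finite sketch of "detecting the cyst topologically" (a hypothetical binary-grid example, using connected components of the non-tissue region as a stand-in for the homotopy-class count):

```python
# Hypothetical sketch: detect an enclosed "cyst" in a binary 2D tissue
# map by counting connected components of the non-tissue region. Two
# components (the outside plus an enclosed pocket) indicate a hole,
# which is what the fundamental groupoid distinguishes: paths around
# the pocket are not homotopic to paths through where it sits.
def count_background_components(grid):
    rows, cols = len(grid), len(grid[0])
    seen = set()
    components = 0
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 0 and (i, j) not in seen:
                components += 1
                stack = [(i, j)]
                while stack:  # flood fill one component (4-connectivity)
                    r, c = stack.pop()
                    if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
                        continue
                    if grid[r][c] != 0:
                        continue
                    seen.add((r, c))
                    stack.extend([(r + 1, c), (r - 1, c),
                                  (r, c + 1), (r, c - 1)])
    return components

# 1 = tissue, 0 = fluid/background; the ring of tissue encloses a cyst.
tissue = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
has_cyst = count_background_components(tissue) > 1
```

This only answers the presence/absence question, not the precise shape, which is the kind of "topological reconstruction" I have in mind.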
Topological data analysis aims to do something similar to this, but using the derived information of (co)homology of the space. The idea is a good one as long as you're careful (in a full 3D reconstruction of the tissue, the two paths you draw would become homotopic again, so you have to focus on slices for what you described to work as stated)
I seem to have run into a problem with this idea.
Assume we are trying to image a target X, which I am wanting to model as a topological space (so that I can consider its fundamental groupoid or other topological invariants). I was wanting to introduce an observation map f : X → Y, which maps X into the topological space of observations Y. This map associates a single point x ∈ X with a single observation f(x). However, this does not model how observation actually works in the case I am interested in, unfortunately. In ultrasound imaging, the response actually observed is associated with the entire target, and is given by ∑_{x ∈ X} f(x), where we are summing responses from each individual piece of the target.
To me, this seems like a significant problem for the idea described above - which is to study the topology of the target by the topology of the observations made from it. Any thoughts are appreciated!
Hi David. It sounds to me like you might want to look at function spaces as the carriers of your observation map, rather than the underlying spaces themselves.
I find it easier to think through things with a concrete example, so let's suppose that your X is a three-dimensional block of material, and Y is a two-dimensional projection from that. If the data that comes out is intensity, that will have the form of a function I : Y → ℝ. Similarly, if the relevant occlusions are determined by something like density, that will also take the form ρ : X → ℝ.
The main point is that you can then describe your measurement process as a function m : ℝ^X → ℝ^Y. For example, in a very simple model of occlusion with no scattering, we might have something like
I(x, y) = exp(−∫ ρ(x, y, z) dz),
where the light is projected along the z-axis. Of course, you are interested in the inverse problem, mapping observations into target structures, but that should be a section of this map (i.e., the composite with m should be the identity).
You can add in topology by putting norms on the spaces of functions. There are lots of choices appropriate for different purposes. Then I would expect a small change to the density ρ to lead to a small change in the intensity I.
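A small numerical sketch of this setup (hypothetical sizes and densities, with the line integral replaced by a discrete sum along the z-axis): intensity decays exponentially with the total density along each line of sight, and a small perturbation of the density perturbs the intensity by only a small amount.

```python
import numpy as np

# Hypothetical sketch of the simple occlusion model: a 3D density
# block rho is projected along the z-axis to a 2D intensity image via
# I(x, y) = exp(-sum_z rho(x, y, z)).
def measure(rho):
    return np.exp(-rho.sum(axis=2))

rho = np.zeros((4, 4, 8))
rho[1, 2, :] = 0.5          # a dense column of material
intensity = measure(rho)    # dark pixel where the column occludes

# Continuity in the sense above: a small change in density gives a
# small change in intensity (measured here with the sup norm).
rho2 = rho + 1e-4
delta = np.max(np.abs(measure(rho2) - intensity))
```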
While we're here, it's also worth noting that you could use something other than ℝ as the values for your function spaces. This is probably relevant if the material you are imaging isn't isotropic (the same in all directions), in which case you could replace your scalar density by a field of vectors or bivectors.
Hi Spencer,
This is a neat language for talking about observations! Your example reminds me of x-ray imaging, and it also seems to work for describing some modes of ultrasound imaging. For example:
However, I don't immediately see how this perspective helps save the broad idea of trying to extract topological properties of a target from topological properties of the observation of that target. In this language, a particular (unknown) target of interest is just a point in a set (or potentially in a topological space). The information about topology being carried by a continuous measurement process is, I think, actually information about the topology of ℝ^X, the space of different targets. I was hoping the measurement process could carry topological information about a particular target of interest, instead.
Some thoughts on another direction for modelling imaging, this time trying to capture the fact that we are generally unable to image each tiny part of the target in isolation:
Earlier, we saw that a map f from a topological space X ("the target") to a topological space Y ("the observations") did not model some kinds of medical imaging well. In particular, the problem is that we usually do not actually observe f(x) for any individual point x ∈ X; instead, we observe larger parts of the target all at once.
To describe observations that observe large parts of a target all at once, it seems reasonable to start by trying to model a target in a way that lets us easily talk about (and hopefully define maps from) different parts of the target. Let's start with a topological space (X, τ) as before, but instead of focusing on the set X, let us focus on the open sets τ. We now think of each of the open sets of X as describing a different part of the thing we want to image. To focus on this perspective, perhaps it is a good idea to consider the partial order category induced by τ. The objects are the open sets from τ, and there is a morphism from U to V if U ⊆ V.
If we now consider an observation function f : X → Y on topological spaces, we can extend that into a corresponding function on "parts" of the target. Let O(X) be the partial order of open sets of X. For each open set U in this partial order, we want to associate an observation corresponding to observing that entire part of X at once. Call this F(U). (I suspect this is sometimes given by F(U) = ∑_{x ∈ U} f(x), where x ranges over the points in U.) I think we can then organize these observations into a partial order by copying the partial order structure from O(X). I think this gives a structure that describes the observations obtained by observing different parts of the target, which is hopefully useful.
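Here is a tiny finite sketch of this (invented numbers, and subsets of a four-point set standing in for genuine open sets): the observation of a part is the summed per-point response, and the inclusion order on parts carries over to the observations.

```python
# Hypothetical finite sketch: take X = {0, 1, 2, 3} with a per-point
# response f, and define the observation of a "part" U of X as the
# summed response F(U), mirroring how an ultrasound echo mixes
# together responses from a whole region at once.
f = {0: 1.0, 1: 2.0, 2: 0.5, 3: 4.0}

def F(U):
    return sum(f[x] for x in U)

# A few parts of the target, ordered by inclusion:
U = frozenset({0, 1})
V = frozenset({0, 1, 2})
W = frozenset({0, 1, 2, 3})

# The inclusion order U <= V <= W is copied onto the observations:
# for nonnegative responses, observing a bigger part gives a bigger
# total response.
assert U <= V <= W
assert F(U) <= F(V) <= F(W)
```

With negative responses the order on observation values would no longer follow inclusion, so in general only the shape of the partial order (not a numerical order) transfers.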