In the category of sets, the pullback of the diagram $X \xrightarrow{f} Z \xleftarrow{g} Y$ is the subset of $X \times Y$ consisting of pairs $(x, y)$ such that $f(x) = g(y)$. Consider the case where:
In this case, the pullback consists of pairs where:
Because I like to use analogies to medical imaging, we might consider a similar scenario where we have:
Then the pullback has elements $(x, y)$ where $x$ and $y$ are images that agree on some specified subset of their overlap.
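To make that toy version completely concrete, here's a sketch in code (hypothetical names, my own made-up example: images as dicts from pixel locations to values, with agreement required on an indicated overlap):

```python
# Set-theoretic pullback for toy "images": dicts from pixel locations
# to values, where both maps restrict an image to the indicated overlap.
overlap = {(0, 0), (0, 1)}  # the indicated part of the overlap

def restrict(image, region):
    return {p: image[p] for p in region}

X = [{(0, 0): 1.0, (0, 1): 2.0, (1, 0): 5.0}]  # images of the first region
Y = [{(0, 0): 1.0, (0, 1): 2.0, (2, 2): 9.0}]  # images of the second region

# The pullback: pairs (x, y) that agree *exactly* on the overlap.
pullback = [(x, y) for x in X for y in Y
            if restrict(x, overlap) == restrict(y, overlap)]
print(pullback)  # the pair above agrees on the overlap, so it survives
```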
However, in a medical imaging context, it is too much to ask for equality between different images on overlaps. There are multiple reasons for this, but in particular the presence of noise means that different images that "essentially agree" on their overlaps won't actually be equal on their overlaps.
Can we set up a category where the pullback ends up detecting pairs of images that "approximately agree on an indicated subset of their overlap"? (That is, the difference between the two images over some indicated part of their overlap is below the noise floor).
I'd also be interested in thoughts regarding how to generalize this question beyond the medical imaging analogy: How can we set up categories in general where pullbacks detect "approximate agreement" between parts of two objects?
I'd instantly be tempted to try categories internal to Top or Met, where the set of objects has a topology or metric (as does the set of morphisms, etc.).
Scratch that thought: no, what I want is for each object in my category to have an underlying metric space!
Then you could define an "$\varepsilon$-approximate pullback" for each $\varepsilon > 0$ - I hope it's clear how.
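Spelled out (a sketch of what I mean, not a polished definition): for a cospan $X \xrightarrow{f} Z \xleftarrow{g} Y$ where each object has an underlying metric space, take

$$P_\varepsilon = \{(x, y) \mid d_Z(f(x), g(y)) \le \varepsilon\},$$

with the metric inherited from the product.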
You could study this and generalize it, but if I were you I'd start by trying to use it for something and see where that leads!
Have you looked at sheaves? I think they're relevant, and quite beautiful; deal with inclusions, function restrictions to subsets etc. with (potentially) much weaker assumptions than classical analysis.
https://youtube.com/playlist?list=PLnNqTHlK5sGJrRvH0YBxE4Oe1M9EoSTPQ&si=nkVggthMjHkBEHUy
If I was doing this I might try to represent noise algebraically, considering formal power series in a ring with indeterminate $\varepsilon$. One gets a hierarchy of asymptotic equivalences for free on polynomials of the ring: $f \sim_n g \iff f - g \in (\varepsilon^n)$.
Then we look at how perturbations (powers of $\varepsilon$) interplay with the sheaf structure (restrictions of functions to subsets). We should obtain 2-morphs that map contravariantly with inclusions -- can you get these to arise from other constructions naturally, without asserting them?
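A toy sketch of that hierarchy in code (a made-up representation: polynomials in $\varepsilon$ as coefficient lists, with equivalence mod $\varepsilon^n$):

```python
# Polynomials in the indeterminate eps, as coefficient lists
# [c0, c1, c2, ...] meaning c0 + c1*eps + c2*eps**2 + ...
def equiv_mod(f, g, n):
    """f ~_n g  iff  f - g lies in the ideal (eps**n),
    i.e. the first n coefficients agree."""
    pad = max(len(f), len(g))
    f = f + [0] * (pad - len(f))
    g = g + [0] * (pad - len(g))
    return f[:n] == g[:n]

signal = [1.0, 0.5]           # 1 + 0.5*eps
noisy  = [1.0, 0.5, 3.0]      # same signal plus an O(eps**2) perturbation
print(equiv_mod(signal, noisy, 2))  # True:  equivalent mod eps**2
print(equiv_mod(signal, noisy, 3))  # False: they differ at order eps**2
```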
I'm imagining the restrictions of functions to subsets as the objects directly. We already have something that resembles a series of subcategories indexed by powers of $\varepsilon$, in which the existence of a limit in subcat $\mathcal{C}_n$ exhibits $\varepsilon^n$-consistency on an open subset. The presence of significant noise at scale $\varepsilon^n$ on set $U$ should be detectable by whether the indexed subcategory contains a limit for the pair $(f|_U, g|_U)$.
This language in principle allows you to pull in a desired/useful tool from applied math (some chapter of Bender & Orszag probably, or generating functions, etc) and try to specify what is needed to get that to work in categorical language. I would hope any application should yield consistent results with equipping each object with a convergence space of some kind, and if not the contrast alone could be very interesting!
Breaking stuff you have built can be fun. It might be interesting to look at classic material on divergences, log-poles etc. in physics through this lens; especially around critical points of continuous phase transitions. Any classes of morphisms that survive in some $\mathcal{C}_n$ as the critical point is approached would be valuable.
This was fun to think about, thanks for asking that question!
Thanks to both of you for your responses! I'll do my best to understand them and respond, as my energy level allows.
John Baez said:
Then you could define an "$\varepsilon$-approximate pullback" for each $\varepsilon > 0$ - I hope it's clear how.
That's an interesting idea! Let me see if I'm following you. If the category our objects live in is $\mathcal{C}$, then to give each object an underlying metric space, we can introduce a functor $U: \mathcal{C} \to \mathbf{Met}$, where $\mathbf{Met}$ is the category of metric spaces.
Then, if we have the diagram $X \xrightarrow{f} Z \xleftarrow{g} Y$, we probably want to define an "$\varepsilon$-approximate pullback" to be some object $P$ such that $U(P)$ is a metric space, having an underlying set with pairs $(x, y)$, with $x \in U(X)$ and $y \in U(Y)$, such that $d(U(f)(x), U(g)(y)) \le \varepsilon$. Here, I'm making use of the fact that $U(f)(x)$ and $U(g)(y)$ are both elements in the set underlying $U(Z)$, and so we can use the metric on $U(Z)$ to measure "how far apart" they are.
Depending on what our functor $U$ is like, there could be multiple such objects, I am guessing.
My initial guess is that such an object wouldn't actually be a pullback in $\mathcal{C}$, though. My intuition is that the "pullback diagram" isn't required to commute, but only to "$\varepsilon$-commute".
I was hoping that we could set up a category so that an "approximate pullback" becomes an actual pullback. That's because I want to be able to draw analogies with imaging as I learn about pullbacks and other standard category theory / sheaf theory concepts.
Yes, that's basically the idea I had. I guess I was just going down to $\mathbf{Met}$ and doing the approximate pullback there, but yes, we could hope there exists an object, or objects, that map down to that metric space (or some isomorphic metric space). Perhaps it would help to inquire about functors $U: \mathcal{C} \to \mathbf{Met}$ such that given an object $X$ and a subspace of $U(X)$, we get a subobject of $X$.
David Egolf said:
My initial guess is that such an object wouldn't actually be a pullback in $\mathcal{C}$, though. My intuition is that the "pullback diagram" isn't required to commute, but only to "$\varepsilon$-commute".
To talk about a diagram that commutes up to $\varepsilon$, all we need is that our category is enriched over metric spaces - so the homset between any two objects is a metric space. This seems mathematically more enjoyable than assuming our category is "concrete" over metric spaces (i.e. maps down to $\mathbf{Met}$, perhaps faithfully), which is a way of saying that each object is a metric space, or more precisely has an underlying metric space.
I really want a very specific concrete example of how this is going to be applied - a category and some pullback data, where we can think about what we want the approximate pullback to be like.
John Baez said:
To talk about a diagram that commutes up to $\varepsilon$, all we need is that our category is enriched over metric spaces - so the homset between any two objects is a metric space. This seems mathematically more enjoyable than assuming our category is "concrete" over metric spaces (i.e. maps down to $\mathbf{Met}$, perhaps faithfully), which is a way of saying that each object is a metric space, or more precisely has an underlying metric space.
Ah, I think I see what you're saying. If we have morphisms $f, g: X \to Y$, and our category is enriched over metric spaces, then $\mathrm{hom}(X, Y)$ is a metric space. Then we can consider $d(f, g)$, and ask if it is less than some $\varepsilon$. I suppose a nice thing about this approach is that we don't necessarily have to think about "elements" of our objects?
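For instance, with morphisms modeled as functions on a finite grid and the sup metric on hom-sets (a toy sketch, my own made-up example):

```python
import math

grid = [i / 10 for i in range(11)]  # a finite stand-in for the domain

def d_hom(f, g):
    """Sup distance between two parallel morphisms f, g: X -> Y."""
    return max(abs(f(t) - g(t)) for t in grid)

f = lambda t: t ** 2
g = lambda t: t ** 2 + 0.01 * math.sin(20 * t)  # f perturbed by "noise"

eps = 0.02
print(d_hom(f, g) <= eps)  # True: f and g are within eps in hom(X, Y)
```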
I'm still hoping, by the way, that we don't end up being stuck with the notion of a diagram that commutes up to $\varepsilon$. I'm hoping to eventually get a category where commuting "on the nose" corresponds to commuting up to $\varepsilon$ in a related category.
David Egolf said:
I'm still hoping, by the way, that we don't end up being stuck with the notion of a diagram that commutes up to $\varepsilon$. I'm hoping to eventually get a category where commuting "on the nose" corresponds to commuting up to $\varepsilon$ in a related category.
equality up to ε is not transitive, so there's no chance of that except for weird cases.
Maybe David means commutativity is equivalent to commutativity up to $\varepsilon$?
Thanks for chiming in @Josselin Poiret , @Ralph Sarkis !
I'll be happy to talk about this in a bit. Let me first set up a specific category to facilitate easier discussion, as John Baez was suggesting above.
David Egolf said:
John Baez said:
To talk about a diagram that commutes up to $\varepsilon$, all we need is that our category is enriched over metric spaces - so the homset between any two objects is a metric space. This seems mathematically more enjoyable than assuming our category is "concrete" over metric spaces (i.e. maps down to $\mathbf{Met}$, perhaps faithfully), which is a way of saying that each object is a metric space, or more precisely has an underlying metric space.
Ah, I think I see what you're saying. If we have morphisms $f, g: X \to Y$, and our category is enriched over metric spaces, then $\mathrm{hom}(X, Y)$ is a metric space. Then we can consider $d(f, g)$, and ask if it is less than some $\varepsilon$. I suppose a nice thing about this approach is that we don't necessarily have to think about "elements" of our objects?
Right, though as a practical man I'd say the decision depends on the examples you're trying to handle: do your objects have elements or not? If they do, you might as well use that fact: it's not as if category theorists should have some religious objection to elements! But if they don't - if the objects are not most easily thought of as metric spaces with extra structure - then we need another approach.
Hmm. I've been working on typing out a definition of a very specific category to work in. The tricky thing is that I have several different ideas, and I'm not sure which one is best. Maybe I'll just think out loud here for a bit, instead of just posting a huge paragraph with a somewhat complicated definition.
The picture I have in mind is this one:
[image: overlapping regions]
The idea is we have two images, looking at different parts of some thing. But the regions they image have some overlap. On that overlap the two images aren't equal, because of noise. The presence of differing noise between the regions is illustrated here with the cross-hatched blue and yellow.
Keeping that picture in mind, I want to construct a sort of "presheaf-like" category. That's because I want to be able to have a running example related to imaging in mind as I learn about presheaves.
An important example of a presheaf is the one that associates to each open set $U$ of a topological space $X$ the set of continuous real-valued functions on $U$. I want to try and stay close to this important example.
What exactly constitutes an "image" is I think an important and not very easy question. But for purposes of concreteness and simplicity, let's think of an image here as a continuous real-valued function on an open subset of $\mathbb{R}^2$. However, I want a notion of a "noisy image", not just an image.
Here's the picture I have in mind for a noisy image:
[image: noisy image]
The idea is that at each point in our image, we recognize that there is some uncertainty due to noise. The bold line in the picture above is like the image we calculate or observe. The thin upper and lower lines indicate the upper and lower bounds on the image that we could get at each point, if there was no noise.
So, let's define a noisy image as follows:
A noisy image on an open set $U$ consists of the following data:
- a continuous function $f: U \to \mathbb{R}$, the observed image;
- a function $e: U \to \mathbb{R}^2$, the noise envelope, satisfying $e_1(x) \le f(x) \le e_2(x)$ for each $x \in U$.

The idea is that at each point $x$ we observe the value $f(x)$ in some noisy image, but recognize that if there was no noise, the value we would have observed could be as small as $e_1(x)$ or as large as $e_2(x)$. (Here I am using $e_1(x)$ to refer to the first coordinate of $e(x)$, and similarly for $e_2(x)$).
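As a data structure, a noisy image might look like this (a rough sketch with made-up names; finitely many sample points standing in for the open set $U$):

```python
from dataclasses import dataclass

@dataclass
class NoisyImage:
    """A noisy image on a (discretized) open set: observed values f,
    plus a lower envelope e1 and upper envelope e2 at each point."""
    domain: tuple  # sample points of the open set U
    f: dict        # observed value at each point
    e1: dict       # lower envelope
    e2: dict       # upper envelope

    def is_valid(self):
        return all(self.e1[x] <= self.f[x] <= self.e2[x]
                   for x in self.domain)
```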
Our objects in our category are then as follows.
- For each open subset $U \subseteq \mathbb{R}^2$ we have one object $N(U)$.
- The object $N(U)$ associated to $U$ is the set of all noisy images on $U$. So, $N(U)$ is a set of pairs $(f, e)$, as described above.
We now want to define morphisms, continuing in the "presheaf of continuous functions" spirit. The most analogous thing to do would be to define a morphism $N(U) \to N(V)$ (for $V \subseteq U$) to be the restriction function on each element. That is, $(f, e) \mapsto (f|_V, e|_V)$, where $f|_V$ is the restriction of $f$ to $V$, and where $e|_V$ is the restriction of $e$ to $V$.
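Continuing the sketch above, the restriction morphism just crops every field pointwise (and restricting in two stages should agree with restricting all at once, as in the presheaf of continuous functions):

```python
def restrict_image(img, V):
    """Restriction N(U) -> N(V), for V a subset of the domain U:
    crop the observed image and its envelope pointwise."""
    assert set(V) <= set(img.domain)
    crop = lambda field: {x: field[x] for x in V}
    return NoisyImage(tuple(V), crop(img.f), crop(img.e1), crop(img.e2))
```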
I was wondering if it could be useful to have multiple morphisms from $N(U)$ to $N(V)$. The idea is that different morphisms could correspond to adjusting the envelopes of noisy images in different ways. But this quickly gets a bit confusing to think about, so maybe it's best to mostly ignore this idea for the moment.
I should mention for @Ralph Sarkis and @Josselin Poiret that something like this is what I had in mind when I mentioned above that I'd like a category where commuting diagrams express "commuting up to $\varepsilon$" in a related category. I was imagining a category where morphisms involve "cropping" images and potentially adjusting their noise envelopes in some constrained way - in the hope that this would allow us to take two images that differ only in their noise envelopes and then make them equal via a series of steps.
My reasoning behind considering morphisms corresponding to adjustment of noise levels is roughly as follows. If we made a measurement that is known to be between 4 and 7, then certainly it must be between 3 and 8. So, if one views a given envelope as a statement "the true value lies in this range", then one can always expand these envelopes to get a new envelope that is consistent with the previous one.
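In the sketch above, such an envelope-expanding morphism is easy to write down:

```python
def widen(img, delta):
    """Expand the noise envelope by delta >= 0 everywhere: any value
    consistent with the old envelope is consistent with the new one."""
    return NoisyImage(
        img.domain,
        dict(img.f),
        {x: img.e1[x] - delta for x in img.domain},
        {x: img.e2[x] + delta for x in img.domain},
    )
```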
But maybe, instead of comparing two raw images $f$ and $g$, you compare a filtered version of them. Two images are equivalent iff their filtered versions are equal. The filter is a way to cut off high-frequency noise.
My Fourier/Wavelet knowledge is a bit rusty, so I am unsure how to formulate that more precisely.
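Roughly, I am imagining something like this (a crude numpy sketch, with an arbitrary cut-off):

```python
import numpy as np

def lowpass(signal, keep):
    """Zero out all but the lowest `keep` frequency bins."""
    spectrum = np.fft.rfft(signal)
    spectrum[keep:] = 0
    return np.fft.irfft(spectrum, n=len(signal))

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)
rng = np.random.default_rng(0)
img1 = clean + 0.05 * rng.standard_normal(t.size)  # two noisy copies
img2 = clean + 0.05 * rng.standard_normal(t.size)  # of the same scene

# The raw images differ, but their filtered versions nearly agree.
print(np.abs(img1 - img2).max())                          # noticeably > 0
print(np.abs(lowpass(img1, 8) - lowpass(img2, 8)).max())  # much smaller
```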
Peva Blanchard said:
But maybe, instead of comparing two raw images $f$ and $g$, you compare a filtered version of them. Two images are equivalent iff their filtered versions are equal. The filter is a way to cut off high-frequency noise.
My Fourier/Wavelet knowledge is a bit rusty, so I am unsure how to formulate that more precisely.
That's a neat idea! That's almost like adding another "observational step" to our imaging pipeline, so that we end up with things that are simpler to compare. (Although I wouldn't in general expect a filter to perfectly remove noise; so I'd still expect two post-filtering images to be different if they differed originally only in their noise. I suppose it would depend on the kind of noise present.)
In general, the idea of "projecting down" to a simpler category via some destructive operation is interesting. You lose some information, but maybe certain things become easier. And potentially we could still "lift" notions back to the original, more complex category too - as I think you are indicating.
FYI: https://doi.org/10.32408/compositionality-2-2
I re-read the earlier posts. Actually, I think the idea is the same as that of @Eric M Downes from above. The power of $\varepsilon$ acts like the cut-off frequency.
John Baez said:
I really want a very specific concrete example of how this is going to be applied - a category and some pullback data, where we can think about what we want the approximate pullback to be like.
To finish this up, here's some pullback data:
JR said:
FYI: https://doi.org/10.32408/compositionality-2-2
That looks relevant! Thanks for sharing that link!
Whew, that was a lot! I'll stop here for today, but I guess some next questions could be: Does this category have pullbacks? What about "approximate" pullbacks? And if not, why not? And how could we modify this category so that it obtains those things?
Thanks again to everyone for their thoughts.
Eric M Downes said:
Have you looked at sheaves? I think they're relevant, and quite beautiful; deal with inclusions, function restrictions to subsets etc. with (potentially) much weaker assumptions than classical analysis.
https://youtube.com/playlist?list=PLnNqTHlK5sGJrRvH0YBxE4Oe1M9EoSTPQ&si=nkVggthMjHkBEHUy
If I was doing this I might try to represent noise algebraically, considering formal power series in a ring with indeterminate $\varepsilon$. One gets a hierarchy of asymptotic equivalences for free on polynomials of the ring: $f \sim_n g \iff f - g \in (\varepsilon^n)$.
Then we look at how perturbations (powers of $\varepsilon$) interplay with the sheaf structure (restrictions of functions to subsets). We should obtain 2-morphs that map contravariantly with inclusions -- can you get these to arise from other constructions naturally, without asserting them?
I'm imagining the restrictions of functions to subsets as the objects directly. We already have something that resembles a series of subcategories indexed by powers of $\varepsilon$, in which the existence of a limit in subcat $\mathcal{C}_n$ exhibits $\varepsilon^n$-consistency on an open subset. The presence of significant noise at scale $\varepsilon^n$ on set $U$ should be detectable by whether the indexed subcategory contains a limit for the pair $(f|_U, g|_U)$.
This language in principle allows you to pull in a desired/useful tool from applied math (some chapter of Bender & Orszag probably, or generating functions, etc) and try to specify what is needed to get that to work in categorical language. I would hope any application should yield consistent results with equipping each object with a convergence space of some kind, and if not the contrast alone could be very interesting!
Breaking stuff you have built can be fun. It might be interesting to look at classic material on divergences, log-poles etc. in physics through this lens; especially around critical points of continuous phase transitions. Any classes of morphisms that survive in some $\mathcal{C}_n$ as the critical point is approached would be valuable. This was fun to think about, thanks for asking that question!
I should also mention this - this looks really cool, but it's mostly going over my head at this point. :sweat_smile: I'll respond in more detail as I'm able / as I learn more about this stuff. Although I can answer your opening question: I've started looking at sheaves a little bit, and I'd like to learn a lot more about them!
Sheaves are fun!
I think John Baez probably has a clear idea of how his "commuting up to $\varepsilon$" should work, but here is my naive concern, echoing Josselin Poiret:
Category theory aside, does this "noise adjustment strategy" you want the morphisms to apply, actually exist?
My memory of error propagation in practice is this: $\delta(g \circ f) \sim T \cdot \delta f$, where $T$ is the leading ("fastest growing") term in a (possibly divergent) expansion. So if $g$ is linear then $\delta(g \circ f) \sim c\,\delta f$, which is still $O(\varepsilon)$, but if $g$ is nonlinear you have terms like $(\delta f)^2$ and worse entering; a new error equivalence class, assuming $\delta f \sim \varepsilon$.
So unless you already know the "true image" there's no way to "subtract noise" without adding more noise! Pre-amps etc in engineering solve this by keeping two copies of a signal and diff'ing them to detect noise, and designing your system using some control theory stuff (complex analysis) so that you don't accidentally amplify a bunch of nonsense you thought was signal. It's quite beautiful math and engineering, but you need to know the kinds of signals and noise in some detail to design an effective strategy.
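A quick numeric illustration of the linear/nonlinear contrast (toy numbers, reusing the 4-to-7 interval from above):

```python
# A value known only up to an interval, pushed through two maps.
lo, hi = 4.0, 7.0            # measurement known to lie between 4 and 7

linear = lambda x: 2 * x + 1
nonlin = lambda x: x ** 2    # both maps are increasing on [4, 7]

# Linear maps scale the interval width by a constant; nonlinear maps
# can inflate it into a different error class.
print(linear(hi) - linear(lo))  # 6.0  (width 3 -> 2 * 3)
print(nonlin(hi) - nonlin(lo))  # 33.0 (width 3 -> 33)
```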
So, a Caveat! A big drawback of my idea is you have to already know, in some Galaxy-brain Yoneda fashion, how the error propagation of a morphism will compound with everything else, before you even assign it a subcategory.
So I can imagine forming a presheaf, and even a sheaf if the leading-term equivalence class commutes with the join operation (union) over domains. But keeping a composition of error-propagating morphisms "inside" the error-controlled category might be tough. In many practical circumstances I could imagine there are no new morphisms in $\mathcal{C}_{n+1}$ that aren't already in $\mathcal{C}_n$. That is, if any composition would push you into a bigger-error category, you must already be there. I think you can prove that the only morphisms in any $\mathcal{C}_{n+1}$ but not $\mathcal{C}_n$ are linear functions. Things would get more exciting with different error classes $\varepsilon$ vs $\varepsilon^2$ vs $\varepsilon \log \varepsilon$ etc. but also trickier to be sure you've done it right. We'll see how that paper JR posted deals with this!
Essentially, we should listen to JB when he stresses examples. We would benefit a lot from having a clear protocol from applied math or stats or just some existing non-categorical error-propagation strategy/example you believe works decently well already, before categorizing it.
I happen to know that Neil Ghani is currently working on stuff to do with Met-enriched categories for applications to do with approximation and numerical analysis, including working out the appropriate notion of weak co/limits
Which I would guess is a baby version of $\infty$-category theory, viewing metric spaces as $\infty$-groupoids
I would've guessed that the standard notion of weighted (co)limit in an enriched category would be suitable, but perhaps that doesn't quite work for the relevant applications? In the metric case, these can express things like "makes this diagram commute up to $\varepsilon$ and is universal as such", see e.g. Def 2.2. of this and especially sections 3 and 4 of this.
Wow, so many interesting comments! I don't have the energy right now to engage with them in more detail, but please know that I appreciate all your thoughts.
One brief comment, though - I think it is not so unrealistic to consider taking a signal with an "uncertainty/noise envelope" and considering morphisms that narrow that envelope. For example, as one gains additional information, one may be able to better understand that signal, and narrow down that envelope. For instance, this could occur in the case where there are multiple sensors recording a signal.
(More broadly, I'm also interested in reasoning about "signals with uncertainty" where the uncertainty can arise not only from noise but also due to lack of information about an object being imaged.)
I agree it should be possible and think it's definitely worth trying! Just wanted to be clear about some technicalities.
Josselin Poiret said:
David Egolf said:
I'm still hoping, by the way, that we don't end up being stuck with the notion of a diagram that commutes up to $\varepsilon$. I'm hoping to eventually get a category where commuting "on the nose" corresponds to commuting up to $\varepsilon$ in a related category.
equality up to ε is not transitive, so there's no chance of that except for weird cases.
Thinking about this a little more, I had a few related thoughts:
So, I may indeed be stuck with the notion of diagrams that commute up to $\varepsilon$! That makes things more challenging, I think, because I can't easily copy notions that involve diagrams actually commuting.
Eric M Downes said:
Sheaves are fun!
I think John Baez probably has a clear idea of how his "commuting up to $\varepsilon$" should work, but here is my naive concern, echoing Josselin Poiret:
Category theory aside, does this "noise adjustment strategy" you want the morphisms to apply, actually exist?
My memory of error propagation in practice is this: $\delta(g \circ f) \sim T \cdot \delta f$, where $T$ is the leading ("fastest growing") term in a (possibly divergent) expansion. So if $g$ is linear then $\delta(g \circ f) \sim c\,\delta f$, which is still $O(\varepsilon)$, but if $g$ is nonlinear you have terms like $(\delta f)^2$ and worse entering; a new error equivalence class, assuming $\delta f \sim \varepsilon$.
So unless you already know the "true image" there's no way to "subtract noise" without adding more noise! Pre-amps etc in engineering solve this by keeping two copies of a signal and diff'ing them to detect noise, and designing your system using some control theory stuff (complex analysis) so that you don't accidentally amplify a bunch of nonsense you thought was signal. It's quite beautiful math and engineering, but you need to know the kinds of signals and noise in some detail to design an effective strategy.
So, a Caveat! A big drawback of my idea is you have to already know, in some Galaxy-brain Yoneda fashion, how the error propagation of a morphism will compound with everything else, before you even assign it a subcategory.
So I can imagine forming a presheaf, and even a sheaf if the leading-term equivalence class commutes with the join operation (union) over domains. But keeping a composition of error-propagating morphisms "inside" the error-controlled category might be tough. In many practical circumstances I could imagine there are no new morphisms in $\mathcal{C}_{n+1}$ that aren't already in $\mathcal{C}_n$. That is, if any composition would push you into a bigger-error category, you must already be there. I think you can prove that the only morphisms in any $\mathcal{C}_{n+1}$ but not $\mathcal{C}_n$ are linear functions. Things would get more exciting with different error classes $\varepsilon$ vs $\varepsilon^2$ vs $\varepsilon \log \varepsilon$ etc. but also trickier to be sure you've done it right. We'll see how that paper JR posted deals with this!
Essentially, we should listen to JB when he stresses examples. We would benefit a lot from having a clear protocol from applied math or stats or just some existing non-categorical error-propagation strategy/example you believe works decently well already, before categorizing it.
Also, thanks for pointing this out! It makes sense that composing noisy signals can be complicated and in general lead to increased effective noise/error levels.
I guess the next step for me at this point is to take a look at some of the papers that have been linked here!
I feel like I'm not very good yet at figuring out how to relate the abstract math I'm learning about to concepts relating to imaging :sweat_smile:. It's hard for me to come up with simple specific examples, and things that I would like to work often don't end up working! So, thank you all for your helpful ideas and for your patience.
Discussion always helps!!
David Egolf said:
Our objects in our category are then as follows.
- For each open subset $U \subseteq \mathbb{R}^2$ we have one object $N(U)$.
- The object $N(U)$ associated to $U$ is the set of all noisy images on $U$. So, $N(U)$ is a set of pairs $(f, e)$, as described above.

We now want to define morphisms, continuing in the "presheaf of continuous functions" spirit. The most analogous thing to do would be to define a morphism $N(U) \to N(V)$ (for $V \subseteq U$) to be the restriction function on each element. That is, $(f, e) \mapsto (f|_V, e|_V)$, where $f|_V$ is the restriction of $f$ to $V$, and where $e|_V$ is the restriction of $e$ to $V$.
I'm a bit confused about these morphisms. Is there at most one morphism from any object to any other? That's the impression I'm getting: you seem to have one morphism from $(f, e)$ to $(f', e')$ if $f'$ is the restriction of $f$ from $U$ to $V$ and $e'$ is the restriction of $e$ from $U$ to $V$, and none otherwise.
I think that gives a category, but I believe it's a preorder and even a poset. If so, what we've got can be summarized with a partial order on noisy images, meaning "this noisy image is a piece of that noisy image".
Posets are a kind of category, but they have a very special flavor all their own, because all diagrams commute. For example, a pullback of a diagram $x \to z \leftarrow y$ in a poset doesn't depend on the morphisms from $x$ and $y$ to $z$, since there's only one possible choice - but more impressively, it doesn't even depend on the object $z$! (If that's not obvious it makes a good puzzle.) It's just what we call the "greatest lower bound" of $x$ and $y$.
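For example, in the poset of subsets of some set, ordered by inclusion:

```python
# Pullback in a poset of subsets (ordered by inclusion) is just the
# greatest lower bound -- the intersection -- whatever z is.
x = {1, 2, 3}
y = {2, 3, 4}
z = {1, 2, 3, 4, 5}  # any common upper bound works; it doesn't matter
print(x & y)         # {2, 3}: the pullback of x -> z <- y
```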
That's a very good point! So, if I want pullbacks to be meaningful (to depend on $z$), I really need more than one morphism in at least some hom-sets!
And, yes, in the category I explained above, there is exactly one morphism from $N(U)$ to $N(V)$ if $V \subseteq U$. I did briefly consider a way to add in more morphisms, but I'm not sure it's any good.
For the record, the idea was that we could add in additional morphisms that are allowed to adjust the noise/uncertainty "envelope" in each pair, while simultaneously performing some restriction.
Okay. So if you want a poset of noisy images I think you're already fine, but if you intended to get a more juicy category (that's a technical term for a category with lots of morphisms between objects :upside_down:) I think you want to add some juice.
I might expect image processing people to want morphisms that rotate images, maybe scale them down, even stretch them, etc. But there's no need to make things too complicated right away!
That makes a lot of sense! One slogan for dreaming up morphisms $A \to B$, I think, is "how can I make (part of) $B$ from $A$?" In the category I explained above, the only way to do that was basically "by cropping the image $A$". But if we add more tools to our toolbox for making things from other things, then we can expect to get a lot more morphisms!
John Baez said:
I might expect image processing people to want morphisms that rotate images, maybe scale them down, even stretch them, etc. But there's no need to make things too complicated right away!
This is a bit of a side note, but there's a reason that those transformations don't come to mind for me. In ultrasound imaging, which is mostly what I hold in my mind as an example, we do generally form a collection of related images, and we care about how similar they are. However, each image is of the same area to be imaged. So when we compare these different images, there is no need for rotating/scaling/stretching.
However, how "trustworthy", "confident" or "high quality" an image is can vary over the area of that image. For example, we might have an image reported that we think is probably great on the left hand side, but is probably giving close to pure nonsense on the right hand side.
Then you might want to consider a collection of categories, representing different "image-processing toolboxes", and functors between them, describing (at the very least) how a small toolbox can be contained in a larger toolbox.
If all these categories have the same set of objects - i.e. you restrict yourself to the same class of images, and only change the tools available to manipulate them - you might be able to agglomerate all these different categories into a single enriched category, where the hom-thing between two images is a set of morphisms labelled by the toolboxes they belong to. There is definitely work left to make this idea precise: namely, what do you do when you try to compose morphisms belonging to different toolboxes.
You could say the composite is an "undefined" morphism (and put in an "undefined" morphism between every pair of objects), or you could do something more sneaky....
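Here's one crude way to set up the "undefined composite" option (a sketch with made-up toolbox names, nothing canonical):

```python
# Morphisms labelled by the toolbox they belong to; composing across
# different toolboxes yields a formal "undefined" morphism.
UNDEFINED = ("undefined", None)

def compose(g, f):
    g_box, g_fn = g
    f_box, f_fn = f
    if None in (g_fn, f_fn) or g_box != f_box:
        return UNDEFINED
    return (g_box, lambda x: g_fn(f_fn(x)))

crop   = ("crop_tools",   lambda img: img[1:-1])
smooth = ("filter_tools", lambda img: img)  # placeholder "filter"

print(compose(crop, crop)[0])    # 'crop_tools': same toolbox composes
print(compose(smooth, crop)[0])  # 'undefined': toolboxes differ
```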
Anyway, there's a lot of fun to be had here, but I feel myself drifting off into unproductive ruminations because I have no specific task I'm trying to accomplish! In my work applying categories to epidemiology I have the benefit of working with someone who has some pretty specific concrete goals. Then I try to make the prettiest, easy-to-compute-with formalisms that attain those goals.
I wish I could be more specific. Once I figure out some more concrete goals, I'll be sure to post about them somewhere on this zulip! But for now my goals are more vague things like trying to understand what we really mean by "imaging" and what "reconstruction" is about, abstractly. At this point, these vague goals are mostly excuses for me to learn more math. :upside_down:
Back on the topic of dreaming up tools for the "image transformation toolbox", I wanted to note that I'm not just interested in considering what I call "post-processing" transformations of images. I'm also interested in updating a rough image to a better image once we obtain more information. Given a particular strategy for image updating, I think we can get a lot of morphisms: put a morphism from image $A$ to image $B$ corresponding to a particular way of getting more data, such that our image updating strategy changes our image from $A$ to $B$.
From this perspective, we can get tons of morphisms now! One for each pair of the following things:
Anyways, thanks for pointing out this line of thought: thinking about ways to get more morphisms feels really important.
Hmm. From this perspective, I think I could start to sketch a more specific category that relates to a particular way of doing medical imaging. That's because this perspective lets me incorporate a lot more things in the morphisms! For example, I think I could probably define a category of "walking aperture transmission with delay and sum beamforming ultrasound images", now.
Although I'm not sure how useful such a thing would be to define! But maybe it could be pretty fun (and possibly informative) to think about pullbacks or something in that category.
I'll put that on my list to explore, in addition to looking at the papers above. If I figure something cool out, maybe I'll post it here!
Great! Maybe one "concrete goal" would be to make an open-source collection of toolkits for image processing, which has the feature that you can easily add new toolkits and have them work well with existing toolkits. I don't know what sort of problems people have with "compatibility" of different image-processing tools. But I find that category theory is great for making things fit together smoothly. That's why in epidemiology we realized that the golden role for category theory is "community-based modeling", where lots of different people, experts in different subjects, are trying to work together.
I'll emphasize that even if you don't actually make an open-source library of image-processing tools, it can be helpful to imagine doing it.
It puts you in a "design" frame of mind.
I really like that strategy - picking something specific to think about designing or building.
Regarding the particular idea of combining multiple image processing tools smoothly - my initial reaction is that (in my limited experience) people don't talk much about doing this kind of thing - at least in the context of ultrasound or photoacoustic imaging. The focus tends to be more on finding different specific ways of making good images, and figuring out which way is "best" in some sense (e.g. is most robust to motion; gives the sharpest resolution; gives the clearest contrast between features), or improving a given way of imaging by changing it a bit in clever ways (e.g. to make it more computationally efficient).
But I'll give it some thought, and maybe I can think of a related design-based concrete goal.
David Egolf said:
The focus tends to be more on finding different specific ways of making good images, and figuring out which way is "best" in some sense (e.g. is most robust to motion; gives the sharpest resolution; gives the clearest contrast between features), or improving a given way of imaging by changing it a bit in clever ways (e.g. to make it more computationally efficient).
Doesn't that tend to lead to lots of different tools that are incompatible in various ways? That's a problem that often happens - e.g. the appalling diversity of computer languages, or climate modeling methods, and the difficulty of getting them to work smoothly together.
Maybe with image processing you just use one tool, output an image as a jpeg, then feed that into the next tool. But I can't imagine it's quite that easy. Even the free Irfanview tool I use gives me about 20 choices of image format: JLS, JP2, JNG, JPM, JXL... So I imagine medical imaging has dozens of specialized formats and standards.
David Egolf said:
I feel like I'm not very good yet at figuring out how to relate the abstract math I'm learning about to concepts relating to imaging :sweat_smile:. It's hard for me to come up with simple specific examples, and things that I would like to work often don't end up working! So, thank you all for your helpful ideas and for your patience.
If it was easy, there would be many more category theorists! I must consistently remind myself: have patience also with yourself! Grothendieck once described his approach to cracking problems as a "rising sea"; for a long time nothing happens, but consistent deep thought wets the hard shell outside of the nut, until the shell can be opened not even requiring a sharp and precise tool, but simply by squeezing one's fingers. We may only avail ourselves of such wisdom through patience.
John Baez said:
Doesn't that tend to lead to lots of different tools that are incompatible in various ways? That's a problem that often happens - e.g. the appalling diversity of computer languages, or climate modeling methods, and the difficulty of getting them to work smoothly together.
Maybe with image processing you just use one tool, output an image as a jpeg, then feed that into the next tool. But I can't imagine it's quite that easy. Even the free Irfanview tool I use gives me about 20 choices of image format: JLS, JP2, JNG, JPM, JXL... So I imagine medical imaging has dozens of specialized formats and standards.
It's an interesting question! I'm struggling to give you a good answer though, beyond "that's not really what it's like, in my experience". :sweat_smile:
Let me try. So, in my experience doing research on medical imaging techniques, here's how I made images:
So, in this whole process, I never really used any kind of standardized "medical imaging formats". I am using different "tools" in the sense that I'm using various MATLAB libraries or built-in functions, but I'm not sure that's quite the kind of thing you had in mind. [Edit: Perhaps the hardware-specific details of the output from the sensors could be viewed as a kind of "format"..]
I could try to interpret your question more broadly. For example, if I transmit a beam of ultrasound in a certain way, then my reconstruction process needs to be appropriate to that transmission. In that sense, other reconstruction approaches (which assume other kinds of ultrasound beams) are "incompatible" with the beam I transmitted. But in this case it makes sense that they aren't compatible, and I wouldn't really want to try and make them compatible, I guess?
There might be a way to think about "tools" and "compatibility" that could lead from your idea above to some really interesting things I could try to build. But I'm not immediately seeing such a way, so far.
It is possible (even probable!) that companies who make (non-research) medical imaging machines have standardized formats they output their images to. That could be interesting for me to learn about. But I've never really worked with or thought about those before.
Eric M Downes said:
Grothendieck once described his approach to cracking problems as a "rising sea"; for a long time nothing happens, but consistent deep thought wets the hard shell outside of the nut, until the shell can be opened not even requiring a sharp and precise tool, but simply by squeezing one's fingers.
I'm not sure I would want to eat the nut at that point :upside_down: