Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: community: discussion

Topic: articulating community values


view this post on Zulip Brendan Fong (Jun 30 2020 at 19:38):

Some of you might have noticed there's a scheduled section on Community Values at ACT 2020: http://act2020.mit.edu. (It's the first hour of Session 2: 1600 UTC on Thursday July 9th.) This was catalysed by recent events, and a realisation that as this community grows, we necessarily make shared decisions that relate to our values, such as around inclusivity efforts, invited speakers, and conference fees and funding.

As organisers of this year's ACT, we thought it'd be a useful community exercise to begin a dialogue to articulate these values, and perhaps ultimately produce a living, evolving statement of values. This can help guide decision-making, and ensure we are intentional about aligning our actions with broadly supported community values (while also being aware and respectful of the diversity of values we represent).

So we'd love to hear some thoughts from you all here about the values you feel this community embodies, and the values you'd like to see embodied.

(We also discussed this a bit in Nina Otter's ACT@UCR discussion, which led to the creation of this Zulip stream. I spoke a bit more about why I feel this is important in this part: https://youtu.be/dn_whW1DIws?t=1573)

To kick off discussion, I've been discussing, reading, and reflecting over the past couple of weeks, and here's a partial list of things I found that were important to various people:

Which ones resonate with you, or not? What values would you add? And since phrases are themselves open to interpretation, is there a way you would elaborate on them?

view this post on Zulip Andrea Censi (Jul 06 2020 at 15:14):

Some thoughts with reference to the only controversial/vague value ("not doing harm to others through our research"), which I would favor expunging from the list.

For context, I understand this value was discussed in relation to the concrete decision of not publicizing the funding from private companies with some links to military applications.

The value as stated feels very naive. For example, what do you mean by harm, and how can applied category theory do harm?

I would like to propose a couple of points for consideration:

  1. Mathematical theories cannot be harmful by themselves. It is the applications that can be harmful. And in "applied category theory", "applied" doesn't really mean "applications", as in something that directly touches the world (and therefore can be judged ethically). Applications are done by engineers, not mathematicians. For reference, I work in a field (embodied autonomy) where the good/bad applications are almost isomorphic. It's almost as simple as changing the sign in a cost function - with a minus sign, the robot saves as many lives as possible; with a plus sign, it kills as many as possible. In my field we have ethical quandaries. In my view, the ACT community hovers so far from the applications that it can just relax about this whole issue.

  2. Let's call "useful math" the math that, in addition to being beautiful, directly enabled some useful application. Most useful math has been used for both "good" and "bad" applications (use whatever definitions of "good" and "bad" you like). Knives do not kill people, people do; and knives are necessary to make good food. Should we not do any math that could be useful for a "bad" application?

For me these considerations settle the issue. Basic research like math and ACT is so far removed from applications that it does not make sense to judge its ethical impact. And yes, any basic theory will be useful for good and bad anyway.

I was not part of the discussion, but my feeling is that putting that value there was just a way to justify, a posteriori, the decision "we do not want to publicize our association with companies with links to the military". The real discussion should be about that decision, and I have several arguments about it as well, if we want to go there.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 16:54):

I couldn't disagree more.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 16:59):

Also, the goal of ACT, as far as I'm concerned, _is_ to become really applied and useful in the future. I've been very vocal about how keeping ACT on the very pure side of things is actually going to backfire on us and harm the whole community eventually. If you embrace the idea that what we do should indeed become applied at some point, then the question becomes super relevant.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 17:00):

Just to give some context, categorical quantum mechanics is now reaching the stage of becoming actually useful. So, the question of whether or not to accept military funding is actually super relevant.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 17:00):

Personally, I'm very much with Grothendieck on this. The mere idea that what I do may benefit some military organization in some way makes me feel sick.

view this post on Zulip Martti Karvonen (Jul 06 2020 at 18:17):

My opinions on this topic aren't fully formed or confident, but I'd like to point out that the question of whose funding to accept is somewhat separate from the question of the ethics of applications. As far as funding goes, note that Saunders Mac Lane did some algebraic topology and CT on a US Air Force grant, and the same air force has funded some more recent research into quantum programming languages and their categorical semantics. While the USAF itself is not the nicest of institutions, it also seems to me that them spending money on CT (that gets published) is preferable to many other uses they might have for the money. I'd be interested in counterarguments to this view.

As far as judging the applications goes, I'd like to hear how one would actually go about doing it. The usual opinions I hear ("as the technology can be used both for good and bad, the responsibility is fully on the end user" / "if there is an evil application, then the research shouldn't be done") seem a bit too simplistic to me - what other options are there?

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 18:25):

There is a huge discussion about this in the blockchain community. Programmers are literally shaping the financial systems of the future, and the problem of governance is very much a live issue. Saying "we are just devs" doesn't cut it anymore. Essentially, many people involved in the field would like to avoid what happened with AI. People doing AI were thinking pretty much the same (it's just research) and now we have Facebook and social media. Also, we have examples of AI being used to persecute minorities and political dissidents in authoritarian regimes. All in all, "it's only science" my ass: one has to evaluate the impact of the research being produced and take responsibility for it.
Alternatives abound. There's a lot of discussion about a sort of "Hippocratic oath for computer scientists" for instance. At Statebox, we are experimenting with the Hippocratic license for our code.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 18:26):

Also, I get the whole argument of "by accepting military funding we are diverting it from the really deadly applications", but imho this is just a delusion people tell themselves to sleep at night. In the end, you are taking money from people whose goal is to optimize how to kill other people, and that's enough for me.

view this post on Zulip Morgan Rogers (he/him) (Jul 06 2020 at 18:28):

Andrea Censi said:

Mathematical theories cannot be harmful by themselves. It is the applications that can be harmful.

I think this represents a very problematic way of thinking. No idea exists in a vacuum; every discipline has abstract theory attached that can be artificially divorced from the community that practices it and the things it is used for, but that doesn't exempt those who advance that theory from responsibility for the ways in which it is ultimately applied, especially if they benefit directly or indirectly from the pipeline from theory to applications, as is the case in the "accepting military funding" discussion.

view this post on Zulip Martti Karvonen (Jul 06 2020 at 18:36):

I agree that one should go further than "we're just devs/mathematicians", but given a technology that might get used for both good and bad, how does one actually do research concerning it responsibly? By trying to do some cost/benefit analysis as far as the use cases go? Refusing anything with some potential for harmful use? Insisting on publishing it (so that it's not a secret)? The Hippocratic licence above seems like a cool start, but what exactly prevents countries/private actors operating illegally from disregarding it? A lot of the technologies we're using every day have their origins in research funded by and for the military.

view this post on Zulip Morgan Rogers (he/him) (Jul 06 2020 at 18:36):

Andrea Censi said:

Knives do not kill people, people do; and knives are necessary to make good food. Should we not do any math that could be useful for a "bad" application?

:grimacing: :grimacing: The fact that an evil is accepted as necessary doesn't stop it from being evil. We can and should do whatever we can to prevent the dangers posed by a given piece of technology or knowledge from being realised. We keep knives out of the hands of children, don't allow them to be wielded in public spaces, and take other such precautions. Crucially, we hold knife manufacturers responsible for explaining the risks that knives pose and the corresponding safety precautions. This applies more strongly to more complex equipment and ideas, or to situations where the risks are not as immediately apparent to the end user.

view this post on Zulip Morgan Rogers (he/him) (Jul 06 2020 at 18:41):

Martti Karvonen said:

A lot of the technologies we're using every day have their origins in research funded by and for military.

One important thing to keep in mind is that whenever new technology or knowledge is produced in this way, the military get access to it first. And they clearly have the budget to be a lot more... imaginative... about potential applications than we are as researchers. Just because you don't see how the research you're doing could be useful to those funding you doesn't mean that it isn't.

view this post on Zulip Martti Karvonen (Jul 06 2020 at 18:46):

Well, if all of the relevant research outputs get published, then the military doesn't know anything that others don't. Otherwise I do agree - just because you don't see how it might get applied doesn't guarantee much. However, I'm still not sure what one should do _concretely_ other than being mindful of this possibility.

view this post on Zulip Morgan Rogers (he/him) (Jul 06 2020 at 18:48):

Martti Karvonen said:

The Hippocratic licence above seems like a cool start, but what exactly prevents countries/private actors operating illegally from disregarding it?

One of the difficulties of policy-making is that you can't act against a problem in a systematic way (with regulation or enforcement) unless that problem breaks a law in the region where it occurs. And you can't make something illegal if you don't know about it. This is why we as researchers need to put in the extra effort to think about the consequences of our research, to make those potential consequences clear to those with the power and resources to act against them.

view this post on Zulip Martti Karvonen (Jul 06 2020 at 18:48):

While there are clear examples of papers, e.g. in ML, that are ethically questionable, as far as I can see no one is recommending a blanket ban on ML research. So as long as you're doing research where the applications aren't obvious/obviously evil, what exactly is the difference between research that doesn't worry about its applications and research that is mindful about it?

view this post on Zulip Morgan Rogers (he/him) (Jul 06 2020 at 18:52):

Quite the opposite: if there are potential ethical problems in a discipline, it should be studied all the more intensely to resolve those ambiguities and/or take precautions as a community against concrete problems as they are identified.

view this post on Zulip John Baez (Jul 06 2020 at 18:58):

Andrea Censi wrote:

Applications are done by engineers, not mathematicians.

Unsurprisingly, a lot of people involved in applied category theory are doing applications. So, the ethics of applications in ACT is important. I don't think it matters much whether we call the people doing the applications engineers, mathematicians, or whatever you like.

For example, I call myself a "mathematical physicist" and I work in a math department, but I helped use category theory in a project funded by the US military to design search and rescue missions. My grad student quit this project, in part out of ethical concerns, and then I quit too for various reasons.

I mention this just to show that merely being a "mathematician" does not necessarily imply that one avoids applications. Indeed there's a thing called applied mathematics which is precisely about applications.

Of course mathematicians who want to can avoid working on applications.

view this post on Zulip John Baez (Jul 06 2020 at 19:02):

Basic research like math and ACT is so far removed from applications that it does not make sense to judge its ethical impact.

I disagree; some applied category theory may be far removed from applications, but a lot of the work is not. You will see a number of business presentations on Wednesday that are extremely applied:

• Arquimedes Canedo (Siemens Corporate Technology).

• Brendan Fong (Topos Institute).

• Jelle Herold (Statebox): Industrial strength CT.

• Steve Huntsman (BAE): Inhabiting the value proposition for category theory.

• Ilyas Khan (Cambridge Quantum Computing).

• Alan Ransil (Protocol Labs): Compositional data structures for the decentralized web.

• Alberto Speranzon (Honeywell).

• Ryan Wisnesky (Conexus): Categorical informatics at scale.

view this post on Zulip Jens Hemelaer (Jul 06 2020 at 19:03):

Fabrizio Genovese said:

Just to give some context, categorical quantum mechanics is now reaching the stage of becoming actually useful. So, the question of whether or not to accept military funding is actually super relevant.

I think the question of whether you should accept military funding is always relevant, even if your research has no applications at all. Accepting funding from a military organization will improve the reputation of that organization, which can lead to them receiving more funding. Or it can lead to more mathematicians being willing to work for that organization. Accepting funding is always seen as an endorsement by the outside world.

view this post on Zulip Georgios Bakirtzis (Jul 06 2020 at 19:04):

Half these companies work with the American military. I think a lot of people do not have any insight into how military research funding works (especially in the US), but they are speaking about it like they do.

view this post on Zulip John Baez (Jul 06 2020 at 19:06):

Yes, it would be interesting to get some people who do military-funded work to talk about it, e.g. David Spivak, Bob Coecke, me, etc. etc.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:09):

Just some food for thought: military funding and military applications of basic science have drastically accelerated the use of technologies that ultimately eliminated the incentives that caused many forms of widespread systematic evil.

Side note: I'm not defending the misuse of new technologies. However, I'm pointing out that reasonable people can disagree on the issue of pacifism (I'm very familiar with the philosophy of pacifism, as my father's side of my family is Mennonite. I happen to agree with much of it, but I feel like this one-sided conversation needs a counterbalance).

A few examples:
The development of the technology to extract nitrogen from the air eliminated the incentives that drove the nineteenth century "dung wars" (skirmishes and wars over fertilizer that took place all over the world) and saved millions more from starvation. This technology was invented by Fritz Haber, the military chemist and war criminal. He designed it to help manufacture rockets, explosives and bullets.

Nuclear power came along with the military development of nuclear weapons (this may have been separable, but we don't know––it certainly happened faster because of military applications). This technology hasn't helped humanity that much, yet, but it provides a viable and scalable solution to the over-use of fossil fuels.

The development of the internet was deeply entangled with military funding of computer science research. This has led to a host of new social dynamics, some good and some bad. A bit Pandora's box-ey. However, I'm sure you would agree that it has magnified good social trends more than bad ones on a global scale.

The development of GPS (developed for military navigation). This has saved a lot of people from a lot of ad-hoc violence by helping people find their destinations promptly and without error.

We are considering the question: is accepting military funding always bad because militaries hurt people?

The answer is non-obvious, and admitting when something is legitimately controversial is extremely important for rational discourse. That is, admitting when an issue is capital P Philosophically Problematic is essential for having integrity in political debates about how to police other peoples' choices. And that is what we are doing here. We are debating how to police other peoples' choices, even if only hypothetically at first.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:13):

Martti Karvonen said:

I agree that one should go further than "we're just devs/mathematicians", but given a technology that might get used for both good and bad, how does one actually do research concerning it responsibly? By trying to do some cost/benefit analysis as far as the use cases go? Refusing anything with some potential for harmful use? Insisting on publishing it (so that it's not a secret)? The Hippocratic licence above seems like a cool start, but what exactly prevents countries/private actors operating illegally from disregarding it? A lot of the technologies we're using every day have their origins in research funded by and for the military.

Well, if some nation violates the law, you can't do much about it aside from suing them. E.g. China often breaks patent laws and, well, there's nothing we can do. But at least, by using that licence, we make a statement. We say that we don't agree with something, and we put some legal framework in place to be able to drag someone to court if they break some requirements.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:14):

Oliver Shetler said:

Just some food for thought: military funding and military applications of basic science have drastically accelerated the use of technologies that ultimately eliminated the incentives that caused many forms of widespread systematic evil.

Side note: I'm not defending the misuse of new technologies. However, I'm pointing out that reasonable people can disagree on the issue of pacifism (I'm very familiar with the philosophy of pacifism, as my father's side of my family is Mennonite. I happen to agree with much of it, but I feel like this one-sided conversation needs a counterbalance).

A few examples:
The development of the technology to extract nitrogen from the air eliminated the incentives that drove the nineteenth century "dung wars" (skirmishes and wars over fertilizer that took place all over the world) and saved millions more from starvation. This technology was invented by Fritz Haber, the military chemist and war criminal. He designed it to help manufacture rockets, explosives and bullets.

Nuclear power came along with the military development of nuclear weapons (this may have been separable, but we don't know––it certainly happened faster because of military applications). This technology hasn't helped humanity that much, yet, but it provides a viable and scalable solution to the over-use of fossil fuels.

The development of the internet was deeply entangled with military funding of computer science research. This has led to a host of new social dynamics, some good and some bad. A bit Pandora's box-ey. However, I'm sure you would agree that it has magnified good social trends more than bad ones on a global scale.

The development of GPS (developed for military navigation). This has saved a lot of people from a lot of ad-hoc violence by helping people find their destinations promptly and without error.

We are considering the question: is accepting military funding always bad because militaries hurt people?

The answer is non-obvious, and admitting when something is legitimately controversial is extremely important for rational discourse. That is, admitting when an issue is capital P Philosophically Problematic is essential for having integrity in political debates about how to police other peoples' choices. And that is what we are doing here. We are debating how to police other peoples' choices.

Absolutely, but we live in 2020: there are VC firms such as BlackRock that have assets under management worth 6 _trillion_ dollars (yes, they are super evil too), tons of research funding institutions, countless private companies willing to invest, all sorts of grants. So, with all this happening, is it really necessary, in 2020, to get funding from the military? I bet it's not.

view this post on Zulip Georgios Bakirtzis (Jul 06 2020 at 19:15):

Fabrizio Genovese said:

Absolutely, but we live in 2020: there are VC firms such as BlackRock that have assets under management worth 6 _trillion_ dollars (yes, they are super evil too), tons of research funding institutions, countless private companies willing to invest, all sorts of grants. So, with all this happening, is it really necessary, in 2020, to get funding from the military? I bet it's not.

In the US it almost certainly is.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:16):

I agree that if the only food shop in town is McDonald's then it's either that or starving. But this is more like "there's McDonald's, and plenty of local pubs, and supermarkets, and eco-friendly grocery stores, and you can even grow your own stuff in your garden", so really McDonald's is a choice, and not a forced one.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:16):

Giorgos Bakirtzis said:

Fabrizio Genovese said:

Absolutely, but we live in 2020: there are VC firms such as BlackRock that have assets under management worth 6 _trillion_ dollars (yes, they are super evil too), tons of research funding institutions, countless private companies willing to invest, all sorts of grants. So, with all this happening, is it really necessary, in 2020, to get funding from the military? I bet it's not.

In the US it almost certainly is.

It almost certainly isn't, since a lot of the things I listed happen to be incorporated in the US. Like BlackRock.

view this post on Zulip Georgios Bakirtzis (Jul 06 2020 at 19:17):

I think you don't understand the size of funding these companies are willing to "invest" in fundamental research versus the Office of the Secretary of Defense.

view this post on Zulip Georgios Bakirtzis (Jul 06 2020 at 19:17):

Although that's changing too, you will see a lot less US military funding in the next half decade.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:18):

John Baez said:

Andrea Censi wrote:

Applications are done by engineers, not mathematicians.

Unsurprisingly, a lot of people involved in applied category theory are doing applications. So, the ethics of applications in ACT is important. I don't think it matters much whether we call the people doing the applications engineers, mathematicians, or whatever you like.

For example, I call myself a "mathematical physicist" and I work in a math department, but I helped use category theory in a project funded by the US military to design search and rescue missions. My grad student quit this project, in part out of ethical concerns, and then I quit too for various reasons.

I mention this just to show that merely being a "mathematician" does not necessarily imply that one avoids applications. Indeed there's a thing called applied mathematics which is precisely about applications.

Of course mathematicians who want to can avoid working on applications.

I am very happy to know that you had ethical concerns about that project, and that you acted accordingly. I remember thinking "are they sure they'll use this stuff for rescue missions?" when it was presented at NIST 2 years ago.

view this post on Zulip Jules Hedges (Jul 06 2020 at 19:20):

tl;dr I'm with Fabrizio. In particular, calling something "naive" (as in Andrea's original post) is a standard tool used all the time to shut down good ideas. Naive is good.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:20):

Giorgos Bakirtzis said:

I think you don't understand the size of funding these companies are willing to "invest" in fundamental research versus the Office of the Secretary of Defense.

I think a very similar argument was made about Elsevier: "You can't publish without them, you don't really understand how big in the game they are." This was indeed true until it wasn't, and that happened precisely when people were fed up enough and some heroes put up libgen, openly breaking a lot of (unfair) laws.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:21):

I would very much appreciate it if we could make a statement about this now, when it really makes a difference, and not, say, in 10 years, just because it has become a mainstream thing to do.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:21):

This is an emerging community, and it would be nice if it could "emerge" out of some healthy principles. :slight_smile:

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:21):

Fabrizio Genovese said:

Oliver Shetler said:

Just some food for thought: military funding and military applications of basic science have drastically accelerated the use of technologies that ultimately eliminated the incentives that caused many forms of widespread systematic evil.

Side note: I'm not defending the misuse of new technologies. However, I'm pointing out that reasonable people can disagree on the issue of pacifism (I'm very familiar with the philosophy of pacifism, as my father's side of my family is Mennonite. I happen to agree with much of it, but I feel like this one-sided conversation needs a counterbalance).

A few examples:
The development of the technology to extract nitrogen from the air eliminated the incentives that drove the nineteenth century "dung wars" (skirmishes and wars over fertilizer that took place all over the world) and saved millions more from starvation. This technology was invented by Fritz Haber, the military chemist and war criminal. He designed it to help manufacture rockets, explosives and bullets.

Nuclear power came along with the military development of nuclear weapons (this may have been separable, but we don't know––it certainly happened faster because of military applications). This technology hasn't helped humanity that much, yet, but it provides a viable and scalable solution to the over-use of fossil fuels.

The development of the internet was deeply entangled with military funding of computer science research. This has led to a host of new social dynamics, some good and some bad. A bit Pandora's box-ey. However, I'm sure you would agree that it has magnified good social trends more than bad ones on a global scale.

The development of GPS (developed for military navigation). This has saved a lot of people from a lot of ad-hoc violence by helping people find their destinations promptly and without error.

We are considering the question: is accepting military funding always bad because militaries hurt people?

The answer is non-obvious, and admitting when something is legitimately controversial is extremely important for rational discourse. That is, admitting when an issue is capital P Philosophically Problematic is essential for having integrity in political debates about how to police other peoples' choices. And that is what we are doing here. We are debating how to police other peoples' choices.

Absolutely, but we live in 2020: there are VC firms such as BlackRock that have assets under management worth 6 _trillion_ dollars (yes, they are super evil too), tons of research funding institutions, countless private companies willing to invest, all sorts of grants. So, with all this happening, is it really necessary, in 2020, to get funding from the military? I bet it's not.

Nothing is ever necessary. Even choices under duress are choices.

If I understand you correctly, you are making the weaker claim that getting military funding is not sufficiently helpful to justify the moral cost.

If that's what you are saying, you need to be clear about reasons why. In the US, the government may be about to enter into another cold war. It's possible that this will drive a dramatic uptick in science investment, like it did in the previous conflict with Russia (both countries invested in basic science and both countries produced invaluable advances as a result).

I probably won't seek or accept military funding, but I'm not sure how stigmatizing the choice to accept military funding will affect overall scientific progress. It could have no impact. It could stunt the growth of fields where the anti-military political norm rules. Are you saying you have such strong evidence in favor of your view that you feel entitled to police people for making different choices than you?

view this post on Zulip Georgios Bakirtzis (Jul 06 2020 at 19:22):

I have not made a normative claim about accepting military funding. I am merely noting that if you knew the funding trajectory from the OSD to researchers, and what the responsibilities of the researchers towards the military actually are, you might not have such a black-and-white view of it. Would I prefer NOT to accept military funding? Sure.

view this post on Zulip Jules Hedges (Jul 06 2020 at 19:24):

That said, if we put out a statement that says something flat like "applied category theorists must not accept military funding", then it will inevitably be ignored by some people, so in that sense it's a waste of time.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:25):

Oliver Shetler said:

Fabrizio Genovese said:

Oliver Shetler said:

Just some food for thought: military funding and military applications of basic science have drastically accelerated the use of technologies that ultimately eliminated the incentives that caused many forms of widespread systematic evil.

Side note: I'm not defending the misuse of new technologies. However, I'm pointing out that reasonable people can disagree on the issue of pacifism (I'm very familiar with the philosophy of pacifism, as my father's side of my family is Mennonite. I happen to agree with much of it, but I feel like this one-sided conversation needs a counterbalance).

A few examples:
The development of the technology to extract nitrogen from the air eliminated the incentives that drove the nineteenth century "dung wars" (skirmishes and wars over fertilizer that took place all over the world) and saved millions more from starvation. This technology was invented by Fritz Haber, the military chemist and war criminal. He designed it to help manufacture rockets, explosives and bullets.

Nuclear power came along with the military development of nuclear weapons (this may have been separable, but we don't know––it certainly happened faster because of military applications). This technology hasn't helped humanity that much, yet, but it provides a viable and scalable solution to the over-use of fossil fuels.

The development of the internet was deeply entangled with military funding of computer science research. This has led to a host of new social dynamics, some good and some bad. A bit Pandora's box-ey. However, I'm sure you would agree that it has magnified good social trends more than bad ones on a global scale.

The development of GPS (developed for military navigation). This has saved a lot of people from a lot of ad-hoc violence by helping people find their destinations promptly and without error.

We are considering the question: is accepting military funding always bad because militaries hurt people?

The answer is non-obvious, and admitting when something is legitimately controversial is extremely important for rational discourse. That is, admitting when an issue is capital P Philosophically Problematic is essential for having integrity in political debates about how to police other peoples' choices. And that is what we are doing here. We are debating how to police other peoples' choices.

Absolutely, but we live in 2020: there are VC firms such as BlackRock that have assets under management worth 6 _trillion_ dollars (yes, they are super evil too), tons of research funding institutions, countless private companies willing to invest, all sorts of grants. So, with all this happening, is it really necessary, in 2020, to get funding from the military? I bet it's not.

Nothing is ever necessary. Even choices under duress are choices.

If I understand you correctly, you are making the weaker claim that getting military funding is not sufficiently helpful to justify the moral cost.

If that's what you are saying, you need to be clear about reasons why. In the US, the government may be about to enter into another cold war. It's possible that this will drive a dramatic uptick in science investment, like it did in the previous conflict with Russia (both countries invested in basic science and both countries produced invaluable advances as a result).

I probably won't seek or accept military funding, but I'm not sure how stigmatizing the choice to accept military funding will affect overall scientific progress. It could have no impact. It could stunt the growth of fields where the anti-military political norm rules. Are you saying you have such strong evidence in favor of your view that you feel entitled to police people for making different choices than you?

Policing other people is by far the thing I hate the most, EVER. We already have enough Twitter police doing that and it's really not my cup of tea. As far as I'm concerned, a specific person or research group can get funding in any way they want. And I can guarantee you I will _never_ be the one going around telling them that they are wrong. We are talking about making a statement as a community, which is a completely different thing.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:27):

Jules Hedges said:

That said, if we put out a statement that says something flat like "applied category theorists must not accept military funding", then it will inevitably be ignored by some people, so in that sense it's a waste of time.

We can say that the ACT community doesn't like the idea of getting funding from the military or other ethically dubious sources (e.g. fossil fuel companies), that it condemns it, but that it also understands that this is a forced choice for some research groups. As such, it leaves the choice to the individual scientist/research group, but strongly suggests considering these forms of funding only as the very last option available.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:28):

I never had in mind something like "If you get funding from X then you are not our friend and you cannot hang out with us".

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:28):

Fabrizio Genovese said:

Policing other people is by far the thing I hate the most, EVER. We already have enough Twitter police doing that and it's really not my cup of tea. As far as I'm concerned, a specific person or research group can get funding in any way they want. And I can guarantee you I will _never_ be the one going around telling them that they are wrong. We are talking about making a statement as a community, which is a completely different thing.

How is making a statement on behalf of many people not the same thing? It either singles out people who don't agree with the statement, if you ask for endorsement, or it silently alienates people who disagree. It seems to me that making group statements is generally something you only want to do when you are happy to alienate people who disagree on a particular issue. That's a legitimate choice that communities have to make, but you've got to pick your battles.

view this post on Zulip Jules Hedges (Jul 06 2020 at 19:30):

It's only policing if you have a truncheon. The alternative is a statement with no bite, which is what would actually happen.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:30):

It's really not the same thing. Also, I can guarantee you that a person working in group A probably doesn't have the slightest clue where the funding of people working in group B comes from. I don't think this will lead to any form of alienation.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:32):

What I'd like is really something such as "Hey, these forms of funding really suck. They should be your last choice, and if you go for them, you should actively search for, or ask for help finding, alternative funding."

view this post on Zulip Jules Hedges (Jul 06 2020 at 19:32):

If I may name names, as far as I know Bob had military funding at some point. If he isn't "the community" then I don't know who is, and I'm pretty sure he'd take exactly zero notice of any statements.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:33):

Precisely

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:35):

My PhD was also partly funded by the US Air Force. I had no choice, since I didn't have any other form of funding (at some point I was really eating once a day, mainly canned food). Still, I felt very much ashamed of it; I tried my best to finish the PhD in record time, and I've never been so glad in my life that something I did (my thesis) was beyond useless.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:35):

Jules Hedges said:

If I may name names, as far as I know Bob had military funding at some point. If he isn't "the community" then I don't know who is, and I'm pretty sure he'd take exactly zero notice of any statements.

This is what I used to think. Maybe it's true in his case, but I'm coming from the perspective of someone who has witnessed a lot of thought policing happening at American universities in particular. In the past, these fluff-ball statements about this cause or that cause would get filtered out of my mind like spam. If I thought anything at all, I would think "administrators and editors, etc., need something to do when things are running smoothly." But it's pretty clear that these sorts of wide-net group statements and community guidelines do alienate people. The practice has synergised with other political forces so potently that it has cleaved a huge chunk of the population away from academia, and as a consequence, universities recruit far less talent than they could.

I'm not suggesting that all group statements or moral policing are problematic either. I'm just saying that it makes sense to recognize when you are alienating people, even a little, and to decide deliberately when you want to alienate people, and only do it then.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:37):

I have the feeling that the way Americans implement this sort of policy is... strange? All the Twitter policing I see is indeed mainly done by American university students. I can understand that, given the cultural difference, such statements could probably lead to alienation in the US. Still, I'm not American, so I don't know how to solve this.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:38):

Fabrizio Genovese said:

I have the feeling that the way Americans implement this sort of policy is... strange? All the Twitter policing I see is indeed mainly done by American university students. I can understand that, given the cultural difference, such statements could probably lead to alienation in the US. Still, I'm not American, so I don't know how to solve this.

I think it's simmering below the surface in Europe as well. This problem was invisible in the US until it wasn't. It was a phase shift with long-standing underlying causes.

view this post on Zulip Jules Hedges (Jul 06 2020 at 19:39):

I'd guess it's also more serious in the US just because their military have more money than they know how to spend

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:44):

Jules Hedges said:

I'd guess it's also more serious in the US just because their military have more money than they know how to spend

Maybe, although I took this moment to speak up because this is a much less charged issue than anything else I can think of. It's not a big deal, to be honest. I just wanted to raise a little bit of awareness about how these small acts of divisive rhetoric accumulate. I don't personally feel particularly strongly about the issue. If the ACT people want to make a statement like this, I doubt it would make many waves. But the overall trend of mixing detachable politics with objective research is concerning to me.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:45):

I have the feeling that in the US any kind of statement regarding society tends to become black and white very quickly. For instance, it seems to me that "politics through dialogue" is a long-gone practice. People just keep insulting each other on Twitter as if they were football supporters.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 19:45):

This is also happening here in Europe, of course, but it's still milder, I think.

view this post on Zulip Morgan Rogers (he/him) (Jul 06 2020 at 19:47):

Martti Karvonen said:

So as long as you're doing research where the applications aren't obvious/obviously evil, what exactly is the difference between research that doesn't worry about its applications and research that is mindful about it?

Imagine that your research somehow becomes the root of a large-scale catastrophe. Years later, a documentary is made chronicling the events leading up to this disaster. In the former case, you are a hapless scientist shrugging their shoulders and shaking their head sympathetically. In the latter, in the best-case scenario in which you foresaw the possibility of the disaster, you're the scientist who did everything in their power (or at least something) to warn the world of what could happen.
If we don't take ownership of the ethical consequences of our work, it's unlikely that anyone will. The default state is apathy.

@Oliver Shetler I think you're right; there's no reason to reach for blanket statements about the moral implications of accepting military funding. There is every reason, however, to personally examine the ethics of where your funding is coming from and who (if anyone) is being advantaged by you being paid that way.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:48):

@Fabrizio Genovese Yes. False dichotomy seems to be the defining fallacy of our era.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 19:53):

@[Mod] Morgan Rogers You know what kind of a statement I would be excited to support? One that explicitly says something (more eloquent, but) like "As applications of CT mature, we invite you to explicitly consider the implications of the funding sources you accept. There are no easy answers, but we all agree that we should not brush hard choices under the rug. Each of us should face the cognitive dissonance of confronting the implications of our choices and explicitly grappling with them. Here is a list of moral issues for you to consider." And then maybe some format for rational discussion.

view this post on Zulip John Baez (Jul 06 2020 at 20:09):

I don't know if Fabrizio and Shetler were talking about "Twitter policing" regarding military funding in particular, or just in general. I've seen a lot of it in general but almost none for military funding.

view this post on Zulip John Baez (Jul 06 2020 at 20:12):

Oliver Shetler said:

[Mod] Morgan Rogers You know what kind of a statement I would be excited to support? One that explicitly says something (more eloquent, but) like "As applications of CT mature, we invite you to explicitly consider the implications of the funding sources you accept. There are no easy answers, but we all agree that we should not brush hard choices under the rug. Each of us should face the cognitive dissonance of confronting the implications of our choices and explicitly grappling with them. Here is a list of moral issues for you to consider." And then maybe some format for rational discussion.

That's the sort of thing that a lot of people could easily support. Some might consider it "toothless", but if you try to get a really effective policy against military funding for ACT, the community could split into a community that doesn't take military funding and a community that does.

view this post on Zulip Martti Karvonen (Jul 06 2020 at 20:17):

[Mod] Morgan Rogers said:

Martti Karvonen said:

So as long as you're doing research where the applications aren't obvious/obviously evil, what exactly is the difference between research that doesn't worry about its applications and research that is mindful about it?

Imagine that your research somehow becomes the root of a large-scale catastrophe. Years later, a documentary is made chronicling the events leading up to this disaster. In the former case, you are a hapless scientist shrugging their shoulders and shaking their head sympathetically. In the latter, in the best-case scenario in which you foresaw the possibility of the disaster, you're the scientist who did everything in their power (or at least something) to warn the world of what could happen.
If we don't take ownership of the ethical consequences of our work, it's unlikely that anyone will. The default state is apathy.

Thanks, this was helpful. So, given research where either the applications are unclear or they're clearly both good and bad, you should do your best (or at least something/enough) to avoid the bad outcomes. There are still some cases where the meaning of this is a bit unclear to me - let's say your research improves GPUs and thus most of ML becomes slightly more efficient. Does your responsibility roughly amount to trying to get the more applied ML researchers to care more about their research ethics?

view this post on Zulip Oliver Shetler (Jul 06 2020 at 20:35):

@John Baez We weren't talking about Twitter policing so much as the general trend of strongly suggesting or imposing non-obvious ideological commitments on people in academic circles.

As for the idea that such a statement would be "toothless": since when are we trying to bite people we disagree with? Especially on issues that rational people can genuinely disagree on? (Not that you, personally, are saying this. But since when is this level of animosity normal communal behavior?)

I don't think it's a cop-out to ask people to take their own choices seriously. When dealing with cognitive dissonance, there are three main strategies: (1) ignore the feeling until it goes away; (2) change your beliefs to match your behavior; (3) change your behavior to match your beliefs. Strategy (1) is by far the most common for all of us. Strongly encouraging people to do (2) or (3) instead of (1) is a serious, difficult thing to ask of people. When it actually happens, it's harder and much more beneficial to a community than getting that dopamine hit from regulating things just the way you want them, or making (unenforced) statements that aim at a particular outcome. Your whole career in mathematics is built on faith in rationality as a means of discovery. Why should it end there?

view this post on Zulip Rich Hilliard (Jul 06 2020 at 20:40):

@[Mod] Morgan Rogers Reading this thread, I realize there are at least 2 more aspects (in addition to funding sources) to contemplate:
"As applications of CT mature, we invite you to explicitly consider the implications of
1) the work that you do;
2) the funding sources you accept; and
3) the dissemination of that work to others. ..."

view this post on Zulip John Baez (Jul 06 2020 at 20:47):

Oliver Shetler said:

John Baez We weren't talking about twitter policing so much as the general trend of strongly suggesting or imposing non-obvious ideological commitments on people in academic circles.

What's "obvious" depends a lot on the community, but yes - Americans have divided into two warring camps, we may even be approaching something like a civil war, so there's a lot of energy being put into making people prove their commitment to one of these camps or another.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 21:01):

@John Baez This is true. Lots of purity tests happening.

Just want to clarify something, and then I've reached my quota for controversial conversations this month.

"Non-obvious" is a hard thing to pin down, but I don't mean whether an issue is culturally controversial (or how many people feel different ways). I mean more that if you follow best practices in being rational (these standards are generally less controversial than any given issue), is it possible for two people, in good-faith, to disagree on an issue?

This standard changes over time, but only because of advances in critical thinking. Not because of other reasons. That is, if you look at how a person came to his or her conclusion, can you find major fault in the method, rather than just the conclusion?

view this post on Zulip John Baez (Jul 06 2020 at 21:22):

Okay - I agree that there's a concept of "non-obvious" there. Alas, most people who support the purity tests you call "non-obvious" consider them to be "obvious" according to their own definition of that term. Rational decision-making of the sort you advocate plays a rather minor role in US politics today. I wish ACT could change that.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 21:39):

Oliver Shetler said:

[Mod] Morgan Rogers You know what kind of a statement I would be excited to support? One that explicitly says something (more eloquent, but) like "As applications of CT mature, we invite you to explicitly consider the implications of the funding sources you accept. There are no easy answers, but we all agree that we should not brush hard choices under the rug. Each of us should face the cognitive dissonance of confronting the implications of our choices and explicitly grappling with them. Here is a list of moral issues for you to consider." And then maybe some format for rational discussion.

I like this statement, but for it to be effective it should come with some policies that we explicitly support. Hippocratic licenses for software could be one of these. Also, it would be very useful if we could pool resources to set up a network for "clean funding", that is, some kind of repository where we list all the institutions/companies willing to fund ACT projects that aren't ethically despicable.

view this post on Zulip John Baez (Jul 06 2020 at 21:52):

I wrote:

Some might consider it "toothless"...

Oliver wrote:

As for the idea that such a statement would be "toothless": since when are we trying to bite people we disagree with?

The word "toothless" doesn't imply that someone is trying and failing to bite someone they disagree with. One says e.g. that a law against pollution is "toothless" if it doesn't come with any penalty for breaking it. So I'm saying: some people might want some sort of code of conduct among ACT people that would involve some penalty - e.g. disapproval by their peers - if they violate it.

But I was also saying that if such a code of conduct forbade military funding, it would just split the community, giving those who dislike military funding even less of a chance to persuade others of their viewpoint. This is why I like your approach.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 22:08):

I get your point; my idea wasn't about forbidding things but about using a soft approach: everyone can do whatever they want, but we invite everyone to be mindful about their funding sources AND provide a network to help people find "ethically ok" funding sources if they need them.

view this post on Zulip John Baez (Jul 06 2020 at 22:13):

It would be really great to take positive action in this way: not just scolding people for being naughty, but creating a network where people can find "ethically ok" funding sources and teams working on projects that will help the world.

view this post on Zulip Fabrizio Genovese (Jul 06 2020 at 22:15):

This "founding sources" discussion actually gets even broader: For example, I've spoken with quite a few people in different unis and we are all converging towards a common research theme, albeit from different perspectives. At this point, it would be very nice to submit a coordinated grant application to start a common research project, but I've never written such a grant and I really don't know where to start. I think some form of mentorship for this sort of menial, beaurocratic work would be invaluable, especially for young people like me. In this sense, people in the ACT community could literally teach to other people how to get funding, and in this respect ethics makes a difference: I could be taught that the military is the first institution I have to write to for funding, or I could be taught that I could start somewhere else. Having such a network in place could really influence things positively.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 23:23):

John Baez said:

I wrote:

Some might consider it "toothless"...

Oliver wrote:

As for the idea that such a statement would be "toothless": since when are we trying to bite people we disagree with?

The word "toothless" doesn't imply that someone is trying and failing to bite someone they disagree with. One says e.g. that a law against pollution is "toothless" if it doesn't come with any penalty for breaking it. So I'm saying: some people might want some sort of code of conduct among ACT people that would involve some penalty - e.g. disapproval by their peers - if they violate it.

But I was also saying that if such a code of conduct forbade military funding, it would just split the community, giving those who dislike military funding even less of a chance to persuade others of their viewpoint. This is why I like your approach.

Yeah. I wish I hadn't said that. The paragraph was unnecessary. Disregard. (Though I should say that I was joking about regulators biting people. It was a metaphor––was there really a risk of people not understanding that?)

I'm just a fan of focusing on the process people use to make decisions, rather than the exact choices people make. If there was a way to regulate that, I would be for it. In this case, I'm not sure what that would look like though.

view this post on Zulip Oliver Shetler (Jul 06 2020 at 23:28):

Fabrizio Genovese said:

This "founding sources" discussion actually gets even broader: For example, I've spoken with quite a few people in different unis and we are all converging towards a common research theme, albeit from different perspectives. At this point, it would be very nice to submit a coordinated grant application to start a common research project, but I've never written such a grant and I really don't know where to start. I think some form of mentorship for this sort of menial, beaurocratic work would be invaluable, especially for young people like me. In this sense, people in the ACT community could literally teach to other people how to get funding, and in this respect ethics makes a difference: I could be taught that the military is the first institution I have to write to for funding, or I could be taught that I could start somewhere else. Having such a network in place could really influence things positively.

Maybe an ACT "field trip" to one of those Profellow workshops on grant applications next time they do one?

view this post on Zulip Fabrizio Genovese (Jul 07 2020 at 00:14):

Lol, I am so bad at grant-related stuff that I fear I don't even know what you are talking about... :frown:

view this post on Zulip Oliver Shetler (Jul 07 2020 at 01:30):

@Fabrizio Genovese It's a website / company that helps people find funding and offers various services and advice around that theme. Just google Profellow.

Also, take a gander at The Professor Is In by Karen Kelsky. It's a book on how to take control of your academic career, aimed at students in the humanities, but the advice applies almost exactly to pure math students and is still useful for applied people.

Maybe also take a look at Marketing for Scientists by Marc Kuchner.

view this post on Zulip Fabrizio Genovese (Jul 07 2020 at 01:31):

These are all super useful, thanks!

view this post on Zulip David Tanzer (Jul 07 2020 at 04:56):

Science (including math) is essentially a whole. It will inevitably be appropriated by the powers that be. I only have a quantum of life energy for trying to make things better. My orientation is not towards trying to anticipate all possible outcomes and resist them, but rather to do what I can to help science along its way, whether through research or other means of support, especially in the hope that it will develop in directions that will help us to figure out better ways to do things than the status quo. Because it is a whole, a contribution to sector A today may lead to a breakthrough in sector Z tomorrow.

view this post on Zulip David Tanzer (Jul 07 2020 at 05:14):

It's all rather nuanced. US military research, for example, has promoted a sane assessment of the reality of climate change, and so is a progressive force, in this respect, compared to the civilian government.

view this post on Zulip David Tanzer (Jul 07 2020 at 05:16):

To me the motto of science is stand by the truth. Included in that truth is the need, today more than ever, for fundamental change.

view this post on Zulip Jules Hedges (Jul 07 2020 at 09:50):

Fabrizio Genovese said:

At this point, it would be very nice to submit a coordinated grant application to start a common research project, but I've never written such a grant and I really don't know where to start. I think some form of mentorship for this sort of menial, bureaucratic work would be invaluable, especially for young people like me. In this sense, people in the ACT community could literally teach other people how to get funding

I could probably help with this; I think I have more experience with grant writing than most people my age. I was planning to submit an EU-wide travel network grant for ACT this year (a thing called a COST grant, administered separately from the ERC), but then the virus got in the way. My plan was to spend 3 weeks in Tallinn a few months ago picking Pawel's brain for that.

view this post on Zulip Christina Vasilakopoulou (Jul 07 2020 at 18:33):

Dear all, at the following link

https://docs.google.com/document/d/1Y5gDffOQGeNc_EL2OXCdWDhWDEzL3U4xvngZI5BO4c4/edit

you will find a draft statement that discusses some core community values. It was edited and reviewed by the local and steering committees, and briefly circulated in the program committee. This statement is meant to be the beginning of Thursday's "community values" discussion. Comments and feedback are welcome!

view this post on Zulip James Fairbanks (Jul 07 2020 at 18:38):

Hi Christina, that link shows an agenda for a meeting - did you mean to paste a different link?

view this post on Zulip Brendan Fong (Jul 07 2020 at 18:58):

Oops, I think the link is here: https://docs.google.com/document/d/1Y5gDffOQGeNc_EL2OXCdWDhWDEzL3U4xvngZI5BO4c4/edit

view this post on Zulip Oliver Shetler (Jul 07 2020 at 19:16):

"We recognise that certain circumstances can make it more difficult for some people to join our community than others, and we seek to provide ways to overcome these difficulties. Thus we are committed to outreach efforts, support open access publishing, reduce costs of participation, work hard to change our culture to be ever more inclusive, and take a broad perspective on ‘merit’." (Inclusion and Equality section)

Point of clarification:

Does this mean the ACT community is opposed to exploitative practices by universities, such as the overuse of adjuncts who have no prospect of tenure? Or the use of opaque department ranking systems for PhD candidates and tenure candidates? (i.e. is the ACT community pro-union, pro-transparency, etc.)

Or does this mean, rather, that the ACT community is in favor of subsidizing the costs caused by these practices? (i.e. diversity-oriented grants, mentorship, etc. that offset the anti-diversity structural biases in universities)

The way my question is phrased might sound like a loaded question, but it's not. Taking the first route can be very costly to the community. The second route has defensible tradeoffs. I'm just trying to operationalize the statement since it's not clear which approach is being favored.

view this post on Zulip James Fairbanks (Jul 07 2020 at 20:51):

I read it as the latter, which is part of the standard approach to Diversity, Equity, and Inclusion. Although

view this post on Zulip James Fairbanks (Jul 07 2020 at 20:51):

I imagine that most members of the community would subscribe to those views in the former interpretation too.

view this post on Zulip Oliver Shetler (Jul 07 2020 at 21:35):

I suppose it's something for people to think on, and eventually decide on.

It's not a dichotomy. There are a lot of valid combinations of values and concessions.

I'm just advocating that people really face the choice, explicitly reason about these unspoken things, and own their decisions.