You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Indeed it's often said that "the international language of science is broken English".
oh, I beg to differ strongly @Kevin Carlson and @David Michael Roberts . I know exactly what I want to say in English and I do find that AI rewritings can be very helpful to achieve what I want. Of course you need to know what you want to say, and you should not parrot whatever your LLM suggests. But indeed they can be helpful to convey all the nuances that a non-native speaker has difficulties with.
I think that traditional translation tools shouldn't be given up on. They are not perfect, but they aren't going to give the text the usual AI polish. Translating individual sentences (or small groups of sentences) rather than a whole long post (or, in the case of MathOverflow, a question or answer) feels perfectly innocent to me. I've done it to focus in on specific passages in a non-English-language mathematics paper, when a blanket "please translate these few pages" gives a result that looks a bit off. (I'm not wholly ignorant of the source language, and I can also parse the mathematics to get an idea of what is nonsense.)
I don't think traditional translation tools should be given up on, either. But AI tools can help make the translations better, if you know what you're talking about.
I agree that for people who know what they are doing, and can assess the output, AI tools are indeed of use.
Depends what you mean by "traditional translation tools", but my understanding is that the transformer architecture (i.e. the underlying architecture of LLMs) had been used in Google Translate and similar tools for several years before LLMs were around.
Jules Hedges said:
Depends what you mean by "traditional translation tools", but my understanding is that the transformer architecture (i.e. the underlying architecture of LLMs) had been used in Google Translate and similar tools for several years before LLMs were around.
Indeed, one gloss on the whole LLM story is that one day someone got the bright idea to use a transformer to translate from English to English and...
@Jules Hedges well, I've used an LLM to translate a 1930s French and a 1940s Russian paper to English, and in the process it attempted to also update the mathematical idiom to what is current, which defeated the purpose of the exercise, since it interpolated some ideas not necessarily in the original. A service like Google Translate or DeepL never did that. I worry slightly that when people, e.g., write MathOverflow questions in their own language (with bad mathematics) and translate them to English, the mathematics gets silently updated as well.
And there was an example of this the other day (taking the user, who was explaining what the AI did, on trust). The actual content was changed (to something incorrect), and it wasn't noticed by the OP until later, but other people saw it and pointed it out.
Yeah, it's likely that translation services are fine-tuning their models to avoid this kind of behaviour. So it makes sense to use those rather than vanilla LLMs if one is interested in literal translation.