What I find interesting is the implicit prioritisation: explainability, (human) accountability, lawfulness, fairness, safety, sustainability, data privacy and non-military use.
I found this principle particularly interesting:

> Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
Interesting in what sense? Isn't it just stating something plainly obvious?
It is, but unfortunately the fact that to you - and me - it is obvious does not mean it is obvious to everybody.
Quite. One would hope, though, that it would be clear to prestigious scientific research organizations in particular, just like everything else related to source criticism and proper academic conduct.
Did you forget the entire DOGE episode where every government worker in the US had to send a weekly email to an LLM to justify their existence?
I'd hold CERN to a slightly higher standard than DOGE when it comes to what's considered plainly obvious.
Sure, but the way you maintain this standard is by codifying rules that are distinct from the "lower" practices you find elsewhere.
In other words, because the huge DOGE clusterfuck demonstrated how horrible the practices people will actually enact can be, you need to put this into the principles.
I want to see how obvious this becomes when you start to add agents left and right that make decisions automagically...
Where is “human oversight” in an automated workflow? I noticed the quote didn’t say “inputs”.
And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?
Someone's inputs are someone else's outputs, so I don't think you have spotted an interesting gap. Certainly just looking at the dials will do for monitoring functioning, but it falls well short of validating system performance.
The really interesting thing is how that principle interplays with their pillars and goals, i.e. if the goal is to "optimize workflow and resource usage", then having a human in the loop at all points might limit or fully erode this ambition. Obviously it's not that black and white: certain tasks could be fully autonomous while others require human validation, and you could still come out net positive. But this challenge is not exclusive to CERN, that's for sure.
Do they hold the CERN Roomba to the same standard? If it cleans the same section of carpet twice is someone going to have to do a review?
It's still just a platitude. Being somewhat critical is still giving some implicit trust. If you didn't give it any trust at all, you wouldn't use it at all! So my read is that they endorse trusting it, exactly the opposite of what they appear to say!
It's funny how many official policies leave me thinking that it's a corporate cover-your-ass policy, and that if they really meant it they would have found a much stronger and plainer way to say it.
"You can use AI but you are responsible for and must validate its output" is a completely reasonable and coherent policy. I'm sure they stated exactly what they intended to.
If you have a program that looks at CCTV footage and IDs animals that go by... is a human supposed to validate every single output? How about if it's thousands of hours of footage?
I think the parent comment is right. It's just a platitude for administrators to cover their backs, and it doesn't hold up to actual use cases.
That doesn't follow. Say you write a proof for something I request; I can then check that proof. That doesn't mean I don't derive any value from being given the proof. A lack of trust does not imply no use.
> So they endorse trusting it is my read, exactly the opposite of what they appear to say!
They endorse limited trust, not exactly a foreign concept to anyone who's taken a closer look at an older loaf of bread before cutting a slice to eat.
I think you're reading what you want to read out of that - but that's the problem: it's too ambiguous to be useful.
Feels like the useless kind of corporate policy, expressed in terms of the loftiest ideals instead of how to make real trade-offs with costs.
99% of corporate policies exist so someone can point to a document and say "well, it's not my fault, I made the policy and someone didn't follow it".
You don't even need to go as far as saying someone didn't follow the policy, you can just say you need to review the policies. That way, conveniently enough, nobody is really ever at fault!
It is an organisation-wide document of "general principles"; how could it possibly have anything more specific to say about the inherently context-specific trade-offs of each particular use of AI?
Organizations above a certain size absolutely cannot help themselves but publish this stuff. It is the work of senior middle managers. Ark Fleet Ship B.
I work in a corporate setting that has been working on a "strategy rebrand" for over a year now, and despite numerous meetings, endless PowerPoints, and god knows how much money paid to consultants, I still have no idea what any of this has to do with my work.
‘Sustainability: The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.’ [1]
‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]
I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.
[1] https://home.web.cern.ch/news/official-news/knowledge-sharin... [2] https://home.cern/science/engineering/powering-cern
Humans have poured resources into the pursuit of largely impractical pure knowledge for millennia. This has been said of an incredible number of human scientific endeavors before they found use in other domains.
Also, the web was invented at CERN.
That is equivalent to a continuous draw of 150 MW. Not great, not terrible.
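(For anyone who wants to check that conversion, here is a quick back-of-the-envelope calculation in Python, purely illustrative.)

    # Convert CERN's stated 1.3 TWh/year into an average continuous power draw.
    annual_energy_mwh = 1.3e6            # 1.3 TWh expressed in MWh
    hours_per_year = 365 * 24            # 8760 h
    average_power_mw = annual_energy_mwh / hours_per_year
    print(f"{average_power_mw:.0f} MW")  # ~148 MW, i.e. roughly 150 MW continuous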
Far less power than those projected gigawatt data centers that are surely the one thing keeping AI companies from breaking even.
I presume that this policy is not about building data centres but about the use of AI by CERN employees, so essentially about the marginal cost of generating an additional Python script, or something. I don't know if this calculation ever makes sense on the global scale, but if one's job is to literally spend energy to produce knowledge, it becomes even less straightforward.
How did that turn into "not great, not terrible"? That's still 300,000 homes that could otherwise be powered. It's an enormous amount of electricity!
All this impractical knowledge people accumulated over centuries gave you cars, planes, computers, air conditioning, antibiotics, iPhones and, in fact, everything humankind has gained since it left the trees. So I would rather burn these 1.3 terawatt-hours on this than on, say, running Facebook or mining Bitcoin.
It's about as detailed and helpful as saying, "Don't be an asshole"
In such a scientific environment, there are gentlemen's agreements about many things that boil down to "Don't be an asshole" or "Be considerate of others", with some hard requirements here and there for things that are very serious.
What's so special about military research or AI that the two can't be done together even though the organization is not in principle opposed to either?
CERN is in principle opposed to military research. That and stuff like lawfulness, fairness, sustainability, privacy are just general CERN principles restated for fluff.
One reason I can think of is with regard to confidentiality. A lot of AI services are controlled by companies in the US or China, and they may not want military research to leak to these countries.
Classified projects obviously have stricter rules, such as air gaps, but sometimes the limits are a bit fuzzy, like a non-classified project that supports a classified one. And I may be wrong, but academics don't seem to be the type who are good at keeping secrets or who see the security implications of their actions. Which is a good thing in my book: science is about sharing, not keeping secrets! So "no AI for military projects" could be a step in that direction.
> CERN’s convention states: “The Organization shall have no concern with work for military requirements and the results of its experimental and theoretical work shall be published or otherwise made generally available.”
CERN was founded after WW2 in Europe, and like all major European institutions founded at the time, it was meant to be a peaceful institution.
Sorry, looks like I misunderstood what "having no concern" means.
Yeah it's written as in, "we don't concern ourselves with that", i.e. it's none of their business
It's a bit of a fig leaf though; any high-energy physics has military implications.
What does the LHC physics program have to do with military applications?
Research on interactions between particles can probably be helpful for nuclear weapons R&D.
You'd be surprised how creative the military can be when there's demand
Doesn't all of physics have some military implications?
But at least they make everything public knowledge, instead of keeping it secret and only selling it to one nation.
From that picture it looks like they want to do everything with AI. This is very sad.
> Responsibility and accountability: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
This is critical to understand if the mandate to use AI comes from the top: make sure to communicate from day 1 that you are using AI as mandated and that it is not increasing productivity as mandated. Play it dumb and protect yourself from "if it's not working out then you are using it wrong" attacks.
This corporate crap makes me want to puke. It is a consequence of the forced bureaucracy from European regulations, particularly the EU AI Act, which is not well thought out and actively adds liability and risk to anyone on the continent touching AI, including old-school methods such as bank credit-scoring systems.
CERN is neither corporate, nor in the EU.
The content is corporate. The EU AI Act is extraterritorial. You don't have to be in the EU to adopt this very set of "AI Principles", but if you don't, you carry liability.
Blah, blah, people will simply use it as they see fit.
So general that it says nothing. Very corporate.