- cross-posted to:
- technology@lemmy.ml
It should be readily apparent that no AI used to kill can ever be ethical.
But if it kills everyone, it can be fair.
This is a great illustration of the difference between fair and ethical.
Equality through annihilation.
But how will we automate our trolley problems?
Are you suggesting it’s never ethical to kill? Nothing is black and white, especially when it comes to ethics.
This is the best summary I could come up with:
Since 2017, Ito financed many projects through the $27 million Ethics and Governance of AI Fund, an initiative anchored by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.
Inspired by whistleblower Signe Swenson and others who have spoken out, I have decided to report what I came to learn regarding Ito’s role in shaping the field of AI ethics, since this is a matter of public concern.
At the Media Lab, I learned that the discourse of “ethical AI,” championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies.
Although the Silicon Valley lobbying effort has consolidated academic interest in "ethical AI" and "fair algorithms" since 2016, a handful of papers on these topics had appeared in earlier years, even if framed differently.
I wrote, “If tens of millions of dollars from nonprofit foundations and individual donors are not enough to allow us to take a bold position and join the right side, I don’t know what would be.” (Omidyar funds The Intercept.)
For example, the board notes that although “the term ‘fairness’ is often cited in the AI community,” the recommendations avoid this term because of “the DoD mantra that fights should not be fair, as DoD aims to create the conditions to maintain an unfair advantage over any potential adversaries.” Thus, “some applications will be permissibly and justifiably biased,” specifically “to target certain adversarial combatants more successfully.” The Pentagon’s conception of AI ethics forecloses many important possibilities for moral deliberation, such as the prohibition of drones for targeted killing.
The original article contains 3,335 words, the summary contains 270 words. Saved 92%. I’m a bot and I’m open source!