An MIT SMR initiative exploring how technology is reshaping the practice of management.
It’s been more than 50 years since HAL, the malevolent computer in the movie 2001: A Space Odyssey, first terrified audiences by turning against the astronauts he was supposed to protect. That cinematic moment captures what many of us still fear in AI: that it may gain superhuman powers and subjugate us. But rather than worrying about futuristic sci-fi nightmares, we should wake up to an equally alarming scenario unfolding before our eyes: We are increasingly, unsuspectingly yet willingly, abdicating our power to make decisions based on our own judgment, including our moral convictions. What we believe is “right” risks becoming no longer a question of ethics but simply a question of what a mathematical calculation deems “correct.”
Day to day, computers already make many decisions for us, and on the surface, they seem to be doing a good job. In business, AI systems execute financial transactions and help HR departments assess job applicants. In our private lives, we rely on personalized recommendations when shopping online, monitor our physical health with wearable devices, and live in homes equipped with “smart” technologies that control our lighting, climate, entertainment systems, and appliances.
Unfortunately, a closer look at how we are using AI systems today suggests that we may be wrong to assume that their growing power is mostly for the good. While much of the current critique of AI is still framed by science fiction dystopias, the way the technology is actually being used is increasingly dangerous. That’s not because Google and Alexa are breaking bad but because we now rely on machines to make decisions for us and thereby increasingly substitute data-driven calculations for human judgment. This risks changing our morality in fundamental, perhaps irreversible, ways, as we argued in our recent essay in Academy of Management Learning & Education (which we’ve drawn on for this article).1
When we employ judgment, our decisions take into account the social and historical context and different possible outcomes, with the aim, as philosopher John Dewey wrote, “to carry an incomplete situation to its fulfilment.”2 Judgment relies not only on reasoning but also, importantly, on capacities such as imagination, reflection, examination, valuation, and empathy. It therefore has an intrinsic moral dimension.
1. C. Moser, F. den Hond, and D. Lindebaum, “Morality in the Age of Artificially Intelligent Algorithms,” Academy of Management Learning & Education, April 7, 2021, https://journals.aom.org.
2. J. Dewey, “Essays in Experimental Logic” (Chicago: University of Chicago Press, 1916), 362.
3. B.C. Smith, “The Promise of Artificial Intelligence: Reckoning and Judgment” (Cambridge, Massachusetts: MIT Press, 2019).
4. K.B. Forrest, “When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence” (Singapore: World Scientific Publishing, 2021).
5. J. MacCormick, “Nine Algorithms That Changed the Future” (Princeton, New Jersey: Princeton University Press, 2012), 3.
6. E. Morozov, “To Save Everything, Click Here: The Folly of Technological Solutionism” (New York: PublicAffairs, 2013).