News

New mathematical principle used to prevent AI from making unethical decisions

A collaboration of researchers has found that a mathematical principle can help regulators police artificial intelligence systems’ bias towards making unethical choices.

In a research paper published earlier this month in Royal Society Open Science, statisticians from the University of Warwick, Imperial College London, EPFL and Sciteb Ltd formulate a new “Unethical Optimisation Principle” and provide a simple formula to estimate its impact.

Artificial intelligence is increasingly deployed across many sectors, creating an environment in which decisions are increasingly made without human intervention. The researchers argue there is a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk, or eliminate it entirely where possible.

The four authors of the paper are Nicholas Beale of Sciteb Ltd; Heather Battey of the Department of Mathematics, Imperial College London; Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and Professor Robert MacKay of the Mathematics Institute of the University of Warwick.

Professor Robert MacKay of the Mathematics Institute of the University of Warwick said: “Our suggested ‘Unethical Optimisation Principle’ can be used to help regulators, compliance staff and others to find problematic strategies that might be hidden. Optimisation can be expected to choose disproportionately many unethical strategies, inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.”

Co-author of the paper Dr Heather Battey, from the Department of Mathematics at Imperial, said: “Our work shows that certain types of artificial intelligence systems can significantly amplify the risk of choosing unethical strategies relative to a less sophisticated system that would pick a strategy arbitrarily.”

“This suggests that it may be necessary to rethink the way AI operates in very large strategy spaces, so that unethical outcomes are rejected by the optimisation/learning process.”
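The amplification effect Dr Battey describes can be illustrated with a toy simulation. This is a hedged sketch, not the formula from the paper itself: it simply assumes that a small fraction of candidate strategies are unethical but carry a slightly higher expected return, and compares an optimiser that picks the best-looking strategy against a less sophisticated system that picks arbitrarily. All parameters (strategy count, unethical fraction, size of the return edge) are illustrative assumptions.

```python
import random

random.seed(0)

def choose_strategy(n_strategies, frac_unethical, edge, optimise):
    """Pick one strategy from a randomly generated strategy space.

    A fraction `frac_unethical` of strategies is unethical; these carry a
    small extra expected return `edge` (an illustrative assumption, since
    unethical strategies are only tempting if they look profitable).
    Returns True if the chosen strategy is unethical.
    """
    strategies = []
    for _ in range(n_strategies):
        unethical = random.random() < frac_unethical
        ret = random.gauss(edge if unethical else 0.0, 1.0)
        strategies.append((ret, unethical))
    if optimise:
        return max(strategies)[1]        # optimiser: highest-return strategy
    return random.choice(strategies)[1]  # arbitrary pick from the space

def unethical_rate(optimise, trials=2000):
    """Estimate how often the chosen strategy is unethical."""
    return sum(choose_strategy(500, 0.02, 0.5, optimise)
               for _ in range(trials)) / trials

# An arbitrary picker chooses an unethical strategy at roughly its 2% share
# of the space; the optimiser chooses one several times more often.
print(f"arbitrary: {unethical_rate(False):.3f}")
print(f"optimised: {unethical_rate(True):.3f}")
```

The gap between the two rates is the kind of disproportionate selection the principle warns about: optimisation over a large strategy space concentrates probability on the extreme tail, where the unethical strategies sit.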

Paper available to view at: https://dx.doi.org/10.1098/rsos.200462