Regulator needed to avoid risks in government AI use

27 May 2019


University of Otago
New Zealand Law Foundation

Guidelines and regulator needed to avoid risks in government AI use, study finds

New Zealand is a world leader in government algorithm use – but measures are needed to guard against their dangers.

This is the conclusion of a New Zealand Law Foundation-funded report from the University of Otago’s Artificial Intelligence and Law in New Zealand Project (AILNZP), which was released this week.

The study points to examples from other countries where algorithms have led to troubling outcomes. In the USA, an algorithm that had been used for years in the youth justice system turned out never to have been properly tested for accuracy.

In other cases, there has been concern about algorithms producing racially biased outcomes. The COMPAS algorithm, for instance, has been widely criticised for overstating the risk of black prisoners reoffending compared with their white counterparts – an outcome that can result in those prisoners being kept in prison for longer.

Report co-author Professor James Maclaurin says government agencies’ use of AI algorithms is increasingly coming under scrutiny.

“On the plus side, AI can enhance the accuracy, efficiency and fairness of decisions affecting New Zealanders, but there are also worries about things like accuracy, transparency, control and bias.”

“We might think that a computer programme can’t be prejudiced,” says co-author Associate Professor Ali Knott. “But if the information fed to it is based on previous human decisions, then its outputs could be tainted by historic human biases. There’s also a danger that other, innocent-looking factors - postcode for instance - can serve as proxies for things like race.”
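
To make the proxy problem concrete, the short Python sketch below is a purely hypothetical illustration (the data, features and decision rule are invented, not drawn from the report). It shows how a rule that never sees a protected attribute can still reproduce the bias in the historical decisions it learns from, because an innocent-looking feature such as postcode is correlated with group membership.

    # Hypothetical illustration of "proxy" bias: the decision rule never sees the
    # protected attribute, but a correlated feature (postcode) carries it in.
    # All data, features and rates below are invented for illustration only.
    import random

    random.seed(0)

    # Synthetic population: group membership is strongly correlated with postcode,
    # and the historical decisions we learn from were biased against group B.
    people = []
    for _ in range(10_000):
        group = random.choice(["A", "B"])
        postcode = (("9001" if random.random() < 0.8 else "9002") if group == "A"
                    else ("9002" if random.random() < 0.8 else "9001"))
        approved = random.random() < (0.7 if group == "A" else 0.3)  # biased history
        people.append((group, postcode, approved))

    # "Train" a rule that only looks at postcode: approve at the historical
    # approval rate observed for that postcode.
    rate_by_postcode = {}
    for pc in ("9001", "9002"):
        subset = [p for p in people if p[1] == pc]
        rate_by_postcode[pc] = sum(p[2] for p in subset) / len(subset)

    # The rule never uses 'group', yet its expected outcomes differ sharply by
    # group, because postcode acts as a proxy for group membership.
    for g in ("A", "B"):
        subset = [p for p in people if p[0] == g]
        predicted = sum(rate_by_postcode[p[1]] for p in subset) / len(subset)
        print(f"Group {g}: expected approval rate under postcode-only rule = {predicted:.2f}")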

Checking algorithmic decisions for these sorts of problems means that the decision-making needs to be transparent. “You can’t check or correct a decision if you can’t see how it was made,” says Knott. “But in some overseas cases, that’s been impossible, because the companies who design the algorithms won’t reveal how they work.”

So far, New Zealand has done better with this, Maclaurin says.

“Unlike some countries that use commercial AI products, we’ve tended to build our government AI tools in-house, which means we know how they work. That’s a practice we strongly recommend our government continues.”

Guarding against unintended algorithmic bias, though, involves more than being able to see how the code works.

“Even with the best of intentions, problems can sneak back in if we’re not careful,” warns co-author Associate Professor Colin Gavaghan.

For that reason, the report recommends that New Zealand establish a new, independent regulator to oversee the use of algorithms in government.

“We already have rights against discrimination and to be given reasons for decisions, but we can’t just leave it up to the people on the sharp end of these decisions to monitor what’s going on. They’ll often have little economic or political power. And they may not know whether an algorithm’s decisions are affecting different sections of the population differently,” he says.

The report also warns against “regulatory placebos” – measures that make us feel like we’re being protected without actually making us any safer.

“For instance, there’s been a lot of talk about keeping a ‘human in the loop’ – making sure that no decisions are made just by the algorithm, without a person signing them off.

“But there’s good evidence that humans tend to become over-trusting and uncritical of automated systems – especially when those systems get it right most of the time. There’s a real danger that adding a human ‘in the loop’ will just offer false reassurance.”

“These are powerful tools, and they’re making recommendations that can affect some of the most important parts of our lives. If we’re serious about checking their accuracy, avoiding discriminatory outcomes, promoting transparency and the like, we really need effective supervision,” Gavaghan says.

The report recommends that predictive algorithms used by government, whether developed commercially or in-house, must:

• feature in a public register,
• be publicly inspectable, and
• be supplemented with explanation systems that allow lay people to understand how they reach their decisions (an illustrative sketch follows below).

Their accuracy should also be regularly assessed, with these assessments made publicly available.
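
The report does not prescribe any particular explanation technique. Purely as an illustration of what a lay-readable, per-decision explanation might look like, the short Python sketch below scores a hypothetical case against an invented weighted rule and reports each factor’s contribution in plain language. The factors, weights and threshold are assumptions made for this example, not values from any government system.

    # Illustrative only: an invented weighted-score rule with a plain-language,
    # per-decision explanation. Factors, weights and threshold are hypothetical.
    WEIGHTS = {"prior_convictions": 0.9, "months_since_last_offence": -0.05, "age": -0.02}
    THRESHOLD = 1.0

    def decide_and_explain(case):
        # Contribution of each factor to the overall score.
        contributions = {name: WEIGHTS[name] * case[name] for name in WEIGHTS}
        score = sum(contributions.values())
        flagged = score >= THRESHOLD
        # Describe each contribution in plain language, largest effect first.
        reasons = [
            f"{name.replace('_', ' ')} of {case[name]} "
            f"{'raised' if value > 0 else 'lowered'} the score by {abs(value):.2f}"
            for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        ]
        reasons.append(f"total score {score:.2f} vs threshold {THRESHOLD} -> "
                       f"{'flagged as higher risk' if flagged else 'not flagged'}")
        return flagged, reasons

    flagged, reasons = decide_and_explain(
        {"prior_convictions": 2, "months_since_last_offence": 18, "age": 27})
    print("\n".join(reasons))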

Law Foundation Director Lynda Hagen says the project received the “lion’s share” of funding distributed under the Foundation’s Information Law and Policy Project ($432,217).

“We did this because we think artificial intelligence and its impacts are not well-understood in New Zealand, so research was urgently needed.

“We welcome the release of this Phase 1 report which provides the first significant, independent and multi-disciplinary analysis of the use of algorithms by New Zealand government agencies. The information from this work will better inform the development of stronger policy and regulation.”

The report’s co-authors were Colin Gavaghan, Alistair Knott, James Maclaurin, John Zerilli and Joy Liddicoat, all of the University of Otago.


BACKGROUND

1. The ‘Artificial Intelligence and Law in New Zealand’ (AILNZ) Project

The report ‘Government Use of Artificial Intelligence in New Zealand’ is the culmination of Phase 1 of a three-year study into the use of artificial intelligence in New Zealand.

The project examines the opportunities and challenges presented by artificial intelligence (AI). It takes a multi-disciplinary approach, with experts from law, philosophy and computer science leading the work. This approach is necessary to ensure that the rules around AI are a good fit for the technology, and also capture the right sort of moral and political values.

The project has an international dimension. The researchers have engaged with experts from Europe, North and South America, Asia and Africa, and gained a good sense of how these matters are being seen and responded to in other parts of the world.

The researchers have also had fruitful discussions with various government departments and agencies, particularly the Department of Internal Affairs, and gathered valuable insights as a result.

Looking ahead to Phase 2 of the project, the researchers will examine the impacts of AI on work and employment. Much attention has been paid to claims that AI will replace human workers, rendering some occupations obsolete. Potentially as important is the manner in which AI will change the world of work. Questions that will be investigated include: how well prepared are our workplaces, our laws and our society for the changes brought by the Fourth Industrial Revolution?

For more information about the AILNZ Project, click: https://www.otago.ac.nz/law/research/emergingtechnologies/otago037164.html

The AILNZ Project is funded by the New Zealand Law Foundation under its Information Law and Policy Project (see below for further details).

2. Bios of Researchers

Principal Investigators

Associate Professor Colin Gavaghan is the first director of the New Zealand Law Foundation-sponsored Centre for Law and Policy in Emerging Technologies. The Centre examines the legal, ethical and policy issues around new technologies. In addition to emerging technologies, Colin lectures and writes on medical and criminal law. He is the author of Defending the Genetic Supermarket (Routledge-Cavendish, 2007). Colin is Deputy Chair of the Advisory Committee on Assisted Reproductive Technology and a member of the Advisory Board of the International Neuroethics Network. He was an expert witness in the High Court case of Lecretia Seales v Attorney-General, and has advised members of parliament on draft legislation.

Associate Professor Alistair Knott works in the Department of Computer Science at the University of Otago. He has been an AI researcher for over 20 years, working mainly on models of language and brain function. He has over 100 publications in these areas, including the book ‘Sensorimotor Cognition and Natural Language Syntax’ (MIT Press, 2012). He has given many public talks on his research, including talks at TEDxAuckland (2012) and TEDxAthens (2013). Ali has been interested in the ethical and social implications of AI throughout his career. He is co-founder of Otago University’s Centre for AI and Public Policy, and of Otago’s AI and Society discussion group.

Professor James Maclaurin is a member of the Philosophy programme and Associate Dean for Research in the Humanities at the University of Otago. His research focuses on conceptual and ethical issues posed by scientific innovation, as well as on the process of distilling academic research into public policy in disciplines such as public health, ecology, and computer and information science. He is co-author of What is Biodiversity? (University of Chicago Press, 2008). He is a member of the ACOLA/RSNZ expert working group on the Effective and Ethical Development of AI in Australia and New Zealand, and of the New Zealand Bioethics Panel for Predator Free New Zealand 2050. He is co-director of the University of Otago’s Centre for Artificial Intelligence and Public Policy.

Researchers

Joy Liddicoat is a researcher with the Centre for Law and Policy in Emerging Technologies. Joy specialises in human rights and technology, researching Internet-related rights including privacy, freedom of expression and women’s human rights. Joy is also Vice President of InternetNZ (https://internetnz.nz/), which is responsible for domain name policy for the country-code top-level domain ‘.nz’.

Dr John Zerilli is a philosopher, cognitive scientist and lawyer in the Department of Philosophy at the University of Otago. He has published numerous articles canvassing philosophy, cognitive science, law and political economy. From June 2019, he will be a Research Associate at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

3. The Information Law and Policy Project (ILAPP)

The Information Law and Policy Project (ILAPP), “Adapting New Zealand for the Information Age”, was established in August 2016 by the New Zealand Law Foundation. The Foundation set up a $2 million fund to enable independent research that will better prepare New Zealand for the challenges of the Information Age. Through ILAPP funding, research projects examine and provide recommendations for law and policy around IT, data, information, artificial intelligence and cyber issues, to help build New Zealand’s digital capability and preparedness.

ILAPP is intended to support New Zealand’s growth, understanding and resilience, and to prepare the country for a digitally competent future. ILAPP supports and feeds into work the public and private sectors are undertaking, but remains independent.

Since its launch in 2016, ILAPP has brought together teams of multi-disciplinary experts to examine challenges and opportunities in areas such as global information, cyber-security, data exploitation, and technology-driven social change. Seven Research Themes were identified to help guide and focus research carried out under ILAPP:

· Theme 1: The global nature of information, how we manage it and trade in it
· Theme 2: Cyber security and crime
· Theme 3: Social change following technological change
· Theme 4: Ownership and exploitation of data
· Theme 5: Philosophical notions
· Theme 6: The ethics of inference
· Theme 7: The exclusionary effect of technology

For more details about the New Zealand Law Foundation’s Information Law and Policy Project, click https://www.lawfoundation.org.nz/?page_id=7029

For a list of all projects funded by New Zealand Law Foundation’s Information Law and Policy Project, click https://www.lawfoundation.org.nz/?page_id=6882

4. The New Zealand Law Foundation

The New Zealand Law Foundation – Te Manatū a Ture o Aotearoa – is an independent charitable trust that provides grants for legal research, public education on legal matters and legal training.

Since 1992, the Law Foundation has provided over $30 million in funding for legal research and law scholarships. Its funded projects have produced independent legal thinking, with many influencing major and emerging public policy issues.

For more information, visit: www.lawfoundation.org.nz


ends

© Scoop Media
