Call For Framework To Evaluate New AI Tech

Two New Zealand researchers are leading the charge for an international AI framework to evaluate new digital technologies.

Sir Peter Gluckman, director of think tank Koi Tū: The Centre for Informed Futures at Waipapa Taumata Rau, University of Auckland, and Hema Sridhar, Koi Tū strategic adviser for technological futures, are the lead authors of a discussion paper intended to inform the multiple global and national discussions taking place on AI.

The paper, “A framework for evaluating rapidly developing digital and related technologies: AI, large language models and beyond”, was released by the International Science Council ahead of the AI Safety Summit being held in the UK next week.

The authors say it is critical that rapidly developing technologies are subject to broad evaluation in order to maximise the benefits and minimise the very real risks.

The analytical framework is not limited to AI and can be applied to any rapidly emerging technology, such as gene editing or quantum computing. The issues are grouped into categories, including wellbeing (covering individuals, society and social life, and civic life), trade and economy, environment, geo-strategy and geo-politics, and technology (system characteristics, design and use).


Sir Peter says the desire to regulate and govern technology is understandable, but we are at a critical point where comprehensive discussion about the future of technology is needed, and that discussion cannot be captured only by governments and industry.

“The conversation needs to go beyond the simplistic – this tech will create a nirvana or this will destroy the world. The reality is in the history of humankind, all technologies get used. They always get used for good purposes and bad purposes. But having this sort of framework allows us to have the discussions about how to take any new technology and make it most likely that the good and beneficial purposes will be supported and the negative will be prevented.”

Sir Peter says the definition of what counts as a negative purpose has changed.

“Negative used to simply be that it would produce a bomb or a weapon and kill people. Negative now means what it will do to society, what it will do to mental health, what it will mean for society. So the raft of downsides has changed.”

Ms Sridhar, who leads Koi Tū’s work on the impact of technology on society, says the framework acts like a checklist and will be useful for all policymakers, decision-makers and the private sector.

“It's useful for companies too because they should be thinking now about what they need to address and how to get social licence to use their technologies.

“We're not saying technology is good or bad. We're saying technologies are going to be used. It's about how you make it most likely that societies will benefit and not be harmed by the technology.”

“It gives a layer of objectiveness to an area that has traditionally been quite subjective. And in many cases, we’re seeing the capability of the technology is evolving. So the framework gives an objective way in which you can say, here's what we assessed as of today, and in a year or two you can review it and see if these risks have manifested, or if they haven't. And then make sure that the measures are appropriate for what it actually is,” she says.

© Scoop Media
