
Maths and science-driven AI rules aim to protect society

19 March 2024

Using real world examples, Canterbury researchers are using insights from maths and computer science to propose new regulations for Artificial Intelligence (AI).


Photo caption: University of Canterbury (UC) Senior Lecturer Dr Olivia J Erdélyi

With uncertainty around which laws to apply when it comes to AI, Te Whare Wānanga o Waitaha | University of Canterbury (UC) Senior Lecturer Dr Olivia J Erdélyi says mathematical modelling can identify gaps in the legislation and help shape policy that will protect society.  

Focussing on legal uncertainty, Dr Erdélyi, from UC’s Faculty of Law, says “unless you have a particular provision that deals with an AI-related problem in a relatively clear manner, it is very hard to predict how the courts would decide in any given situation.”

Using the example of the Cambridge Analytica scandal, in which a political consulting firm used personal data from Facebook users to influence the 2016 United States presidential election, Dr Erdélyi’s research used mathematical modelling to illustrate how anonymised data (data that cannot identify a person) can be used to effectively target and sway swing voters.

“We showed that if you gather separate datasets from individuals and merge them, there are connections that AI processing can exploit, turning anonymised data into something identifiable and revealing a gap in privacy and data protection regulation.

“These regulations are only triggered where personally identifiable information is collected and processed, yet the Cambridge Analytica incident shows that a privacy breach with potentially devastating effects is also possible if the data is initially collected in anonymised form.”

The strength of the team’s approach is that it combines maths and computer science with other disciplines to propose policy.
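The kind of linkage at the heart of the Cambridge Analytica example can be sketched in a few lines of Python. The datasets, column names and values below are invented for illustration and are not drawn from the research: two separately collected, anonymised tables are merged on shared attributes, and the combination re-identifies individuals along with their political leaning.

# A minimal linkage-attack sketch. The data and column names are invented;
# neither table on its own names a person alongside the sensitive attribute,
# yet joining them on shared quasi-identifiers re-links the two.

import pandas as pd

# Dataset A: an "anonymous" survey of political leanings (no names).
survey = pd.DataFrame({
    "postcode":   ["8011", "8014", "8011", "8022"],
    "birth_year": [1985, 1990, 1972, 1985],
    "gender":     ["F", "M", "F", "M"],
    "leaning":    ["swing", "left", "right", "swing"],
})

# Dataset B: a public register that does contain names but nothing sensitive.
register = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones", "C. Brown", "D. Lee"],
    "postcode":   ["8011", "8014", "8011", "8022"],
    "birth_year": [1985, 1990, 1972, 1985],
    "gender":     ["F", "M", "F", "M"],
})

# Merging on the quasi-identifiers re-attaches names to the survey responses.
quasi_identifiers = ["postcode", "birth_year", "gender"]
linked = survey.merge(register, on=quasi_identifiers, how="inner")

# Every combination of postcode, birth year and gender that is unique in both
# tables is now a re-identified person with a known political leaning.
print(linked[["name", "leaning"]])

Each row of the result pairs a named individual with a political leaning that was collected in anonymised form, which is the kind of gap in privacy and data protection regulation the modelling exposes.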

Dr Erdélyi says that to create usable policy around AI, you can only assess the options if you understand how the technology works.

“You don't have to understand all the little facets of the maths behind AI, but you must have a decent understanding of how these systems work. It is impossible to solve these problems without interdisciplinary and multi-stakeholder collaboration, and that entails listening to and understanding each other.”

UC Mathematics and Statistics Associate Professor Gábor Erdélyi, also Dr Erdélyi’s husband, says that despite the benefits of joining forces, interdisciplinary collaboration can be difficult.

“Avenues for collaboration are not straightforward and there can be significant communication barriers hampering efforts.

“It’s a two-way street. People from scientific fields need to put greater emphasis on communicating deeply technical matters in a form accessible to lay audiences, and policymakers need to listen to what they say,” he says. 

While we wait for AI legislation, Dr Erdélyi says existing laws can provide a starting point.

“To an extent, existing laws can be adapted to address AI-related challenges, but we also need to design new ones to avoid imperfect solutions, which can lead to many issues.”

Aotearoa New Zealand has yet to develop an AI strategy, but Dr Erdélyi says that while there is scope to draw on parts of international policies, it is important to have New Zealand policies in place.

“It’s a balancing act: while it is wise to wait for some international consensus, we do need some binding laws that provide protection for people and that we can litigate, and that requires national action.”

Sustainable Development Goal (SDG) 16: Peace, justice and strong institutions.

Media contact
 
  • Phone: (03) 369 3631 or 027 503 0168