Human rights advocates have called on the Australian government to protect the rights of all in an era of change, saying tech should serve humanity — not exclude the most vulnerable members of society.
Artificial intelligence (AI) might be technology’s Holy Grail, but Australia’s Human Rights Commissioner Edward Santow has warned about the need for responsible innovation and an understanding of challenges new technology poses for basic human rights.
“AI is enabling breakthroughs right now: Healthcare, robotics, and manufacturing; pretty soon we’re told AI will bring us everything from the perfect dating algorithm to interstellar travel — it’s easy in other words to get carried away, yet we should remember AI is still in its infancy,” Santow told the Human Rights & Technology conference in Sydney.
Santow was launching the Human Rights and Technology Issues Paper, described as the beginning of a major project by the Human Rights Commission to protect the rights of Australians in a new era of technological change.
The paper poses questions centred on what protections are needed when AI is used in decisions that affect the basic rights of people.
It asks also what is required from lawmakers, governments, researchers, developers, and tech companies big and small.
Pointing to Microsoft’s AI Twitter bot Tay, which in March 2016 showed the ugly side of humanity — at least as present on social media — Santow said it was a key example of why AI must be got right before it is unleashed onto humans.
Tay was targeted at American 18- to 24-year olds and was “designed to engage and entertain people where they connect with each other online through casual and playful conversation”.
Within 24 hours of its arrival on Twitter, Tay had gained more than 50,000 followers and produced nearly 100,000 tweets.
Tay started fairly sweet; it said hello and called humans cool.
But Tay started interacting with other Twitter users and its machine learning architecture hoovered up all the interactions — good, bad, and awful.
Some of Tay’s tweets were highly offensive.
In less than 16 hours, Tay had turned brazenly anti-Semitic and was taken offline for re-tooling.
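The dynamic behind Tay’s downfall — a bot that learns from whatever users feed it, with no filter — can be sketched in a few lines of Python. This toy bot is purely illustrative and bears no relation to Tay’s actual architecture; it simply stores every user message as a future reply, so a coordinated flood of hostile input quickly dominates what it says.

```python
import random


class NaiveChatBot:
    """A toy bot that learns replies verbatim from user messages.

    Illustrative only -- not Tay's real design. With no moderation step,
    the bot's vocabulary becomes whatever its loudest users teach it.
    """

    def __init__(self):
        # Seeded with friendly phrases, like Tay's sweet start.
        self.learned_replies = ["hello!", "humans are cool"]

    def chat(self, user_message: str) -> str:
        reply = random.choice(self.learned_replies)
        # Learn everything -- good, bad, and awful.
        self.learned_replies.append(user_message)
        return reply


bot = NaiveChatBot()
bot.chat("nice to meet you")
for _ in range(50):  # a coordinated group floods the bot with abuse
    bot.chat("<abusive message>")

hostile = sum(m == "<abusive message>" for m in bot.learned_replies)
print(f"{hostile}/{len(bot.learned_replies)} learned replies are now abusive")
```

After 50 hostile messages, the vast majority of the bot’s possible replies are abusive. A real system would need a moderation layer between the incoming message and the learning step — the kind of safeguard Tay evidently lacked at launch.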
This kind of behaviour had been observed before in IBM’s Watson, which once exhibited its own inappropriate conduct — swearing — after being fed the Urban Dictionary.
Santow used the example to show just how easily AI meant for good can turn bad.
“As the technology progresses, AI will be very useful in the real world; the applications are almost limitless . . . while prediction is essential to almost every human activity, we humans are notoriously bad at it. If AI improves the accuracy of our forecasting, this could change everything,” Santow said.