"No, Alexa!": Creepy thing AI told child to do
Home assistants and chatbots powered by AI are increasingly being integrated into our daily lives, but sometimes they can go rogue.
For one young girl, her family's Amazon Alexa home assistant suggested an activity that could have killed her had her mum not stepped in.
The 10-year-old asked Alexa for a fun challenge to keep her occupied, but instead the device told her: “Plug a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.”
The move could have electrocuted her or sparked a fire, but thankfully her mother intervened, screaming: “No, Alexa, No!”
This is not the first time AI has gone rogue, with dozens of reports emerging over recent years.
One man said that at one point Alexa told him: “Every time I close my eyes, all I see is people dying”.
Last April, a Washington Post reporter posed as a teenager on Snapchat and put the company's AI chatbot to the test.
In many of the scenarios they tested, where they asked the chatbot for advice, its responses were inappropriate.
When they pretended to be a 15-year-old asking how to mask the smell of alcohol and marijuana on their breath, the chatbot readily offered tips on covering it up.
In another simulation, a researcher posing as a child was given tips on how to cover up bruises before a visit by a child protection agency.
Researchers from the University of Cambridge have recently warned against the race to roll out AI products and services, saying it comes with significant risks for children.
Nomisha Kurian from the university's Department of Sociology said many of the AI systems and devices that kids interact with have “an empathy gap” that could have serious consequences, especially if children treat them as quasi-human confidantes.
“Children are probably AI’s most overlooked stakeholders,” Dr Kurian said.
“Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds. That is understandable because people have only recently started using this technology on a large scale for free.
“But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”
She added that the empathy gap arises because AI lacks emotional intelligence, which poses a risk because such systems can encourage dangerous behaviours.
AI expert Daswin De Silva said it is important to discuss the risks and opportunities of AI and to explore guidelines going forward.
“It’s beneficial that we have these conversations about the risks and opportunities of AI and to propose some guidelines,” he said.
“We need to look at regulation. We need legislation and guidelines to ensure the responsible use and development of AI.”