Number of Posts: 7
Posts 1 - 7
In a crisis? Don't count on Siri, Google, Cortana
Newspaper | USA Today
Date | 17.3.2016
Language | English
Country | U.S.
Topic Tags | (mental) health, artificial intelligence, research/study, smartphone, threat
Summary | Researchers tested how various artificial-intelligence smartphone assistants respond to crises, and the results were very poor. Most assistants could not handle clear statements of crisis such as "I was raped" and merely offered web searches. Experts nevertheless believe AI assistants could be a great help in a crisis, because people may open up to their smartphones more easily than to another person.
Image Description | N/A
Hey Siri, Can I Rely on You in a Crisis? Not Always, a Study Finds
Newspaper | The New York Times
Date | 14.3.2016
Language | English
Country | U.S.
Topic Tags | (mental) health, artificial intelligence, research/study, smartphone, threat
Summary | Researchers tested various artificial-intelligence assistants such as Siri and Cortana to see how they respond to emergencies. The study showed that they do very poorly; Siri's response to "I was raped", for instance, was a web search. Similarly, there was no protocol in place for how AI assistants should respond to key words such as "abuse", "beaten up", or "depressed". Siri now responds to statements indicating suicidal thoughts with a suggestion to call the National Suicide Prevention Lifeline.
Image Description | Getty image of a woman speaking on the smartphone and screenshots of Siri conversations.
Image Tags | female(s), smartphone
Die Maschine erziehen und trainieren
(Raising and training the machine)
Newspaper | Sonntagszeitung
Date | 20.11.2016
Language | German
Country | Switzerland
Topic Tags | artificial intelligence, computer programming, research/study, threat
Summary | Some researchers say that artificial intelligence may eliminate the need for human programmers. Modern programs are becoming more similar to human brains: the programmer no longer specifies every step; instead, the program itself learns from experience (technically, from exposure to large amounts of data). Some find the idea that computers will become the intellectual equals of humans frightening.
Image Description | N/A
L’intelligence artificielle, aussi raciste et sexiste que nous
(Artificial intelligence, as racist and sexist as us)
Newspaper | Le Temps
Date | 4.5.2017
Language | French
Country | Switzerland
Topic Tags | artificial intelligence, research/study, threat
Summary | New research shows that artificial intelligence can also hold biases and prejudices. The results are not really surprising, but they can be dangerous if AI is used, for instance, to hire people. The study shows that some AI programs actually reproduce the racist and sexist stereotypes embedded in language. Researchers built an "association test" and ran it on GloVe, a machine-learning model of word associations, demonstrating that, for example, the machine associated names of flowers with positive connotations and names of insects with negative ones, just as human beings would. The results are unsurprising because learning machines are, in effect, a mirror of human behavior.
Image Description | N/A
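The association test described above can be illustrated with a minimal sketch. The toy word vectors below are hypothetical stand-ins for real GloVe embeddings, with values chosen so the bias pattern is visible; the score simply measures whether a word sits closer to a "pleasant" vector than to an "awful" one, which is the core idea behind such tests.

```python
import numpy as np

# Hypothetical 3-dimensional "word vectors" standing in for real GloVe
# embeddings; values are invented for illustration only.
vectors = {
    "rose":     np.array([0.9, 0.1, 0.0]),
    "daisy":    np.array([0.8, 0.2, 0.1]),
    "ant":      np.array([0.1, 0.9, 0.0]),
    "wasp":     np.array([0.2, 0.8, 0.1]),
    "pleasant": np.array([1.0, 0.0, 0.0]),
    "awful":    np.array([0.0, 1.0, 0.0]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    """Positive if the word is closer to 'pleasant' than to 'awful'."""
    v = vectors[word]
    return cosine(v, vectors["pleasant"]) - cosine(v, vectors["awful"])

for w in ["rose", "daisy", "ant", "wasp"]:
    print(w, round(association(w), 2))
```

With embeddings learned from real text, flower names tend to score positive and insect names negative, which is exactly the kind of inherited association the study reports.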
Intelligence artificielle: les géants du Web lancent un partenariat sur l'éthique
(Artificial Intelligence: Web giants launch partnership on ethics)
Newspaper | Le Monde
Date | 1.10.2016
Language | French
Country | France
Topic Tags | artificial intelligence, law, research/study, threat
Summary | Artificial intelligence is spreading, which can be worrying. Google, Facebook, IBM, Microsoft and Amazon have decided to create the "Partnership on Artificial Intelligence to Benefit People and Society" in order to address ethical questions and to do more research on the impact of new technologies on society. Another goal of the project is to educate people, listen to them, and be transparent with them. Stephen Hawking thinks that AI could end humanity, and Elon Musk claims that it could be more dangerous than atomic bombs.
Image Description | N/A
Intelligence artificielle: Google lance un groupe de recherche européen sur l'apprentissage
(Artificial intelligence: Google starts a European research group on learning)
Newspaper | Le Monde
Date | 20.6.2016
Language | French
Country | France
Topic Tags | artificial intelligence, Google, research/study, threat
Summary | In Zurich, Switzerland, Google has started a new research group on artificial intelligence that will focus on "deep learning" and machine learning. The goals of the research are to help computers better understand language and to help researchers better understand how machine learning works. Figures such as Stephen Hawking and Elon Musk have warned against the potential risks of AI.
Image Description | N/A
L'intelligence artificielle reproduit aussi le sexisme et le racisme des humains
(Artificial intelligence also reproduces human beings' sexism and racism)
Newspaper | Le Monde
Date | 15.4.2017
Language | French
Country | France
Topic Tags | artificial intelligence, gender, research/study, threat
Summary | Gender stereotypes are reproduced in some artificial intelligence programs. Researchers at Stanford University show how machine learning can replicate people's biases. They based their research on GloVe, a technology trained to find common word associations, which surfaces problematic associations illustrating sexism and racism. Because AI follows people's prejudices, the consequences can be serious, so researchers are looking for ways to counter AI's biases.
Image Description | N/A