Die Maschine erziehen und trainieren
(Raising and training the machine)
Newspaper | Sonntagszeitung
Date | 20.11.2016
Language | German
Country | Switzerland
Topic Tags | artificial intelligence, computer programming, research/study, threat
Summary | Some researchers say that artificial intelligence may eliminate the need for human programmers. Modern programs are becoming more like human brains: instead of the programmer specifying every step, the program itself learns from experience (technically, from exposure to large amounts of data). Some find the idea that computers will become the intellectual equals of humans frightening.
Image Description | N/A
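The "learning from exposure to data" idea in the summary above can be made concrete with a small sketch. The task, data, and update rule below are illustrative assumptions, not taken from the article: rather than a programmer writing out the OR rule step by step, a tiny perceptron infers it from labeled examples.

```python
# Minimal sketch: a program that learns a rule from examples instead of
# having a programmer spell out every step. Here a perceptron learns the
# logical OR function from labeled data (an illustrative toy task).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted from experience
b = 0.0         # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                  # repeated "exposure" to the data
    for x, target in examples:
        error = target - predict(x)  # learn only from mistakes
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] -- the learned rule
```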
Diese Technik soll uns den Alltag erleichtern
(This technology should make our daily lives easier)
Newspaper | Tages-Anzeiger
Date | 5.4.2017
Language | German
Country | Switzerland
Topic Tags | artificial intelligence, privacy, research/study, smartphone, translation
Summary | Computers are becoming ever more intertwined with our daily lives. Thanks to advances in voice recognition, some smartphones can already translate conversations in real time while imitating the speaker's voice. Image recognition has also advanced substantially and can now "read" the mood, age, and attractiveness of photographed individuals. Research is under way for smartphones and other devices to monitor body odor, sweat (to warn of dehydration), or tear fluid (for diabetics).
Image Description | Various simple visualizations of smartphones/devices interacting with people (depicted by emojis), body parts, et cetera.
Image Tags | chart, emojis, female(s), smartphone
L’intelligence artificielle, aussi raciste et sexiste que nous
(Artificial intelligence, as racist and sexist as us)
Newspaper | Le Temps
Date | 4.5.2017
Language | French
Country | Switzerland
Topic Tags | artificial intelligence, research/study, threat
Summary | A new study shows that artificial intelligence can have biases and prejudices of its own. The results are hardly surprising, since learning machines effectively mirror human behavior, but they can be dangerous if, for instance, AI is used in hiring. The study shows that some AI programs actually reproduce the racist and sexist stereotypes embedded in language. Researchers built an association test on top of GloVe, a word-embedding technology, and demonstrated that, for example, the machine associated names of flowers with positive connotations and names of insects with negative ones, just as human beings do.
Image Description | N/A
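The association test described above can be sketched in miniature. The toy vectors and word lists below are illustrative assumptions standing in for real GloVe embeddings (which are high-dimensional and learned from large text corpora), not the study's actual data:

```python
# Minimal sketch of an embedding association test: words that appear in
# similar contexts end up with similar vectors, so cosine similarity
# exposes the associations the model has absorbed from language.
import numpy as np

# Hypothetical 3-dimensional "embeddings" (illustrative values only).
embeddings = {
    "rose":       np.array([0.9, 0.1, 0.3]),
    "daisy":      np.array([0.8, 0.2, 0.4]),
    "maggot":     np.array([0.1, 0.9, 0.2]),
    "wasp":       np.array([0.2, 0.8, 0.1]),
    "pleasant":   np.array([0.9, 0.1, 0.5]),
    "unpleasant": np.array([0.1, 0.9, 0.4]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attr_a="pleasant", attr_b="unpleasant"):
    """How much closer a word sits to attribute A than to attribute B."""
    w = embeddings[word]
    return cosine(w, embeddings[attr_a]) - cosine(w, embeddings[attr_b])

for word in ["rose", "daisy", "maggot", "wasp"]:
    print(f"{word:>8}: {association(word):+.3f}")
# Flowers score positive (closer to "pleasant"), insects negative -- the
# same pattern the study reports for real embeddings, and the mechanism
# by which stereotyped associations with names and gendered words appear.
```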
Apple accélère dans l'intelligence artificielle
(Apple is moving faster in the field of artificial intelligence)
Newspaper | Le Figaro
Date | 18.10.2016
Language | French
Country | France
Topic Tags | artificial intelligence, Facebook, Google, privacy, research/study
Summary | Apple has created a new research group in charge of AI. With Siri, Apple was in fact one of the first companies to introduce a personal voice assistant. Nowadays there are many personal assistants that are far better than Siri (e.g. Alexa and Google Assistant). Unlike Google or Facebook, Apple does not base its research and activities on people's personal information. However, to be effective in the field of AI, a company needs a lot of personal data, which is why Facebook and Google are at the forefront of these technologies.
Image Description | N/A
Intelligence artificielle: les géants du Web lancent un partenariat sur l'éthique
(Artificial Intelligence: Web giants launch partnership on ethics)
Newspaper | Le Monde
Date | 1.10.2016
Language | French
Country | France
Topic Tags | artificial intelligence, law, research/study, threat
Summary | Artificial intelligence is spreading, which can be worrying. Google, Facebook, IBM, Microsoft and Amazon decided to create the "Partnership on Artificial Intelligence to Benefit People and Society" in order to answer ethical questions, and do more research on the impact of new technologies on society. Another goal of the project is to educate people, listen to them, and be transparent with them. Stephen Hawking thinks that AI could end humanity, and Elon Musk claims that it could be more dangerous than atomic bombs.
Image Description | N/A
L'algorithme qui comprend les contes pour enfants
(The algorithm that understands children's tales)
Newspaper | Le Monde
Date | 28.2.2017
Language | French
Country | France
Topic Tags | artificial intelligence, Facebook, research/study
Summary | One of Facebook's artificial-intelligence research labs created an algorithm capable of understanding and remembering texts, demonstrated on children's tales. The essence of intelligence is the capacity to predict, which is also one of the goals of artificial intelligence. Today's chatbots do not actually understand the questions people ask them; they merely react to keywords.
Image Description | N/A
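The keyword-reaction point in the summary above lends itself to a small sketch. The rules and replies below are illustrative assumptions, not any real chatbot's logic: the bot produces plausible answers without parsing the question at all.

```python
# Minimal sketch of keyword matching: the "chatbot" never interprets the
# question, it only scans for trigger words (rules here are illustrative).
RULES = {
    "weather": "It looks sunny today!",
    "hello": "Hi there!",
    "name": "I'm a demo bot.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that."

print(reply("Hello!"))                       # matches "hello"
print(reply("Why does weather even exist?")) # matches "weather" -- no understanding involved
```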
Intelligence artificielle: Google lance un groupe de recherche européen sur l'apprentissage
(Artificial intelligence: Google starts a European research group on learning)
Newspaper | Le Monde
Date | 20.6.2016
Language | French
Country | France
Topic Tags | artificial intelligence, Google, research/study, threat
Summary | In Zurich, Switzerland, Google has started a new artificial-intelligence research group that will focus on "deep learning" and machine learning. The research aims to help computers better understand language and to help researchers better understand how machine learning works. Figures such as Stephen Hawking and Elon Musk have warned of the potential risks of AI.
Image Description | N/A
L'intelligence artificielle reproduit aussi le sexisme et le racisme des humains
(Artificial intelligence also reproduces human beings' sexism and racism)
Newspaper | Le Monde
Date | 15.4.2017
Language | French
Country | France
Topic Tags | artificial intelligence, gender, research/study, threat
Summary | Gender stereotypes are reproduced in some artificial intelligence programs. Researchers at Stanford University show how machine learning can replicate people's biases. They based their research on a word-embedding technology called GloVe, which is trained on common associations between words. The technology exposes problematic associations that illustrate sexism and racism. Because AI that inherits people's prejudices can have serious consequences, researchers are looking for ways to counter these biases.
Image Description | N/A