Digital Discourse Database

Number of Posts: 3
Posts 1 - 3

Quand les émotions mènent le bal

(When emotions call the tune)

Hyperlink

Newspaper | Le Temps
Date | 9.5.2017
Language | French
Country | Switzerland
Topic Tags | emojis, Facebook, privacy, social media, threat
Summary | Nowadays, people don't take the time to "think" and jump from one emotion to the next very quickly, especially on social media. Facebook, for instance, introduced its "reaction" buttons, and today a laughing emoji seems to be worth a long speech. Facebook can also gather personal information about its users through these reaction buttons. Our communication is now based on emotions, which can be dangerous.
Image Description | Cartoon representing four people chatting; one of the speech bubbles includes a series of different emojis.
Image Tags | emojis

«Les «fake news» renforcent la valeur des infos sérieuses»

("Fake news reinforce the value of serious news")

Hyperlink

Newspaper | Le Temps
Date | 27.1.2017
Language | French
Country | Switzerland
Topic Tags | Facebook, fake news, social media, threat
Summary | Traditional news media should not have to help social media identify "fake news". Fake news can be a threat, but it can also give journalists an advantage: when fake news is abundant, audiences appreciate serious reporting all the more. Facebook has been blamed for the spread of fake news, but Sheryl Sandberg argues that Facebook should not have to evaluate and select its content; external experts should do that. Facebook does, however, already filter some content, such as hate speech.
Image Description | Photograph of Mathias Döpfner and Mark Zuckerberg.
Image Tags | male(s)

L’intelligence artificielle, aussi raciste et sexiste que nous

(Artificial intelligence, as racist and sexist as us)

Hyperlink

Newspaper | Le Temps
Date | 4.5.2017
Language | French
Country | Switzerland
Topic Tags | artificial intelligence, research/study, threat
Summary | A new study shows that artificial intelligence can also have biases and prejudices. The results are not particularly surprising, but such biases can be dangerous if AI is used, for instance, to hire people. The study shows that some AI programs reproduce the racist and sexist stereotypes embedded in language. Researchers developed an "association test" for the GloVe word embeddings and demonstrated, for example, that the machine associated flower names with positive connotations and insect names with negative ones, just as humans do. This is to be expected, since machine-learning systems are effectively a mirror of human behavior.
Image Description | N/A