Publications:Pitfalls of Affective Computing : How can the automatic visual communication of emotions lead to harm, and what can be done to mitigate such risks?



Property "Publisher" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user. Property "Author" has a restricted application area and cannot be used as annotation property by a user.


Title Pitfalls of Affective Computing : How can the automatic visual communication of emotions lead to harm, and what can be done to mitigate such risks?
Author Martin Cooney, Sepideh Pashami, Anita Sant'Anna, Yuantao Fan, Sławomir Nowaczyk
Year 2018
PublicationType Conference Paper
Journal
HostPublication WWW '18 Companion Proceedings of the The Web Conference 2018
Conference The Web Conference 2018 (WWW '18), Lyon, France, April 23-27, 2018
DOI http://dx.doi.org/10.1145/3184558.3191611
Diva url http://hh.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:1235482
Abstract

What would happen in a world where people could "see" others' hidden emotions directly through some visualizing technology? Would lies become uncommon, and would we understand each other better? Or, to the contrary, would such forced honesty make it impossible for a society to exist? The science fiction television show Black Mirror has exposed a number of darker scenarios in which such futuristic technologies, by blurring the lines of what is private and what is not, could also catalyze suffering. Thus, the current paper first turns an eye towards identifying some potential pitfalls in emotion visualization which could lead to psychological or physical harm, miscommunication, and disempowerment. Then, some countermeasures are proposed and discussed--including some level of control over what is visualized and provision of suitably rich emotional information comprising intentions--toward facilitating a future in which emotion visualization could contribute toward people's well-being. The scenarios presented here are not limited to web technologies, since one typically thinks about emotion recognition primarily in the context of direct contact. However, as interfaces develop beyond today's keyboard and monitor, more information becomes available also at a distance--for example, speech-to-text software could evolve to annotate any dictated text with a speaker's emotional state.
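
The abstract's closing example (speech-to-text software annotating dictated text with a speaker's emotional state), together with the proposed countermeasure of letting the speaker control what is visualized, can be illustrated with a minimal sketch. The paper itself contains no code; every name below (EmotionAnnotation, ConsentPolicy, annotate_transcript, dummy_classifier) is a hypothetical placeholder, not an API from the authors or any real library.

 # Illustrative sketch only: all names are hypothetical placeholders.
 from dataclasses import dataclass, field
 from typing import Callable, Iterable, List, Set, Tuple


 @dataclass
 class EmotionAnnotation:
     """One dictated utterance plus an automatically inferred emotion label."""
     text: str
     emotion: str       # e.g. "neutral", "frustrated"
     confidence: float  # classifier confidence in [0, 1]


 @dataclass
 class ConsentPolicy:
     """Sketch of the countermeasure: the speaker decides which inferred
     emotions may be shown to others, and below which confidence none are."""
     shareable_emotions: Set[str] = field(default_factory=lambda: {"neutral"})
     min_confidence: float = 0.8

     def allows(self, annotation: EmotionAnnotation) -> bool:
         return (annotation.emotion in self.shareable_emotions
                 and annotation.confidence >= self.min_confidence)


 def annotate_transcript(
     utterances: Iterable[str],
     classify_emotion: Callable[[str], Tuple[str, float]],
     policy: ConsentPolicy,
 ) -> List[EmotionAnnotation]:
     """Label dictated text with emotions, exposing only what the speaker
     has consented to share; everything else is reported as "withheld"."""
     annotated = []
     for text in utterances:
         emotion, confidence = classify_emotion(text)  # hypothetical model call
         candidate = EmotionAnnotation(text, emotion, confidence)
         if policy.allows(candidate):
             annotated.append(candidate)
         else:
             annotated.append(EmotionAnnotation(text, "withheld", 0.0))
     return annotated


 if __name__ == "__main__":
     def dummy_classifier(text: str) -> Tuple[str, float]:
         # Stand-in for a real speech/text emotion model.
         return ("frustrated", 0.9) if text.endswith("!") else ("neutral", 0.95)

     policy = ConsentPolicy(shareable_emotions={"neutral"})
     transcript = ["Could you send the report?", "I already sent it twice!"]
     for annotation in annotate_transcript(transcript, dummy_classifier, policy):
         print(annotation)

In this toy run the second utterance is classified as "frustrated", but the label is reported as "withheld" because the speaker's policy only permits "neutral" to be shared, which is one way of reading the paper's proposal that people retain some level of control over what is visualized.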