让人工智能具有“情商”

服务机器人 2021-05-31 09:44 www.robotxin.com
在电脑执行数据处理等“后台”功能的时代,缺乏情商问题不太大。但随着电脑承担数字助理和驾驶等“前台”职责,问题就大了。

816个词,by John Thornhill

Rana el Kaliouby has spent her career tackling an increasingly important challenge: computers don’t understand humans. First as an academic at Cambridge University and Massachusetts Institute of Technology, and now as co-founder and chief executive of a Boston-based AI start-up called Affectiva, Ms el Kaliouby has been working in the fast-evolving field of Human Robot Interaction (HRI) for more than 20 years.

拉纳·埃尔·卡利乌比(Rana el Kaliouby)在职业生涯中一直在应对一项日益重要的挑战:计算机不理解人类。她在快速发展的人机互动(HRI)领域工作已有20多年,先是作为剑桥大学(University of Cambridge)和麻省理工学院(MIT)的学者,现在是波士顿人工智能初创企业Affectiva的联合创始人和首席执行官。

“Technology today has a lot of cognitive intelligence, or IQ, but no emotional intelligence, or EQ,” she says in a telephone interview. “We are facing an empathy crisis. We need to redesign technology in a more human-centric way.”

“当今的科技拥有很高的认知智商,但没有情商,”她在一次电话采访中表示,“我们面临一场同理心危机。我们需要在更大程度上以人为中心来重新设计技术。”

That was not much of an issue when computers only performed “back office” functions, such as data processing. But it has become a bigger concern as computers are deployed in more “front office” roles, such as digital assistants and robot drivers. Increasingly, computers are interacting directly with random humans in many different environments.

在电脑只执行数据处理等“后台”功能的时代,问题并不太大。但随着电脑承担更多“前台”职责(例如数字助理和驾驶机器人),这引发了更重大的担忧。电脑日益在很多不同的环境与随机的人类直接互动。

This demand has led to the rapid emergence of Emotional AI, which aims to build trust in how computers work by improving how computers interact with humans. However, some researchers have already raised concerns that Emotional AI might have the opposite effect and further erode trust in technology, if it is misused to manipulate consumers.

这种需求促使情感人工智能(Emotional AI)快速兴起。这种人工智能旨在通过改善电脑与人类的互动方式,建立对电脑工作方式的信任。然而,一些研究人员已提出担忧:如果企业为了操纵消费者而滥用情感人工智能,那就可能会适得其反,进一步破坏人们对科技的信任。

In essence, Emotional AI attempts to classify and respond to human emotions by reading facial expressions, scanning eye movements, analysing voice levels and scouring sentiments expressed in emails. It is already being used across many industries, ranging from gaming to advertising to call centres to insurance.

从本质上说,情感人工智能试图通过读取面部表情、扫描眼球运动、分析音量和探测电子邮件流露出的情绪,对人类情感进行分类和做出回应。它已应用于多个行业,从游戏到广告,从呼叫中心到保险。
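To make the classification step above concrete, here is a deliberately minimal sketch of the kind of email-sentiment scoring the paragraph describes. Real Emotional AI systems use trained multimodal models over faces, voice and text; the keyword lexicon and function below are invented purely for illustration.

```python
import re

# Hypothetical keyword lexicon; a production system would use a trained model.
EMOTION_LEXICON = {
    "happy": {"great", "thanks", "love", "excellent"},
    "angry": {"unacceptable", "refund", "complaint", "terrible"},
    "sad": {"disappointed", "sorry", "unfortunately", "regret"},
}

def classify_emotion(text: str) -> str:
    """Return the emotion whose keywords appear most often, or 'neutral'."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {emotion: len(words & keywords)
              for emotion, keywords in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"
```

A call centre could, for instance, route a message classified as "angry" to a senior agent; the point of the sketch is only the shape of the pipeline (signal in, emotion label out), not its accuracy.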

Gartner, the technology consultancy, forecasts that 10 per cent of all personal devices will include some form of emotion recognition technology by 2022.

Photo credit: Getty Images

Amazon, which operates the Alexa digital assistant in millions of people’s homes, has filed patents for emotion-detecting technology that would recognise whether a user is happy, angry, sad, fearful or stressed. That could, say, help Alexa select what mood music to play or how to personalise a shopping offer.

Affectiva has developed an in-vehicle emotion recognition system, using cameras and microphones, to sense whether a driver is drowsy, distracted or angry, and can respond by tugging the seatbelt or lowering the temperature.
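The mapping from a sensed driver state to an in-cabin response can be sketched as a simple decision rule. Affectiva's actual system is a proprietary trained model; the `DriverState` fields, thresholds and action names below are hypothetical, chosen only to illustrate the sensor-to-intervention logic the paragraph describes.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    drowsiness: float   # 0.0 (alert) .. 1.0 (asleep), e.g. from eye-closure rate
    distraction: float  # 0.0 .. 1.0, e.g. from gaze-off-road time
    anger: float        # 0.0 .. 1.0, e.g. from facial expression and voice level

def choose_interventions(state: DriverState) -> list[str]:
    """Map an estimated driver state to in-cabin responses (illustrative only)."""
    actions = []
    if state.drowsiness > 0.7:
        actions.append("tug_seatbelt")
        actions.append("lower_cabin_temperature")
    if state.distraction > 0.6:
        actions.append("audio_alert")
    if state.anger > 0.8:
        actions.append("play_calming_music")
    return actions
```

For example, `choose_interventions(DriverState(0.9, 0.2, 0.1))` would trigger the seatbelt tug and temperature drop mentioned in the article, while an alert driver gets no intervention at all.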

Photo credit: Getty Images

And Fujitsu, the Japanese IT conglomerate, is incorporating “line of sight” sensors in shop floor mannequins and sending push notifications to nearby sales staff suggesting how they can best personalise their service to customers.

日本IT集团富士通(Fujitsu)正将“视线”传感器置入商店人体模型,由其向近旁的销售人员发送推送通知,就如何为客户提供最佳个性化服务提供建议。

A recent report from Accenture on such uses of Emotional AI suggested that the technology could help companies deepen their engagement with consumers. But it warned that the use of emotion data was inherently risky because it involved an extreme level of intimacy, felt intangible to many consumers, could be ambiguous and might lead to mistakes that were hard to rectify.

埃森哲(Accenture)最近一份有关这类情感人工智能用途的报告提出,这项技术有望帮助公司深化与消费者的互动。但该报告警告称,使用情感数据存在内在风险,因为它牵扯到极端水平的亲密关系,对许多消费者来说是无形的,可能是模棱两可的,而且可能导致难以纠正的错误。

Photo credit: Getty Images

The AI Now Institute, a research centre based at New York University, has also highlighted the imperfections of much Emotional AI (or affect-recognition technology, as it calls it), warning that it should not be used exclusively for decisions involving a high degree of human judgment, such as hiring, insurance pricing, school performance or pain assessment. “There remains little or no evidence that these new affect-recognition products have any scientific validity,” its report concluded.

In her recently published book, Girl Decoded, Ms el Kaliouby makes a powerful case that Emotional AI can be an important tool for humanising technology. Her own academic research focused on how facial recognition technology could help autistic children interpret feelings.

Photo credit: Getty Images

But she insists that the technology should only ever be used with the full knowledge and consent of the user, who should always retain the right to opt out. “That is why it is so essential for the public to be aware of what this technology is, how and where data is being collected, and to have a say in how it is to be used,” she writes.

但她坚称,这项技术只有在用户完全知情且同意的情况下才能使用,用户应该永远拥有选择退出的权利。她写道:“正因如此,公众有必要了解这项技术是什么、如何收集数据以及在哪里收集数据,并在如何使用数据方面拥有发言权。”

The main dangers of Emotional AI are perhaps twofold: either it works badly, leading to harmful outcomes, or it works too well, opening the way for abuse. All those who deploy the technology, and those who regulate it, will have to ensure that it works just right for the user.

情感人工智能的主要危险或许是双重的:要么效果不好,导致有害的结果;要么效果太好,为滥用打开大门。使用这项技术的所有人,以及监管这项技术的所有人,都必须确保它恰到好处地为用户效力。

本文6月10日发布于FT中文网,英文原题为 How AI is getting an emotionally intelligent reboot

