
We have heard the warnings from Yuval Noah Harari that if we don’t figure out how to regulate artificial intelligence (AI), “human brains will be hacked soon,” a statement that arguably speaks to humanity’s worst fears about AI. This may be especially so when hearing from Schwab’s advisor Harari that to “hack a human being is to get to know that person better than they know themselves,” which can enable those who own the technology to increasingly manipulate us.
We may believe this extreme threat to our privacy will only materialise some time in the future, but the hacking Harari describes is more proverbial than literal and has already been occurring in environments like Facebook and YouTube, where we are led to view content that the algorithms have deemed to be of interest to us. It would now appear that many have gradually become desensitised to this “hacking” and manipulation, allowing it to increase without too much protest.
“But how would you feel if your workplace were tracking how you feel?” asks Nazanin Andalibi, Assistant Professor of Information at the University of Michigan, who in the article below discusses emotion AI, which is already being used in the workplace.
Emotion-Tracking AI on the Job: Workers Fear Being Watched – and Misunderstood
Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, promising to detect and predict how someone is feeling. It is used in contexts both mundane, like entertainment, and high stakes, like the workplace, hiring and health care.
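To make concrete what inferring feelings from such signals can look like in practice, here is a deliberately simplified, hypothetical sketch in Python. None of the feature names, thresholds or labels come from any real product or from the study discussed below; they are invented purely to illustrate the principle that opaque numeric signals go in and an emotional label comes out.

```python
# Illustrative sketch only: a toy "emotion AI" pipeline showing how biological
# and behavioural signals might be turned into an inferred emotional label.
# All feature names, thresholds and labels are hypothetical.

from dataclasses import dataclass


@dataclass
class WorkerSignals:
    """Signals such a system might collect (all values hypothetical)."""
    vocal_pitch_variance: float   # from call audio, arbitrary units
    typing_speed_wpm: float       # from keyboard telemetry
    heart_rate_bpm: float         # from a wearable device
    negative_word_ratio: float    # fraction of "negative" words in messages, 0..1


def infer_emotion(signals: WorkerSignals) -> str:
    """Map raw signals to a coarse emotional label with simple hand-tuned rules.

    Real systems typically use trained statistical models rather than rules,
    but the principle is the same: numeric inputs in, a label out, with no
    guarantee the label matches what the person actually feels.
    """
    stress_score = 0.0
    if signals.heart_rate_bpm > 95:
        stress_score += 1.0
    if signals.vocal_pitch_variance > 0.7:
        stress_score += 1.0
    if signals.negative_word_ratio > 0.2:
        stress_score += 1.0
    if signals.typing_speed_wpm < 25:
        stress_score += 0.5

    if stress_score >= 2.0:
        return "stressed"
    if stress_score >= 1.0:
        return "uneasy"
    return "calm"


if __name__ == "__main__":
    sample = WorkerSignals(
        vocal_pitch_variance=0.8,
        typing_speed_wpm=22,
        heart_rate_bpm=101,
        negative_word_ratio=0.1,
    )
    # A worker who is simply concentrating hard could be labelled "stressed"
    # here, which is the kind of misreading discussed later in the article.
    print(infer_emotion(sample))
```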
A wide range of industries already use emotion AI, including call centers, finance, banking, nursing and caregiving. Over 50% of large employers in the U.S. use emotion AI aiming to infer employees’ internal states, a practice that grew during the COVID-19 pandemic. For example, call centers monitor what their operators say and their tone of voice.
Scholars have raised concerns about emotion AI’s scientific validity and its reliance on contested theories about emotion. They have also highlighted emotion AI’s potential for invading privacy and exhibiting racial, gender and disability bias.
Some employers use the technology as though it were flawless, while some scholars seek to reduce its bias and improve its validity, discredit it altogether or suggest banning emotion AI, at least until more is known about its implications.
I study the social implications of technology. I believe that it is crucial to examine emotion AI’s implications for people subjected to it, such as workers – especially those marginalized by their race, gender or disability status.
Can AI actually read your emotions? Not exactly.
Workers’ concerns
To understand where emotion AI use in the workplace is going, my colleague Karen Boyd and I set out to examine inventors’ conceptions of emotion AI in the workplace. We analyzed patent applications that proposed emotion AI technologies for the workplace. Purported benefits claimed by patent applicants included assessing and supporting employee well-being, ensuring workplace safety, increasing productivity and aiding in decision-making, such as making promotions, firing employees and assigning tasks.
We wondered what workers think about these technologies. Would they also perceive these benefits? For example, would workers find it beneficial for employers to provide well-being support to them?
My collaborators Shanley Corvite, Kat Roemmich, Tillie Ilana Rosenberg and I conducted a survey partly representative of the U.S. population and partly oversampled for people of color, trans and nonbinary people and people living with mental illness. These groups may be more likely to experience harm from emotion AI. Our study had 289 participants from the representative sample and 106 participants from the oversample. We found that 32% of respondents reported experiencing or expecting no benefit to them from emotion AI use, whether current or anticipated, in their workplace.
While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and workplace safety, mirroring benefits claimed in patent applications, all also expressed concerns. They were concerned about harm to their well-being and privacy, harm to their work performance and employment status, and bias and mental health stigma against them.
For example, 51% of participants expressed concerns about privacy, 36% noted the potential for incorrect inferences employers would accept at face value, and 33% expressed concern that emotion AI-generated inferences could be used to make unjust employment decisions.
Participants’ voices
One participant who had multiple health conditions said: “The awareness that I am being analyzed would ironically have a negative effect on my mental health.” This means that despite emotion AI’s claimed goal of inferring and improving workers’ well-being in the workplace, its use can lead to the opposite effect: well-being diminished due to a loss of privacy. Indeed, other work by my colleagues Roemmich, Florian Schaub and me suggests that emotion AI-induced privacy loss can span a range of privacy harms, including psychological, autonomy, economic, relationship, physical and discrimination harms.
On concerns that emotional surveillance could jeopardize their job, a participant with a diagnosed mental health condition said: “They could decide that I am no longer a good fit at work and fire me. Decide I’m not capable enough and not give a raise, or think I’m not working enough.”
Participants in the study also mentioned the potential for exacerbated power imbalances and said they were afraid of the dynamic they would have with employers if emotion AI were integrated into their workplace, pointing to how emotion AI use could potentially intensify already existing tensions in the employer-worker relationship. For instance, a respondent said: “The amount of control that employers already have over employees suggests there would be few checks on how this information would be used. Any ‘consent’ [by] employees is largely illusory in this context.”
Emotion AI is just one way companies monitor employees.
Lastly, participants noted potential harms, such as emotion AI’s technical inaccuracies potentially creating false impressions about workers, and emotion AI creating and perpetuating bias and stigma against workers. In describing these concerns, participants highlighted their fear of employers relying on inaccurate and biased emotion AI systems, particularly against people of color, women and trans individuals.
For example, one participant said: “Who is deciding what expressions ‘look violent,’ and how can one determine people as a threat just from the look on their face? A system can read faces, sure, but not minds. I just cannot see how this could actually be anything but destructive to minorities in the workplace.”
Participants noted that they would either refuse to work at a place that uses emotion AI – an option not available to many – or engage in behaviors to make emotion AI read them favorably to protect their privacy. One participant said: “I would exert a massive amount of energy masking even when alone in my office, which would make me very distracted and unproductive,” pointing to how emotion AI use would impose additional emotional labor on workers.
Worth the harm?
These findings indicate that emotion AI exacerbates existing challenges experienced by workers in the workplace, despite proponents claiming emotion AI helps solve these problems.
If emotion AI does work as claimed and measures what it claims to measure, and even if issues with bias are addressed in the future, there are still harms experienced by workers, such as the additional emotional labor and loss of privacy.
If these technologies do not measure what they claim or they are biased, then people are at the mercy of algorithms deemed to be valid and reliable when they are not. Workers would still need to expend the effort to try to reduce the chances of being misread by the algorithm, or to engage in emotional displays that would read favorably to the algorithm.
Either way, these systems function as panopticon-like technologies, creating privacy harms and feelings of being watched.
Source: Nazanin Andalibi for The Conversation
When do we get to hack Klaus Schwab’s brain to know the evil he is?
He has already been hacked, aka he is a mind controlled slave, being used to present the plans and aspirations of the elite.
None of this has been developed by him.
The evil masterminds behind the New World Order are to be found in the USA, namely the CIA and the US Army.
They have for decades worked towards these goals, they hold all the patents.
Yes, many years ago I had a military-grade RFID injected into my arm against my will, and local PDs have been using this for harassment purposes. They will try to cast anything I see or witness as a wrong in order to manipulate or abuse me in some way. And this is now old tech compared to what they have now.
Beware.
Nuke Davos and get them all.
Very concerned about AI being used more and more. It seems it is yet another step down a very slippery slope.
So glad I’m retired and do not have to put up with such nonsense.
REMOVE THIS CAPACITY FROM THE WORKPLACE…
IT’S SLAVERY AND NOT ACCEPTABLE.
CONSTANT SURVEILLANCE IS PRISON.
Given the demographic changes in many Western and Asian countries, with ageing workforces and declining birthrates, power will shift to the workers. There is a shrinking pool of people to hire, and employers find themselves competing for labour. Changing attitudes to training, where employers have cut apprenticeships and trainee positions, mean the available pool of skilled workers is shrinking too.
All this bodes well for people, especially those with the skills and the self-confidence to tell employers where they can stick their AI and associated gadgets. The labour shortages following the Black Death brought increased labour mobility and the end of the feudal system. Our current lords and masters shall find themselves in a similar situation to the 15th-century toffs facing the great serf shortage.
Frankly, emotion tracking strikes me as just more AI hype. How much does it cost? Does it do what it claims in the glossy brochure? How do you reap any benefits? What kind of staff will one retain when those with skills and ability say ‘f*ck this’ and quit?
The fundamental problem with intellectual property is that it walks around in people’s heads. When those heads walk out the door, your company’s know-how goes with them.
The whole article is nonsense. Just because some owner of some slaves somewhere tried out monitoring the slave’s blood pressure does not mean their “brain was hacked”. It’s just more trash journalism to make money. And it nicely distracts you from things that matter.
I always looked perplexed at work when working out a problem.
Read the Bible if you dare. Revelation chapter 13.