Emotion detection technology, its use and debate around user consent


Mozilla recently published an article on how some companies like Snapchat might be using emotion detection technology through AR to infer emotions and moods and (potentially) use them for various purposes.

Technology is evolving much faster than regulators and independent governance bodies can keep up with; it is always pushing the boundaries of what can be done. Unfortunately, any tech organization that relies on advertising tends to be presumed guilty by default. I am not saying such companies are never at fault, but it is not fair to assume guilt by default.

Once the products and features these companies build gain momentum and public attention, experts start putting their lens on them: what is ethical, what could the potential misuses be, was consumer consent sought? Experts suggest putting in writing exactly what approval is being sought and how that data could be used in the future, because people like to know the extent to which their data can be used.

Having been in the digital marketing space long enough, I know that every click, every interaction with any piece of technology, is a piece of data about me. I know all this data is in some way being used by different companies to target me with ads, and I am fine with that. I like ads because they sometimes expose me to new companies and new products that I would never have thought about buying because I never knew they existed. I like advertising because it sometimes helps me avoid making irrational decisions.

I have Amazon Alexa units in my home, and I know they are listening in on every conversation I have. Maybe those conversations are being recorded and saved somewhere, and who knows, Amazon might be using that data to show me ads as I browse the web. Am I fine with this? Yes to Alexa listening so that she can respond to my questions, but no to the data being recorded and saved. Then I read an article about how Amazon has humans at an outsourced company listening to some of these recorded conversations to verify that the Alexa algorithm is working well and to help it improve. Am I fine with this? Yes, if all personally identifiable information (PII) has been removed. If the PII is still present, then absolutely not!

I have my biases because I know the value of ads and why they are important, both to me and to organizations all over the world. However, there are cases where organizations do things that can make people uncomfortable, like recording voices and having humans transcribe them to check whether Alexa is doing her job right. So I also like knowing the opinions of an organization like Mozilla, which leans heavily towards igniting conversations about privacy and user consent.

I love Mozilla because it has a genuine bias (for good reasons) towards bringing out all things related to privacy and user consent. A single look at the emails I receive from them is enough to convince anyone of their take on technology and their love for privacy. Look at the subject lines highlighted in yellow and you will see what I mean.

Emails from Mozilla

The latest one from them brought attention to technologies that use AR within their features, and whether they can sense human emotion and capture that data against a person's identity.

The company targeted in their article was Snapchat, which relies on advertising, so it is guilty by default. The purpose of Snapchat's tool is to record videos, which means you are always sharing your facial expressions. Over a period of time, the company could really understand your personality inside and out, infer your emotions, and maybe even predict which kinds of ads and products you might like more than others. Mozilla sensationalized it a little by noting how Snapchat did not get back to them when asked for details on whether emotions are inferred and used.

From an advertiser's perspective, when such stories start driving up a lot of customer sentiment and strong opinions from customers, one of two things happens:

  1. Big brands will call in their reps from these platforms and ask for explanations. This usually leads to the vendor issuing a broader statement to calm everyone's nerves.
  2. Big brands can decide to control spend on the platform, citing brand safety concerns, i.e., how would consumers feel if we showed them our ads on a platform that is getting a lot of negative press around customer safety?

From the perspective of someone who loves the field of digital marketing and understands the value of ads and why they are important, I like these conversations when they lead to relevant dialogue. However, I do not like the shotgun approach that privacy-focused organizations can sometimes take.

By the way, if you are interested in experiencing what Mozilla was talking about, with emotions being inferred, go to the site below and experience it first hand.

https://stealingurfeelin.gs