This is a new series about cameras and their relationship to face recognition and machine learning, and about how, in the future, the ways we interact with technology will be radically different.
Follow the Kairos team as we explore the impact of cameras on technology and the ways your business can benefit from it.
More than meets the eye
As the sensational sci-fi perception of Artificial Intelligence imposed on us by entertainment media gives way to the actual, practical, real-life solutions AI provides, LDV’s research arrives right on time with a seriously insightful glimpse into how rapidly and remarkably the relationship between humans and machines is evolving.
Their forecast around the progressive function of “unique lenses and sensors used for visual capture” (formerly known as cameras :-) positions visual technologies as the cornerstone of the human/machine connection.
And while we have come to enjoy, and even rely upon, the current conveniences of machine-implemented AI, Visual Tech is creating brand-new expectations by dramatically enhancing and reinventing the things machines do for us, and by creating entirely new, efficient ways for machines to inform us. The “Daily Specials” sign and rental car mentioned in The Beginning? They represent examples of both.
The car
In a report by analyst firm IHS, it’s predicted that by 2025 the installation rate of AI-based systems in new vehicles will increase to 109%, climbing from just 8% in 2015. (The rate exceeds 100% because many new cars will ship with more than one AI system on board.)
This dramatic increase squares with LDV’s report: as the use of AI systems in vehicles becomes standardized across both infotainment and autonomous-vehicle applications, cameras will serve as the eye on the road, as well as the eye on the driver.
For autonomous, or “self-driving,” cars, the technology needed to give automobiles the 360°, real-time, 3D views necessary for operation will come in the form of video cameras working alongside a primary “vision” unit: the LIDAR (Light Detection and Ranging) system. Together, these sensors enable split-second decision-making, object identification, collision prediction, and avoidance strategies.
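To make that “split-second decision-making” concrete, here’s a toy sketch of one judgment such a system has to make constantly: estimating time-to-collision from successive range readings and deciding whether to brake. Everything here (names, thresholds, the deliberately simple math) is illustrative only, not any manufacturer’s actual pipeline.

```python
# Toy collision-prediction logic: estimate time-to-collision (TTC)
# from two successive lidar range samples and decide whether to brake.
# The threshold below is a hypothetical value for illustration.

BRAKE_TTC_SECONDS = 2.0  # hypothetical safety threshold


def time_to_collision(prev_range_m: float, curr_range_m: float, dt_s: float) -> float:
    """Estimate seconds until impact from two range samples taken dt_s apart."""
    closing_speed = (prev_range_m - curr_range_m) / dt_s  # m/s; > 0 means closing
    if closing_speed <= 0:
        return float("inf")  # object is holding distance or moving away
    return curr_range_m / closing_speed


def should_brake(prev_range_m: float, curr_range_m: float, dt_s: float) -> bool:
    """Trigger braking when predicted impact is closer than the threshold."""
    return time_to_collision(prev_range_m, curr_range_m, dt_s) < BRAKE_TTC_SECONDS


# Example: an object 20 m ahead closes to 19.5 m in 50 ms, i.e. ~10 m/s
# of closing speed. TTC is ~1.95 s, below the 2 s threshold, so brake.
if __name__ == "__main__":
    print(should_brake(20.0, 19.5, 0.05))  # True
```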
Big news from @voyage: Our first self-driving taxi service is live in a truly remarkable community. 4,000 residents can summon a self-driving car and travel door-to-door on 15 miles of chaotic road. https://t.co/pfjnq03pJA pic.twitter.com/EtWjIhS6uq
— Oliver Cameron (@olivercameron) October 4, 2017
LIDAR is the epitome of sophisticated advancement in Visual Technologies, and it has eliminated some challenging variables associated with using cameras alone: the mechanical issues of positioning multiple cameras correctly and keeping them clean, the heavy graphics processing needed to make sense of images, and the lighting, shadows, and other conditions that can make it very challenging to accurately determine what a camera is seeing.
However, solutions that combine LIDAR with cameras are in development because they are less expensive than relying on LIDAR alone. This matters for the market: mainstream autonomous driving won’t happen until the industry has a cost-effective LIDAR system that is fully integrated with other sensors.
Tesla’s ‘Autopilot’ feature uses computer vision via eight surround cameras. It’s a great example of how Computer Vision is becoming part of everyday life. (Image: Tesla © 2017)
Meanwhile, inside the car, advances in in-vehicle infotainment (IVI) are creating concern over driver distraction. In fact, the AAA Foundation for Traffic Safety sponsored a study assessing the visual and cognitive demands of infotainment systems.
Their research found that of the 30 vehicles tested, 23 created “high” or “very high” levels of demand for drivers across four tasks: making a call, sending a text, tuning the radio, and programming the navigation. The other seven created “moderate” levels of distraction, while no car created what would be considered low demand.
While I can’t say I’m surprised by these findings, they are alarming nonetheless. But some confidence can be restored by Advanced Driver Assistance Systems (ADAS): shipments of these safety-enhancing applications are expected to rise from 7 million in 2015 to 122 million by 2025.
Conventional ADAS technology, like backup cameras, can detect some objects, do basic classification, and alert the driver to hazardous road conditions. Innovation in ADAS is pairing deep learning with cameras, enhancing safety features to be driver-focused by watching what’s happening inside the car rather than outside it.
“Whilst a future with fully-autonomous cars is predicted to decrease traffic deaths by over 90%[1], trust and confidence in new systems of automation will be key to their adoption.”
- From ‘Face Recognition + Cars = Safer Automotive Experiences’ by Ben Virdee-Chapman
For example, Kairos is working with top automobile manufacturers on ADAS applications that will detect driver fatigue, verify driver identity, and gauge driver attention.
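To give a flavor of how fatigue detection can work, here’s a minimal sketch of one widely published approach: the “eye aspect ratio” (EAR), computed from eye landmarks produced by any facial-landmark detector (dlib’s 68-point model is a common choice). To be clear, this is a generic illustration of the technique, not Kairos’ actual ADAS implementation, and the thresholds are assumptions.

```python
# Generic drowsiness cue: the eye aspect ratio (EAR) drops toward 0 as
# the eyelid closes. Sustained low EAR over many frames suggests fatigue.
# Thresholds below are illustrative assumptions, not production values.

import math

EAR_THRESHOLD = 0.21       # "eyes closing" threshold (tunable)
CLOSED_FRAMES_ALERT = 48   # ~1.6 s at 30 fps before raising an alert


def _dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered p1..p6 around one eye."""
    p1, p2, p3, p4, p5, p6 = eye
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))


class FatigueMonitor:
    """Counts consecutive low-EAR frames and flags likely drowsiness."""

    def __init__(self):
        self.closed_frames = 0

    def update(self, left_eye, right_eye) -> bool:
        """Feed one frame's eye landmarks; returns True on a fatigue alert."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self.closed_frames = self.closed_frames + 1 if ear < EAR_THRESHOLD else 0
        return self.closed_frames >= CLOSED_FRAMES_ALERT
```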
This shift to in-cabin analysis comes at the right time, considering the impact IVI is predicted to have on driver distraction, and it offers a valuable enhancement to automobile security as well.
The sign
While the cameras in and on cars are visible and positioned to increase safety, the camera embedded in the “Daily Specials” sign that greets you at any number of restaurants around the world is concealed, and likely gathering more information than it’s giving.
In this application of Visual Tech, the customer is not the diner but the restaurant owner, who is employing Face Recognition technology to gather insights about diners. What insights? Why? Should I be concerned?! All perfectly reasonable responses to such a revelation.
So let me explain…
The information “the sign” gathers revolves around details like age, gender, and facial expressions. The age and gender piece simply identifies the overall demographics of patronage, yielding insights like “men between 18 and 25 dine in mostly on Wednesday nights” or “women with children dine in mostly on Saturday afternoons.” Basically: who is coming in, and when.
The facial-expression component is more comprehensive. Cameras view patrons upon arrival, and applications like Kairos Human Analytics use that time at the point of entry to gauge customer reactions.
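In practice, the sign’s software does something like the following: capture a frame, send it to a face-analysis service, and keep only aggregate attributes. The client below is hypothetical; the endpoint, credentials, and field names are placeholders for illustration, not Kairos’ published API.

```python
# Hypothetical point-of-entry client: sends one camera frame to a
# face-analysis service and returns per-face attributes. The URL, key,
# and response shape are placeholders, not a real vendor's API.

import base64
import json
import urllib.request

ANALYZE_URL = "https://api.example.com/v1/analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential


def analyze_frame(jpeg_bytes):
    """Return a list of per-face attribute dicts (age, gender, emotion)."""
    payload = json.dumps({
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
        "attributes": ["age", "gender", "emotion"],
    }).encode("utf-8")
    req = urllib.request.Request(
        ANALYZE_URL,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("faces", [])
```

Note that nothing identifying needs to leave the sign: the restaurant only ever sees anonymous attributes, which is what makes the aggregation below possible.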
Further reading: How Facial Recognition Will Impact The Shopping Industry
For instance, you may be irritated by waiting too long to be seated. Or perhaps you are pleased with your interaction with the host. In both cases, Human Analytics determines your emotions and reports your overall impression of the restaurant during the time of engagement.
Just this year, a pizza spot in Oslo, Norway, was internationally outed for using Human Analytics tech in its point-of-entry signage. Now, I can totally understand that the idea of a camera watching you, with the specific intent of gathering images and video to analyze your age, gender, and emotions, seems creepy. Admittedly. At first. But this is only because most of us think of this type of technology in terms of what we’ve seen in science fiction and Hollywood fantasy.
In the real world, it’s far less dramatic and way more practical. And although your age, gender, and emotional responses are being gauged, it’s really nothing personal. Companies are gathering this data specifically for the not-so-creepy, very corporate purpose of market research.
“Emotional Connections Matter: Researchers show that moving customers from highly satisfied to fully connected can have three times the return from moving them from unconnected to highly satisfied.”
- From ‘How to Humanize Artificial Intelligence with Emotion’ by Brian Brackeen
In fact, it’s all quite uncomplicated. Once the insights are gleaned, owners can customize marketing based on highly accurate, real-time data. Men between 18 and 25 dining in mostly on Wednesday nights? Promote a draft-beer special. Lots of women with children coming in on Saturday afternoons? Run a “family funday” campaign. Customers consistently gauged as feeling “happy” upon entry? Awesome, keep doing what you’re doing. If not, consider changing the environment at the point of entry.
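For the curious, here’s a minimal sketch of how those insights can fall out of the data: just count visits by demographic bucket and day of the week. The record format is assumed purely for illustration.

```python
# Turn per-visit detections into "who comes in, and when" insights by
# counting visits per (gender, age band, weekday) bucket. The record
# format is an assumption made for this sketch.

from collections import Counter
from datetime import datetime


def bucket(record):
    """record: {'timestamp': ISO-8601 str, 'gender': str, 'age': int}"""
    ts = datetime.fromisoformat(record["timestamp"])
    age_band = "18-25" if 18 <= record["age"] <= 25 else "other"
    return (record["gender"], age_band, ts.strftime("%A"))


def top_patterns(records, n=3):
    """Most common demographic/day combinations across all visits."""
    return Counter(bucket(r) for r in records).most_common(n)


visits = [
    {"timestamp": "2017-10-04T19:30:00", "gender": "male", "age": 22},
    {"timestamp": "2017-10-04T20:10:00", "gender": "male", "age": 24},
    {"timestamp": "2017-10-07T13:00:00", "gender": "female", "age": 34},
]
print(top_patterns(visits))
# [(('male', '18-25', 'Wednesday'), 2), (('female', 'other', 'Saturday'), 1)]
```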
These are just a few examples of the ways companies use cameras and Face Recognition/Human Analytics to gauge customer sentiment, for the sole purpose of improving the customer experience and thereby increasing profits. And let’s be honest: are we really going to call the 800 number on the back of the receipt and take the survey? Even with the incentive? The answer, in most cases, is no.
To this end, Visual Tech, by employing Face Recognition and Human Analytics, absolves us of the responsibility of giving companies our time, even when it’s for the purpose of improving our own experiences. It makes the market-research process frictionless.
Come back for “The End” of the Cameras are Watching and Machines are Learning series, where we wrap up by looking at how cameras + face recognition = frictionless business experiences.
Series catch-up:
#1: The Beginning
#2: The Middle
#3: The End
_____
[1] ‘Ten ways autonomous driving could redefine the automotive world’, McKinsey & Company
Cole Calistra
Cole is the CTO at Kairos, a Human Analytics startup that radically changes how companies understand people. He loves all things cloud and making great products come to life.