Are AI-powered security cameras better at protecting us than traditional ones?

A new generation of AI-powered security cameras raises plenty of questions, especially for consumers looking for the best balance of protection, privacy, and cost. While the new camera tech means more convenience, you should consider the privacy concerns before becoming an early adopter of the technology.

The smarter security camera

Traditional security cameras have been around for decades, and there are hundreds of options for consumers looking for a baseline of added security. The latest traditional cameras are far from obsolete: most offer smart home integration, high-definition video, and smartphone controls that make them both convenient and simple to operate.

AI-powered cameras take security to a new level with the addition of facial recognition technology. Facial recognition is still in its infancy, but these cameras use it to expand the capabilities of a traditional security camera. The technology also raises some startling questions for homeowners, and the disadvantages associated with the newer tech may not justify its cost.

How facial recognition technology works

Facial recognition algorithms use specific points on your face (like your pupils, chin, or nose) to take incredibly precise measurements. Once the distances between your features are measured, the algorithm compares them to millions of saved data points to identify you.
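To make that idea concrete, here is a deliberately simplified sketch in Python: a handful of landmark points becomes a vector of pairwise distances, which is then compared against stored signatures to find the closest match. The landmark coordinates, gallery, names, and distance threshold below are invented for illustration; real systems rely on learned embeddings and vastly larger databases.

```python
# A minimal sketch of the matching idea described above, assuming landmarks
# have already been located in an image. All coordinates, names, and the
# threshold below are illustrative, not taken from any real product.
import numpy as np

def landmark_signature(landmarks):
    """Turn (x, y) landmark points into a vector of pairwise distances."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]  # shape (n, n, 2)
    dists = np.linalg.norm(diffs, axis=-1)                 # shape (n, n)
    upper = np.triu_indices(len(landmarks), k=1)           # each pair counted once
    return dists[upper]

def identify(landmarks, gallery, threshold=5.0):
    """Return the closest stored identity, or None if nothing is close enough."""
    query = landmark_signature(landmarks)
    best_name, best_dist = None, float("inf")
    for name, signature in gallery.items():
        dist = np.linalg.norm(query - signature)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Hypothetical example: pupils, nose tip, and chin as pixel coordinates.
resident = np.array([[120.0, 80.0], [160.0, 80.0], [140.0, 110.0], [140.0, 150.0]])
gallery = {"resident": landmark_signature(resident)}
noisy_view = resident + np.random.normal(0.0, 1.0, resident.shape)
print(identify(noisy_view, gallery))  # usually prints "resident"
```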

Earlier iterations of facial recognition were limited by things like glasses, hats, blurred images, and faces in profile, but the current generation of facial recognition algorithms boasts an astounding 99.98% accuracy for both photos and video.

Facial recognition algorithms already permeate the lives of most Americans. Companies like Google, Facebook, and Snapchat have been using the technology for years, and most people’s photos are already part of massive data stores. An AI-driven personal security camera uses only the data generated by locally recorded footage, but web-connected cameras in public spaces could potentially compare data from millions of photos to accurately identify any individual.

Facial recognition technology in home security

Over the past several years, security cameras with facial recognition technology have emerged on the consumer market. The first versions of these cameras used recorded footage to “learn” the faces of anyone who frequently visits your home. The software automatically flags unrecognized faces, triggering an alert on a smartphone or at a monitoring facility.
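As a rough illustration of that workflow, the sketch below assumes the camera has already turned each detected face into a fixed-length embedding vector and simply checks whether it is close to any of the faces it has “learned.” The embeddings, names, threshold, and send_alert helper are all hypothetical stand-ins, not any vendor’s actual API.

```python
# A simplified sketch of the flagging workflow described above. The embeddings,
# names, threshold, and alert mechanism are hypothetical placeholders.
import numpy as np

KNOWN_FACES = {                 # embeddings "learned" from recorded footage
    "parent": np.random.rand(128),
    "child": np.random.rand(128),
}
MATCH_THRESHOLD = 0.6           # maximum distance still treated as a match

def send_alert(message):
    """Stand-in for a push notification or a call to a monitoring facility."""
    print(f"ALERT: {message}")

def handle_detection(embedding):
    """Compare a detected face against known faces and alert on strangers."""
    name, dist = min(
        ((n, np.linalg.norm(embedding - known)) for n, known in KNOWN_FACES.items()),
        key=lambda pair: pair[1],
    )
    if dist <= MATCH_THRESHOLD:
        print(f"Recognized {name} (distance {dist:.2f}); no alert sent.")
    else:
        send_alert(f"Unrecognized face at the door (distance {dist:.2f}).")

# Simulate one detection coming out of the camera's face pipeline.
handle_detection(np.random.rand(128))
```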

Facial recognition technology is already becoming a mainstream feature on affordable security cameras from well-known brands, including Amazon’s Ring doorbells and Google’s Nest cameras. In fact, Amazon’s social network for Ring owners, Neighbors, allows users to flag the faces of suspicious individuals and share the data with anyone on the network, as well as local law enforcement. Other apps like Citizen and Nextdoor are creating networked neighborhood watches in cities across the US.

AI-powered home security cameras that use a local sharing platform like Neighbors give law enforcement the ability to identify and monitor any individuals flagged by users. This close relationship between companies like Amazon and law enforcement agencies is unprecedented, and law enforcement may not be required to protect the data gathered by these applications.

Amazon filed a patent in December 2018 that would allow Ring cameras to improve their facial recognition accuracy by building a database from the images collected by any Ring camera. That would be a significant leap beyond the local sharing of images on Neighbors, and the patent has brought the company’s ambitions under intense scrutiny.

The ethics of facial recognition

Programs like Amazon’s facial recognition software, Rekognition, have sparked a backlash from prominent civil liberties organizations like the ACLU, which warn that the software could exacerbate racial biases while infringing on the privacy of American citizens. The negative press has done little to slow the adoption of the technology.

Message boards associated with these platforms are often a haven for racist language. An investigation by Motherboard found that the majority of individuals flagged on these platforms are people of color, and users’ descriptions often veer into racial profiling and outright hostility.

A legislative response

California, Massachusetts, New York, and Washington are all considering legislative action to limit the use of facial recognition technology in certain contexts. Congress also considered regulations in July 2019, as bipartisan concerns grew over the unchecked use of the technology.

Advocacy groups like the ACLU, Fight for the Future, and Liberty are all drawing a hard line against facial recognition technology, citing a loss of privacy and the abuse of data by private corporations. Amazon’s combination of facial recognition and social networking creates a particularly dangerous environment, one in which the technology can be abused and people of color face increased harassment and profiling.

Facial recognition and you

Security cameras with facial recognition software are rapidly becoming more affordable. Because Amazon’s Ring cameras may soon use AI to process images over the web, even cheap, easy-to-install cameras could gain facial recognition capabilities.

Facial recognition cameras offer other benefits for families and busy households. A camera with facial recognition can automatically alert you when your child comes home from school, affording them independence without undue risk. For business owners, the cameras could substantially reduce shoplifting and help hold would-be thieves accountable for stolen goods.

Public opinion of facial recognition technology is mixed. In September 2018, a survey found that roughly half of respondents were in favor of limitations on the use of facial recognition technology in law enforcement. Less than a year later, a new survey indicated that only one in four people favored these limits. Most individuals approve of the technology if it can be used to reduce crime.

Does facial recognition actually make you safer?

Ultimately, most consumers want to know whether this controversial technology can actually keep them safe. The technology has already been used in a substantial number of arrests; in 2018, 998 arrests were made in New York City using data from the NYPD’s Facial Identification Section.

Proponents of the technology also cite the unreliability of human witnesses. Most false convictions stem from mistaken identifications by people, and the improved accuracy of AI promises to limit these misidentifications.

There are, however, substantial trade-offs for the enhanced safety that facial recognition technology may provide. Facial recognition data is often stored on private servers with no legislative oversight. Your data could be at risk if those servers are compromised. And there’s nothing stopping major corporations from selling your facial recognition data to advertising agencies.

The bottom line

In the end, the adoption of facial recognition technology will be decided by consumers’ wallets. Privacy is eroded every day by a number of technologies, and many consumers are becoming apathetic about the unrestricted use of their data. So-called neighborhood watch apps already have tens of thousands of users in cities all over the US.

As major corporations and governments prepare for the widespread implementation of facial recognition technology across a range of devices, individuals may soon lose the ability to decide whether to opt in at all. The technology is rapidly altering the way our society perceives security and privacy, and the benefits and harms are not yet clear.
