

AI Now, a New York University-affiliated public interest group that counts employees of Microsoft and Google among its members, has raised the alarm over uses of facial recognition technology by governments and law enforcement agencies that, it believes, will inevitably erode civil liberties, The Verge reports.

In a paper published on Thursday, AI Now warns of the U.S. becoming a China-like surveillance state thanks to facial recognition tech. In China, the technology is already being used to publicly shame people in debt, track minorities, and block people from entering certain housing estates. To keep this dystopian vision from taking hold in America, the authors of AI Now’s paper call on the government to regulate the use of AI and facial recognition tech:

Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance.

And it’s not just Microsoft employees within AI Now who are sounding the alarm. Microsoft President Brad Smith recently voiced similar concerns in a speech at the Brookings Institution:

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

Perhaps most frightening of all, facial recognition tech has moved beyond simply identifying and tracking people. The technology now extends to “affect recognition,” the purported ability of facial recognition systems to detect a person’s emotions. As explained in AI Now’s paper:

Affect recognition, a subset of facial recognition, aims to interpret faces to automatically detect inner emotional states or even hidden intentions. This approach promises a type of emotional weather forecasting: analyzing hundreds of thousands of images of faces, detecting “micro-expressions,” and mapping these expressions to “true feelings.” . . . This reactivates a long tradition of physiognomy–a pseudoscience that claims facial features can reveal innate aspects of our character or personality. Dating from ancient times, scientific interest in physiognomy grew enormously in the nineteenth century, when it became a central method for scientific forms of racism and discrimination. . . Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications. . . . The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling or what type of person they intrinsically are is proving attractive to both corporations and governments, even though the scientific justifications for such claims are highly questionable, and the history of their discriminatory purposes well-documented.

You can’t get much more Orwellian than that.
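To make the pipeline the paper describes a little more concrete, here is a minimal, purely hypothetical sketch in Python. Every name in it (FaceFeatures, toy_affect_classifier, the hand-written thresholds) is invented for illustration and does not correspond to any real vendor’s system; it only shows the structural leap from measured facial expressions to an asserted inner state, which is exactly the leap AI Now says lacks scientific justification.

# Hypothetical sketch of an "affect recognition" pipeline: measure surface
# features of a face, then map them to an inner-state label. All names and
# thresholds are invented for illustration; real systems use learned models,
# but the shape of the claim is the same.

from dataclasses import dataclass

@dataclass
class FaceFeatures:
    """Toy stand-in for facial landmark / action-unit measurements (0.0 to 1.0)."""
    brow_raise: float
    lip_corner_pull: float
    lid_tighten: float

def toy_affect_classifier(f: FaceFeatures) -> str:
    """Convert expression measurements into a 'true feeling' label.

    This fixed rule is the physiognomic step the AI Now paper criticizes:
    nothing here actually verifies what the person feels.
    """
    if f.lip_corner_pull > 0.6:
        return "happy"
    if f.lid_tighten > 0.6 and f.brow_raise < 0.3:
        return "angry"
    if f.brow_raise > 0.7:
        return "fearful"
    return "neutral"

if __name__ == "__main__":
    sample = FaceFeatures(brow_raise=0.1, lip_corner_pull=0.8, lid_tighten=0.2)
    print(toy_affect_classifier(sample))  # prints "happy"

However the mapping is implemented, the output is a confident-sounding label about a person’s inner life, produced from nothing but the surface of their face.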
