The Detroit Board of Police Commissioners approved a proposal Thursday that critics say gives police too broad a license to monitor citizens through at least 500 yet-to-be-installed roadside surveillance cameras that were initially meant to surveil only traffic patterns.
The policy, approved on a 6-3 vote, allows Detroit police to tap into the camera footage for criminal investigations and when a crime is “reasonably expected to occur.” Police can also pair the footage with controversial facial recognition software. The policy applies to traffic-mounted cameras to be installed at intersections throughout the city by 2020.
Critics are concerned that the plan lets police overstep and spy on lawful activity and that the facial recognition component will lead to the misidentification of people of color, as studies have shown.
"It doesn’t provide the police any real guidance or any real restrictions on their ability to access the cameras, or to restrict their use of facial recognition technology,” Eric Williams, an attorney working with the ACLU to monitor surveillance initiatives in Detroit, said of the policy. “We should always be leary of the word ‘reasonably’ when applied to the police; the department will give its officers wide berth in determining what is ‘reasonable.’"
Detroit Police Chief James Craig confirmed at a Thursday news conference that police can review the footage under any circumstance. The footage streams into the same room as Project Green Light's Real Time Crime Center and will be stored; it is primarily monitored by Detroit Public Works staff.
The roadside camera plan, first unveiled by Mayor Mike Duggan in his March State of the City address, will place cameras at 500 locations prioritized based on crime data and traffic volume, doubling a real-time surveillance network that currently consists of cameras on about 500 Project Green Light partner businesses. The traffic-mounted cameras will be paid for with $9 million in state and federal transportation funds.
The expansion comes despite a lack of evidence showing the surveillance component of Project Green Light helps drive down crime, and as more progressive cities ban or float bans on facial recognition technology.
Elected leaders and others concerned with the policy proposal held a public meeting in the North End Wednesday night warning residents that its wording is too vague to protect privacy rights.
Detroit and Chicago are the only two cities to invest in facial recognition technology for policing, according to a recent report by Georgetown University’s Center on Privacy & Technology. The Detroit Police Department's guidelines allow it to connect the facial recognition technology “to any interface that performs live video, including cameras, drone footage, and body-worn cameras.” Those guidelines, which are already in effect despite never having received approval from the police commission, were also taken up at Thursday's meeting, but a vote on them was postponed because details of the formal directive to go before the commission were still being ironed out, according to the Free Press.
Detroit Police Chief James Craig has said the department only uses facial recognition technology with still images to identify suspects in criminal cases. He said Thursday that the technology would only be used with live video under extreme circumstances, like in the event of a “credible” terror threat.
But the ACLU's Williams believes the facial recognition technology poses great risk to residents' rights. He noted that in Baltimore in 2015, police used the technology to monitor people protesting the death of Freddie Gray. Documents obtained by the ACLU showed police arrested demonstrators with outstanding warrants.
“The technology has gotten ahead of the legal mechanisms that safeguard our civil liberties,” says Williams. “This isn’t simply the police being able to ID someone they’re looking for … it's the ability to analyze everybody and run all those pictures through a database in very short order, and to use artificial intelligence to review what people are doing.”
“This is a very different situation than we’ve ever had before and I think we really need to appreciate the difference in order to understand the threat that it poses.”
San Francisco became the first major U.S. city to ban the use of facial recognition technology this spring, citing privacy concerns. Additional concerns have been raised over the accuracy of the technology, which has been shown to misidentify people of color at a higher rate than other groups. The technology also has the potential to reinforce existing biases in the criminal justice system.
“In a more than 80 percent black city, you’re just asking for a problem,” says Williams.