177 Livingston Street 7th Floor Brooklyn, NY 11201 (718) 254-0700 info@bds.org

BDS TESTIFIES BEFORE THE NYC COUNCIL COMMITTEE ON TECHNOLOGY ON ALGORITHM TRANSPARENCY.

TESTIMONY OF:

Yung-Mi Lee

BROOKLYN DEFENDER SERVICES

Presented before

The New York City Council Committee on Technology

Hearing on Intro 1696

October 16, 2017

My name is Yung-Mi Lee. I am a Supervising Attorney in the Criminal Defense Practice at Brooklyn Defender Services (BDS). BDS provides multi-disciplinary and client-centered criminal, family, and immigration defense, as well as civil legal services, social work support and advocacy, for over 30,000 clients in Brooklyn every year. I thank the New York City Council Committee on Technology, and in particular Chair James Vacca, for holding this hearing today on Int. 1696, which would establish basic transparency in New York City’s automated processing of data for the purposes of targeting services, imposing penalties, or policing.

BDS SUPPORTS Int. 1696

The arrival of the digital age in the criminal legal system has been heralded by technology entrepreneurs, law enforcement leaders, and some academics, but it presents a series of new threats to the liberty and well-being of our clients that warrant deeper investigation. However, many of these technological advances are deemed proprietary or otherwise kept secret by police, making true accountability all but impossible. At worst, such tools provide a veneer of color- and class-blind objectivity while exacerbating the racial and economic discrimination and other inequalities in law enforcement practices and criminal and civil penalties. From law enforcement’s use of facial recognition software that disproportionately misidentifies Black people to so-called gang databases and designations that indefinitely flag people for harsh surveillance or worse, based on who they stand beside in a Facebook photo, apparently with no way to be removed, there are numerous examples of technology reinforcing, rather than mitigating or eliminating, biases that afflict our society as a whole. Two key examples that I will focus on are the rise of pre-trial Risk Assessment Instruments (RAIs) and so-called predictive policing. Int. 1696 will shine a necessary spotlight on these and other areas of the modern surveillance and punishment system.

RAIs and Pre-Trial Detention

Across the United States, nearly half a million people are detained pre-trial—legally presumed innocent but locked in a cage. The majority of these individuals are legally eligible for release on bail, but detained because courts set bail in an amount and form they cannot afford. Financial conditions of release are, on their face, discriminatory and amplify broader inequalities in society. While attempts at reform have come in cycles for the last several decades, the most onerous forms of money bail remain in use in most of the country. Meanwhile, multinational surety companies have profited from this mass misery through the financing of the bail bonds industry, which is banned in every country except the United States and the Philippines. Because the courts generally only accept bail in cash or commercial bail bond—as opposed to, for example, an unsecured bond—bail bond agents are often a family’s only hope for getting a loved one out of jail. These agents can charge exorbitant, nonrefundable fees, demand unlimited collateral, and impose onerous conditions, all with no meaningful oversight by local, state, or federal regulators. The industry siphons billions of dollars from marginalized communities across the country while leaving the majority of people with bail set to suffer in jail.

Understandably, there is a demand for something—anything—different, but policymakers must be deliberate about reform. Specifically, the goal of bail reform must be to reduce pre-trial detention and eliminate racial and other disparities. The zeitgeist in bail reform is the promotion of RAIs to drive decisions about pre-trial detention, but it is not clear this approach will help rather than harm. RAIs purport to objectively and accurately predict one outcome or another. In reality, RAIs function as a proxy for a series of subjective, human decisions. People decide whether to attempt to measure risk of flight, risk of future criminality, risk of re-arrest, or some combination of the three. People decide what level of offense to attempt to predict, i.e., any offense or a serious offense. People decide which factors to consider in the assessment and how much weight to attribute to each factor in the overall risk score. People then decide what qualitative conclusions to draw from these risk scores, establishing benchmarks for low, medium, and high risk. Finally, judges decide what weight to give the risk assessment when issuing decisions regarding release, supervision, and preventive detention.
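To make concrete how each of those human choices hides inside the arithmetic, consider a minimal sketch of how such an instrument typically operates. Every factor name, weight, and threshold below is invented for illustration; this does not reproduce the formula of any actual RAI in use.

```python
# Hypothetical sketch of a weighted-score risk assessment instrument.
# The factors, weights, and cutoffs are all human policy choices,
# invented here for illustration only.

FACTOR_WEIGHTS = {
    "prior_convictions": 2,    # someone chose to count this, and to weight it double
    "prior_incarceration": 3,
    "unemployed": 1,
    "unstable_housing": 1,
    "under_25": 1,
}

def risk_score(person: dict) -> int:
    """Sum the weights of every factor flagged for this person."""
    return sum(w for factor, w in FACTOR_WEIGHTS.items() if person.get(factor))

def risk_label(score: int) -> str:
    """Someone also chose where to draw the low/medium/high lines."""
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

person = {"prior_convictions": True, "unemployed": True, "under_25": True}
score = risk_score(person)  # 2 + 1 + 1 = 4
label = risk_label(score)   # falls in the "medium" band
```

Note that nothing in the arithmetic is objective in any deep sense: change a weight or move a cutoff and the same person's label changes, which is precisely why transparency into these choices matters.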

In practice, RAIs typically use a series of highly discriminatory metrics that offer little or no real predictive power. Common factors include homelessness, employment status, school enrollment, age, family connections, prior convictions, and prior incarceration. RAI proprietors argue their tools are not discriminatory because they do not consider demographic information, but this analysis ignores the pre-existing sharp disparities in the aforementioned factors. A landmark ProPublica investigation of RAIs found one commonly used tool was more likely to falsely identify Black people than white people as likely to commit a crime. The investigation also found this RAI to be only “somewhat more accurate than a coin flip” in determining a risk of re-offense, and “remarkably unreliable” in predicting violent crime.

RAIs come with a unique threat to liberty in New York State: a concurrent push to allow judges to make assumptions about dangerousness, using RAIs, in pre-trial detention decisions. Under current state law, judges may only consider a risk of flight, with certain exceptions. While RAIs can be used exclusively to measure this risk, many high-level policymakers, including Mayor de Blasio, are urging changes to the bail statute so that dangerousness may be assessed and considered as well. As such, the first order of business is to stop this rush toward dystopic preventive detention. There is ample evidence that even a few days in jail can be criminogenic; preventive detention is counterproductive as a public safety tool. Moreover, there is no guarantee that adding dangerousness to the statute would significantly reduce jail populations. Results across the country are mixed, and courts in New York City already have comparatively high rates of releasing people on their own recognizance.

In short, RAIs, by their nature, bypass an individual’s right to due process and the individualized, case-by-case analyses required of prosecutors, judges, and defense attorneys.

The transparency in RAIs afforded by this legislation is critical for policymakers and the public to analyze their efficacy and fairness. Many such assessments are currently proprietary. The Mayor’s Office of Criminal Justice is currently engaged in a good-faith effort to improve its pre-trial RAI, and it is critical that this effort be fully transparent. Transparency requires the release of any and all data used to formulate any RAI. Moreover, the public should have an opportunity to recommend changes before any revised instrument is implemented.

Importantly, pre-trial detention may not meet the legal definition of a penalty. This legislation should be amended to explicitly include algorithms used to determine custodial detention, incarceration, civil commitment, and supervised release.

There are many better ways to incentivize pre-trial freedom and discourage pre-trial detention, including through expanded use of the unsecured appearance bonds that are already permitted by state law. These alternatives must be pursued aggressively. BDS has testified before the Council about bail reform in the past and would be happy to further discuss the issue.

Predictive Policing

Predictive policing uses algorithms and computer modeling to attempt to predict and prevent crime, including through targeted allocations of resources. In its grudging and incomplete responses to FOIL requests from the Brennan Center for Justice, the NYPD has acknowledged the use of a predictive policing system that was developed in-house, as well as a prior purchase of a commercial predictive policing product from Palantir. With both systems, the NYPD has stonewalled requests for transparency, citing either trade secrets or vague security concerns. There is a high likelihood that these systems disproportionately impact low-income people of color and other heavily policed groups, but the refusal to disclose, for example, the information inputs and the possible or actual outputs shields the NYPD from scrutiny. Likewise, the public is prevented from evaluating the systems’ efficacy and cost-effectiveness. Perhaps resources allocated to identifying a particular housing development and/or certain of its residents as likely sources of crime would be better spent identifying and fulfilling community needs like jobs, affordable and accessible public transit, and quality community-based mental health services.

Int. 1696 will open a window into predictive policing operations and allow us to better evaluate their utility, their appropriateness, and their safeguards against civil rights violations.

The Limits of Transparency

BDS strongly supports the Council’s years-long efforts to establish more transparency in the criminal legal system, but we also recognize the limits of this approach. Ultimately, we as a democratic society must retain the ability to direct our law enforcement, not the reverse. Transparency is an important tool of community control, but it should not be mistaken for the endgame for policymakers. As public defenders, it is impossible for us to zealously protect our clients’ constitutional rights without knowing, for example, whether NYPD officers are parked outside their homes in an x-ray van and how they determined their targets; disclosure of this information is therefore critical, but the Council should also explore outright prohibitions on certain domestic spying operations. Likewise, the Council or Comptroller could exert authority to block the purchase of improper and invasive technology used for profiling. Ultimately, the Council must regard law enforcement secrecy as a political tool, in addition to a public safety tool. Without transparency, those of us who urge a shift away from punishment and control toward community support are at an information disadvantage, but we know more than enough from lawsuits and police and civilian recordings to rein in the discriminatory and abusive practices of law enforcement and reinvest in communities.

Thank you for your time and consideration of our comments. If you have any questions, please feel free to reach out to Jared Chausow in our Policy and Advocacy Unit at 718-254-0700 ext. 382 or jchausow@bds.org.