Mary Louis, a Black woman from Massachusetts, recently found herself at the center of a groundbreaking class action lawsuit over a rental screening algorithm. The suit alleged that the algorithm, used by a third-party screening service, denied Louis tenancy and discriminated against her on the basis of race and income. A federal judge has now approved a settlement in the case, a significant step toward holding companies accountable for algorithmic discrimination.
The settlement requires the company behind the algorithm, SafeRent Solutions, to pay over $2.2 million and to change the screening products that were alleged to be discriminatory. Notably, the settlement includes no admission of fault by SafeRent Solutions. In a statement, the company said it continues to believe its screening scores comply with all applicable laws, but acknowledged that litigation is time-consuming and expensive.
This landmark case sheds light on the widespread use of algorithms and artificial intelligence (AI) in consequential decisions about Americans' lives. Whether someone is applying for a job, seeking a home loan, or accessing medical care, there is a high probability that an AI system or algorithm is scoring or assessing them. Yet these systems remain largely unregulated, even though discriminatory outcomes have been documented.
Todd Kaplan, one of Louis' attorneys, emphasized the significance of the settlement, noting that property management companies and landlords are now on notice that the screening systems they depend on can be challenged. A key allegation against SafeRent's algorithm was that it failed to account for housing vouchers, thereby discriminating against low-income applicants who relied on that assistance. The algorithm was also accused of leaning too heavily on credit information, which disproportionately harmed Black and Hispanic applicants because of historical inequities in credit scores.
Christine Webber, one of the plaintiffs' attorneys, pointed out that even if an algorithm or AI is not explicitly programmed to discriminate, the data it uses or the weight it gives that data can still produce discriminatory effects. Louis experienced this firsthand when her application was denied despite references from previous landlords attesting to her reliable payment history. The algorithm's score was final, with no human to appeal to, leaving Louis feeling defeated by a system that left no room for individual circumstances.
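To make that mechanism concrete, here is a minimal, hypothetical Python sketch. It is not SafeRent's actual model; the group score gap, the `simulate_group` helper, and the approval cutoff are all invented for illustration. It shows how a rule that never sees race, but thresholds on a feature whose distribution differs across groups for historical reasons, can still approve the groups at very different rates.

```python
# Minimal sketch (not SafeRent's actual model): a screening rule that never
# sees race can still produce disparate outcomes when it leans on a feature,
# like credit score, whose distribution differs across groups for historical
# reasons. All numbers below are illustrative assumptions.
import random

random.seed(42)

def simulate_group(mean_credit, n=10_000):
    """Draw illustrative credit scores for one group (normal, clipped to 300-850)."""
    return [min(850, max(300, random.gauss(mean_credit, 60))) for _ in range(n)]

# Assumed group means, standing in for documented historical gaps in credit access.
group_a = simulate_group(mean_credit=700)
group_b = simulate_group(mean_credit=640)

APPROVAL_CUTOFF = 660  # hypothetical score threshold used by the screener

def approval_rate(scores):
    return sum(s >= APPROVAL_CUTOFF for s in scores) / len(scores)

rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
print(f"Group A approval rate: {rate_a:.1%}")
print(f"Group B approval rate: {rate_b:.1%}")
# The "four-fifths" rule of thumb from U.S. disparate-impact analysis flags
# concern when one group's selection rate falls below 80% of another's.
print(f"Adverse impact ratio: {rate_b / rate_a:.2f} (below 0.80 suggests disparate impact)")
```

Under these assumed numbers, the two groups end up with sharply different approval rates even though the rule itself is group-blind, which is the pattern plaintiffs alleged in this case.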
While there have been proposals for aggressive regulation of AI systems, they have struggled to gain sufficient support from lawmakers. As a result, lawsuits like Louis' are becoming increasingly important in establishing accountability for discriminatory AI. SafeRent's defense attorneys had argued that the company should not be held liable because it was not the final decision-maker in the tenant selection process. But Louis' attorneys, backed by the U.S. Department of Justice, countered that SafeRent's algorithm still played a significant role in access to housing and that the company should be held accountable.
The settlement includes several key stipulations. SafeRent may no longer include its scoring feature on tenant screening reports in certain cases, such as when an applicant is using a housing voucher. Additionally, any future screening scores SafeRent develops must be validated by a third party approved by the plaintiffs. These stipulations aim to prevent further discriminatory practices and to ensure that new screening scores are free from bias.
For Louis, the journey continues. Her son eventually found her a new apartment through Facebook Marketplace, though at a higher cost and in a less desirable location. But Louis remains determined, fueled by a sense of responsibility to those who rely on her. The case sets an important precedent and underscores the need for continued vigilance against algorithmic discrimination.
In the battle against algorithmic discrimination, lawsuits like this one can be transformative. By exposing the flaws and biases in these systems and holding companies accountable, they open a path toward a more just and equitable future. As AI shapes more and more aspects of daily life, that vigilance will be essential to ensuring the algorithms people depend on are fair and unbiased.