
The New Jersey Division on Civil Rights has released comprehensive guidance clarifying that the state's Law Against Discrimination (LAD) applies with full force to artificial intelligence and automated decision-making tools. The guidance, issued in January 2025, makes one thing clear: whether discrimination comes from human decisions or from AI algorithms, it is still illegal.

A key point emphasized throughout the guidance is that organizations can't evade responsibility by pointing to their technology vendors. As the guidance explicitly states, "a covered entity is not shielded from liability for algorithmic discrimination that results from the entity's use of an automated decision-making tool simply because the tool was developed by a third party or because the entity does not understand the inner workings of the tool."

The document takes a balanced approach, recognizing that AI and automated tools aren't inherently problematic. In fact, when properly designed and deployed, these technologies can help reduce bias and discrimination. The guidance cites examples like using AI to identify discriminatory language in property records and leveraging alternative data to increase access to credit for marginalized communities.

However, the guidance is unequivocal about the legal obligations: "The LAD prohibits all forms of discrimination, irrespective of whether discriminatory conduct is facilitated by automated decision-making tools or driven by purely human practices." This applies across employment, housing, public accommodations, and other areas covered by the LAD.

The document outlines how discrimination can occur at various stages - in the design, training, or deployment of these tools. As the guidance explains, "Ultimately, bias can be introduced into automated decision-making tools if systemic racism, sexism, or other inequalities are not accounted for when designing, training, and deploying the tools. And this, in turn, can reinforce and exacerbate existing disparities, risking significant harm to marginalized populations."

To avoid these pitfalls, the guidance emphasizes the importance of careful evaluation both before and after deployment. Organizations must consider fairness throughout the lifecycle of these tools - from initial design through ongoing monitoring and assessment. This includes testing for bias, conducting impact assessments, and ensuring reasonable accommodations are properly handled.
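To make "testing for bias" concrete, one common screening heuristic is the adverse-impact ratio (the "four-fifths rule" from federal employment-selection guidance): compare each group's selection rate to the most-selected group's rate and flag ratios below 0.8 for further review. The sketch below is purely illustrative and is not part of the New Jersey guidance; the group names and numbers are hypothetical.

```python
# Illustrative bias check: selection rates by group and the adverse-impact
# ratio (the "four-fifths rule" heuristic). All data here is hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants in a group who were selected."""
    return selected / total

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the reference group's rate.
    A ratio below 0.8 is a common flag for further investigation."""
    return rate_group / rate_reference

# Hypothetical outcomes from an automated screening tool
outcomes = {
    "group_a": {"selected": 48, "total": 100},
    "group_b": {"selected": 30, "total": 100},
}

rates = {g: selection_rate(o["selected"], o["total"]) for g, o in outcomes.items()}
reference = max(rates.values())  # use the highest selection rate as the reference

for group, rate in sorted(rates.items()):
    ratio = adverse_impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} ratio={ratio:.3f} [{flag}]")
```

A failing ratio is not by itself proof of unlawful discrimination; it is a signal that the tool's design, training data, and deployment should be examined more closely, which is the kind of ongoing monitoring the guidance calls for.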

The message is clear: embracing new technology is fine, but it must be done responsibly and in compliance with anti-discrimination laws. Organizations that use AI and automated decision-making tools must ensure they don't perpetuate or amplify discrimination, regardless of whether they developed the technology themselves.

For New Jersey organizations using or considering AI tools, this guidance provides a crucial framework for ensuring compliance with the LAD while harnessing the potential benefits of automation. For other states, it offers a model to follow. The full guidance document and additional resources are available at www.NJCivilRights.gov.