With the widespread adoption of video conferencing software during the COVID-19 pandemic, many employers turned to video interviews to evaluate potential employees. Even as employers have mandated a return to the office, video interviews remain in widespread use as an easy and efficient way to evaluate job applicants. Some of the video interview tools used by employers make use of artificial intelligence (AI) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interview tools, as well as other technologically enhanced selection processes.
While these AI-powered video interview tools offer the promise of optimizing hiring and selection efforts, they can raise a host of legal issues, including questions about hidden bias, disparate impact, disability discrimination, and data privacy. Although no federal law expressly regulates the use of AI in employment decisions, Charlotte Burrows, chair of the Equal Employment Opportunity Commission (EEOC), voiced her concerns about AI-powered video interview technology at a recent event entitled “Initiative on AI and Algorithmic Fairness: Disability-Focused Listening Session,” noting, for example, that such technology may inappropriately exclude persons with speech impediments. The same concerns would apply to individuals with visible disabilities or disabilities that affect their movements. Shortly thereafter, the EEOC released technical guidance on “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” Legislatures in Illinois, Maryland, and New York City have taken a more active approach, passing laws that directly affect the use of AI-powered video interview and facial recognition software.
Practical use of AI video interview software
Consider the following example:
A technology company with offices across the country, including New York City and Los Angeles, contracts with a third-party vendor to screen potential job candidates. As part of the application process, the third-party vendor uses proprietary software, marketed as powered by artificial intelligence, to generate a numerical score based on the candidate’s voice, facial expressions, and word choices. At the start of each interview, a representative of the technology company’s HR department announces that the interview will be recorded and analyzed by an automated employment decision tool, and provides the candidate with the option to decline the use of this software. The HR representative also explains that the software undergoes a rigorous bias audit every year, the results of which are published on the third-party vendor’s website.
What legal problems can this cause?
State and local lawmakers have taken the lead in imposing explicability and transparency requirements on employers. In this example, the technology company must consider laws and regulations applicable to employers in New York City and Los Angeles to determine whether it is required to notify candidates of the nature of the AI-powered video interview tool that will be used. The company must also be aware of current laws in Illinois and Maryland if it hires candidates from locations in either state.
New York City. The New York City Council recently passed a local law governing the use of Automated Employment Decision Tools (AEDTs), which will go into effect on January 1, 2023. See N.Y.C. Admin. Code tit. 20, ch. 5, subch. 25, § 20-870 et seq. NYC’s AEDT law, among other things, makes it unlawful for an employer to use an automated employment decision tool to screen a candidate for employment unless the tool has been subject to a bias audit no more than one year prior to its use and a summary of the results of the bias audit is made public. In addition, employers must notify the candidate no later than ten business days prior to the interview, specifying the candidate’s “job qualifications and characteristics that such an automated employment decision tool will use in its assessment,” and provide the candidate with an opportunity to request an alternative selection process or accommodation. In the example above, the technology company failed to provide the required advance disclosure under the NYC AEDT law.
California. California’s Fair Employment & Housing Council has proposed draft regulations that apply to automated-decision systems, which in their draft form cover “algorithms that use facial and/or voice recognition to analyze facial expressions, word choices, and voices.” The draft regulations would incorporate automated-decision systems into California’s existing regulations on discriminatory hiring practices under the Fair Employment & Housing Act. The draft regulations would make it unlawful for an employer to use automated-decision systems that “exclude or tend to exclude an applicant” on the basis of a protected characteristic unless it is demonstrated that the “selection criteria … position and are consistent with business necessity.” The regulations are currently in the pre-rulemaking phase, and the California Department of Fair Employment and Housing has not yet set a timetable for their adoption; as drafted, however, they do not contain an express obligation to notify candidates of the use of AI or to explain how the AI works.
Illinois. The Illinois Artificial Intelligence Video Interview Act (AIVI Act) requires an employer to provide notice to, and obtain prior consent from, the applicant, and also requires the employer to explain to the candidate how the AI works and what general types of characteristics it uses to evaluate applicants. See 820 ILCS 42.
Illinois employers that use video interview technology that performs facial scans or collects other biometric data must also be aware of the Biometric Information Privacy Act (BIPA), which requires employers to provide notice and obtain consent before collecting biometric data, including a “scan of hand or face geometry,” and which offers candidates a private right of action. See 740 ILCS 14.
Maryland. Maryland’s Md. Code, Lab. & Empl. § 3-717 prohibits employers from using facial recognition technology during job interviews without the applicant’s consent. When using facial recognition services while interviewing applicants, a Maryland employer must obtain the applicant’s written consent and waiver stating the applicant’s name, the date of the interview, that the applicant consents to the use of facial recognition during the job interview, and that the applicant has read the consent waiver.
EEOC technical guidance. In addition to the above state and local laws, employers must consider the EEOC’s recently released guidance on using software, algorithms, and AI to assess job applicants and employees. The EEOC guidance contains a list of “promising practices” for employers to consider, including the following recommendations to promote explainability and transparency:
inform all candidates that reasonable accommodations are available for persons with disabilities, and provide clear and accessible instructions on how to request such accommodations; and
describe, in plain language and accessible formats, the traits that the technology-enabled tool is designed to assess, the method by which those traits will be assessed, and the variables or factors that may affect the assessment.
What should employers prepare for?
As more employers use AI-powered tools, including video interview tools, to aid in their hiring practices, they can expect more oversight in this area from federal, state, and local regulators and legislators. Using tools that can be explained to the candidates being assessed, and being transparent about how the tools will be used, will not only help employers comply with applicable laws and regulations but also enhance the employer’s credibility among candidates and regulators. To that end, employers should conduct due diligence on the software company offering AI-powered tools, familiarize themselves with the software and the way it works, and carefully craft communications that will provide candidates with sufficient information to understand the evaluation process.
The term “automated employment decision tool” is defined as any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, rating, or recommendation, that is used to substantially assist or replace discretionary decision-making for making employment decisions that affect individuals.
©2022 Epstein Becker & Green, P.C. All rights reserved. National Law Review, Volume XII, Number 144