Artificial intelligence tools are increasingly embedded in multifamily tenant screening, promising to catch fake applications, falsified income data and padded resumes. Yet these same models can also misfire, incorrectly flagging qualified renters as high risk and creating new compliance exposures for owner-operators.
Steve Carroll, co-founder and CEO of screening platform Findigs, told Connect CRE that operators should watch closely for signals that their AI systems are overreaching. Red flags include approval rates that swing away from typical industry patterns, especially across specific income types or regions, and a rising volume of applicant disputes or appeals. Disputes clustering around categories such as self-employed income or unconventional documentation may indicate that protected classes are disproportionately affected.
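As a rough illustration of the kind of monitoring Carroll describes, the sketch below flags approval-rate swings by income type and region and measures how disputes concentrate by category. It assumes a pandas table of decisions with hypothetical income_type, region and 0/1 approved columns, plus a disputes table with a category column; the 10-percentage-point tolerance is a placeholder, not an industry or legal threshold.

```python
import pandas as pd

def flag_segments(decisions: pd.DataFrame,
                  portfolio_rate: float,
                  max_gap: float = 0.10) -> pd.DataFrame:
    """Return segments whose approval rate drifts more than `max_gap`
    from the portfolio-wide rate (an illustrative tolerance)."""
    by_segment = (
        decisions.groupby(["income_type", "region"])["approved"]
        .agg(approval_rate="mean", applications="count")
        .reset_index()
    )
    by_segment["gap"] = (by_segment["approval_rate"] - portfolio_rate).abs()
    return by_segment[by_segment["gap"] > max_gap]

def dispute_concentration(disputes: pd.DataFrame) -> pd.Series:
    """Share of disputes in each category (e.g. self-employed income,
    unconventional documentation); spikes warrant a closer look."""
    return disputes["category"].value_counts(normalize=True)
```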
Carroll noted that even a single warning sign warrants a closer look, while multiple indicators suggest the system is making decisions the operator never intended. He emphasized that human review should not simply override AI outputs, but should be used to monitor trends through structured, exception-based audits of a random sample of files each month.
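A structured audit of that sort can start with something as simple as a random draw of one month's decided files for human review. The sketch below assumes a decision_date column and a default sample size of 50, both placeholder choices rather than anything Carroll specified.

```python
import pandas as pd

def monthly_audit_sample(files: pd.DataFrame, month: str,
                         n: int = 50, seed=None) -> pd.DataFrame:
    """Draw a random sample of one month's decided applications for review."""
    target = pd.Period(month, freq="M")
    decided = files[files["decision_date"].dt.to_period("M") == target]
    return decided.sample(n=min(n, len(decided)), random_state=seed)
```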
Because many AI systems are trained on historical screening data, they risk embedding legacy biases. Carroll pointed to indirect proxies such as zip codes, income sources and employment types, which can correlate with race, national origin or familial status even when those characteristics are not explicitly included. He urged operators to push vendors for clear explanations of model decision-making and to require impact testing against real portfolio outcomes. Vendors, he said, should be willing to share approval rate data and accept third-party audits.
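One common way to run such impact testing is an adverse-impact-ratio check that compares each group's approval rate to the highest-approved group. The sketch below uses a hypothetical income_source column and the EEOC's four-fifths rule of thumb as its flag threshold; neither is a Fair Housing Act compliance standard, and Carroll did not prescribe a specific test.

```python
import pandas as pd

def adverse_impact_ratios(decisions: pd.DataFrame,
                          group_col: str = "income_source") -> pd.DataFrame:
    """Approval rate per group divided by the best-approved group's rate;
    ratios below 0.80 echo the four-fifths rule of thumb (illustrative only)."""
    rates = decisions.groupby(group_col)["approved"].mean()
    out = rates.to_frame("approval_rate")
    out["impact_ratio"] = out["approval_rate"] / rates.max()
    out["flag"] = out["impact_ratio"] < 0.80
    return out
```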
Regulators and courts are already responding. In May 2024, HUD issued guidance stating that disparate impact liability under the Fair Housing Act can extend to AI-driven tenant screening. The class-action case Louis et al. v. SafeRent et al. alleged that SafeRent’s algorithmic screening discriminated against minorities and voucher holders, and was resolved with a $2.275 million settlement. Separately, TransUnion agreed in 2023 to pay $15 million to settle claims by the Federal Trade Commission and Consumer Financial Protection Bureau over alleged issues tied to its tenant screening reports.
States are also moving to shape AI use. Colorado’s Artificial Intelligence Act (SB24-205), effective June 2026, targets high-risk AI systems and seeks to prevent algorithmic discrimination. The law requires developers to disclose known discrimination risks and their risk management policies, and directs deployers to complete annual impact assessments and inform consumers when high-risk AI is used to make significant decisions.
For renters denied on the basis of a screening report, the Fair Credit Reporting Act guarantees an adverse action notice and the right to dispute the information behind the decision. Carroll argued that best practice goes further, with simple explanations, clear instructions for submitting additional documentation and a defined path for reconsideration.
Carroll added that operators are increasingly demanding accountability from screening platforms. He cited Findigs’ contractual fraud guarantee for any approved fraudulent applications as one example of risk-sharing that is beginning to appear in RFP language. Looking ahead, he expects legal standards around due diligence to keep evolving as automated screening becomes more common. He also stressed that fraud prevention is only half of the equation; underwriting must also focus on whether approved residents will reliably pay rent, rather than simply maximizing rejections.


