Hiring algorithms, claim adjudication models, credit scoring engines: these tools sit at the center of the operational questions of the moment, from who gets a job to who qualifies for a mortgage. McKinsey's 2025 survey of nearly 2,000 organizations found that 51% of companies using AI have experienced at least one negative consequence (McKinsey, The State of AI in 2025). The most telling finding: explainability ranked among the top-reported risks but was not among the most commonly mitigated. Visibility into the core reasoning behind the tools enterprises trust for business decisions is degrading, not least in systems sold by the largest enterprise technology vendors.

The degradation takes a specific form known as "black box" deployment: an AI system absorbs decision-making authority while its reasoning stays invisible to its operators. In its most extreme version, this hardens into a "trust the algorithm" dogma. A single model output is taken as a definitive answer, rather than as one data point among many that must be held in balance.


Why This Is Not A Technical Problem

But why worry about algorithmic opacity? If AI-driven efficiency extends to more and more decisions, and the outcomes are genuinely valuable, why quibble about the reasoning? Isn't this mere technical hand-wringing?

Far from it. When algorithms go opaque, organizations lose the accountability trail behind a given automated decision. The cost is not abstract. It is legal, because these systems govern something of value. If an automated system determines whether a patient is covered, an applicant is hired, or a tenant is housed, then to lose grip on the reasoning is to lose a vital defense against outcomes that are discriminatory, inaccurate, or legally indefensible.


When The Box Was Empty

The SEC gave this problem a name in its 2024 enforcement actions: "AI-washing," vendors inflating the role of AI in their products (SEC, 2024 enforcement actions). If AI is whatever a vendor's marketing team claims, then almost any software product qualifies. The inflated label empties AI of distinctive content, making it a synonym for "automation we'd rather not explain."

Consider the insurance industry. Traditionally, claim review meant clinical evaluation by qualified physicians. Cigna, one of the largest health insurers in America, built its PxDx algorithm to handle not only billing code matching but coverage eligibility determinations, and even blanket denial recommendations. The company defended this by calling the system "physician-reviewed" (Cigna PxDx algorithm litigation).

A federal court disagreed. The algorithm denied over 300,000 claims in two months, with physicians spending an average of 1.2 seconds on each case: roughly 100 physician-hours, in total, for 300,000 coverage decisions. What had been a medical judgment was now a rubber stamp.

Automated screening, however, offers the starkest illustration of black box deployment. AI hiring and housing algorithms have come to play the role of "objective gatekeeper," purporting to offer a total assessment of human worth.

Workday, the HR technology company, now faces a nationwide class action after acknowledging that 1.1 billion applications were rejected through its AI screening tools (Workday class action). The Department of Justice, filing a statement of interest in a separate housing discrimination case against SafeRent Solutions, made the legal stakes explicit: disparate-impact liability under the Fair Housing Act of 1968 extends to algorithmic screening. But equally disturbing is the flipside of this proliferation. Automated judgments have historically been valued because they are regarded as rigorous assessments, ones that may not be overridden merely on grounds of volume or speed. Under the opaque model, an algorithmic score is just a number processed at scale without human review. Consequential decisions become computational outputs.
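Disparate impact is not an abstract standard. The conventional first screen, the EEOC's four-fifths rule, fits in a few lines of Python. This is a minimal sketch, not an implementation from any filing; the function names and toy data are illustrative:

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_screen(decisions, threshold=0.8):
    """Compare each group's selection rate to the highest group's.
    A ratio below the 80% threshold flags potential disparate impact."""
    rates = selection_rates(decisions)
    top = max(rates.values()) or 1.0  # guard against all-zero rates
    return {g: {"ratio": rate / top, "flagged": rate / top < threshold}
            for g, rate in rates.items()}

# Toy screening outcomes: 60% of Group A selected, 35% of Group B.
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 35 + [("B", False)] * 65)
print(four_fifths_screen(outcomes))
# Group B's ratio is 0.35 / 0.60, about 0.58: flagged.
```

The uncomfortable implication for opaque deployments is that a screen this simple can be run by plaintiffs' counsel on outcome data alone. No access to the model is required, so "we can't explain the model" is no shield against the numbers it produces.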


The Regulatory Response Has Limits

A backlash against algorithmic opacity has already set in. The EU AI Act mandates that high-risk AI systems be "sufficiently transparent" to enable human interpretation, with penalties under the Act reaching 35 million euros or 7% of global turnover, whichever is greater (EU AI Act, 2024).

Yet transparency alone is not enough. A peer-reviewed study in Information Systems Research found that AI explanations can introduce asymmetric confirmation bias: users reinforce beliefs the explanations confirm but do not abandon beliefs the explanations contradict.

Key insight

Full autonomy is an anti-goal. No single algorithm provides all the answers. Multiple checks are essential: audit trails, qualified human reviewers, third-party testing, all held in productive tension.

The remedy is governance discipline. Organizations need to be clear about what their AI systems decide and what they don't. The temptation to deploy faster than you can explain should be resisted at every level.
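What that clarity can look like in practice: a minimal decision-record sketch in Python. The schema and the thresholds here are illustrative assumptions, not any vendor's API; the point is that every automated decision carries its own accountability trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable automated decision: what was decided, by which
    model version, and whether a qualified human signed off."""
    case_id: str
    model_version: str
    inputs_digest: str             # hash of input features, not raw PII
    output: str                    # e.g. "approve", "deny", "refer"
    score: float
    human_reviewer: Optional[str]  # None means no human review occurred
    review_seconds: Optional[float]
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def needs_escalation(rec: DecisionRecord,
                     min_review_seconds: float = 30.0) -> bool:
    """Flag denials that had no human review, or a review too brief
    to be meaningful (recall the 1.2-second average in the Cigna case)."""
    if rec.output != "deny":
        return False
    if rec.human_reviewer is None:
        return True
    return (rec.review_seconds or 0.0) < min_review_seconds
```

A record like this does not make the model interpretable, but it makes the decision defensible: it shows who or what decided, under which model version, and whether the human review was real or a 1.2-second rubber stamp.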

This requires humility, and it pays off. Gartner's 2026 research found that organizations with structured AI governance are 3.4 times more likely to achieve high effectiveness than those relying on ad hoc processes (Gartner, 2026).

In a time of rapid adoption — whether in healthcare, financial services, or hiring — governance is more important than ever. Enterprises cannot afford to let their most consequential automated decisions drift beyond accountability. Keep them auditable, defensible, and ready for the scrutiny they will inevitably face.