What the Good, the Bad, and the Ugly of AI Means for Consumer Banking
Andy Kessler’s recent piece in The Wall Street Journal, “The Good, the Bad, and the Ugly of AI,” is an excellent, concise assessment of where artificial intelligence actually stands today. It captures the reality most executives are sensing but struggling to articulate: AI is getting dramatically more capable, it still makes consequential mistakes, and the expectations surrounding it are racing far ahead of what organizations can responsibly operationalize.
That framing is especially relevant for consumer banking.
AI’s “good” is undeniable. Systems that can score in the top percentiles of standardized exams, pass professional certifications, and absorb centuries of written knowledge are a genuine technological leap. Enterprises are already capturing value, particularly in customer service and software development, where AI is handling a meaningful share of interactions and accelerating productivity.
The “bad” is equally clear. These systems hallucinate. They are inconsistent from one query to the next. They lack judgment and accountability. And when mistakes occur in outward-facing contexts, the consequences are not theoretical. Brands are tarnished. Customers lose trust. Liability becomes real.
For banks, that combination of power and fragility is not a side note. It is the central issue.
Consumer banking operates in an environment where errors are asymmetric. A single bad interaction, misleading response, or poorly governed automation can outweigh dozens of quiet successes. Regulatory scrutiny, data sensitivity, and customer expectations dramatically raise the bar for what “working” actually means.
This is where many AI initiatives quietly break down.
The industry conversation often defaults to familiar guidance: be disciplined, test carefully, start small. While directionally correct, that advice only scratches the surface for consumer banks. The challenge is not simply whether to test AI. It is how to evaluate it in environments complex enough to reflect real banking conditions without exposing customers or operations to undue risk.
AI is not failing in banks because teams lack ambition or rigor. It is failing because most organizations lack structured environments that sit between experimentation and production. Sandboxes are too artificial. Pilots are too loosely scoped. Success metrics are ambiguous. Learning is noisy. As a result, institutions either stall out or overcorrect, scaling initiatives before they are truly understood.
Kessler’s discussion of the “ugly” side of AI reinforces this point. Today’s models rely on brute-force economics: massive compute, energy, and memory. Costs are volatile. Efficiency gains are coming, but not evenly or predictably. This makes long-term bets difficult to justify without clear evidence of value.
For consumer banks, this reality demands a different posture toward AI. Not slower adoption, but more intentional progression. Not generic experimentation, but controlled learning tied directly to customer behavior, economics, and operational feasibility.
This is where a more specialized approach becomes necessary.
AI in banking cannot be evaluated purely as a technology problem or a process exercise. It must be examined as a system-level capability, one that intersects customer experience, risk, compliance, data access, and organizational readiness. That requires environments designed specifically to surface signal, constrain failure modes, and translate insight into decisions leaders can stand behind.
The institutions that will ultimately win with AI will not be those that deploy the most tools or generate the most pilots. They will be the ones that create repeatable ways to determine, with confidence, where AI creates lift, where it introduces unacceptable risk, and where it simply does not belong.
The WSJ article makes clear that the future of AI will not be a straight line upward. For consumer banking, that zigzag is not something to fear, but it must be navigated deliberately.
In an era where intelligence is abundant but trust is scarce, evidence, structure, and control are what separate momentum from missteps.
About PilotLaunch.AI
PilotLaunch.AI is a strategy-led advisory that helps consumer banks modernize customer experience and AI adoption through structured, controlled experimentation, supported by proprietary methodologies and purpose-built technology. We work with bank teams to define high-value use cases, establish clear guardrails and success metrics, and stand up disciplined pilot environments that turn ambition into evidence and evidence into production-ready outcomes.