
Want to fail at AI? Instructive case studies & AI’s ‘last mile problem’

Brian Sommer, April 1, 2026
Summary:
Ever wonder why your firm’s AI initiatives don’t seem to be going anywhere? Your firm might not have the right people, development methods, plans, controls, etc. to mount an effective AI rollout. In spite of decades of prior IT experience, it may be winging its way through AI. Caution: some project descriptions may be disturbing to sensitive readers.


I spent an hour with a top finance leader discussing their firm’s AI program. The conversation was a master class in what NOT to do regarding AI. Even worse is the knowledge that this is a publicly traded firm that can’t afford the associated negative press, regulatory/compliance failures, etc. that might get spawned from this effort.

Since that call, I spoke with another manager who echoed many of the same AI concerns at his firm. But this conversation was clearer still about the last mile issues with implementing AI in many firms today.


Finally, I was at a recent software vendor event where a key executive made a number of cautionary statements about the right & wrong way AI should be deployed in firms. He spoke at length re: testing, compliance, guardrails, data hygiene and other relevant matters.

What is happening now is that the period of overexuberance regarding AI must yield to a more careful, nuanced and enlightened way to approach AI implementations. But before that happens, we’ll likely see some firms face some disappointing AI rollouts, with some being nigh onto catastrophic. The AI-in-business market is due for a correction. How will this occur?

Case study #1 - How not to do AI

Here are the lowlights from one company’s AI adventure:

  • All employees, without exception, are required to use AI in their jobs, develop prompts, create AI agents and provide lists of potential AI use cases for top executive management. Non-compliance is not an option.
  • The company is mandating that all employees create AI applications even though none have been trained in AI or apps development.
  • Executives are not addressing the concerns that employees have re: AI. Some employees have large AI data centers being built in their hometowns. Electricity rates are already increasing for those employees, raising their living costs and eroding their standard of living. Likewise, employees may have concerns about water availability and quality due to data centers in their towns. This employer has been silent on what sustainability requirements its AI provider(s) should possess and what the firm itself will do to minimize environmental impact/harm.
  • The company does not have a visible and working reskilling program for employees whose positions may be in jeopardy due to AI. Employees struggle to understand why they should use AI to automate away their current job when the company is not preparing a new job future for them.
  • Employees appear to be sharing confidential company or PII (personally identifiable information) data with a consumer grade LLM. No one currently knows what data and how much data has been exposed to public AI tools or what the long-term consequences of this will be.
  • There is no review process to ensure AI apps/agents/utilities possess adequate and complete controls, guardrails and human oversight.
  • No audit trails exist for AI-enabled transaction processing functionality.
  • No project development discipline (e.g., no steering committee, no ROI or business case documentation, no test plans, no high-volume testing, etc.) is present.
  • Employees are rolling out untested apps into production.
  • No documentation exists regarding the existing processes/workflows, benchmarks and performance metrics of current systems. Likewise, documented future workflows/to-be processes, et al., are absent and/or rarely created. How will anyone know if they are working on the right AI initiatives (e.g., the ones creating the greatest ROI, business upside, customer satisfaction improvements, improved quality of hires, etc.) if these data elements are not present? Likewise, how can the firm assess the effectiveness and ROI of the new AI capabilities?
  • The firm’s controls, pre-AI, are atrocious. What will happen as more work is cut over to AI? Finance personnel already admit that the firm, today, is paying AI-generated fake invoices and is receiving numerous jobseeker resumes full of AI-generated content. Strangely enough, no one is currently working on AI solutions to stop fraud.
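For readers wondering what even a first step toward the missing controls might look like: the sketch below is not from either firm described here. It is a hypothetical Python illustration of an audit trail plus a crude guardrail check wrapped around any model call. The function names and the marker-based PII check are assumptions for illustration only; a real deployment would use a proper DLP/PII scanner and an append-only log store, not an in-memory list.

```python
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = []  # illustrative only; in practice, an append-only, tamper-evident store


def passes_guardrail(text):
    """Crude placeholder guardrail: block prompts with obvious PII markers.
    A real firm would use a dedicated PII/DLP scanner here."""
    blocked_markers = ["ssn:", "password:"]
    return not any(marker in text.lower() for marker in blocked_markers)


def audited_ai_call(user, prompt, model_fn):
    """Wrap any model call so every request is gated and logged.

    Hashes (rather than raw text) are stored so the log itself
    doesn't become another place confidential data leaks."""
    entry = {
        "user": user,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    if not passes_guardrail(prompt):
        entry["outcome"] = "blocked"
        AUDIT_LOG.append(entry)
        raise ValueError("prompt blocked by guardrail")
    response = model_fn(prompt)
    entry["outcome"] = "allowed"
    entry["response_hash"] = hashlib.sha256(response.encode()).hexdigest()
    AUDIT_LOG.append(entry)
    return response


# Usage with a stand-in "model" function:
reply = audited_ai_call("jdoe", "Summarize Q3 shipping delays", lambda p: "summary: ...")
```

Even a minimal wrapper like this gives reviewers something to audit; its absence in the case-study firm means no one can say what data left the building or why.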

The preceding isn’t exactly a listing of best practices.  Is this an aberration or will other companies follow suit?

Case study #2 - it gets worse 

The second manager (different firm) stated many of the same concerns above. His commentary focused more on 'last mile issues'. Specifically, this leader noted that:

  • He also has to develop AI tools to improve his and his department’s productivity.
  • He has taken Python programming classes to create code (a new experience for him).
  • The AI utility he recently built will help better manage production orders and scheduling.
  • Unfortunately, he can’t get any help from internal IT to connect this AI tool (built via Grok) to their custom ERP software.
  • Worse still, no other non-IT personnel at this firm can connect their AI projects to production systems. The 'last mile' cannot be completed at this firm.
  • That may actually be a good thing as user created AI tools are not being tested (unit, string, high volume or other test cycles), not getting integrated to appropriate systems, and, not in possession of needed guardrails, controls, etc. This may be one of those rare cases where a dysfunctional firm is actually sparing itself from large potential problems.
  • 'Last-mile problems' are not new. Early internet applications struggled as users often could not connect to the internet when they were not at work. Users had to get access to a land-line phone, use a modem to dial into a server and then suffer exceedingly slow transmission rates (e.g., 300 bps). Many homes that couldn’t get cable television had to wait years for satellite technology, and even that is still quite pricey. The tech landscape is full of stories where a new solution category doesn’t fully take off because the “whole product” isn’t quite complete. If the solution is missing key steps, players or other components, then market uptake will struggle.

What these interviews suggest is that companies have not acquired or assembled the whole AI product for their staff to utilize. If I’m assembling a car, I need all the parts otherwise all I have is a large, heavy, immovable reminder of my project.

What is going on?

These incidents indicate that one or more issues may be afoot. These issues may include: 

  • Rapid code development doesn’t mean that implementation efforts will be proportionally quicker, too. AI technologies can help create software code quickly, but not every step in a development and application rollout will see similar time, labor and cost improvements. As a result, firms must ensure that AI projects have the same oversight (i.e., testing, management reviews, planning, etc.) as traditional IT efforts.
  • Giving powerful tools to the untrained doesn’t mean that great results will follow. I wouldn’t turn over a Formula One race car to a ten-year-old, as they lack the training, perspective and other knowledge to operate it safely. Why should any business assume that non-programmers will make great AI developers? It’s not logical.
  • Assuming the new tech will work like the old tech isn’t necessarily correct. AI brings new issues to the code world: hallucinations, probabilistic outcomes, etc. All of these and more need to be dealt with in a comprehensive and thorough manner.
  • Not all citizen AI developers are the same. Some might be able to get great things done and others much less so. While I’m a fan of democratizing technology, not all citizen AI developed solutions are great or trustworthy. For example, if I need a heart surgeon, I want the best, most experienced one there is. I would not want someone who just completed a YouTube course on agentic-AI. Overall, your best AI developers will be specialists or experts not accountants who do some part-time or hobby AI coding. Interestingly, my clients also fuss over the resumes of people on their IT projects. If clients want the best and brightest for their projects, shouldn’t your AI efforts get the same?
  • Ignoring decades of past experience in rolling out successful IT projects may not trigger great results. All those tasks like change management, documentation, training, testing, etc. were there for a reason, and AI hasn’t eliminated those reasons.
  • Exactitude is giving way to the “close enough is good enough” ethos. Close enough works for some AI outcomes like meeting scheduling, internal document summarization and some modeling/forecasting efforts. It won’t work for matters involving regulatory, tax, compliance, accounting and other issues. Firms that ignore this end up in the headlines. Several law firms have been embarrassed publicly and/or faced sanctions for relying on AI to provide relevant case citations. Unfortunately, those citations were AI hallucinations. Deploying AI in such circumstances without a human in the middle of the AI-powered process may not be correct.

Neither interviewee waded into a discussion about the long tail of these AI ‘solutions’. The long tail refers to a situation where the initial software acquisition costs are often dwarfed by the long-term support costs to patch, upgrade, re-test, etc. those solutions. User-built AI tools are a classic long-tail situation.

If your firm is developing a number of AI tools, be sure to have answers to the following:
  • Who supports the AI solution when/if the original developer leaves the firm or gets promoted or transferred to a very remote or different part of the firm?
  • If the solution utilizes a third-party LLM or other AI tool, whose responsibility is it to upgrade the AI solutions in the affected apps, do needed regression testing, etc.?
  • If other applications are connected to the user developed tool, whose job is it to maintain, upgrade and retest any integrations?
  • As these AI applications/agents/etc. age, who will replace, renovate and/or upgrade them? How will the firm avoid acquiring more technical debt?
  • Where is the IT budget and staffing to help integrate, test, protect and maintain these AI generated tools?

Change management is a serious matter with IT projects, and the way it’s being ignored/skipped by some firms for their AI initiatives is troubling. The biggest change challenge isn’t even technical – it’s quite personal to some workers. The key change challenge appears to be failing to address the long-term career/employment consequences for those workers whose job responsibilities will be materially altered or eliminated altogether. If an AI project leader is not proactively and frequently addressing these employment and career concerns with those impacted by the new AI capabilities, then they aren’t doing their job. If AI is taking away someone’s livelihood, of course those persons will be upset. A comprehensive change plan should be part of most every AI initiative.

How the board will see these firms

If I were on the board of these firms, I’d have numerous questions and concerns:

  • If this is how poorly these AI implementations have been planned and structured, then shouldn’t the entire management team be replaced? Could you ever trust these ‘leaders’ to actually implement any big transformative effort well? I sure couldn’t. Face it, folks: now that it’s 2026, what manager doesn’t know the basics of transformational project work (e.g., a steering committee, business case, a program management office, change management, etc.)? Can there be any justification for this kind of professional malfeasance?
  • Are there any in-process AI developments that should be paused until all proposed and started initiatives have been ranked and prioritized based on their projected economic upside, their risk profile and how these risks are getting mitigated?
  • What long-term support, integration, staffing, etc. costs will these initiatives require?

A board member, of course, would want to see the firm more productive, more efficient and more competitive. But no board member would be amenable to the firm:

  • taking unnecessary/unprotected risks.
  • spending capital without financial rigor.
  • jeopardizing its reputation.

My take

 The lack of planning, budgeting, change management, etc. these companies exhibited is akin to managerial/executive malfeasance. Nothing I heard rose to the level of a best practice. If anything, these companies exemplify all that shouldn’t be done in the name of an AI change program. 
The overarching problem is a severe lack of leadership from the company’s executives. There is no one taking responsibility for outcomes, value realization, change, people’s careers, program management, etc. 

Significant change is rarely automagical. Executives could wish all they want for AI to materially automate jobs but they would have to do the work to get this outcome. As we say in Texas: “wanting and getting are two different things”. 
Asking people to voluntarily help train outsourcers or AI to replace their job has never been good for morale, culture, retention, etc. Why executives think this is a good idea now simply escapes me. Anyone with a modicum of empathy can see how divisive and problematic this is.

Offering stay pay has been a key tool in retaining talent during acquisitions and outsourcing deals – Why aren’t companies using this now when they are asking employees to help eliminate their own jobs via AI?

The lack of business cases, ROI calculations, etc. is disturbing. Recent studies have found large numbers of AI projects delivering little or no measurable return on investment. This suggests that many of these projects were: experiments/pilots; poorly built; or, should have never been launched. 
Lessons learned from IT projects and the methods/practices/etc. that have worked so well in prior decades are still relevant and viable with AI projects. The use of AI doesn’t mean that the outputs are 100% correct, free of bias, free of hallucinations, have built-in safeguards, etc. Nope. Human oversight doesn’t go away just because the project is using a generative/algorithmic/agentic AI tool. 
You can’t wish away fraud, malfeasance and inexactitude. Businesses need a master plan and business case for their planned AI efforts. Even in an AI world, constraints like time, personnel, budget, etc. exist, and individual projects must meet specific financial, timeline and staffing hurdles. One of the case study firms had no focus re: its AI development projects. It already faces a clear and present financial and reputational danger with citizen-AI tools creating fraudulent transactions, synthetic job seekers and other undesirable threats. Why isn’t stopping these threats a priority?

My last word on this is that a market correction is overdue here. There has been a lot of market euphoria over the last couple of years about algorithmic, agentic and generative AI tools, and executives everywhere want to get their firms moving on AI initiatives. While that’s an understandable desire, it doesn’t mean firms should chuck their common sense.

We will start to see the bloom fade on this AI rose shortly.  The correction will come. Corrections happen all the time after a new technology appears.  As that happens, companies will cease the riskier and poorly thought-out AI initiatives. Prioritization will become important as real-world resource constraints get new attention. IT will want a larger say in what will likely become a pain point and work generator for them. 

Companies can wait until the correction occurs before they get their AI game plans in order. Or, they can get ahead of things and implement better AI development and rollout processes now. The smart firms will, I hope, start now.
 
