A little bit of history repeating? Déjà vu lessons for AI code development
- Summary:
- AI hype has triggered millions of people and many corporations to dabble in AI, with some launching full-blown applications. Could these innovators benefit from hard lessons learned in previous waves of tech innovation? Hang on for some parallels to the rapid overreach of spreadsheet usage and how it made a mess of things. Will you heed history’s lessons?
In a recent podcast with Jon Reed, I made a brief comparison to the problems low-cost spreadsheets triggered years ago and how I was seeing parallels to new AI deployment/development tools today. Here’s the full analysis on this observation and the implications for firms everywhere.
AI providers, application software vendors and others are pitching the process automation, code generation and other capabilities of their AI-powered products. Their tools can create some incredible outputs, quickly. For example, they can:
- Create highly summarized documents
- Narrate financial statements and other long, complex documents
- Generate software code quickly
- Test & troubleshoot code, possibly better than many IT people
- Integrate a new AI-built app to a firm’s existing application software
- Generate applications from videos (I’ve actually seen this happen)
- String together multiple agents into processing long workflows
- Etc.
The code generation capabilities are intriguing and often capture a lot of software customer interest. The WOW factor on this gets people excited – really excited – but, maybe we should look to the past at previous WOW moments in IT and see if a more considered, premeditated approach to AI is appropriate.
One way to understand this new phenomenon is to look at a previous innovation and what lessons it can impart. For that, let’s look at the changes, challenges and opportunities that a workaday technology called the spreadsheet wrought.
When spreadsheets dramatically impacted business apps
In the late 1980s and early 1990s, transaction systems (e.g., ERP, Financials, Payroll, etc.) began creating output files of transaction data, summary data and more. This exported data (e.g., via comma-delimited file formats) became feedstock for PC-based spreadsheet software.
The use of spreadsheets exploded at this time. The principal reasons for this included:
- The extremely low cost and ubiquity of spreadsheet software. This software, often sold as part of a suite, was inexpensive to purchase and use. Everyone, it seemed, was a user of Microsoft Excel or Lotus 1-2-3.
- Users of the tools needed little training as the user interface resembled familiar paper-based spreadsheets.
- Some users became ‘super-users’ after they learned a few powerful capabilities (e.g., macros).
- Many spreadsheet “solutions” could be built and used within minutes.
- Users could fail (and fail often) in building a new “solution” and suffer few or no real consequences.
- Users did not need help from IT to purchase and install this software.
- The same software used at work could also be installed on one’s home computer.
- Spreadsheets could be easily shared with others via email attachments.
- Alternative tools (e.g., database applications) were often more expensive to acquire, required a relational database (RDBMS) and server, needed skilled SQL programmers, might need expensive integrations, etc.
In fact, the popularity of spreadsheets triggered the coining of a new term: sneaker-net. This described placing a spreadsheet on electronic media (e.g., a diskette or thumb-drive) and walking it over to a new user or computer.
But all of this market acceptance didn’t necessarily mean that everyone who built and/or used these new spreadsheet ‘solutions’ did so well. In fact, spreadsheets often produced a number of unintended and adverse consequences. These included:
- Users often justified spreadsheets as they were quick to generate and didn’t require input from IT. Quality issues, risk issues, security concerns, etc. were often ignored.
- Huge numbers of standalone spreadsheets appeared in almost every firm. Spreadsheet reports with over 200K rows or columns were in use at some firms. One client had over 100K ‘active’ spreadsheets in use, by their own accounting. When business process reengineering emerged years later, the elimination of spreadsheets became a frequently targeted area for improvement.
- Spreadsheets often existed in a shadow IT environment. These weren’t tightly integrated with production systems and, once created, rarely were plans/budgets/etc. set aside to maintain these ‘systems’.
- Spreadsheets often required manual data entry and/or data rekeying. Along with this, errors were frequently introduced into the data.
- Little documentation for these “systems” existed.
- Data within spreadsheets could quickly become dated/outdated. Version control issues blossomed. Two people building the same tool often got differing results due to using different techniques to capture data (e.g., one person used a range of values to calculate a sum while another used specific cell values) in creating the spreadsheets.
How bad did it get?
- Accuracy wasn’t always a priority with some spreadsheets only needing to be “close enough”.
- Spreadsheets usually lacked controls & audit trails. Many didn’t show who changed what cells or when these ‘adjustments’ occurred. New rows and columns could be added while existing ones could be removed with no approval needed.
- Executives often wasted time arguing over whose spreadsheet was more accurate.
Spreadsheet users failed to realize some KEY implications, consequences and business requirements. Specifically:
- Using an inexpensive ‘tool’ like a spreadsheet as a proxy for an ‘application’ was often a mistake. Spreadsheet technology was not designed to create, edit or process transactions. Core application software had the edits, controls, security and audit trails that one must have in their business. Spreadsheets were not designed for this use case.
- Citizen developers don’t necessarily build solutions with the standards, controls, security, etc. that are needed and may not have the requisite skills, IP, etc. to do this well.
- Spreadsheet developers may not have understood the long-term implications of updating, upgrading and maintaining these spreadsheets. Software companies know and understand the long-tail cost, staffing and maintenance needs of application software (and these can be multi-decade in length). Where is the budget for this activity?
- When creating a spreadsheet to be an ‘application’, some components may have been left out in the short-term. Nonetheless, these omissions (e.g., a lack of audit trails) represent a type of technical debt that has to be rectified at some point.
Enter AI
Returning to the current time, individuals, employees and employers are experimenting with new AI tools. These tools can be used to create:
- Source code
- Net-new applications
- Test scripts
- Videos
- Graphics
- And many, many more outputs
Users will doubtlessly be impressed with the speed and power these new AI tools possess. They will want to use these new tools immediately to:
- Create reports, dashboards, and other outputs fast
- Technically refresh old application software (e.g., move old on-premises custom apps to a cloud platform)
- Build new solutions including the latest headless agentic apps that utilize long strings of agents
- Automate workflows
- Create headless process solutions
- Etc.
But in their zeal to create new applications or ‘solutions’, users may be tempted to commit many of the same mistakes that occurred when spreadsheets first appeared. The expression “the road to hell is paved with good intentions” comes to mind here.
Early AI users may already be:
- Acquiring and using AI tools without IT involvement. Your firm could already be exposing crucial IP (intellectual property), PII (personally identifiable information), trade secrets and more if lone-wolf AI users are tapping general AI tools and LLMs without adequate training and safeguards. “Just because users can get AI tools for free or low cost, doesn’t necessarily mean they should”.
- Using AI to build ‘solutions’ because AI can create them quickly and cheaply (low cost) and because anyone can summon AI tools from a browser (ubiquity). However, these solutions may be far from complete.
- Using AI tools as they require minimal to no training. Some AI tools (e.g., chatbot) need only a prompt line.
- Becoming overconfident in their abilities and assuming that a few modest successes they’ve achieved with simple AI capabilities will also apply to more complex workflows, agent designs, etc.
- Confusing speed with completeness. Creating something with AI doesn’t necessarily mean the end product will do everything it should or needs to do.
- Building quick-fix AI solutions that lack critical audit trails, controls, guardrails, etc. may feel good in the short-term but create problems long-term.
- Releasing/Using new AI solutions without extensive testing and exposing the company to reputational, economic and other harm.
- Creating too many AI ‘solutions’ that the company lacks personnel, budget, etc. to support long-term.
But unlike the world of spreadsheets decades ago, the AI world has its own peculiarities. To begin with, AI is probabilistic, not deterministic. That means a process workflow or chatbot might respond flawlessly a high percentage of the time but return non-standard or unexpected data/results in other situations. Traditional software will return the same results time after time if the input data is unchanged.
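The probabilistic/deterministic distinction can be illustrated with a toy sketch in plain Python. This is not a real AI API; the ‘model’ below is just a random phrasing picker, an assumption made purely for illustration. The point is that a traditional function returns identical results for identical inputs, while a sampling-based one may not:

```python
import random

def deterministic_total(amounts):
    # Traditional software: same inputs always produce the same output.
    return sum(amounts)

def probabilistic_summary(amounts, seed=None):
    # Toy stand-in for an AI model: sampling means repeated calls with
    # identical inputs can produce differently worded outputs.
    rng = random.Random(seed)
    phrasing = rng.choice(["Total spend", "Spend, in total", "Overall spend"])
    return f"{phrasing}: {sum(amounts)}"

amounts = [120, 80, 300]
# Identical inputs, identical output, every time:
assert deterministic_total(amounts) == deterministic_total(amounts)
# probabilistic_summary(amounts) may word its answer differently on each call.
```

The underlying figure may still be right, but the form of the output varies, which is exactly why downstream systems and reviewers can’t treat AI output like traditional program output.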
Likewise, AI systems can hallucinate responses. As a consequence, AI solutions must have guardrails, human/expert review and other oversight to ensure the right responses are created each and every time. Some AI experimenters today claim this isn’t always needed. That’s the ‘close enough is good enough’ excuse. But malfunctioning or poorly functioning AI tools can produce interesting and/or negative consequences. In a recent Fortune article, one AI user noted how she had tested her new AI tool, but it failed to work correctly when put into production:
Yue described how her OpenClaw autonomous AI agents—built to run locally on a Mac minicomputer—deleted her entire inbox, ignoring instructions to pause and ask for confirmation first.
Other aberrant AI behaviors have also been detected. One I reported on a year ago involved an auto dealer who constructed an AI-powered app that helped car shoppers choose the right vehicle for them. Unfortunately for the dealership, it often recommended car/truck buyers acquire a vehicle from their local competitor.
Testing, human review and guardrails are the minimum requirements for deploying AI broadly.
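As a sketch of what ‘guardrails plus human review’ can mean in practice, here is a minimal, hypothetical approval gate in Python. The action names, allow-list and logging below are illustrative assumptions, not any vendor’s API. The idea is simply that destructive actions proposed by an agent are blocked until a named human approves them:

```python
# Hypothetical guardrail sketch: AI-proposed actions are checked against a
# list of destructive action types and blocked until a human signs off.
DESTRUCTIVE = {"delete", "send", "transfer"}

def review_gate(action, target, approved_by=None):
    """Return True only if the action is low-risk or a human has approved it."""
    if action not in DESTRUCTIVE:
        return True                  # low-risk actions pass through
    if approved_by:
        # Minimal audit trail: record who approved what, and on which target.
        print(f"{action} on {target} approved by {approved_by}")
        return True
    return False                     # block and wait for a reviewer

assert review_gate("summarize", "inbox") is True
assert review_gate("delete", "inbox") is False   # blocked without approval
assert review_gate("delete", "inbox", approved_by="ops") is True
```

A gate this simple would have stopped the inbox-deletion incident above: the ‘delete’ action would have been held until a person confirmed it, and the approval itself would have left an audit trail.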
My take
If you didn’t like the comparisons to spreadsheets, that isn’t the only parallel your firm can examine. In the past, companies have gone ga-ga for best-of-breed applications, departmental apps, country-specific apps and other solutions. The tech world is littered with examples where firms and users have rushed headlong into a period of technological over-exuberance. It’s happening again now with AI. Great companies harness this energy and direct it in the best paths and manner. Are you doing that with AI?
If we’ve learned anything over the last several decades re: technology, better things accrue to those firms that take care to research, design, plan, implement and deploy new solutions well. Rushed, incomplete, independent and potentially redundant efforts can really pile on the expensive, embarrassing regrets fast.
So far, there’s been little discussion as to the impact that all of this AI, especially shadow AI, will have on your overtaxed and possibly understaffed IT organization. They can’t easily add support for new AI apps without impacting all of the other production systems they support, your networks, your cellular systems, databases, etc. Find a way to make your growing AI needs fit within their never-ending support efforts.
In fact, I suspect your firm will likely create scores of AI tools, applications, workflows, etc. with only a few of them becoming outstanding successes. Another group of them may demonstrate value for several years while a large number of these might be short-lived, quickly abandoned or rendered obsolete by better performing and reliable solutions in the long term. That certainly happened with a lot of spreadsheets. Yes, every technology has a useful life but hot, trendy tech can have the lifespan of a fruit fly. Spend your time and capital wisely and make every AI effort one that truly matters.
It’s okay for people to experiment with AI and maybe create some limited personal or work convenience utilities. One best practice, though, is key: a second-party review. Nothing should get built or rolled out until another person has inspected the solution and verified its value proposition, business case, safeties and controls. Further, that second party should ensure a support budget (and maybe staff) exists for the solution long-term.
Finally, remember that AI is just the newest shiny bauble to hit the technology shelves. Learn what did/didn’t work in previous waves of innovation and avoid as many of the pitfalls as you can. Good luck….