For most of my working life the least controversial question in enterprise IT has been:
What solution shall we buy?
Building bespoke systems fell out of fashion because of high costs, interminable delays, and ongoing maintenance headaches — something nobody wanted to suffer unless the system in question was truly unique. So if you could find an application that kind of worked, you simply bought it and reshuffled your work around whatever it did — an orthodoxy that became even more entrenched with the rise of easy-to-purchase SaaS applications.
But that orthodoxy has started to wobble as a growing army of ‘vibe coders’, fresh from their YouTube tutorials, blithely ask:
Why don’t we just build it?
This enterprise heresy is in large part driven by the sudden spike in interest in coding agents, and the assumption that you can now replicate any SaaS application with little more than an afternoon of casual prompting.
But the democratization of software development didn’t start with vibe coders and coding agents.
Instead, it began almost two decades ago with low-code tooling — arguably the daddy of vibe coding — which has spent the intervening years developing the discipline required to build reasonably complex systems at scale.
In that sense, to misquote William Gibson, the future has been here for quite a while — it’s just been unevenly distributed across IT departments.
Blaine Carter, CIO of executive training provider FranklinCovey, is one man who has been living on the frontier of that uneven future for almost a decade. After adopting the low-code platform Make for integration, he quickly realized that it had changed the economics of software development — and flipped the question to prioritize building over buying.
What had once been heretical became the new orthodoxy.
And so with AI hype spreading this new gospel to a broader range of potential converts — and potentially upending decades of settled IT buying behavior — I wanted to understand what happens when a company shifts from ‘buy’ to ‘build’ — and how on earth Carter got there.
Return to Binder — Status Unknown
Carter explains that this pivot from ‘buy’ to ‘build’ as the default posture didn’t happen overnight. Instead it emerged through a number of practical experiences that demonstrated the automation potential offered by the Make platform.
The most pivotal of these experiences — the one that made clear building was not only possible, but often preferable — was the implementation of a system for the FranklinCovey finance team: one that reduced workloads, avoided $50k a year in software spend, and secured an unsolicited reduction in external audit fees.
And as Carter explains — it started with a binder:
The project was called ‘the fiscal binder’ — and when I say ‘binder’, I literally mean a big three-inch binder full of evidence around financial metrics. Every month that binder would be passed around the 10-person team — person one would have it, literally print out all of this stuff, put it in the binder. Pass it off to the next person. It would take about 30 days to print it and go through verification. And then we have a room full of binders.
Unsurprisingly, Carter recalls, the team started to think this might not be a very good use of their time — and so they came to him to talk about possible solutions:
They said, ‘There's software we can buy that does much more than we need it to do, but we’re willing to pay the $50k a year to have this be digital because the process is so cumbersome.’
It was at this moment that Carter started to feel like there should be a better way. In his view it made no sense that the finance team would have to drop $50k every year on a software solution they didn’t need, just to gain access to a fraction of its features. And so he decided to take his first step into the build heresy and see what he could do with low-code. He explains:
With no money and a few hours with the business leader, we were able to implement it. We integrated the reports out of our ERP, had Make do the legwork of automatically creating a template every month for people to provide the evidence — and even provide the evidence where it can. Then each person is notified to upload their data and digitally sign it off. It just takes a few days now because we don't have to pass the binder around.
Carter explains that the system is now in its second year of operation and has proven both sustainable and effective — with $100k of avoided software spend already banked. But he goes on to explain that process simplification delivered further savings — ones that came from a quarter they didn’t expect:
Our external auditor came to us and said, ‘We think what you're doing is fantastic — and we're willing to lower our rates because we can do this much quicker.’ When does your external auditor come and say they can take $10k off their fees?
For Carter, however, the binder project was only the beginning.
Because while low-code enabled a shift in the organization’s approach to software systems, the arrival of generative AI opened up something even bigger — a shift in the organization’s approach to the work those systems were built to enable.
Love My Tender — Make it True
The finance system was a major step in Carter’s pivot from assuming solutions would be purchased to assuming they would be built.
But with the emergence of generative AI, Carter found himself asking another heretical question — if low-code automation could replace transactional software systems, how much further could low-code solutions augmented with AI go in absorbing and reshaping the human work those systems were designed to support?
The answer, he realized, was pretty far — and it emerged while looking at RFP fulfilment. He explains:
FranklinCovey responds to a lot of government contracts, state contracts, large companies, and so on. And we had bought a very specific software solution to help with that. But I said, hey, let’s at least look at what we can do with automation and AI.
To explore this, Carter says his team built a knowledge base using historic RFPs and instructed an LLM to use it to support tender creation. The system absorbed much of the boilerplate work so people could focus on the most critical parts of each response. He goes on:
The first benefit is that the AI is able to validate whether we are a good fit for a particular RFP — we can triage RFPs at a very rapid rate without having a person read through them, because those resources have expert knowledge and are very expensive.
But that triage step was only the beginning. Carter says:
Then there are usually about 10% of questions that are ‘golden questions’, or what the RFP is really about. The rest is boilerplate — like corporate governance, do you have insurance etc — that can be generated by the AI. So we programmed the AI to look specifically for that 10% of golden questions and highlight them to experts who can spend more time being verbose on those.
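The two-stage pipeline Carter describes (triage fit against a knowledge base, then separate the golden questions from the boilerplate) can be sketched as follows. This is a hypothetical illustration, not FranklinCovey's implementation: the `llm` parameter stands in for any chat-completion call, the prompt wording is invented, and a keyword heuristic replaces what would realistically be a second model classification, so the sketch stays self-contained and testable.

```python
# Hypothetical sketch of the two-stage RFP pipeline:
# stage 1 triages fit using a knowledge base of past responses;
# stage 2 flags the ~10% of 'golden questions' for experts and
# leaves the boilerplate for the AI to draft. Names and prompts
# are illustrative assumptions only.

BOILERPLATE_HINTS = ("insurance", "corporate governance", "certification")


def triage_rfp(rfp_summary: str, knowledge_base: str, llm) -> bool:
    """Stage 1: ask the model whether this RFP is worth a human's time."""
    verdict = llm(f"Given our past work:\n{knowledge_base}\n"
                  f"Is this RFP a good fit? Answer YES or NO.\n{rfp_summary}")
    return verdict.strip().upper().startswith("YES")


def split_questions(questions: list[str]) -> tuple[list[str], list[str]]:
    """Stage 2: separate boilerplate (AI-draftable) from golden questions.
    A real system would ask the model to classify each question; a keyword
    heuristic keeps this sketch deterministic."""
    golden, boilerplate = [], []
    for q in questions:
        if any(hint in q.lower() for hint in BOILERPLATE_HINTS):
            boilerplate.append(q)
        else:
            golden.append(q)
    return golden, boilerplate


# Usage with a stubbed model: triage passes, one golden question surfaces.
fake_llm = lambda prompt: "YES"
fits = triage_rfp("Statewide leadership training", "past wins...", fake_llm)
golden, boring = split_questions([
    "Do you carry liability insurance?",
    "Describe your corporate governance policies.",
    "How would you tailor the program to first-line supervisors?",
])
```

The design point is the ordering: cheap triage happens before any expert reads a word, and expert time is then concentrated on the small set of questions the heuristic cannot answer.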
The result was not just the replacement of another transactional system — it was a dramatic boost in productivity:
We built this process and found that we could replace the entire software package — while completing RFPs in about 10 to 15% of the time. So the next year that software came up, the team said, ‘Nope, we're dropping that for this tool.’
This decision, Carter reveals, led to a direct $30k-$40k annual saving in software licenses alongside the productivity gains. But he stresses that the impact was not only about cost or efficiency — it also made the work itself more meaningful for the team:
The person doing it is not having to read and answer the 90% of questions that are all mindless — ‘Do you do this? Yes, do I do this…’ Having the AI highlight whether there’s a good fit and then say, ‘Here's where you want to maybe do a little bit extra’ means we can bring in other resources to help in places that are really going to make a difference — rather than just spend time reading through a lot of mindless documentation.
This expansion in what technology can deliver has given Carter and his team a much broader space in which to think about the value of solutions — moving the scope of their impact from transactional system support to operating model leverage.
But I suggest to Carter that this opportunity also comes with a new challenge.
If operating models increasingly depend on bespoke technology support — as the trade-offs between ‘buy’ and ‘build’ shift and the boundaries of solutions expand — then those systems must avoid the cost, scale and maintenance problems of traditional bespoke development.
A Little Less Custom Building — a Little More Config Please
Carter agrees that the risks of switching from buying to building are real.
But he suggests it’s not just a technical challenge but an operating model shift — one that changes how the IT team builds solutions and engages with the wider organization.
And the foundation, Carter argues, was simplifying development and management by embracing Make’s low-code platform:
Make’s automation platform has really opened up opportunities to do things that, even a few years ago, would have been infeasible because it would have required a mountain of developers. Now we can provide tangible business benefits very quickly.
Importantly, however, Carter stresses that the build-first approach only works because teams avoid custom development unless absolutely necessary. Instead they use the constraints of the Make platform to adapt solutions to the operating model without losing control of the technology:
Instead of customization, we use configuration. That's one of our mantras. We configure what we need because customization leads to a lot of overhead.
While low-code doesn’t cover 100% of development needs, Carter says the gap is narrowing. And by maximizing its use, his teams can deliver sustainable bespoke functionality by relying on standardized architecture and management — resulting in a more robust operating model despite company-specific adaptations.
In practice, this approach may be less radical than it first appears. Many SaaS applications ultimately revolve around a familiar core — presenting forms, applying rules and storing data — areas where low-code platforms typically excel.
And with that architectural discipline in place, Carter says it also becomes much easier to extend systems further — something that was key to accommodating AI and extending the scope of solutions beyond the boundaries of traditional automation without losing control of the system itself:
We’ve seen the value of AI really blossom where it’s built into the flow of work that’s already happening — the context is already there and so it just becomes an integration problem.
And while technological discipline forms the foundation of Carter’s build-first approach, he says the decision to build solutions required teams to work within the operational reality of those experiencing the problem — rather than evaluating packaged tools designed for a wider market:
We actually had to change the mindset — instead of working backwards from a solution and seeing how many problems it could solve, we started to really dig in and see if a problem could be solved internally with an automation. This required partnership with the business unit feeling the pain, as it was really the only way to find the very specific use cases that changed business behavior — and ultimately drove business value.
Carter concludes our discussion by pointing out that this partnership approach required a repositioning of the IT team within the operating model — making it an integral part of operations rather than an arm’s-length service provider:
We had a very specific plan — partnering with every department to explore possibilities and challenge them to ideate one or two very narrow use cases that could move measurable KPIs. We’d then work with them on side-by-side implementations, because one or two good successes sparked a natural grassroots desire to keep going.
My take
Step back from the detail of Carter’s story and a broader pattern emerges.
Much of the current excitement around AI coding agents centers on the idea of a coming ‘SaaS-pocalypse’ — the notion that organizations will soon be able to recreate any enterprise application with little more than a few well-crafted prompts.
But what makes Carter’s story interesting is that, despite the current focus on AI, he largely achieved his results with existing low-code technology — a far less fashionable but perhaps more structurally mature approach to custom-built systems.
Because unlike AI-generated code, low-code platforms have spent nearly two decades standardizing the architecture, governance and operational management of the applications they support — while enabling functionality to be defined and inspected via visual models and configuration tools. In doing so they have made it possible to build systems that behave like bespoke software while retaining the visibility and structural discipline normally associated with packaged applications.
And whatever the truth of the SaaS-pocalypse, Carter’s experience suggests the replacement potential has existed for some time — particularly when the application in question isn’t overly complex. And in truth, many SaaS applications… just aren’t.
AI coding agents, by contrast, are undoubtedly powerful — but the sheer volume and velocity of code generation is raising increasing concerns about our ability to understand and maintain the systems they produce.
Seen through that lens, Carter’s experience offers an interesting counterpoint to the current hype cycle. By treating build not as an engineering free-for-all but as an operating model discipline grounded in architectural constraints and close engagement with business problems, his team has been able to sustain a build-first posture at scale.
Which suggests that, irrespective of whether systems are ultimately delivered through low-code or AI, value doesn’t simply flow from deciding to build rather than buy.
It flows from the discipline of turning that decision into sustainable operating model advantage.