

Business Strategy & Growth
April 9, 2026
10 min read
A leadership team feels the acute pressure of "missing the boat." A budget window opens, or perhaps a strategic grant becomes available. A vendor arrives with a polished deck, a series of impressive demos, and a high-velocity proposal. Suddenly, the entire conversation jumps straight to the finish line: Large Language Models (LLMs), custom prototypes, and aggressive six-month timelines.
But for the modern executive, this is exactly the wrong place to start.
At Sigli, we have observed that the most successful AI strategies do not begin with technology. They begin with a diagnostic inquiry. For CIOs, CTOs, and enterprise leaders, AI should not start as a "solution" to buy. It should start as a forcing function, a strategic provocation that clarifies business problems, tests legacy assumptions, and exposes what actually needs to evolve within the organization's DNA.
Sometimes that evolution requires a generative model. Sometimes it requires a fundamental restructuring of a data pipeline. Identifying the difference isn't a failure of the AI initiative; it is the definition of fiduciary responsibility and strategic progress.
The traditional enterprise sales cycle is built on a "Problem-Solution" framework. The vendor identifies a pain point and offers a tool to fix it. However, AI is not a traditional tool like a CRM or an ERP. It is a probabilistic engine that thrives on high-quality data and clearly defined logic, two things many enterprises lack in their legacy processes.
When we frame AI as a "solution" before the problem is fully understood, we fall into the Inverse Logic Trap. Instead of asking, “What is fundamentally slowing our growth?” the focus becomes, “How do we force AI into this specific workflow?”
This framing creates an opening for "False Momentum." False momentum feels like progress because workshops are being scheduled, internal newsletters are announcing "AI task forces," and roadmaps are being drawn. But if the underlying business outcome remains vague, you aren't accelerating; you’re just failing at a higher frequency.
As Sigli’s leadership often notes, the biggest limitation of AI today isn't the technical capability of the models; it's the uncertainty of outcomes. Without a diagnostic phase, "inexpensive" consulting or rapid prototyping becomes the most expensive line item on the ledger. It creates the impression of movement while delaying the structural clarity required to achieve a real Return on Investment (ROI).
To understand how AI acts as a conversation starter, we can look at Sigli’s work with one of the UK’s most prominent property data platforms.
On paper, the project was a classic AI "solution" play: “Implement advanced Machine Learning to enrich property data and power new predictive features for users.” It was a high-value, high-visibility goal. But once the diagnostic conversation began, the team didn't just look at models; they looked at the "machinery" of the business.
The "AI project" acted as a lens, revealing deeper operational truths that a standard "solution" vendor would have ignored.
By treating AI as a conversation starter rather than a plug-and-play solution, the organization didn't just build a feature; they built a hardened infrastructure that made insights repeatable and features shippable.
When an executive shifts from "buying a solution" to "starting a conversation," the diagnostic framework should center on three key pillars:
1. Knowledge liquidity. Where is vital institutional knowledge trapped? Often, AI is pitched to "replace" human effort, but its higher value lies in making trapped knowledge liquid. If your best underwriters or engineers leave, does their logic leave with them? A diagnostic AI conversation asks how we can use technology to codify and distribute that expertise across the firm.
2. Decision variance. In many enterprises, the "problem" isn't speed; it’s variance. If three different managers look at the same data and make three different decisions, the business is inefficient. AI is a tool for reducing variance. The conversation should not be "How do we automate the decision?" but "Where is our human decision-making wildly inconsistent, and why?"
3. The P&L test. Executives must ruthlessly ask: “If we fixed this one thing with AI, what would actually change on the P&L?” If the answer is a marginal gain in "efficiency" that doesn't lead to increased throughput or reduced cost, the project is likely "Innovation Theater."
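The P&L test is ultimately back-of-envelope arithmetic. A minimal sketch of that arithmetic, where the `pnl_impact` helper and every figure are hypothetical illustrations rather than numbers from the case study:

```python
# Back-of-envelope P&L test for a proposed AI initiative.
# All inputs are hypothetical; the point is to force the question
# "what actually changes on the P&L?" into explicit numbers.

def pnl_impact(hours_saved_per_week, loaded_hourly_cost, adoption_rate,
               added_revenue_per_year, run_cost_per_year, build_cost):
    """Return (net_annual_impact, payback_years) for an initiative."""
    # Labor savings only count at the rate people actually adopt the tool.
    annual_savings = hours_saved_per_week * 52 * loaded_hourly_cost * adoption_rate
    net_annual = annual_savings + added_revenue_per_year - run_cost_per_year
    payback_years = build_cost / net_annual if net_annual > 0 else float("inf")
    return net_annual, payback_years

# Hypothetical scenario: 40 team-hours/week saved, $80/hour loaded cost,
# 50% realistic adoption, no new revenue, $30k/year to run, $150k to build.
net, payback = pnl_impact(40, 80, 0.5, 0, 30_000, 150_000)
print(f"Net annual impact: ${net:,.0f}, payback: {payback:.1f} years")
```

If the honest inputs produce a payback measured in many years, the initiative is "Innovation Theater" by the article's own definition, however impressive the demo.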

In the military, there is a saying: "Slow is smooth, and smooth is fast." This applies perfectly to AI implementation.
A good partner doesn't amplify the illusion of a quick fix. They help dismantle it. In the property data case mentioned earlier, Sigli’s process moved through Research, Pipeline Development, and Sequential Integration, an approach that put "Data Readiness" ahead of "AI Novelty."
This often means slowing the sales process down to improve the eventual decision. It means asking the uncomfortable questions that define a project's success before a single line of code is written.
Without these answers, speed is a liability. This is why many projects that begin as “AI initiatives” eventually turn into something else, perhaps a master data management project or a workflow automation overhaul. That shift is not a sign that the AI idea was "wrong"; it is a sign that the first conversation finally became honest.
For executives, the question isn’t whether a vendor "does AI." In 2026, every vendor "does AI." The real question is: How do they behave when the original AI idea begins to weaken under scrutiny?
Enterprise value is not created by novelty. It is created when technology fits the business well enough to be operationalized, adopted, and trusted by the people on the front lines.
The companies winning the AI race are not necessarily those who moved first. They are the ones who used the AI conversation to find their real constraints. They understand that AI is a diagnostic tool that exposes weak process logic, vague ownership, and poor data discipline.
One of the healthiest signs in a high-level AI conversation is the willingness to leave the room with a narrower, less glamorous, but more executable next step. Stop treating AI as a purchase decision. Treat it as a strategic inquiry. Judge your partners not by how quickly they can sell you the answer, but by how deeply they help you define the problem. That is where the real ROI begins.
Why treat AI as a conversation starter rather than a solution to buy?
AI is fundamentally different from legacy software because its success depends on the quality of an organization's data, process logic, and cultural readiness. By treating it as a conversation starter, leadership can use the technology to diagnose internal bottlenecks and test assumptions before committing to expensive, full-scale implementations. This approach reduces the risk of building a solution for the wrong problem.
What are the risks of a solution-first approach?
A solution-first approach often leads to "False Momentum," where an organization spends significant resources on workshops and prototypes that never reach production. It can result in "Innovation Theater," where the technology is impressive but fails to deliver a measurable Return on Investment (ROI) or solve a core business pain point.
How can executives separate real AI value from hype?
The key is the P&L test. If an AI initiative doesn't clearly show how it will increase throughput, reduce operational costs, or materially improve decision-making consistency, it is likely hype. Value is found when AI integrates into the "machinery" of the business, such as the data pipelines and infrastructure seen in Sigli’s property data case study, rather than just acting as a flashy interface.
What should executives look for in an AI partner?
Look for partners who prioritize diagnostic discovery over rapid sales cycles. A reliable partner should be willing to slow the process down to ensure data readiness and process alignment. They should focus on your specific constraints and be honest when a simpler automation or data management project is a more effective first step than a complex AI model.
Can an AI conversation end without an AI project?
Yes. One of the most valuable outcomes of an AI conversation is the realization that a different, more executable step, such as workflow automation or data cleansing, is the real priority. This "strategic pivot" saves the organization from expensive detours and ensures that future AI investments sit on a stable, high-performance foundation.

