Enterprise Architecture as a Service (EAaaS) – Part 3: Your Enterprise AI Roadmap
In the first two parts of this series, we explored the foundational pillars of data and platform-driven digital transformation. Now, we arrive at the next logical frontier: Artificial Intelligence (AI).
For enterprise leaders, the question is no longer if AI should be adopted, but how. Integrating AI is not just a tooling upgrade; it represents a fundamental shift in business strategy, operations, and competitive positioning. An effective AI roadmap is the compass that ensures every investment aligns with core business objectives and delivers measurable value.
Why Enterprises Need an AI Roadmap
Many organisations find themselves at a crossroads:
- Burdened by legacy systems.
- Pressured to innovate.
- Tempted by the allure of AI’s promises, from deep customer personalisation to operational efficiency.
Without a clear roadmap, enterprises risk costly missteps, fragmented solutions, and poor scalability. A roadmap ensures AI adoption is purposeful, integrated, and delivers tangible benefits. The path to successful AI adoption is paved with strategic decisions and an honest assessment of current limitations, not speculative gambles.
This guide is designed for technology leaders and C-suite executives tasked with steering their organisations through this complex landscape. We will examine the critical components of building an AI roadmap that is fit for purpose, exploring AI readiness, deployment models, leveraging existing platforms, and tackling the inevitable challenges of adoption and upskilling. By the end, you will have a clearer understanding of how to transform AI from a buzzword into a powerful engine for growth and innovation.
Assessing AI Readiness
Before choosing tools or platforms, leaders must assess their AI readiness. This involves:
- Data maturity: is your data high-quality, accessible, and integrated?
- Applications landscape: are your applications modern, with digital capabilities on par with contemporary SaaS and API-enabled software packages?
- Business alignment: are AI initiatives tied to clear business outcomes aiming to address sizeable problems?
- Talent and skills: do you have the right mix of data engineers, analysts, and AI-curious business users to cultivate an AI-first enterprise-wide movement?
Tip: If your organisation has already embraced Generative AI productivity tools like Microsoft Copilot or ChatGPT, you’re further along the curve than you think. But readiness still requires a structured assessment that goes beyond small wins or early, isolated pockets of AI adoption.
The Right Approach: Proprietary vs. Platformised AI?
The effective application of AI technologies hinges on the foundational principle that success begins with high-quality, accessible data and a clearly defined business case addressing a specific problem. Only once these prerequisites are established should organisations proceed with the selection and implementation of AI solutions, ensuring alignment with strategic objectives and the potential for measurable impact. Here’s a look at various options to curate an approach fit for your organisation.
Proprietary AI (Artificial Intelligence) / ML (Machine Learning)
- Custom machine learning models, trained exclusively and internally on your company’s data, to support a specific task or core competency.
- Predictive analytics applications, with or without natural language processing (NLP) capabilities, for a better user experience or for launching a data product.
- Maximum control and privacy.
- Ability to incorporate modern techniques like fine-tuning to take a general AI model and render it fit for purpose for the organisation’s specific needs.
- Talent- and data-intensive, requiring significant upstream engineering, modelling and development effort.
Platformised (Embedded) AI
- Readily available and built-in AI capabilities for task-level or process-level workload streamlining within commercial software packages from vendors like Microsoft, Salesforce, SAP, and Oracle.
- Faster adoption of AI across multiple functions simultaneously.
- Often just a subscription away within enterprise platform / software subscriptions, thereby lowering barriers to entry for AI.
- Primarily, and in most cases exclusively, available within SaaS subscriptions or cloud-first deployment models of CRM, ERP, WMS, and analytics software suites.
- Evolving at a manageable pace with transparent roadmaps from reputable vendors, thereby facilitating access, reliability and adoption for medium-sized and larger organisations.
- Highly dependent on the respective vendor ecosystem and roadmap prioritisation with varying capability enablement timelines across different tools and platforms.
Agentic AI
- Dubbed the new frontier, it introduces an orchestration layer on top of, and in relation to, existing layers including database systems, application APIs, and even low-level constructs such as operating and control systems.
- Able to tackle complex business workflows spanning multiple processes and systems, such as quote-to-cash, without human involvement.
- Able to incorporate domain-specific small(er) or large language models to complete simple tasks or generate complex output where reasoning and quality assurance are mission-critical.
- Allows for RAG (Retrieval-Augmented Generation) and flexible prompt engineering to improve relevance and reduce hallucinations.
- Complex to maintain, where multiple bleeding-edge AI applications impose an “Eternal Alpha” state, with aggressive release cycles and rapidly deprecated APIs.
- Privacy and governance concerns are significant when private deployment of the AI models (large or foundational) is not an option.
- As above, public subscription / tokenised consumption of such tools and models can make an agentic ecosystem a costly one to run.
Key takeaway: Proprietary AI offers control but demands scale and focus. Platformised AI accelerates adoption but requires careful vendor management and selection to ensure the right AI features are made available within acceptable timelines. Agentic orchestration, meanwhile, may deliver faster, hyper-scalable outcomes across complex, multi-layered ecosystems that mix legacy and modern AI tools in a “pick and integrate” manner, as the sketch below illustrates.
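To make the agentic option more concrete, here is a minimal orchestration sketch in Python. It is illustrative only: the quote-to-cash steps, the `call_model` stub, and the system “connectors” are hypothetical placeholders standing in for real application APIs and an actual model endpoint, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    handler: Callable[[dict], dict]  # connector into an existing system or model


def call_model(prompt: str) -> str:
    """Placeholder for a call to an SLM/LLM endpoint (hypothetical)."""
    return f"[model output for: {prompt}]"


def validate_quote(ctx: dict) -> dict:
    ctx["quote_valid"] = ctx.get("quote_total", 0) > 0
    return ctx


def check_credit(ctx: dict) -> dict:
    # In practice this would call the ERP / credit system API.
    ctx["credit_ok"] = ctx.get("quote_total", 0) <= ctx.get("credit_limit", 0)
    return ctx


def draft_invoice_email(ctx: dict) -> dict:
    ctx["email"] = call_model(f"Draft an invoice email for order {ctx.get('order_id')}")
    return ctx


def run_workflow(steps: List[Step], ctx: dict) -> dict:
    """Tiny orchestrator: runs steps in order, halting for human review if a gate fails."""
    for step in steps:
        ctx = step.handler(ctx)
        if ctx.get("quote_valid") is False or ctx.get("credit_ok") is False:
            ctx["status"] = f"halted at {step.name} for human review"
            return ctx
    ctx["status"] = "completed"
    return ctx


quote_to_cash = [
    Step("validate_quote", validate_quote),
    Step("check_credit", check_credit),
    Step("draft_invoice_email", draft_invoice_email),
]

result = run_workflow(quote_to_cash, {"order_id": "Q-1001", "quote_total": 4800, "credit_limit": 10000})
print(result["status"], "-", result.get("email", ""))
```

Real agentic platforms add planning, tool schemas, and guardrails on top of this, but the underlying pattern of dispatching workflow steps to system connectors and model calls is broadly similar.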
The Spectrum of AI Models
Foundational Models
A broader category of large AI models trained on vast, diverse, and often multi-modal datasets (text, images, audio, etc.).
- Strength: Highly adaptable and can be fine-tuned for a wide range of applications across different domains. Examples include image generation or interpretation via text prompts, and audio or music generation.
- Challenge: High cost, black-box behaviour, and the highest propensity for hallucinations.
Large Language Models (LLMs)
A type of foundation model trained specifically on large amounts of text (both human- and machine-readable) to understand, generate, and mimic human language.
- Strength: Broad in scope; can handle complex tasks like creative writing and translation, and text-heavy activities like summarisation or interpretation of complex documents.
- Challenge: High cost and heavy compute requirements.
Small Language Models (SLMs)
Built with fewer parameters, small language models are more specialised and efficient for specific tasks, requiring less computational power and lower cost, and offering viable, secure alternatives to tokenised public subscriptions to large language models.
- Strength: Domain-specific, cost-efficient, and ideal for private/on-prem deployments. Seamless adoption with minimal integration overhead.
- Challenge: Limited to task-specific, targeted use cases like customer service chatbots, OCR or invoice processing.
Decision Rules: Considerations for Selecting and Deploying AI
- Use SLMs for high volume, domain-specific tasks. Consider a non-tokenised private deployment route for improved control, cost management and privacy. Although limited in output capabilities, SLMs can help drive performance and cost control within agentic orchestration flows.
- Use LLMs and foundational models for broad, research-intensive, general-purpose and complex queries, such as long-form knowledge-base article generation, report building, and image, audio and video generation or analysis. These models are almost exclusively available through subscription and tokenised consumption, and are highly favoured within agentic orchestration ecosystems given their flexibility and general-purpose output capabilities. Cost control and privacy remain the most significant concerns.
- Use Embedded AI for immediate, process-integrated benefits within existing platforms where functional and cross-functional processes are already automated within a commercial software package. Use cases include bulk processing of overdue payments and dunning letters, with tailored email messages generated from text-based instructions and sent en masse where supported. A simple routing sketch illustrating these rules follows this list.
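As an illustration of how these decision rules could be encoded, the sketch below routes a task to an SLM, an LLM, or an embedded platform feature based on a few coarse attributes. The attribute names and thresholds are hypothetical assumptions and would need tuning to your own portfolio.

```python
from dataclasses import dataclass


@dataclass
class Task:
    domain_specific: bool   # narrow, repeatable task in one domain?
    daily_volume: int       # rough number of requests per day
    complexity: str         # "simple", "moderate", or "complex"
    in_platform: bool       # already handled inside an existing SaaS suite?


def choose_model(task: Task) -> str:
    """Illustrative routing of the decision rules above (thresholds are assumptions)."""
    if task.in_platform:
        return "Embedded AI: use the platform's built-in capability"
    if task.domain_specific and task.daily_volume >= 1_000 and task.complexity != "complex":
        return "SLM, privately deployed for cost and privacy control"
    return "LLM / foundational model via tokenised consumption"


print(choose_model(Task(domain_specific=True, daily_volume=5_000, complexity="simple", in_platform=False)))
print(choose_model(Task(domain_specific=False, daily_volume=50, complexity="complex", in_platform=False)))
```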
Balancing Cost, Privacy, and Scale
On-Premise / Private deployment
- Highest privacy and control.
- Cost-effective at sustained, predictable volume, but limited elasticity and orchestration support.
Cloud / Tokenised
- Fast to deploy, highly scalable.
- Lower upfront cost, but higher long-term API/usage fees.
Hybrid
- Sensitive and supported workloads on-premise, experimentation in the cloud.
- Pragmatic balance of cost, security, and flexibility (see the break-even sketch below).
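The trade-off between tokenised cloud consumption and a private deployment often comes down to a simple break-even calculation. The figures below are hypothetical placeholders, not vendor pricing; the point is the shape of the comparison, not the numbers.

```python
# Hypothetical, illustrative figures only -- substitute your own vendor quotes.
tokens_per_request = 2_000      # assumed average prompt + completion tokens
requests_per_month = 200_000
price_per_1k_tokens = 0.01      # assumed blended $/1k tokens for a hosted LLM

cloud_monthly = tokens_per_request * requests_per_month / 1_000 * price_per_1k_tokens

private_monthly = 3_500         # assumed amortised hardware + hosting + support

print(f"Tokenised cloud:    ${cloud_monthly:,.0f}/month")
print(f"Private deployment: ${private_monthly:,.0f}/month")
print("Private deployment breaks even" if private_monthly < cloud_monthly
      else "Cloud remains cheaper at this volume")
```

In a hybrid model, this calculation tends to be run per workload: sensitive, high-volume workloads usually justify private deployment first, while exploratory work stays on tokenised cloud consumption.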
Leveraging Existing Platforms & AI-First Practices
Your SaaS tools, data systems and cloud infrastructure are a goldmine for AI enablement. Start with a maturity curve assessment:
- Foundational: Data is available, but siloed and inconsistent.
- Developing: Partial integration across data systems and applications, but fragmented.
- Advanced: Unified data model, high-quality and readily available data across the enterprise.
Fine-Tuning vs. Retrieval-Augmented Generation (RAG)
- Fine-tuning: Retrains a model with domain-specific data. Best for deep expertise and productisation built on vast amounts of proprietary data in a niche.
- RAG: Connects a general model to your private knowledge base/database. Best for up-to-date, proprietary information and relevance drawn from different sets of data within the organisation.
Key analogy: Fine-tuning is like teaching an employee a new skill. RAG is like giving them instant access to your company’s intranet.
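To illustrate the RAG side of the analogy, here is a deliberately minimal retrieval sketch in plain Python. Production systems use embedding models and a vector database; this version substitutes a crude word-overlap score purely to show the retrieve-then-prompt pattern, and the documents and query are made-up examples.

```python
# Minimal RAG-style sketch: retrieve the most relevant internal documents,
# then place them in the prompt sent to a general-purpose model.

knowledge_base = {
    "returns-policy": "Customers may return goods within 30 days with proof of purchase.",
    "warranty": "Standard warranty covers manufacturing defects for 12 months.",
    "shipping": "Orders over 50 GBP ship free within the UK in 3-5 working days.",
}


def score(query: str, document: str) -> int:
    """Crude relevance score: count of shared words (a stand-in for vector similarity)."""
    return len(set(query.lower().split()) & set(document.lower().split()))


def retrieve(query: str, top_k: int = 2) -> list:
    ranked = sorted(knowledge_base.values(), key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]


def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"


# The resulting prompt would then be sent to an LLM of your choice.
print(build_prompt("How long do customers have to return goods?"))
```

Fine-tuning, by contrast, would bake this knowledge into the model’s weights, which is why it tends to suit stable, deep domain expertise rather than fast-changing operational data.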
Your Next Steps
Building a fit-for-purpose AI roadmap is a strategic imperative for any modern enterprise. Start by initiating a dialogue with your leadership team to align on an iterative approach:
- Assessing Data and AI readiness.
- Identifying a pilot project opportunity / use cases with measurable impact.
- Leveraging existing platforms.
- Choosing the right model, tools and deployment strategy.
- Making peace with the volatile, disruptive and transformational nature of AI.
- Investing in change management and obsessing over an AI-first culture.
- Establishing an aligned strategy for capability / talent management.
At Searchlight Consulting, we help enterprises cut through AI hype and land practical, secure, and scalable AI roadmaps. Let’s explore your first AI pilot together.