Just Enough or Good Enough Architecture for Generative AI with PaaS for SaaS
I have spent over a decade working on ERP implementations for small to large enterprises. Today, I would like to talk about how to effectively and quickly integrate Generative AI into these applications, which form the foundation of an organization's data.
Adding Generative AI into the existing IT landscape and organizational operations can be seen as a journey that evolves through distinct stages, each bringing benefits and challenges.
The journey typically begins when employees bring productivity-enhancing tools into the organization. These simple, off-the-shelf AI tools, such as ChatGPT, Copilot, and Grammarly, are designed to enhance productivity. They are easy to adopt and provide immediate value to knowledge workers, without having to deal with the complexity of the organization's security, governance, and compliance requirements.
This diagram illustrates the typical organizational journey with Generative AI, highlighting the progression from adopting simple tools to building sophisticated AI models, ensuring a structured and strategic approach to AI integration. The journey reflects AI’s increasing complexity and value, moving from cost-efficiency and simplicity to achieving robust business value and governance.
Generative AI in Applications:
When on-premises applications (E-Business Suite, JD Edwards, Siebel, etc.) started migrating to SaaS, the fundamental question was how to manage custom components in the SaaS applications, given the freedom we had enjoyed to customize the on-premises applications.
What do you do with the custom components unique to every organization to manage specific business processes?
We spent most of our time discussing how to address these custom extensions. That is when the concept of PaaS for SaaS came into play: we started using cloud services such as Visual Builder, Process Cloud (OIC), and APEX, or building entirely custom applications on WebLogic, with Oracle Autonomous Database or Database Cloud Service as the database.
Every customer, colleague, and friend is now well versed in the PaaS for SaaS approach, and with it they have been able to quickly address the unique requirements coming from the business.
Now, with the advancements in AI, specifically in Generative AI, the key question is how to integrate it into the existing ecosystem of ERP applications.
This reminds me of a concept we used to practice when building enterprise architecture. The Just Enough or Good Enough Enterprise Architecture (EA) practice was introduced by Gartner to promote a pragmatic approach to enterprise architecture. The idea emphasizes creating an EA framework that meets business needs without over-engineering or over-investing in unnecessary complexity.
Here are the core principles:
- Business-driven: To secure ongoing support and investment, the EA initiative must be directly aligned with business objectives and demonstrate meaningful results within a short time frame, typically six months.
- Incremental Development: Instead of attempting to build a comprehensive architecture from the outset, start with small, manageable components that address immediate business requirements. This approach allows for flexibility and adaptation as the business evolves.
- Focus on Value: Prioritize EA activities that provide clear and measurable value to the organization. Avoid extensive documentation and processes that do not contribute to achieving business goals.
Returning to our PaaS for SaaS ecosystem: by adding a PaaS layer to the SaaS applications, we were building 'just enough architecture' to meet the business requirements.
Let us walk through the typical architecture from the earlier, pre-Gen-AI era.
We used these additional PaaS components alongside SaaS to build custom extensions and integrations.
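To make the pattern concrete, here is a minimal Python sketch of such a custom extension: a small service in the PaaS layer reading data from a Fusion SaaS REST endpoint. The pod URL, resource path, credentials, and field names are illustrative placeholders, not taken from any specific implementation.

```python
# Illustrative sketch of a classic PaaS-for-SaaS extension: a small Python
# service (it could run behind VBCS/OIC, on OCI compute, or in a custom
# WebLogic app) that reads data from a Fusion SaaS REST endpoint.
import requests

FUSION_HOST = "https://your-pod.fa.example.oraclecloud.com"   # placeholder pod URL
RESOURCE = "/fscmRestApi/resources/11.13.18.05/invoices"      # example ERP resource


def fetch_invoices(username: str, password: str, limit: int = 25):
    """Pull a page of invoices so a custom extension (approval dashboard,
    reconciliation job, etc.) can work with them."""
    response = requests.get(
        f"{FUSION_HOST}{RESOURCE}",
        params={"limit": limit, "onlyData": "true"},
        auth=(username, password),   # basic auth for brevity; OAuth in practice
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("items", [])


if __name__ == "__main__":
    for invoice in fetch_invoices("integration.user", "change-me"):
        print(invoice.get("InvoiceNumber"), invoice.get("InvoiceAmount"))
```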
Now let's fast-forward: we are in the Generative AI era.
As a business, the race is on to capitalize on Generative AI and to see how quickly you can introduce it into your organization. Let's reflect on your existing applications and technology ecosystem, and on what is already happening there.
Oracle Cloud Applications (HCM, ERP, SCM, CX) have introduced built-in capabilities that use large language models, and several use cases now ship as part of the Fusion Applications.
OCI Generative AI is a fully managed Oracle Cloud Infrastructure service that provides a set of state-of-the-art, customizable large language models (LLMs) covering many use cases, including chat, text generation, summarization, and creating text embeddings.
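As an illustration of how thin this layer can be, here is a minimal sketch that calls the OCI Generative AI chat API from the PaaS layer using the OCI Python SDK. The compartment OCID, model OCID, endpoint region, and prompt are placeholders, and the request/response class names may differ slightly between SDK versions.

```python
# Minimal sketch: invoking the OCI Generative AI chat API with the OCI Python SDK.
# OCIDs and the endpoint region are placeholders; class names may vary by SDK version.
import oci

config = oci.config.from_file()   # reads ~/.oci/config
endpoint = "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"

client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config, service_endpoint=endpoint
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",            # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="ocid1.generativeaimodel.oc1..example"         # placeholder OCID
    ),
    chat_request=oci.generative_ai_inference.models.CohereChatRequest(
        message="Summarize the overdue invoices for supplier ACME in two sentences.",
        max_tokens=300,
        temperature=0.2,
    ),
)

response = client.chat(chat_details)
print(response.data.chat_response.text)   # generated summary
```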
You can follow the same principles of 'just enough architecture' to embrace Generative AI in your organization: extend the existing PaaS for SaaS ecosystem with another component, Generative AI.
Oracle Cloud Applications practitioners need to adopt this mindset, as it keeps the architecture simple.
The same rules apply now.
In this case, you are simply extending the PaaS for SaaS ecosystem with another service: Generative AI.
You can also upgrade Oracle Autonomous Database to Autonomous Database 23ai to get the capabilities required for Gen AI, such as AI Vector Search. Review this newsletter’s earlier edition on 23ai for more detailed info.
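For a flavour of what AI Vector Search looks like from the PaaS layer, here is a minimal Python sketch using the python-oracledb driver against Autonomous Database 23ai. The table, columns, and connection details are illustrative assumptions, and the query embedding would normally come from an embeddings model such as the one offered by OCI Generative AI.

```python
# Sketch of a semantic lookup against Autonomous Database 23ai using
# AI Vector Search. Table, columns, and credentials are illustrative only.
import oracledb

connection = oracledb.connect(
    user="extension_app",      # placeholder credentials
    password="change-me",
    dsn="mydb_high",           # Autonomous Database TNS alias
)


def find_similar_documents(query_embedding: list[float], top_k: int = 5):
    """Return the stored documents whose embeddings are closest to the query."""
    vector_literal = "[" + ",".join(str(v) for v in query_embedding) + "]"
    sql = """
        SELECT doc_id, title
          FROM knowledge_docs
         ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:qv), COSINE)
         FETCH FIRST :k ROWS ONLY
    """
    with connection.cursor() as cursor:
        cursor.execute(sql, qv=vector_literal, k=top_k)
        return cursor.fetchall()
```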
Let's now build your to-be architecture after adding Generative AI.
There is, of course, a third phase in which you build your own enterprise AI/Generative AI platform, as reflected at the start of this article. This is where you deploy your own chosen set of LLMs on GPUs. That area deserves more detailed analysis and thought; we will cover it in the future.
To conclude today's article: these are my views, and I am also just stepping into this ride, so please add your opinions and comments and build on them.
Let’s lay the groundwork, and we can all benefit from it as the AI community.