Oracle APEX and Artificial Intelligence: How a Low-Code Platform Is Arguably Changing the Rules of Enterprise Development

By Dmitry Borisov

Over the past two years I’ve watched a clear shift in how companies approach artificial intelligence. Not long ago, AI projects were mostly the domain of large tech firms with deep budgets and teams of data scientists. Today, mid-size companies are running production AI systems with teams a fraction of the size those projects once demanded.

AI is becoming a practical tool for everyday business problems. Companies are using it to automate routine processes and build systems that support decision-making. Several approaches have emerged to address these challenges, including low-code platforms integrated with AI capabilities, cloud-native architectures, and traditional custom development – each with its own tradeoffs in speed, cost, and complexity.

The barriers to adoption, however, remain high – assembling a capable AI stack still requires specialists who are in short supply, and projects routinely run over budget and past their deadlines. Gartner estimates that more than 80 percent of enterprise AI initiatives never make it into production, largely because implementation turns out to be more complex than planned.

In the last year I completed a series of hands-on Oracle LiveLabs workshops focused on integrating AI into Oracle APEX and earned certification in the course “Oracle APEX Cloud Developer Professional”. What follows is a look at how Oracle APEX, combined with the AI capabilities built into Oracle Autonomous Database, is arguably reshaping the way intelligent enterprise applications are built.

When low-code meets AI

Artificial intelligence usually brings to mind neural networks, advanced mathematics, and enormous training datasets. Low-code platforms suggest the opposite: simplicity, accessibility, fast development. The two worlds look incompatible on paper. Working with both changes that assessment.

Oracle Application Express has been around for more than twenty years. It started as a way to build web forms quickly on top of Oracle Database and gradually evolved into a full-fledged platform for enterprise application development.

Other low-code platforms – including Microsoft Power Platform and Mendix – have moved in a similar direction, embedding AI capabilities closer to the development layer. Oracle APEX takes a different architectural bet: rather than connecting to AI through middleware, it integrates AI directly into the database engine.

The turning point came in 2023–2024 with the release of Oracle Database 23ai and APEX 24.1. AI capabilities were embedded directly into the database engine – no longer sitting in a separate layer connected through APIs and middleware, but available as native database functionality through standard SQL queries.

An APEX developer can add semantic search, generative AI, or predictive analytics to an application with significantly less effort than traditional integration approaches require.

After working with Oracle ERP systems for two decades, I’ve seen how difficult it can be for companies to bolt new technologies onto existing infrastructure. Each additional layer increases complexity, introduces risk, and often requires new specialists. APEX with built-in AI reduces that complexity. The database, business logic, and AI functionality share the same Oracle environment – eliminating the need for additional tooling and separate security configurations.

Paths to AI inside APEX

Oracle APEX offers several ways to integrate AI. The right option depends on the problem you’re solving, the infrastructure available, and security requirements.

The most native route is to use the AI capabilities in Oracle Database 23ai. The platform’s AI Vector Search works with vector representations of data directly inside the database. Instead of matching exact keywords, the system understands the meaning of a query and returns semantically similar results.

I tested Vector Search using a movie recommendation example from Oracle LiveLabs. A user might type something like, “I want something philosophical about fighting the system.” The application suggests films such as The Matrix, V for Vendetta, or 1984, even when those exact words never appear in the descriptions. Under the hood it’s still a regular SQL query running against a vector index.
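The shape of such a query is worth seeing. The sketch below assumes a movies table with a precomputed embedding column using the VECTOR data type from Oracle Database 23ai; the table, column names, and vector dimension are illustrative, and the query embedding is passed in as a bind variable after the search text has been run through the same embedding model:

```sql
-- Illustrative schema: each movie stores a precomputed embedding
-- produced by whatever model the application standardizes on.
CREATE TABLE movies (
  movie_id    NUMBER PRIMARY KEY,
  title       VARCHAR2(200),
  description CLOB,
  embedding   VECTOR(768, FLOAT32)  -- dimension depends on the model
);

-- Top five semantic matches for the user's query, ranked by
-- cosine distance between stored and query embeddings.
SELECT title
FROM   movies
ORDER  BY VECTOR_DISTANCE(embedding, :query_embedding, COSINE)
FETCH  FIRST 5 ROWS ONLY;
```

At production scale, a vector index would typically back this ORDER BY so the search is approximate but fast; the point here is simply that the whole operation is ordinary SQL.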

Another option is OCI AI Services, which cover tasks such as image recognition, text analysis, and document processing. The platform also offers built-in generative AI features, among them chatbot components that can be added directly to application pages.

For maximum flexibility, developers can integrate external AI providers through REST APIs, including OpenAI, Anthropic, or Google Gemini. Organizations in regulated industries can import their own models using the ONNX format, ensuring that sensitive data never leaves corporate infrastructure.
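As a rough illustration of the REST route, a PL/SQL block can call an external chat-completions endpoint through APEX’s built-in APEX_WEB_SERVICE package. The URL and model name below follow OpenAI’s public API purely as an example; the bind variables are placeholders, and in real code the key would come from APEX’s credential storage rather than a variable:

```sql
DECLARE
  l_response CLOB;
BEGIN
  -- Standard headers for a JSON API.
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/json';
  apex_web_service.g_request_headers(2).name  := 'Authorization';
  apex_web_service.g_request_headers(2).value := 'Bearer ' || :api_key;

  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://api.openai.com/v1/chat/completions',
    p_http_method => 'POST',
    p_body        => '{"model":"gpt-4o-mini",' ||
                     '"messages":[{"role":"user","content":"' ||
                     :user_prompt || '"}]}');

  -- l_response now holds the provider's JSON reply;
  -- extract the answer with JSON_VALUE or APEX_JSON.
END;
```

A production version would build the request body with JSON_OBJECT rather than string concatenation, so that user input cannot break the payload; the sketch keeps the structure visible at the cost of that safeguard.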

From theory to practice

The LiveLabs program offers more than sixty free hands-on workshops – I worked through several over the past year.

The first one, mentioned earlier, focuses on building a movie recommendation system using Vector Search. One practical detail stood out: the entire workshop took about three hours. By the end, I had a working application and a practical understanding of vector search. The workshop required minimal data preparation and no separate infrastructure setup – everything ran inside Oracle Database.

Another workshop walks through building an online store with AI-powered search that combines Vector Search with OCI Generative AI. A user might type a query like “an affordable gift for someone who loves hiking.” The system finds relevant products and automatically generates descriptions. I built a working prototype in three days.
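What makes the “affordable gift” scenario interesting is that semantic relevance and an ordinary relational filter live in the same statement. A sketch, again with illustrative table and column names and the price cap and query embedding supplied as bind variables:

```sql
-- Relational filter (price cap) plus semantic ranking in one query.
SELECT product_name, price
FROM   products
WHERE  price <= :max_price
ORDER  BY VECTOR_DISTANCE(embedding, :query_embedding, COSINE)
FETCH  FIRST 10 ROWS ONLY;
```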

A third workshop focuses on social media analysis and sentiment detection. The application collects brand mentions, analyzes sentiment through OCI Language Service, and displays the results on a dashboard. A single developer can build a complete AI-driven application end to end.

Security at enterprise scale

Whenever I discuss AI with executives, security comes up first. Oracle APEX addresses that concern at the architectural level.

Oracle AI Vector Search runs entirely inside Oracle Database. Data never leaves corporate infrastructure. Vector embeddings are stored in regular tables protected by the same mechanisms used for other enterprise data – encryption, row-level security, and full audit trails.

When building AI assistants using a retrieval-augmented generation (RAG) architecture, corporate data remains inside Oracle Cloud Infrastructure. Only the search results are sent to the generative model, not the entire dataset. That’s a significant difference from approaches where the entire knowledge base is transmitted to an external AI service.
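The retrieval step of such a RAG setup is itself just a query; only the handful of chunks it returns are forwarded to the generative model as context. A sketch, assuming documents have been pre-split into a doc_chunks table with stored embeddings:

```sql
-- Only these few chunks (not the whole knowledge base) leave the database.
SELECT chunk_text
FROM   doc_chunks
ORDER  BY VECTOR_DISTANCE(chunk_embedding, :question_embedding, COSINE)
FETCH  FIRST 4 ROWS ONLY;
```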

That said, the approach is not without tradeoffs. Organizations already running non-Oracle infrastructure may face meaningful integration challenges when adopting APEX. Heavy reliance on a single vendor’s ecosystem can also raise questions around flexibility and long-term costs – particularly if business requirements evolve in directions the platform doesn’t support well. For companies evaluating options, these factors are worth weighing alongside the development speed advantages.

The economics of AI projects

Traditional AI development often takes months. With Oracle APEX, a prototype can appear in two or three days, and a production-ready version typically takes two to four weeks – though timelines vary significantly depending on application complexity. Much of that speed comes from a unified environment – database, business logic, AI functionality, and user interface running in one place, without integration overhead between layers.

Oracle APEX is free for anyone using Oracle Database, and the Oracle Cloud Free Tier provides enough capacity for proof-of-concept work – giving smaller organizations access to enterprise-grade tooling without corresponding costs.

Getting started

A practical starting point for most organizations is hands-on experimentation through vendor-provided workshops or sandbox environments – these typically provide working exposure to the technology within a day or two. From there, identifying a contained pilot use case helps validate whether the approach fits your specific context before committing to broader adoption. The key is choosing something that addresses a real business problem without touching critical operations. Evaluate the results, and scale only if the value is demonstrable.

The more interesting question isn’t whether AI will become standard in enterprise systems – the trajectory is clear. The question is how quickly organizations can move from pilot to production, and whether the tools they choose help or hinder that transition. Platforms that embed AI at the infrastructure level are one answer to that question. Not the only one, but an increasingly practical one.

Dmitry Borisov has over 20 years of experience delivering enterprise system implementations for large industrial and financial organizations across mining, banking, logistics, and manufacturing sectors. His work spans multiple jurisdictions, with a focus on complex, multi-entity environments requiring regulatory compliance and cross-border integration. As a solution architect, he specializes in aligning business requirements with technology capabilities – leading projects that involve performance optimization, cross-jurisdictional adaptation, and integration with legacy systems. He holds current certifications in enterprise application development and AI integration and combines deep technical expertise in database architecture and enterprise systems with practical understanding of business processes in regulated industries.