Oracle APEX’s GenAI Dynamic Actions make it effortless to drop an AI chat widget into your app.
The catch? As of now, they're hard-wired to a handful of providers such as OpenAI, Cohere, and OCI Generative AI.
If your favorite model, say Anthropic's Claude, uses a different JSON format, it won't plug in directly.
I ran into this exact roadblock… and found a workaround.
With one small PL/SQL proxy layer, you can keep APEX’s low-code experience and talk to any API — all without signing up for third-party routing services or sharing your API keys outside your control.
The Problem
- The APEX GenAI DA sends payloads to the `/v1/chat/completions` endpoint in OpenAI format.
- Claude (and most non-OpenAI models) expects a different endpoint and JSON schema (`/v1/messages` for Anthropic).
- No setting exists to override the request/response format in the low-code GenAI DA.
- Services like OpenRouter or Together AI can wrap Claude in an OpenAI-compatible API — but they require you to use their API keys and bill through them.
I wanted full control over my keys and usage.
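To see why the two providers clash, here is a side-by-side sketch of the request bodies, written as Python dicts for clarity. Field names follow the public OpenAI and Anthropic APIs; the model names and values are illustrative.

```python
# OpenAI-style body the APEX GenAI DA emits (POST /v1/chat/completions)
openai_request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

# Anthropic-style body Claude expects (POST /v1/messages)
anthropic_request = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,                        # mandatory for Anthropic
    "system": "You are a helpful assistant.",  # system prompt is a top-level field
    "messages": [
        {"role": "user", "content": "Hello!"},  # only user/assistant roles allowed
    ],
    "temperature": 0.7,
}
```

Same conversation, two incompatible shapes: Anthropic requires `max_tokens`, hoists the system prompt out of `messages`, and lives at a different path.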
The Workaround
Instead of changing the APEX widget or paying for a middleman, make the API look like OpenAI yourself.
We’ll:
- Create a PL/SQL package that:
  - Receives OpenAI-style requests from the DA.
  - Transforms them into the target model's request format.
  - Sends them using `APEX_WEB_SERVICE` with your own API key.
  - Transforms the model's response back into OpenAI's shape.
- Expose that package through ORDS at `/v1/chat/completions`.
- Point the APEX GenAI DA to your ORDS endpoint instead of api.openai.com.
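The request-side translation the proxy performs can be sketched in a few lines. This is Python for readability (the repo implements it in PL/SQL with `JSON_OBJECT_T`), and the function name `to_anthropic` and the 1024-token default are my own illustrative choices:

```python
def to_anthropic(openai_body: dict, default_max_tokens: int = 1024) -> dict:
    """Map an OpenAI chat/completions request to Anthropic's /v1/messages shape."""
    messages = openai_body.get("messages", [])
    # Anthropic takes the system prompt as a top-level field, not a message role
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    body = {
        "model": openai_body.get("model", "claude-sonnet-4-20250514"),
        "max_tokens": openai_body.get("max_tokens", default_max_tokens),  # mandatory
        "messages": chat,
    }
    if system_parts:
        body["system"] = "\n".join(system_parts)
    if "temperature" in openai_body:
        body["temperature"] = openai_body["temperature"]
    return body
```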
How It Works
Before
APEX Chat Widget -> OpenAI API -> GPT model
After
APEX Chat Widget -> ORDS Proxy -> Target AI API
The DA still thinks it’s talking to OpenAI, but the proxy does the translation behind the scenes — with zero third-party dependency.
Architecture Diagram (Flowchart)
```mermaid
flowchart LR
    A[APEX GenAI Chat Widget] --> B[ORDS endpoint /v1/chat/completions]
    B --> C[PL/SQL proxy JSON transform]
    C --> D[Target AI API]
    D --> C
    C --> B
    B --> A
```
Architecture Diagram (Sequence)
```mermaid
sequenceDiagram
    participant A as APEX GenAI Chat Widget
    participant B as ORDS endpoint (/v1/chat/completions)
    participant C as PL/SQL proxy JSON transform
    participant D as Target AI API (Claude / OCI GenAI / Gemini)
    A->>B: OpenAI-style request
    B->>C: Forward request
    C->>D: Transform & call provider
    D-->>C: Provider response
    C-->>B: Convert to OpenAI format
    B-->>A: Chat completion response
```
Key Code
You’ll find the full working package and ORDS handler in my GitHub repo (link below).
https://github.com/cvranjith/apex-claude-proxy
Highlights:
- Native JSON parsing: uses `JSON_OBJECT_T` / `JSON_ARRAY_T` instead of `APEX_JSON` for cleaner, standard parsing.
- `APEX_WEB_SERVICE`: handles outbound HTTPS with your APEX credentials; no `UTL_HTTP` wallet headaches.
- Configurable model & tokens: pass `max_output_tokens`, `temperature`, etc., through your proxy.
Example call in the proxy:

```sql
APEX_WEB_SERVICE.ADD_REQUEST_HEADER('x-api-key', l_api_key);
APEX_WEB_SERVICE.ADD_REQUEST_HEADER('anthropic-version', '2023-06-01');

l_resp_clob := APEX_WEB_SERVICE.MAKE_REST_REQUEST(
    p_url         => 'https://api.anthropic.com/v1/messages',
    p_http_method => 'POST',
    p_body        => l_body.to_clob()
);
```
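Going the other way, the proxy reshapes Anthropic's reply into the `chat.completion` envelope the DA expects. A sketch of that mapping, again in Python for clarity; the helper name `to_openai_response` is mine, and field names follow the two public APIs:

```python
import time

def to_openai_response(anthropic_resp: dict) -> dict:
    """Map an Anthropic /v1/messages response to OpenAI chat.completion shape."""
    # Anthropic returns content as a list of typed blocks; join the text ones
    text = "".join(b.get("text", "") for b in anthropic_resp.get("content", [])
                   if b.get("type") == "text")
    usage = anthropic_resp.get("usage", {})
    in_tok = usage.get("input_tokens", 0)
    out_tok = usage.get("output_tokens", 0)
    return {
        "id": anthropic_resp.get("id", "chatcmpl-proxy"),
        "object": "chat.completion",
        "created": int(time.time()),
        "model": anthropic_resp.get("model", ""),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            # Anthropic's "end_turn" maps to OpenAI's "stop"
            "finish_reason": "stop" if anthropic_resp.get("stop_reason") == "end_turn" else "length",
        }],
        "usage": {
            "prompt_tokens": in_tok,
            "completion_tokens": out_tok,
            "total_tokens": in_tok + out_tok,
        },
    }
```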
Choosing the Right Claude Model
For general-purpose chat + content creation with JSON analysis:
- claude-opus-4-1-20250805 – highest quality, deepest reasoning.
- claude-sonnet-4-20250514 – great balance of quality and speed.
- claude-3-7-sonnet-20250219 – solid hybrid reasoning, lower cost.
ORDS and APEX Setup
For this integration to work with the APEX GenAI chat widget, your ORDS API must expose a POST handler whose URI template ends in `chat/completions`, since the DA appends that path to the base URL you configure.
ORDS Definition Example
```sql
ORDS.DEFINE_TEMPLATE(
    p_module_name => 'claude-proxy',
    p_pattern     => 'chat/completions',
    p_priority    => 0,
    p_etag_type   => 'HASH',
    p_etag_query  => NULL,
    p_comments    => NULL);

ORDS.DEFINE_HANDLER(
    p_module_name   => 'claude-proxy',
    p_pattern       => 'chat/completions',
    p_method        => 'POST',
    p_source_type   => 'plsql/block',
    p_mimes_allowed => NULL,
    p_comments      => NULL,
    p_source        => '
        DECLARE
            l_body CLOB := :body_text;
            l_out  CLOB;
        BEGIN
            l_out := claude_proxy.chat_completions(l_body);
            OWA_UTIL.mime_header(''application/json'', TRUE);
            HTP.prn(l_out);
        END;
    ');
```
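Once the handler is deployed, you can smoke-test it outside APEX by POSTing the same OpenAI-style body the DA would send. A small Python helper for that; the hostname is a placeholder and `build_chat_request` is my own name, not part of the repo:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, user_text: str) -> urllib.request.Request:
    """Build the POST the APEX GenAI DA would send to the proxy's base URL."""
    body = {"model": model, "messages": [{"role": "user", "content": user_text}]}
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/chat/completions",  # DA appends this path
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "https://example.com/ords/xxx/claude-proxy/v1",   # placeholder base URL
    "claude-sonnet-4-20250514",
    "Hello from the smoke test",
)
# urllib.request.urlopen(req) would send it against a live deployment
```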
APEX Generative AI Service Definition
In APEX, go to:
Workspace Utilities → Generative AI Services → Create New Service
Choose “OpenAI” as the service provider.
- URL: set it to your ORDS handler without the `/chat/completions` suffix. Example:
  https://xxxx.adb.ap-singapore-1.oraclecloudapps.com/ords/xxx/claude-proxy/v1
- Additional Attributes: add any attributes your target model requires. For example, many Claude models require `max_tokens` (e.g. `"max_tokens": 1024`).
- AI Model: declare the model ID you want to use, e.g. `claude-sonnet-4-20250514`.
Works with OCI Generative AI Agents Too
Oracle's blog post "Integrating OCI Generative AI Agents with Oracle APEX Apps for RAG-powered Conversational Experience" demonstrates a different approach:
it calls the OCI Generative AI REST API directly and renders messages in a classic report to mimic a chat experience.
That works well, but it's still a custom UI; you build and maintain the conversation rendering logic yourself.
With this proxy method, you can:
- Keep the APEX GenAI Dynamic Action chat widget for a true low-code UI.
- Point it to your ORDS proxy.
- Have the proxy map the OpenAI-style request to the OCI Generative AI API format (with OCI auth, `modelId`, and `input`).
- Map the OCI response back into the OpenAI `chat/completions` shape.
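That mapping can be sketched the same way as the Claude one. A heavily hedged Python sketch: the article only names `modelId` and `input`, so auth, compartment, and serving-mode details are omitted, and you should consult the OCI Generative AI API reference for the full payload schema.

```python
def to_oci_genai(openai_body: dict) -> dict:
    """Hedged sketch of an OpenAI request mapped to an OCI GenAI-style payload.

    Only the `modelId` and `input` fields mentioned in the article are shown;
    the real OCI API also needs auth and compartment/serving details.
    """
    # Use the most recent user turn as the input text
    last_user = next((m["content"] for m in reversed(openai_body.get("messages", []))
                      if m["role"] == "user"), "")
    return {"modelId": openai_body.get("model", ""), "input": last_user}
```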
You get:
- The same RAG-powered intelligence from OCI Generative AI.
- Zero custom UI code.
- Full control over authentication and model switching.
Why This Is Powerful
- No UI rewrites – keep using the low-code chat widget.
- Model agnostic – works for Claude, OCI GenAI, Gemini, Mistral, or any API.
- Full control – you never hand over your API key to a third-party router.
- Central control – one place to add logging, prompt tweaks, or safety filters.
Bonus Idea
You can extend the proxy to:
- Route requests dynamically (OpenAI for quick chats, Claude for deep reports, OCI GenAI for enterprise data).
- Add guardrails (token limits, banned phrases).
- Log all prompts/responses for analytics.
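As a toy illustration of the routing idea, the proxy could pick a provider per request before transforming it. The policy below is entirely illustrative (the function name, thresholds, and provider labels are mine, not from the repo):

```python
def pick_provider(openai_body: dict) -> str:
    """Toy routing policy: Claude models and long prompts go to Anthropic,
    short chat stays on OpenAI. Thresholds are illustrative only."""
    if openai_body.get("model", "").startswith("claude"):
        return "anthropic"
    # Rough proxy for "deep report" requests: total prompt length
    prompt_chars = sum(len(m.get("content", "")) for m in openai_body.get("messages", []))
    return "anthropic" if prompt_chars > 4000 else "openai"
```

The proxy would then dispatch to the matching transform (`to_anthropic`-style for Claude, an OCI mapping for enterprise data, pass-through for OpenAI).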