Don’t Shoot the Trigger
Pulling the Right Trigger: Crafting Event-Driven Harmony

Some people still treat database triggers and stored procedures like relics from another era — tools that should have disappeared along with floppy disks. They say the database should stay dumb, while all the “smart” work should happen outside in fancy middleware or change-data-capture pipelines. But when you think about it, the database isn’t just a place to store facts — it’s where those facts are born. That makes it the best place to notice and react to change. Ignoring that is like keeping your hand on a hot stove and waiting for someone to remind you it’s burning.
We’ve been building a lightweight replication utility called Saga Replication, a small but powerful internal framework designed to enable event-driven architecture (EDA) for data synchronization across disparate platforms.
Instead of the heavy log mining or byte-for-byte, table-to-table replication of traditional CDC tools, Saga Replication focuses on what really matters: the business event.
- It uses lightweight triggers to publish simple, meaningful messages into Oracle Transactional Event Queues (TEQ).
- TEQ listeners then pick up those messages, create the final payloads, and send them wherever they need to go: Kafka, APIs, or other systems.
- The heavy work happens asynchronously, often on a read replica, so the main database stays fast and clean; the design can even operate in a CQRS pattern, with the replica handling fan-out.
The result is a low-maintenance, configurable, and flexible replication framework that’s reliable enough for production yet simple enough to maintain without an army of DBAs and middleware admins.
But of course, mention “trigger-based replication” in an architecture forum, and you’ll see eyebrows rise — usually accompanied by the familiar phrases:
“Triggers are evil.”
“Triggers kill performance!”
“Never use triggers!”
This post is a reality check on that narrative: a look at how a well-architected trigger-based emitter can simplify your system without an expensive, high-maintenance CDC engine.
Because, really — a Ferrari isn’t required for every road.
Sometimes a reliable hatchback is all you need.
💣 The “Triggers Are Bad” Narrative — Let’s Deconstruct It
Some of the “triggers are bad” arguments do have merit — but they’re not universal truths.
When engineered thoughtfully, triggers can actually be a powerful, flexible mechanism that simplifies complex data pipelines.
| Common Concern | Reality Check |
| --- | --- |
| “Triggers slow down transactions.” | Only if they do heavy work. A minimal DBMS_AQ.ENQUEUE of a few bytes adds microseconds, not milliseconds. |
| “Hidden logic makes systems opaque.” | True if abused. But if used consistently for event publishing, it’s a clear architectural pattern. |
| “Hard to maintain.” | Any poorly documented mechanism is. With disciplined naming, standard templates, and version control, triggers are no harder to maintain than any stored procedure. |
| “Not scalable.” | Depends on the workload. Properly tuned Streams Pool, async dequeue, and batching handle very high volumes. |
| “Deprecated in modern architectures.” | Ironically, EDA brings them back — we just call them event publishers now. |
The key isn’t whether you use triggers or not — it’s how you use them.
A well-architected trigger that does nothing but enqueue a small event to TEQ is predictable, testable, and extremely fast.
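To make that concrete, here is a minimal sketch of such an emitter. Everything in it is illustrative rather than Saga-specific: the customer table, the trigger name, the replication code, and a TEQ called event_teq created with a RAW payload (its setup is sketched later, in the TEQ section).

```sql
-- Minimal event emitter: an AFTER-row trigger whose only job is to enqueue a
-- tiny {replication_code, rowid, dml_type} event into a TEQ (names illustrative).
CREATE OR REPLACE TRIGGER customer_evt_trg
AFTER INSERT OR UPDATE OR DELETE ON customer
FOR EACH ROW
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
  l_evt   JSON_OBJECT_T := JSON_OBJECT_T();
BEGIN
  l_evt.put('replication_code', 'CUST_SYNC');
  l_evt.put('rowid', CASE WHEN DELETING THEN ROWIDTOCHAR(:OLD.ROWID)
                          ELSE ROWIDTOCHAR(:NEW.ROWID) END);
  l_evt.put('dml_type', CASE WHEN INSERTING THEN 'I'
                             WHEN UPDATING  THEN 'U'
                             ELSE 'D' END);
  -- The enqueue joins the triggering transaction: it commits or rolls back with the DML.
  DBMS_AQ.ENQUEUE(queue_name         => 'event_teq',
                  enqueue_options    => l_opts,
                  message_properties => l_props,
                  payload            => UTL_RAW.CAST_TO_RAW(l_evt.to_string),
                  msgid              => l_msgid);
END;
/
```

That is the whole job of the trigger: no joins, no business logic, no network calls, just a few bytes handed to the queue.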
🧩 Why We Chose Triggers + TEQ
Our approach focuses on decoupled, application-aware event emission directly within the database.
Instead of copying rows from one table to another, we capture business events — changes that actually mean something to the application layer.
- The trigger fires as part of the DML (an AFTER-row trigger) and enqueues a minimal event to TEQ, just enough to identify what changed ({replication_code, rowid, dml_type}).
- The enqueue itself is extremely lightweight and part of the same transaction.
- A TEQ listener (running independently) picks up the event, looks up related context — for example, customer details from multiple tables like Customer Master, Demographics, Address, or MIS — and then assembles a complete business payload.
- The listener then sends it to the destination through the right channel — Kafka, REST, or other protocols — all orchestrated through Saga’s declarative, multi-protocol transport design.
This means a “customer update” isn’t just one row cloned across databases. It’s a rich event built from several related tables, validated, and delivered securely through the target system’s APIs — honoring their authentication and data rules.
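On the consuming side, the listener is essentially a dequeue loop. The sketch below is illustrative only: saga_transport.publish and build_customer_payload are hypothetical stand-ins for Saga’s actual transport and payload-assembly routines, and the queue matches the emitter sketch above.

```sql
-- Listener sketch: block on the queue, hydrate the row behind the event,
-- assemble the business payload, and hand it to the transport layer.
DECLARE
  l_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
  l_raw   RAW(2000);
  l_event JSON_OBJECT_T;
BEGIN
  l_opts.wait := DBMS_AQ.FOREVER;   -- block until an event arrives
  LOOP
    DBMS_AQ.DEQUEUE(queue_name         => 'event_teq',
                    dequeue_options    => l_opts,
                    message_properties => l_props,
                    payload            => l_raw,
                    msgid              => l_msgid);
    l_event := JSON_OBJECT_T.parse(UTL_RAW.CAST_TO_VARCHAR2(l_raw));
    -- Hydrate: join Customer Master, Demographics, Address, MIS, etc. by rowid,
    -- then push the assembled payload downstream (hypothetical helpers).
    saga_transport.publish(
      build_customer_payload(l_event.get_string('rowid'),
                             l_event.get_string('dml_type')));
    COMMIT;   -- acknowledge the event only after downstream delivery succeeds
  END LOOP;
END;
/
```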
This pattern achieves:
- Transactional integrity — the enqueue rolls back with the source transaction.
- Application awareness — events are meaningful to business systems, not just database mirrors.
- Decoupling — the source database stays untouched by downstream complexity.
- Simplicity — no log mining, trail files, or brittle CDC topologies to maintain.
And when combined with read replicas, it naturally extends to CQRS-style scaling, where the heavy work happens downstream without burdening the OLTP system.
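The transactional-integrity point is easy to demonstrate with the illustrative trigger from earlier: roll back the DML and the event disappears with it.

```sql
-- Transactional integrity in practice: the event enqueued by the trigger lives
-- and dies with the DML that produced it (illustrative table and values).
UPDATE customer SET email = 'x@example.com' WHERE customer_id = 42;  -- trigger enqueues one event
ROLLBACK;                                                            -- the update AND the event are discarded

-- A subsequent dequeue with dequeue_options.wait = DBMS_AQ.NO_WAIT finds nothing
-- and raises ORA-25228 (timeout or end-of-fetch): no orphaned event survived.
```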
⚙️ Inside TEQ — Oracle’s In-Database Streaming Engine
Transactional Event Queues (TEQ) are not just queues backed by database tables. They are kernel-level, high-performance event streams built directly into the Oracle Database engine.
A few facts that make TEQ unique and strategically important:
- 🧠 In-memory and partitioned: TEQ can store queue tables in memory for ultra-low latency and partition them for parallel processing. Each partition can act as an independent shard, enabling throughput of millions of events per second on large systems.
- ⚙️ Optimizer-aware and fully ACID: Because TEQ runs inside the Oracle kernel, it’s query-optimizer aware, transactionally consistent, and benefits from the same redo/undo architecture as your base tables. No external brokers, no “eventual consistency” trade-offs.
- 🚀 Sharded and horizontally scalable: TEQ supports sharded queue tables for high-volume distributed workloads. This means your event streams scale with your database shards — natively, without additional middleware.
- 🔄 Streaming semantics: It supports publish/subscribe, fan-out, ordered delivery, replay, and subscriber-based acknowledgment, giving you the durability of a queue with the semantics of a stream.
- 🧰 Low maintenance, zero middleware: TEQ runs entirely inside the DB — no separate JVM, ZooKeeper, or brokers to manage. Backup, restore, high availability, and performance tuning all ride on Oracle’s existing infrastructure.
- 🧩 Unified HA and DR with no external replication headaches: When event processing is handled inside the database, disaster recovery and high availability come for free. Otherwise you’d need to replicate not only the database but also your external event broker, its metadata, offsets, and queues — all of which must stay perfectly in sync during failover. With TEQ, events and data share the same transactional and replication model, so your Data Guard or Autonomous DR configuration automatically protects the event stream. No separate pipeline, no parallel DR plan, no “two systems to recover.” One database — one consistent state.
- 💬 Kafka compatibility: Oracle’s Sharded TEQ (introduced in 23c and continuing in 23ai/26ai) exposes Kafka APIs natively, allowing Kafka clients to produce/consume directly from TEQ topics — making it Oracle’s strategic event-streaming platform, competing head-on with Kafka, Pulsar, and RabbitMQ.
In short, TEQ is a first-class citizen in the Oracle ecosystem — a streaming layer that’s ACID, SQL-aware, and managed just like any other database object. It’s not an add-on; it’s in the kernel.
For event-driven replication, this is a dream — low latency, no middleware, no external ops burden.
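Creating one is a few lines of PL/SQL. Here is a sketch of the setup assumed by the earlier examples (Oracle 21c+ syntax; the queue name and the RAW payload choice are illustrative):

```sql
-- Create and start a TEQ for the emitter/listener pair sketched earlier.
BEGIN
  DBMS_AQADM.CREATE_TRANSACTIONAL_EVENT_QUEUE(
    queue_name         => 'event_teq',
    multiple_consumers => FALSE,   -- one listener in this sketch; TRUE enables pub/sub fan-out
    queue_payload_type => 'RAW');  -- matches the RAW payload used by the trigger and listener
  DBMS_AQADM.START_QUEUE(queue_name => 'event_teq');
END;
/
```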
🧠 Scaling Out with CQRS and Read Replicas
The trigger + TEQ design enables in-DB decoupling with built-in durability, retries, and backpressure.
From this point, the replication can scale horizontally:
- Primary (write model): Runs the lightweight trigger. The enqueue adds microseconds of latency at most.
- Replica (read model): Consumes from TEQ, hydrates the row data, transforms it, and sends it downstream (Kafka, REST, object storage, etc.).
- CQRS pattern: Read and write concerns are naturally separated. The heavy replication or fan-out runs off the replica, leaving the OLTP path untouched.
It’s a clean, scalable, low-overhead design — and unlike CDC tools, it’s yours, not a black box.
📊 Real-World Benchmark: Oracle Autonomous Database (Always Free, 1 OCPU)
To see what the “trigger overhead” really looks like, I tested it on an Always Free Oracle Autonomous Database — the most modest setup possible (roughly 1 OCPU).
The trigger simply enqueues a lightweight event payload into TEQ ({replication_code, rowid, dml_type}).
| Scenario | Rows Updated | Elapsed (s) | Redo Size (B) | db block gets | consistent gets |
| --- | --- | --- | --- | --- | --- |
| Without trigger | 33,925 | 0.117 | 3,124,764 | 1,140 | 767 |
| With trigger (TEQ enqueue) | 33,925 | 0.538 | 7,656,816 | 34,510 | 767 |
Difference: 0.421 seconds total for 33,925 rows
→ ≈ 12 microseconds per row
That’s 0.000012 seconds of extra work per DML row — on the smallest possible cloud tier.
Even on Always Free, the per-row cost is practically invisible for the kind of single-row transactions that define event-driven workloads.
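For anyone who wants to reproduce the comparison, the harness need not be fancy. The following is a sketch of the approach rather than the exact script used here, reusing the illustrative trigger name from earlier and a hypothetical status column:

```sql
-- Run the same bulk UPDATE with the emitter trigger disabled and then enabled,
-- and diff elapsed time and session statistics (v$mystat values are cumulative,
-- so subtract the "before" readings from the "after" ones).
ALTER TRIGGER customer_evt_trg DISABLE;
SET TIMING ON                            -- SQL*Plus / SQLcl timing
UPDATE customer SET status = status;     -- ~33,925 rows in the test table
COMMIT;
SELECT sn.name, st.value
  FROM v$mystat st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
 WHERE sn.name IN ('redo size', 'db block gets', 'consistent gets');

ALTER TRIGGER customer_evt_trg ENABLE;
UPDATE customer SET status = status;     -- same statement, now enqueuing one event per row
COMMIT;
-- re-run the v$mystat query and compare the deltas
```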
⚙️ What This Means
- Even on minimal compute, the trigger cost per row is negligible.
- In OLTP or EDA scenarios, where transactions are small and frequent, the added latency is unnoticeable while delivering instant, transactional event capture.
- Only in data-warehouse batch updates (hundreds of thousands or millions of rows per statement) would this overhead accumulate into seconds — and that’s simply not the right use case for triggers.
💡 Roughly 12 µs per row is the “price” of turning every DML into an event — far cheaper than a network call, cache lookup, or log-miner pipeline.
For data-warehouse systems that process bulk updates or ETL batches, triggers will indeed add measurable time — every fired row-level trigger adds redo, context switches, and enqueue work.
But in a typical OLTP system, where each transaction modifies a few rows, the trigger overhead is well under one millisecond total — practically invisible.
In fact, for a single-row insert or update (the real EDA scenario), the overhead vanishes into the noise of normal database activity.
“If you’re inserting millions of rows in one go, triggers will slow you down.
If you’re inserting a few rows per transaction — which is what most systems actually do — they’re practically invisible.”
⚖️ The Balanced View: Triggers vs. CDC
| Aspect | Trigger + TEQ | Log-based CDC (GoldenGate, XStream) |
| --- | --- | --- |
| Capture point | In-line with transaction | Redo log mining (post-commit) |
| Latency | Instant (commit-time) | Sub-second (log parse delay) |
| Complexity | Simple, few moving parts | Complex infrastructure |
| Licensing | Included with DB | Additional license |
| Maintenance | Minimal | Continuous tuning/monitoring |
| Best suited for | OLTP/EDA replication, microservices | High-volume cross-DB replication |
🚀 The Takeaway
There’s nothing “evil” about a trigger when it’s used purposefully.
What’s evil is complexity for its own sake — or using a Ferrari to drive to the corner store.
A well-architected trigger-based emitter, when combined with TEQ, can:
- Deliver clean, event-driven replication with transactional guarantees.
- Run entirely within the Oracle kernel — no middleware to install, patch, or monitor.
- Scale horizontally via partitions, sharding, and in-memory queues.
- Operate in CQRS mode using replicas for downstream work.
- Stay low-cost and low-maintenance, while delivering Kafka-class throughput within the same database.
✨ Final Thought
Triggers aren’t the villains they’re made out to be — they’re just misunderstood.
Used responsibly, they turn your database into a real-time event producer without external dependencies or costly CDC engines.
And with TEQ maturing into Oracle’s strategic event-streaming platform, the message is clear:
You don’t always need a separate cluster of brokers to do event-driven design.
Sometimes, the simplest solution — already running inside your database — is also the smartest one.