Don’t Shoot the Trigger

Pulling the Right Trigger: Crafting Event-Driven Harmony

Some people still treat database triggers and stored procedures like relics from another era — tools that should have disappeared along with floppy disks. They say the database should stay dumb, while all the “smart” work should happen outside in fancy middleware or change-data-capture pipelines. But when you think about it, the database isn’t just a place to store facts — it’s where those facts are born. That makes it the best place to notice and react to change. Ignoring that is like keeping your hand on a hot stove and waiting for someone to remind you it’s burning.

We’ve been building a lightweight replication utility called Saga Replication — an internal framework designed to enable event-driven architecture (EDA) for data synchronization across disparate platforms.

Instead of doing heavy log mining or byte-for-byte, table-to-table replication like traditional CDC tools, Saga Replication focuses on what really matters: the business event.

The result is a low-maintenance, configurable, and flexible replication framework that’s reliable enough for production yet simple enough to maintain without an army of DBAs and middleware admins.

But of course, mention “trigger-based replication” in an architecture forum, and you’ll see eyebrows rise — usually accompanied by the familiar phrases:

“Triggers are evil.”
“Triggers kill performance!”
“Never use triggers!”

This post is a reality check on that narrative: a look at how a well-architected trigger-based emitter can simplify your system without an expensive, high-maintenance CDC engine.
Because, really — a Ferrari isn’t required for every road.
Sometimes a reliable hatchback is all you need.


💣 The “Triggers Are Bad” Narrative — Let’s Deconstruct It

Some of the “triggers are bad” arguments do have merit — but they’re not universal truths.
When engineered thoughtfully, triggers can actually be a powerful, flexible mechanism that simplifies complex data pipelines.

| Common Concern | Reality Check |
| --- | --- |
| “Triggers slow down transactions.” | Only if they do heavy work. A minimal DBMS_AQ.ENQUEUE of a few bytes adds microseconds, not milliseconds. |
| “Hidden logic makes systems opaque.” | True if abused. But if used consistently for event publishing, it’s a clear architectural pattern. |
| “Hard to maintain.” | Any poorly documented mechanism is. With disciplined naming, standard templates, and version control, triggers are no harder to maintain than any stored procedure. |
| “Not scalable.” | Depends on the workload. A properly tuned Streams pool, async dequeue, and batching handle very high volumes. |
| “Deprecated in modern architectures.” | Ironically, EDA brings them back — we just call them event publishers now. |

The key isn’t whether you use triggers or not — it’s how you use them.
A well-architected trigger that does nothing but enqueue a small event to TEQ is predictable, testable, and extremely fast.
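
To make that concrete, here is a minimal sketch of such an emitter. The table, queue name (CUST_EVT_Q), and payload type (SAGA_EVT_T) are hypothetical stand-ins, not Saga Replication’s actual objects; the DBMS_AQ.ENQUEUE call itself is the standard AQ/TEQ API.

```sql
-- Hypothetical payload type: just enough to identify what changed.
CREATE OR REPLACE TYPE saga_evt_t AS OBJECT (
  replication_code VARCHAR2(30),
  row_id           VARCHAR2(18),
  dml_type         VARCHAR2(1)
);
/

-- The emitter: no business logic, no lookups, only a tiny enqueue.
CREATE OR REPLACE TRIGGER customer_evt_trg
AFTER INSERT OR UPDATE OR DELETE ON customer
FOR EACH ROW
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;    -- defaults: persistent, visible on commit
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'CUST_EVT_Q',  -- a TEQ created beforehand via DBMS_AQADM
    enqueue_options    => l_opts,
    message_properties => l_props,
    payload            => saga_evt_t(
                            'CUST_MASTER',
                            ROWIDTOCHAR(CASE WHEN DELETING THEN :OLD.ROWID
                                             ELSE :NEW.ROWID END),
                            CASE WHEN INSERTING THEN 'I'
                                 WHEN UPDATING  THEN 'U'
                                 ELSE 'D' END),
    msgid              => l_msgid);
END;
/
```

The whole body is one enqueue of three small fields; there is nothing hidden here to surprise the next maintainer.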


🧩 Why We Chose Triggers + TEQ

Our approach focuses on decoupled, application-aware event emission directly within the database. Instead of copying rows from one table to another, we capture business events — changes that actually mean something to the application layer.

  1. The trigger fires with the DML and enqueues a minimal event to TEQ — just enough to identify what changed ({replication_code, rowid, dml_type}).
  2. The enqueue itself is extremely lightweight and part of the same transaction, so the event becomes visible to consumers only after the commit.
  3. A TEQ listener (running independently) picks up the event, looks up related context — for example, customer details from multiple tables like Customer Master, Demographics, Address, or MIS — and then assembles a complete business payload.
  4. The listener then sends it to the destination through the right channel — Kafka, REST, or other protocols — all orchestrated through Saga’s declarative, multi-protocol transport design.

This means a “customer update” isn’t just one row cloned across databases. It’s a rich event built from several related tables, validated, and delivered securely through the target system’s APIs — honoring their authentication and data rules.

This pattern achieves:

- Decoupled capture and delivery: the trigger and the listener share nothing but the queue.
- Business-level events: payloads describe what changed in application terms, not raw row images.
- Built-in durability: TEQ persists every event transactionally, with retries and backpressure included.

And when combined with read replicas, it naturally extends to CQRS-style scaling, where the heavy work happens downstream without burdening the OLTP system.


⚙️ Inside TEQ — Oracle’s In-Database Streaming Engine

Transactional Event Queues (TEQ) are not just queues backed by database tables. They are kernel-level, high-performance event streams built directly into the Oracle Database engine.

A few facts that make TEQ unique and strategically important:

- Enqueue and dequeue are ACID operations that participate in the same transaction as your DML.
- Queues are created, secured, and monitored with the same tools as tables: DDL, dictionary views, standard database monitoring.
- Recent releases even speak Kafka: TEQ topics can be produced to and consumed from with Kafka’s Java client libraries.
- Sharded, partitioned queues scale throughput inside the database without an external broker cluster.

In short, TEQ is a first-class citizen in the Oracle ecosystem — a streaming layer that’s ACID, SQL-aware, and managed just like any other database object. It’s not an add-on; it’s in the kernel.
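
Creating one looks like ordinary PL/SQL administration. A sketch, assuming Oracle 21c or later (where DBMS_AQADM.CREATE_TRANSACTIONAL_EVENT_QUEUE is available; 19c exposes the same capability as sharded queues) and reusing the hypothetical names from earlier:

```sql
BEGIN
  -- Create and start a TEQ with the hypothetical object payload type.
  DBMS_AQADM.CREATE_TRANSACTIONAL_EVENT_QUEUE(
    queue_name         => 'CUST_EVT_Q',
    queue_payload_type => 'SAGA_EVT_T');
  DBMS_AQADM.START_QUEUE(queue_name => 'CUST_EVT_Q');
END;
/
```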

For event-driven replication, this is a dream — low latency, no middleware, no external ops burden.


🧠 Scaling Out with CQRS and Read Replicas

The trigger + TEQ design enables in-DB decoupling with built-in durability, retries, and backpressure.
From this point, the replication can scale horizontally:

- Multiple TEQ listeners can dequeue in parallel, each handling a slice of the event stream.
- Context lookups and payload assembly can run against read replicas, keeping the OLTP primary untouched.
- Delivery fans out through whatever transport fits each target: Kafka, REST, or other protocols.

It’s a clean, scalable, low-overhead design — and unlike CDC tools, it’s yours, not a black box.


📊 Real-World Benchmark: Oracle Autonomous Database (Always Free, 1 OCPU)

To see what the “trigger overhead” really looks like, I tested it on an Always Free Oracle Autonomous Database — the most modest setup possible (roughly 1 OCPU).
The trigger simply enqueues a lightweight event payload into TEQ ({replication_code, rowid, dml_type}).
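
The exact test harness isn’t shown here, but numbers like these can be gathered by snapshotting session statistics around the tested UPDATE; a sketch using the standard v$mystat/v$statname views:

```sql
-- Run before and after the tested UPDATE; the deltas are the per-statement costs.
-- (In SQL*Plus, SET TIMING ON supplies the elapsed-time figure.)
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('redo size', 'db block gets', 'consistent gets');
```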

| Scenario | Rows Updated | Elapsed (s) | Redo Size (B) | db block gets | consistent gets |
| --- | --- | --- | --- | --- | --- |
| Without trigger | 33,925 | 0.117 | 3,124,764 | 1,140 | 767 |
| With trigger (TEQ enqueue) | 33,925 | 0.538 | 7,656,816 | 34,510 | 767 |

Difference: 0.421 seconds total for 33,925 rows
→ ≈ 12 microseconds per row

That’s 0.000012 seconds of extra work per DML row — on the smallest possible cloud tier.

Even on Always Free, the per-row cost is practically invisible for the kind of single-row transactions that define event-driven workloads.


⚙️ What This Means

💡 Roughly 12 µs per row is the “price” of turning every DML into an event — far cheaper than a network call, cache lookup, or log-miner pipeline.


🔍 When “Performance Overhead” Really Matters (and When It Doesn’t)

For data-warehouse systems that process bulk updates or ETL batches, triggers will indeed add measurable time — every row-level firing adds redo, context switches, and enqueue work.

But in a typical OLTP system, where each transaction modifies a few rows, the trigger overhead is well under one millisecond total — practically invisible.

In fact, for a single-row insert or update (the real EDA scenario), the overhead vanishes into the noise of normal database activity.

“If you’re inserting millions of rows in one go, triggers will slow you down.
If you’re inserting a few rows per transaction — which is what most systems actually do — they’re practically invisible.”
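
A practical corollary for the bulk case (a common mitigation, not something Saga Replication prescribes): disable the emitter trigger for the batch window, then reconcile afterwards. Using the hypothetical trigger from earlier:

```sql
ALTER TRIGGER customer_evt_trg DISABLE;
-- ... run the bulk load / ETL batch ...
ALTER TRIGGER customer_evt_trg ENABLE;
-- then emit a reconciliation event (or re-sync) for the loaded range
```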


⚖️ The Balanced View: Triggers vs. CDC

| Aspect | Trigger + TEQ | Log-based CDC (GoldenGate, XStream) |
| --- | --- | --- |
| Capture point | In-line with transaction | Redo log mining (post-commit) |
| Latency | Instant (commit-time) | Sub-second (log parse delay) |
| Complexity | Simple, few moving parts | Complex infrastructure |
| Licensing | Included with DB | Additional license |
| Maintenance | Minimal | Continuous tuning/monitoring |
| Best suited for | OLTP/EDA replication, microservices | High-volume cross-DB replication |

🚀 The Takeaway

There’s nothing “evil” about a trigger when it’s used purposefully.
What’s evil is complexity for its own sake — or using a Ferrari to drive to the corner store.

A well-architected trigger-based emitter, when combined with TEQ, can:

- capture business events the moment they are born, inside the transaction that creates them;
- deliver them durably, with retries and backpressure, without any external middleware;
- scale out through parallel listeners and read replicas;
- do all of this at a per-row cost measured in microseconds.


✨ Final Thought

Triggers aren’t the villains they’re made out to be — they’re just misunderstood.
Used responsibly, they turn your database into a real-time event producer without external dependencies or costly CDC engines.

And with TEQ maturing into Oracle’s strategic event-streaming platform, the message is clear:
You don’t always need a separate cluster of brokers to do event-driven design.
Sometimes, the simplest solution — already running inside your database — is also the smartest one.