One-Line Summary: Eventarc is GCP's event routing service — it lets cloud events (GCS object uploads, Pub/Sub messages, BigQuery job completions) directly trigger Cloud Run, Cloud Functions, or Workflows without a cron in the middle.
Prerequisites: Lesson 02-ingestion-patterns/02-event-streams-with-pub-sub.md.
What's the Concept?
Composer DAGs and Dataform schedules run on time-based cron. Eventarc runs on state changes: "a file landed in this bucket," "this BigQuery query finished," "a new Pub/Sub message arrived on this topic." For pipeline steps that should fire in response to upstream events — not on a clock — Eventarc is the right plumbing.
The mental model: there's a stream of every cloud event happening in your project (GCS writes, BQ job completions, etc.). Eventarc is the filtered subscription that says "when this kind of event happens, call that endpoint." The endpoint is usually Cloud Run, Cloud Functions, or a Cloud Workflows execution.
How It Works
Common patterns:
1. New file in bronze triggers a silver-layer transform
```
GCS bucket "myco-lake-bronze"        Eventarc trigger        Cloud Run
  /source=acme/entity=orders/       ────────────────▶       /transform-orders
    orders-2026-05-13.csv                                        │
                                                                 ▼
                                                    UPDATE silver.orders;
                                                    INSERT INTO _pipeline_log ...
```

Eventarc filters by event type (google.cloud.storage.object.v1.finalized) and resource (the specific bucket). The Cloud Run service receives an HTTP request with the event payload — bucket name, object path, size, timestamps — and does its work.
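What the service actually does with that HTTP request can be sketched as a small parser. This is a minimal illustration, not the lesson's actual handler: the routing rule and the omitted MERGE are hypothetical, and the web-framework wiring is stripped down to a plain function on the request body.

```python
import json

def handle_gcs_event(body: bytes) -> str:
    """Parse the payload Eventarc POSTs for a storage object.v1.finalized event.

    The JSON body carries the object's metadata: bucket, name, size, timestamps.
    """
    event = json.loads(body)
    bucket = event["bucket"]
    name = event["name"]  # e.g. "source=acme/entity=orders/orders-2026-05-13.csv"
    # Route on the path: only orders files should reach this transform.
    if "/entity=orders/" not in f"/{name}":
        return "ignored"
    # ... run the silver-layer MERGE here (omitted) ...
    return f"transformed gs://{bucket}/{name}"

# Simulated Eventarc delivery:
payload = json.dumps({
    "bucket": "myco-lake-bronze",
    "name": "source=acme/entity=orders/orders-2026-05-13.csv",
    "size": "10240",
}).encode()
print(handle_gcs_event(payload))
```

Returning a 2xx from the real handler is what tells Eventarc the event was consumed; anything else triggers a retry.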
2. BigQuery job completion triggers a downstream refresh
```
silver.orders MERGE finishes         Eventarc trigger        Cloud Workflows
  (audit log event)                 ────────────────▶        refresh_gold_billing
                                                                  │
                                                                  ▼
                                          BigQuery: rebuild gold.billing_agent_context
                                          BigQuery: re-embed changed rows
                                          (chain of REST calls)
```

This kind of chain — silver triggers gold, gold triggers embedding refresh — is graphable in Composer too. Eventarc-only chains are simpler and cheaper for small projects.
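The Workflows destination is, at its core, an ordered list of REST calls where each step waits for the previous one. A Python sketch of the same sequencing, with the step names and SQL entirely hypothetical and a stub standing in for the BigQuery jobs API:

```python
def refresh_gold_billing(run_query):
    """Run the gold-refresh chain in order, mirroring the Workflows steps.

    `run_query` stands in for a BigQuery REST call (jobs.query).
    """
    steps = [
        ("rebuild_gold", "CREATE OR REPLACE TABLE gold.billing_agent_context AS SELECT ..."),
        ("re_embed", "CALL embeddings.refresh_changed_rows()"),  # hypothetical routine
    ]
    results = []
    for name, sql in steps:
        results.append((name, run_query(sql)))  # each step blocks until the prior one returns
    return results

# Stub executor for illustration: records the SQL instead of calling BigQuery.
executed = []
def fake_run_query(sql):
    executed.append(sql)
    return "DONE"

refresh_gold_billing(fake_run_query)
print(len(executed))  # prints 2
```

The value of Workflows over a bare Cloud Run handler here is that the sequencing, retries, and per-step state live in the workflow definition rather than in your code.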
3. Pub/Sub message triggers a worker
```
topic "manifest.new_file"            Eventarc trigger        Cloud Function
  (manifest published by            ────────────────▶        /handle-manifest
   ingester)                                                      │
                                                                  ▼
                                                 validate, kick off Dataflow job
```

This is the lightest-weight option. Manifest events arrive sparsely; the function spins up only when needed.
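For Pub/Sub sources, Eventarc wraps the publisher's bytes in a base64-encoded `message.data` field, so the handler's first job is decode-and-validate. A minimal sketch, with the manifest fields chosen for illustration (your ingester's schema will differ):

```python
import base64
import json

def handle_manifest(event: dict) -> dict:
    """Handle an Eventarc-delivered Pub/Sub message carrying a file manifest."""
    data = base64.b64decode(event["message"]["data"])
    manifest = json.loads(data)
    # Minimal validation before kicking off the (omitted) Dataflow job.
    for field in ("source", "entity", "uri"):
        if field not in manifest:
            raise ValueError(f"manifest missing {field!r}")
    return manifest

# Simulated delivery of a manifest published by the ingester:
msg = {"source": "acme", "entity": "orders", "uri": "gs://myco-lake-bronze/some/object.csv"}
event = {"message": {"data": base64.b64encode(json.dumps(msg).encode()).decode()}}
print(handle_manifest(event)["entity"])  # prints orders
```

Raising on a malformed manifest is deliberate: a non-2xx response makes Eventarc retry, and persistent failures land in the dead-letter topic rather than vanishing.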
A minimal Eventarc trigger definition (via Terraform or gcloud):
```
gcloud eventarc triggers create silver-orders-trigger \
  --location=us-central1 \
  --service-account=eventarc-sa@myco-prod.iam.gserviceaccount.com \
  --destination-run-service=transform-orders \
  --destination-run-region=us-central1 \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=myco-lake-bronze" \
  --event-filters-path-pattern="name=source=acme/entity=orders/**"
```

The path pattern filters down to "only acme orders files," so this trigger doesn't fire for every object write in the bucket.
Why It Matters
- Reacts with low latency. No polling interval; the pipeline starts within seconds of the event happening, not at the next scheduled tick.
- Removes "is it ready yet?" coordination. Downstream steps don't need to know when upstream finishes — they're notified.
- Cheap for low-volume work. Cloud Run / Functions cost zero when idle. A pipeline that runs occasionally costs cents per month with Eventarc; the same on Composer would cost hundreds.
- Composer can stay focused. Use Composer for the heavy scheduled DAGs; use Eventarc for the reactive glue between them.
Key Technical Details
- Eventarc supports a wide event source list: GCS, Pub/Sub, BigQuery audit logs, Firestore, Cloud Build, and any custom event published via Pub/Sub.
- Delivery is at-least-once. The destination must be idempotent (same lesson as ingestion in Module 02).
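The standard idempotency guard is to dedupe on the event's unique ID before doing any side effects. A minimal in-memory sketch to make the idea concrete; production code would persist the seen-IDs in Firestore or a BigQuery table, or use MERGE semantics so replays are harmless:

```python
# At-least-once delivery means the same event can arrive twice.
processed_ids = set()
rows_written = []

def process_once(event_id: str, row: dict) -> bool:
    """Apply the side effect only on the first delivery of a given event."""
    if event_id in processed_ids:
        return False             # duplicate delivery: skip the side effect
    processed_ids.add(event_id)
    rows_written.append(row)     # the side effect (e.g. an INSERT)
    return True

process_once("evt-123", {"order_id": 1})
process_once("evt-123", {"order_id": 1})   # redelivery of the same event
print(len(rows_written))  # prints 1
```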
- Retries up to 7 days with exponential backoff. Truly poisoned events end up in a configurable dead-letter topic.
- Avoid putting heavy work directly in the Cloud Run / Function handler — they have execution timeouts (60 min for Run, 9 min for Functions Gen 2). For longer work, the handler should kick off a Dataflow job or Composer DAG and return immediately.
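The "acknowledge fast, work later" shape can be sketched as a handler that records the job and returns immediately. Here a worker thread stands in for "start a Dataflow job / trigger a Composer DAG"; the status code and event shape are illustrative:

```python
import queue
import threading
import time

work_queue: "queue.Queue[dict]" = queue.Queue()

def handler(event: dict) -> int:
    """Return well inside the platform timeout; hand the long work elsewhere."""
    work_queue.put(event)   # O(1): just record what needs doing
    return 204              # a 2xx ack stops Eventarc from retrying

def worker():
    event = work_queue.get()
    time.sleep(0.01)        # stand-in for the long-running job
    work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
status = handler({"object": "orders-2026-05-13.csv"})
work_queue.join()           # the heavy work finishes after the handler returned
print(status)  # prints 204
```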
Common Misconceptions
"Eventarc replaces Composer." It's complementary. Composer is the scheduler; Eventarc is the router. Many pipelines use both — Composer for the daily DAG, Eventarc for the "respond to upstream change" hooks.
"Triggering on every GCS write is fine." Watch your filters. A pattern of bucket=myco-lake-bronze with no path filter fires on every file write — that includes Dataform's temporary tables and BigQuery exports. Always include a path pattern.
"Eventarc is real-time." It's near-real-time. Median latency from event to handler invocation is around 1–3 seconds; high-volume scenarios can occasionally spike to 10+ seconds.
Connections to Other Concepts
- Course 02-ingestion-patterns/04-files-and-bulk-loads-into-gcs.md — File-upload triggers as the canonical Eventarc use case.
- Course 01-orchestrating-with-cloud-composer.md — When to use one vs. the other.
- Course 07-operating-the-system/01-observability-and-data-quality-monitoring.md — Eventarc trigger metrics in Cloud Monitoring.
Further Reading
- Google Cloud, "Eventarc overview" docs.
- "Cloud Workflows" docs — Often the right destination for multi-step Eventarc-triggered logic.
- "CloudEvents specification" — Open spec that Eventarc implements; useful for cross-cloud thinking.