Cron Workers

Prefer the terminal? See CLI commands → crons.

A cron worker is a scheduled trigger that lives inside your project. You define a name, a cron expression with an IANA timezone, a destination, and an optional JSON payload. Guara Cloud fires the trigger on schedule, delivers the payload to the destination, and persists every run with status, attempt count, timing, and a structured failure reason if anything goes wrong.

You don’t need to deploy a long-lived “cron service” or pull dependencies into your app. Cron workers are a first-class resource — like a service or a database — managed entirely from the dashboard.

When to use a cron worker

Reach for a cron worker whenever you need something to happen on a schedule:

  • Hourly cleanup HTTP call to your API (POST /jobs/cleanup-stale-uploads).
  • Nightly digest that publishes a NATS message your worker consumes to send emails.
  • Cache warm-up that publishes to a Redis or Valkey channel every 15 minutes.
  • Data pipeline kick-off that pushes an event to RabbitMQ to trigger a job.
  • Database notification via Postgres LISTEN/NOTIFY to fan out work to listeners.

Key concepts

  • Schedule: a cron expression plus an IANA timezone. The minimum interval between fires is 60 seconds.
  • Destination: where the trigger lands. Six destination types are supported: HTTP, NATS, Redis, Valkey, RabbitMQ, and Postgres LISTEN/NOTIFY. Once chosen, the destination type is immutable; you can still edit the destination’s configuration (path, subject, channel, etc.).
  • Payload: an optional JSON object (max 64 KiB) sent verbatim to the destination on every fire.
  • Status: active, suspended, or deleted. Suspended workers don’t fire but are preserved with their full configuration and history.
  • Run: every fire produces a run record with status (running, success, failed), triggered_by (schedule or manual), attempt count, scheduled-for time, start/finish, and a structured failure_reason when applicable.
  • Manual trigger: you can fire a worker on demand from the dashboard, outside its schedule, for testing or one-off backfills.
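As a sketch of the payload constraint above, a client-side pre-check might look like the following. This is an illustrative helper, not part of any Guara Cloud SDK; the platform performs the authoritative validation on its side.

```python
import json

MAX_PAYLOAD_BYTES = 64 * 1024  # documented 64 KiB cap on the optional JSON payload

def validate_payload(payload: dict) -> bytes:
    """Serialize a cron-worker payload and enforce the documented size cap.

    Hypothetical helper for catching oversized payloads before submitting
    a worker definition to the dashboard or CLI.
    """
    encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    if len(encoded) > MAX_PAYLOAD_BYTES:
        raise ValueError(
            f"payload is {len(encoded)} bytes; limit is {MAX_PAYLOAD_BYTES}"
        )
    return encoded
```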

Per-project quota

The number of cron workers you can have in a single project depends on your plan:

Resource                  Hobby    Pro      Business  Enterprise
Cron workers per project  1        10       10        Unlimited
Run history retention     30 days  30 days  30 days   30 days

When you reach the per-project quota, the dashboard surfaces an upgrade prompt instead of a generic error. Existing workers keep firing normally — the limit only blocks new creation.

What happens during a run

For each fire, the orchestrator opens a run record, resolves the destination’s connection details (and any required credentials from the catalog), sends the payload, and waits up to your configured timeout (5–300 s, default 30 s) for the destination to acknowledge. If the destination errors out or times out, the run is retried with exponential backoff up to your configured max retries (1–5, default 3). Permanent errors (HTTP 4xx other than 408/429, validation rejections) short-circuit the retry loop. Network errors, 5xx, and timeouts are retried.

Every run carries a W3C traceparent header so you can follow the trigger end-to-end through your services in the Traces view.

Reliability and security guarantees

  • Single-fire per slot: the orchestrator is leader-elected, so a given cron slot fires exactly once even during deploys or pod restarts.
  • Concurrency cap: each worker is capped on simultaneous in-flight runs to prevent thundering herds when a destination is slow.
  • SSRF protection on HTTP: outbound HTTP destinations cannot reach loopback, RFC 1918, link-local, IPv4-mapped IPv6, or carrier-grade NAT addresses. Errors are returned as ssrf_blocked.
  • Reserved headers blocked: custom headers cannot override Host, Authorization, Cookie, Connection, Upgrade, Transfer-Encoding, Content-Length, Traceparent, or Tracestate.
  • Audit logging: create, update, suspend, resume, delete, and manual trigger are all logged in your Audit Logs.
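The SSRF rules above can be approximated with the standard `ipaddress` module. This is a sketch of the documented address classes, not the platform's actual implementation (which may also resolve hostnames and re-check at connect time):

```python
import ipaddress

# 100.64.0.0/10 is the carrier-grade NAT range (RFC 6598)
CGNAT = ipaddress.ip_network("100.64.0.0/10")

def is_ssrf_blocked(addr: str) -> bool:
    """Return True if an HTTP destination IP would be rejected as ssrf_blocked.

    Approximates the documented rules: loopback, RFC 1918 private,
    link-local, IPv4-mapped IPv6, and carrier-grade NAT addresses.
    """
    ip = ipaddress.ip_address(addr)
    # Unwrap IPv4-mapped IPv6 (::ffff:a.b.c.d) and check the inner IPv4 address
    mapped = ip.ipv4_mapped if isinstance(ip, ipaddress.IPv6Address) else None
    if mapped is not None:
        ip = mapped
    return (
        ip.is_loopback
        or ip.is_private
        or ip.is_link_local
        or (ip.version == 4 and ip in CGNAT)
    )
```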

Where to go next