# Usage
## Target nodes spec

You can choose which node (or nodes) a job runs on by tagging the job and, optionally, specifying how many of the nodes carrying that tag you want the job to run on.

The target node syntax: `[tag-value]:[count]`

Examples:

Target all nodes with a tag:

```json
{
  "name": "job_name",
  "command": "/bin/true",
  "schedule": "@every 2m",
  "tags": {
    "role": "web"
  }
}
```
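To limit how many tagged nodes run the job, append the count to the tag value using the `[tag-value]:[count]` form above. A brief sketch (job name and command are illustrative); here only one node tagged `role=web` runs the job on each execution:

```json
{
  "name": "job_name",
  "command": "/bin/true",
  "schedule": "@every 2m",
  "tags": {
    "role": "web:1"
  }
}
```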
## CRON Expression Format

A cron expression represents a set of times, using 6 space-separated fields.

Field name   | Mandatory? | Allowed values  | Allowed special characters
------------ | ---------- | --------------- | --------------------------
Seconds      | Yes        | 0-59            | * / , -
Minutes      | Yes        | 0-59            | * / , -
Hours        | Yes        | 0-23            | * / , -
Day of month | Yes        | 1-31            | * / , - ?
Month        | Yes        | 1-12 or JAN-DEC | * / , -
Day of week  | Yes        | 0-6 or SUN-SAT  | * / , - ?
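As a brief illustration, the 6-field expression below fires at second 0 of minute 30 of every hour; the job name and command are placeholders:

```json
{
  "name": "job_name",
  "command": "/bin/true",
  "schedule": "0 30 * * * *",
  "tags": {
    "role": "web"
  }
}
```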
## Executors

Executor plugins are the main mechanism of execution in Dkron. They implement different "types" of jobs in the sense that they can perform the most diverse actions on the target nodes. For example, the built-in shell executor runs the indicated command on the target node. New plugins will be added, or you can create your own, to perform different tasks, such as HTTP requests, Docker runs, or anything else you can imagine.
### HTTP executor

The HTTP executor can send a request to an HTTP endpoint.

Configuration params:

- method: Request method in uppercase
- url: Request URL
- headers: JSON string, such as "[\"Content-Type: application/json\"]"
- body: POST body
- timeout: Request timeout, in seconds
- expectCode: Expected response code, such as 200 or 206
- expectBody: Expected response body, supports regexp, such as /success/
- debug: Debug option; logs everything when this option is not empty

A minimal example:

```json
{
  "executor": "http",
  "executor_config": {
    "method": "GET",
    "url": "http://example.com",
    "expectCode": "200"
  }
}
```
### Shell executor

The shell executor runs a system command.

Configuration params:

- shell: Run the command within a shell environment
- command: The command to run
- env: Environment variables, comma-separated
- cwd: Change to this directory before running the command

Example:

```json
{
  "executor": "shell",
  "executor_config": {
    "shell": "true",
    "command": "my_command",
    "env": "ENV_VAR=va1,ANOTHER_ENV_VAR=var2",
    "cwd": "/app"
  }
}
```
## Execution Processors

Processor plugins are called when an execution response has been received. They are passed the resulting execution data and configuration parameters. These plugins can perform a variety of operations with the execution; they are very flexible and configured per job. Examples of operations these plugins can do:

- Execution output storage, forwarding or redirection
- Notification
- Monitoring

For example, processor plugins can be used to redirect the output of a job execution to different targets.
### File processor

The file processor saves the execution output to a single log file in the specified directory.

Configuration parameters:

- log_dir: Path to the location where the log files will be saved
- forward: Forward the output to the next processor

Example:

```json
{
  "name": "job_name",
  "command": "echo 'Hello files'",
  "schedule": "@every 2m",
  "tags": {
    "role": "web"
  },
  "processors": {
    "files": {
      "log_dir": "/var/log/mydir",
      "forward": true
    }
  }
}
```
### Log processor

The log processor writes the execution output to stdout/stderr.

Configuration parameters:

- forward: Forward the output to the next processor

Example:

```json
{
  "name": "job_name",
  "command": "echo 'Hello log'",
  "schedule": "@every 2m",
  "tags": {
    "role": "web"
  },
  "processors": {
    "log": {
      "forward": true
    }
  }
}
```
### Syslog processor

The syslog processor writes the execution output to the system syslog daemon.

Note: only works on Linux systems.

Configuration parameters:

- forward: Forward the output to the next processor

Example:

```json
{
  "name": "job_name",
  "command": "echo 'Hello syslog'",
  "schedule": "@every 2m",
  "tags": {
    "role": "web"
  },
  "processors": {
    "syslog": {
      "forward": true
    }
  }
}
```
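Because each processor accepts a `forward` flag, several processors can be attached to one job. A minimal sketch combining the file and log processors shown above; this assumes the file processor forwards the output so that the log processor also receives it:

```json
{
  "name": "job_name",
  "command": "echo 'Hello processors'",
  "schedule": "@every 2m",
  "tags": {
    "role": "web"
  },
  "processors": {
    "files": {
      "log_dir": "/var/log/mydir",
      "forward": true
    },
    "log": {
      "forward": false
    }
  }
}
```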
## Configure a cluster

Dkron can run in HA mode, avoiding SPOFs. This mode provides better scalability and better reliability for users who want a high level of confidence in the cron jobs they need to run.

To form a cluster, server nodes need to know the addresses of their peers, as in the following example:

```yaml
# dkron.yml
join:
  - 10.19.3.9
  - 10.19.4.64
  - 10.19.7.215
```

### Etcd

For a more detailed guide on clustering with etcd, follow this guide: https://github.
## Concurrency

Jobs can be configured to allow overlapping executions or to forbid them. The concurrency property accepts two options:

- allow (default): Allow concurrent job executions.
- forbid: If the job is already running, don't send the execution; executions are skipped until the next schedule.

Example:

```json
{
  "name": "job1",
  "schedule": "@every 10s",
  "executor": "shell",
  "executor_config": {
    "command": "echo \"Hello from parent\""
  },
  "concurrency": "forbid"
}
```
## Design

This document is a WIP; it's intended to describe the reasons that led to design decisions in Dkron.

### Execution results

Dkron stores the result of each job execution on each node. Every time Dkron executes a job, it assigns the execution an execution group, generating a new UUID, then sends a serf query to the target machines and waits for a response. Each target machine that will run the job responds with an execution object indicating that it started to run the job.
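To make the flow concrete, an execution object reported back by a node could look roughly like the sketch below. The field names are illustrative, based on the executions Dkron exposes through its API, and are not a normative schema:

```json
{
  "job_name": "job1",
  "started_at": "2016-08-30T15:40:00Z",
  "finished_at": null,
  "success": false,
  "output": "",
  "node_name": "dkron-node-1",
  "group": 1472571600000000000
}
```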
## Job chaining

You can set some jobs to run after another job is executed. To set up a job that will be executed after any other given job, just set the parent_job property when saving the new job. The dependent job will be executed after the main job finishes a successful execution. A child job's schedule property will be ignored if present. Take into account that parent jobs must be created before any child job.
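As a brief sketch, assuming a parent job named job1 already exists, a child job would reference it like this:

```json
{
  "name": "child_job",
  "parent_job": "job1",
  "executor": "shell",
  "executor_config": {
    "command": "echo \"Hello from child\""
  }
}
```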
## Job retries

Jobs can be configured to retry in case of failure.

Configuration:

```json
{
  "name": "job1",
  "schedule": "@every 10s",
  "executor": "shell",
  "executor_config": {
    "command": "echo \"Hello from parent\""
  },
  "retries": 5
}
```

If the job fails to run on a node, it will be retried on that node until the retry count reaches the limit.
## Metrics

Dkron has the ability to send metrics to Statsd for dashboards and historical reporting. It sends job processing metrics, as well as Golang runtime and serf metrics.

### Configuration

Add this to your YAML config file:

```yaml
dog_statsd_addr: "localhost:8125"
```

### Metrics sent

- dkron.agent.event_received.query_execution_done
- dkron.agent.event_received.query_run_job
- dkron.memberlist.gossip
- dkron.memberlist.probeNode
- dkron.memberlist.pushPullNode
- dkron.memberlist.tcp.accept
- dkron.memberlist.tcp.connect
- dkron.memberlist.tcp.sent
- dkron.memberlist.udp.received
- dkron.memberlist.udp.sent
- dkron.grpc.call_execution_done
- dkron.grpc.call_get_job
- dkron.grpc.execution_done
- dkron.grpc.get_job
- dkron.runtime.alloc_bytes
- dkron.runtime.free_count
- dkron.runtime.gc_pause_ns
- dkron.runtime.heap_objects
- dkron.runtime.malloc_count
- dkron.runtime.num_goroutines
- dkron.runtime.sys_bytes
- dkron.runtime.total_gc_pause_ns
- dkron.runtime.total_gc_runs
- dkron.serf.coordinate.adjustment_ms
- dkron.serf.msgs.received
- dkron.serf.msgs.sent
- dkron.serf.queries
- dkron.serf.queries.execution_done
- dkron.serf.queries.run_job
- dkron.serf.query_acks
- dkron.serf.query_responses
- dkron.serf.queue.Event
- dkron.serf.queue.Intent
- dkron.serf.queue.Query
## Plugins

Plugins in Dkron allow you to add functionality that integrates with the job execution workflow in Dkron. It's a powerful system that allows you to extend and adapt Dkron to your special needs.

This page documents the basics of how the plugin system in Dkron works, and how to set up a basic development environment for plugin development if you're writing a Dkron plugin.

### How it works

Dkron execution processors are provided via plugins.
## Developing a Plugin

Advanced topic! Plugin development is a highly advanced topic and is not required knowledge for day-to-day usage. If you don't plan on writing any plugins, we recommend skipping this section of the documentation.

Developing a plugin is simple. The only knowledge necessary to write a plugin is basic command-line skills and basic knowledge of the Go programming language.

Note: A common pitfall is not properly setting up a $GOPATH.
## ECS executor (Dkron Pro)

Dkron Pro comes with a native ECS executor out of the box.

### Use with Amazon ECS

To use Dkron to schedule jobs that run in containers, a wrapper ECS script is needed. Install the following snippet on the node that will run the call to ECS.

### Prerequisites

The node that will run the call to ECS needs to have installed:

- AWS CLI
- jq

### Example

```
ecs-run --cluster cron --task-definition cron-taskdef --container-name cron --region us-east-1 --command "rake foo"
```
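Once the wrapper is installed, a Dkron job can invoke it through the shell executor. A minimal sketch, assuming the wrapper is available as ecs-run on the target node's PATH; the job name and tags are illustrative:

```json
{
  "name": "ecs_job",
  "schedule": "@every 2m",
  "tags": {
    "role": "web"
  },
  "executor": "shell",
  "executor_config": {
    "shell": "true",
    "command": "ecs-run --cluster cron --task-definition cron-taskdef --container-name cron --region us-east-1 --command \"rake foo\""
  }
}
```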