Task-scheduler for Java that was inspired by the need for a clustered `java.util.concurrent.ScheduledExecutorService` simpler than Quartz.
As such, it is also appreciated by users (cbarbosa2, rafaelhofmann, BukhariH):

> Your lib rocks! I'm so glad I got rid of Quartz and replaced it by yours which is way easier to handle!
See also why not Quartz?
- Cluster-friendly. Guarantees execution by single scheduler instance.
- Persistent tasks. Requires a single database-table for persistence.
- Embeddable. Built to be embedded in existing applications.
- High throughput. Tested to handle 2k - 10k executions / second. Link.
- Simple.
- Minimal dependencies. (slf4j)
- Getting started
- Who uses db-scheduler?
- Examples
- Configuration
- Third-party extensions
- Spring Boot usage
- Interacting with scheduled executions using the SchedulerClient
- How it works
- Performance
- Versions / upgrading
- FAQ
- Add maven dependency:

  ```xml
  <dependency>
      <groupId>com.github.kagkarlsson</groupId>
      <artifactId>db-scheduler</artifactId>
      <version>14.0.3</version>
  </dependency>
  ```

- Create the `scheduled_tasks` table in your database-schema. See the table definition for postgresql, oracle, mssql or mysql.

- Instantiate and start the scheduler, which will then start any defined recurring tasks.
```java
RecurringTask<Void> hourlyTask = Tasks.recurring("my-hourly-task", FixedDelay.ofHours(1))
        .execute((inst, ctx) -> {
            System.out.println("Executed!");
        });

final Scheduler scheduler = Scheduler
        .create(dataSource)
        .startTasks(hourlyTask)
        .threads(5)
        .build();

// hourlyTask is automatically scheduled on startup if not already started (i.e. exists in the db)
scheduler.start();
```
For more examples, continue reading. For details on the inner workings, see How it works. If you have a Spring Boot application, have a look at Spring Boot Usage.
List of organizations known to be running db-scheduler in production:
Company | Description |
---|---|
Digipost | Provider of digital mailboxes in Norway |
Vy Group | One of the largest transport groups in the Nordic countries. |
Wise | A cheap, fast way to send money abroad. |
Becker Professional Education | |
Monitoria | Website monitoring service. |
Loadster | Load testing for web applications. |
Statens vegvesen | The Norwegian Public Roads Administration |
Lightyear | A simple and approachable way to invest your money globally. |
NAV | The Norwegian Labour and Welfare Administration |
ModernLoop | Scale with your company’s hiring needs by using ModernLoop to increase efficiency in interview scheduling, communication, and coordination. |
Diffia | Norwegian eHealth company |
Swan | Swan helps developers to embed banking services easily into their product. |
TOMRA | TOMRA is a Norwegian multinational company that designs and manufactures reverse vending machines for recycling. |
Feel free to open a PR to add your organization to the list.
See also runnable examples.
Define a recurring task and schedule the task's first execution on start-up using the `startTasks` builder-method. Upon completion, the task will be re-scheduled according to the defined schedule (see pre-defined schedule-types).
```java
RecurringTask<Void> hourlyTask = Tasks.recurring("my-hourly-task", FixedDelay.ofHours(1))
        .execute((inst, ctx) -> {
            System.out.println("Executed!");
        });

final Scheduler scheduler = Scheduler
        .create(dataSource)
        .startTasks(hourlyTask)
        .registerShutdownHook()
        .build();

// hourlyTask is automatically scheduled on startup if not already started (i.e. exists in the db)
scheduler.start();
```
For recurring tasks with multiple instances and schedules, see example RecurringTaskWithPersistentScheduleMain.java.
An instance of a one-time task has a single execution-time some time in the future (i.e. non-recurring). The instance-id must be unique within this task, and may be used to encode some metadata (e.g. an id). For more complex state, custom serializable Java objects are supported (as used in the example).
Define a one-time task and start the scheduler:
```java
OneTimeTask<MyTaskData> myAdhocTask = Tasks.oneTime("my-typed-adhoc-task", MyTaskData.class)
        .execute((inst, ctx) -> {
            System.out.println("Executed! Custom data, Id: " + inst.getData().id);
        });

final Scheduler scheduler = Scheduler
        .create(dataSource, myAdhocTask)
        .registerShutdownHook()
        .build();

scheduler.start();
```
... and then at some point (at runtime), an execution is scheduled using the `SchedulerClient`:
```java
// Schedule the task for execution a certain time in the future and optionally provide custom data for the execution
scheduler.schedule(myAdhocTask.instance("1045", new MyTaskData(1001L)), Instant.now().plusSeconds(5));
```
Example | Description |
---|---|
EnableImmediateExecutionMain.java | When scheduling executions to run now() or earlier, the local Scheduler will be hinted about this and will "wake up" to check for new executions earlier than it normally would (as configured by `pollingInterval`). |
MaxRetriesMain.java | How to set a limit on the number of retries an execution can have. |
ExponentialBackoffMain.java | How to use exponential backoff as retry strategy instead of fixed delay as is default. |
ExponentialBackoffWithMaxRetriesMain.java | How to use exponential backoff as retry strategy and a hard limit on the maximum number of retries. |
TrackingProgressRecurringTaskMain.java | Recurring jobs may store task_data as a way of persisting state across executions. This example shows how. |
SpawningOtherTasksMain.java | Demonstrates one task scheduling instances of another task using the `executionContext.getSchedulerClient()` (see the sketch after this table). |
SchedulerClientMain.java | Demonstrates some of the `SchedulerClient`'s capabilities: scheduling, fetching scheduled executions, etc. |
RecurringTaskWithPersistentScheduleMain.java | Multi-instance recurring jobs where the Schedule is stored as part of the `task_data`. Suitable, for example, for multi-tenant applications where each tenant should have a recurring task. |
StatefulRecurringTaskWithPersistentScheduleMain.java | |
JsonSerializerMain.java | Overrides serialization of task_data from Java-serialization (default) to JSON. |
JobChainingUsingTaskDataMain.java | Job chaining, i.e. "when this instance is done executing, schedule another task". |
JobChainingUsingSeparateTasksMain.java | Job chaining, as above. |
InterceptorMain.java | Using an ExecutionInterceptor to inject logic before and after execution for all ExecutionHandlers. |
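The spawning and chaining examples above share a basic pattern: the execution handler uses the `SchedulerClient` available from the `ExecutionContext` to schedule instances of another task. A minimal sketch (task names and data are illustrative):

```java
OneTimeTask<Long> itemProcessor = Tasks.oneTime("item-processor", Long.class)
        .execute((inst, ctx) -> {
            System.out.println("Processing item " + inst.getData());
        });

RecurringTask<Void> spawner = Tasks.recurring("spawner", FixedDelay.ofHours(1))
        .execute((inst, ctx) -> {
            // Schedule a one-time execution of the other task from within this execution
            ctx.getSchedulerClient().schedule(
                    itemProcessor.instance("item-42", 42L),
                    Instant.now());
        });
```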
Example | Description |
---|---|
BasicExamples | A basic one-time task and recurring task |
TransactionallyStagedJob | Example of transactionally staging a job, i.e. making sure the background job runs iff the transaction commits (along with other db-modifications). |
LongRunningJob | Long-running jobs need to survive application restarts and avoid restarting from the beginning. This example demonstrates how to persist progress on shutdown, and additionally a technique for limiting the job to run nightly. |
RecurringStateTracking | A recurring task with state that can be modified after each run. |
ParallellJobSpawner | Demonstrates how to use a recurring job to spawn one-time jobs, e.g. for parallelization. |
JobChaining | A one-time job with multiple steps. The next step is scheduled after the previous one completes. |
MultiInstanceRecurring | Demonstrates how to achieve multiple recurring jobs of the same type, but potentially differing schedules and data. |
The scheduler is created using the `Scheduler.create(...)` builder. The builder has sensible defaults, but the following options are configurable.
⚙️ `.threads(int)`
Number of threads. Default `10`.

⚙️ `.pollingInterval(Duration)`
How often the scheduler checks the database for due executions. Default `10s`.
⚙️ `.alwaysPersistTimestampInUTC()`
The scheduler assumes that the columns used for persisting timestamps store `Instant`s, not `LocalDateTime`s,
i.e. that the timestamp is somehow tied to a zone. However, some databases have limited support for such types
(no zone information) or other quirks, making "always store in UTC" a better alternative.
For such cases, use this setting to always store `Instant`s in UTC.
The PostgreSQL and Oracle schemas are tested to preserve zone-information. The MySQL and MariaDB schemas
do not, and should use this setting.
NB: For backwards compatibility, the default behavior for "unknown" databases is to assume the database
preserves the time zone. For "known" databases, see the class `AutodetectJdbcCustomization`.
⚙️ `.enableImmediateExecution()`
If this is enabled, the scheduler will attempt to hint to the local `Scheduler` that there are executions to run
whenever they are scheduled to run `now()` or at a time in the past. NB: If the call to `schedule(..)`/`reschedule(..)`
occurs from within a transaction, the scheduler might attempt to run the execution before the update is visible
(the transaction has not committed yet). It is still persisted though, so even if it is a miss, it will run before the
next `polling-interval`. You may also programmatically trigger an early check for due executions using the
scheduler-method `scheduler.triggerCheckForDueExecutions()`. Default `false`.
⚙️ `.registerShutdownHook()`
Registers a shutdown-hook that will call `Scheduler.stop()` on shutdown. Stop should always be called for a
graceful shutdown and to avoid dead executions.

⚙️ `.shutdownMaxWait(Duration)`
How long the scheduler will wait before interrupting executor-service threads. If you find yourself using this,
consider whether it is possible to instead regularly check `executionContext.getSchedulerState().isShuttingDown()`
in the `ExecutionHandler` and abort the long-running task. Default `30min`.
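For example, a long-running execution handler might check the scheduler state between units of work and bail out early on shutdown. A minimal sketch, assuming hypothetical `hasMoreChunks`/`processNextChunk` helpers:

```java
OneTimeTask<Void> longRunningTask = Tasks.oneTime("long-running-task", Void.class)
        .execute((inst, ctx) -> {
            while (hasMoreChunks()) {
                if (ctx.getSchedulerState().isShuttingDown()) {
                    // Stop gracefully; persist progress elsewhere and let a future execution continue
                    return;
                }
                processNextChunk();
            }
        });
```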
If you are running >1000 executions/second you might want to use the `lock-and-fetch` polling-strategy for lower overhead
and higher throughput (read more). If not, the default `fetch-and-lock-on-execute` will be fine.
⚙️ `.pollUsingFetchAndLockOnExecute(double, double)`
Use the default polling strategy, `fetch-and-lock-on-execute`.
If the last fetch from the database was a full batch (`executionsPerBatchFractionOfThreads`), a new fetch will be triggered
when the number of executions left is less than or equal to `lowerLimitFractionOfThreads * nr-of-threads`.
Fetched executions are not locked/picked, so the scheduler will compete with other instances for the lock
when the execution is run. Supported by all databases.
Defaults: `0.5, 3.0`.
⚙️ `.pollUsingLockAndFetch(double, double)`
Use the polling strategy `lock-and-fetch`, which uses `select for update .. skip locked` for less overhead.
If the last fetch from the database was a full batch, a new fetch will be triggered
when the number of executions left is less than or equal to `lowerLimitFractionOfThreads * nr-of-threads`.
The number of executions fetched each time is equal to `(upperLimitFractionOfThreads * nr-of-threads) - nr-executions-left`.
Fetched executions are already locked/picked for this scheduler-instance, thus saving one `UPDATE` statement.
For normal usage, set to for example `0.5, 1.0`.
For high throughput (i.e. to keep threads busy), set to for example `1.0, 4.0`. Currently, heartbeats are not updated for picked executions
waiting in the queue (applicable if `upperLimitFractionOfThreads > 1.0`). If they stay there for more than
`4 * heartbeat-interval` (default `20m`) without starting execution, they will be detected as dead and likely be
unlocked again (determined by the `DeadExecutionHandler`). Currently supported by PostgreSQL. SQL Server also supports
this, but testing has shown it to be prone to deadlocks, so it is not recommended until that is understood/resolved.
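For illustration, a high-throughput scheduler on PostgreSQL might be configured along these lines. This is a sketch; the thread count and fractions are arbitrary example values:

```java
final Scheduler scheduler = Scheduler
        .create(dataSource, myAdhocTask)
        .startTasks(hourlyTask)
        .threads(50)
        // select-for-update based polling, keeping up to 4 * threads executions pre-picked
        .pollUsingLockAndFetch(1.0, 4.0)
        .enableImmediateExecution()
        .registerShutdownHook()
        .build();
scheduler.start();
```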
⚙️ `.heartbeatInterval(Duration)`
How often to update the heartbeat timestamp for running executions. Default `5m`.

⚙️ `.missedHeartbeatsLimit(int)`
How many heartbeats may be missed before the execution is considered dead. Default `6`.

⚙️ `.addExecutionInterceptor(ExecutionInterceptor)`
Adds an `ExecutionInterceptor`, which may inject logic around executions. For Spring Boot, simply register a bean of type `ExecutionInterceptor`.

⚙️ `.addSchedulerListener(SchedulerListener)`
Adds a `SchedulerListener`, which will receive Scheduler- and Execution-related events. For Spring Boot, simply register a bean of type `SchedulerListener`.
⚙️ `.schedulerName(SchedulerName)`
Name of this scheduler-instance. The name is stored in the database when an execution is picked by a scheduler.
Default `<hostname>`.

⚙️ `.tableName(String)`
Name of the table used to track task-executions. Change the name in the table definitions accordingly when creating
the table. Default `scheduled_tasks`.

⚙️ `.serializer(Serializer)`
Serializer implementation to use when serializing task data. Defaults to standard Java serialization,
but db-scheduler also bundles a `GsonSerializer` and a `JacksonSerializer`. See the examples for a `KotlinSerializer`.
See also the additional documentation under Serializers.
⚙️ `.executorService(ExecutorService)`
If specified, use this externally managed executor service to run executions. Ideally, the number of threads it
will use should still be supplied (for scheduler polling optimizations). Default `null`.

⚙️ `.deleteUnresolvedAfter(Duration)`
The time after which executions with unknown tasks are automatically deleted. These are typically old recurring
tasks that are no longer in use. The delay is non-zero to prevent accidental removal of tasks through a configuration
error (missing known-tasks) and problems during rolling upgrades. Default `14d`.

⚙️ `.jdbcCustomization(JdbcCustomization)`
db-scheduler tries to auto-detect the database in use to see if any jdbc-interactions need to be customized. This
method is an escape-hatch that allows setting the `JdbcCustomization` explicitly. Default: auto-detect.

⚙️ `.commitWhenAutocommitDisabled(boolean)`
By default, no commit is issued on DataSource connections. If auto-commit is disabled, it is assumed that
transactions are handled by an external transaction-manager. Set this property to `true` to override this
behavior and have the scheduler always issue commits. Default `false`.

⚙️ `.failureLogging(Level, boolean)`
Configures how to log task failures, i.e. `Throwable`s thrown from a task execution handler. Use log level `OFF` to disable
this kind of logging completely. Default `WARN, true`.
Tasks are created using one of the builder-classes in `Tasks`. The builders have sensible defaults, but the following options can be overridden.
Option | Default | Description |
---|---|---|
`.onFailure(FailureHandler)` | see desc. | What to do when an `ExecutionHandler` throws an exception. By default, recurring tasks are rescheduled according to their `Schedule`, and one-time tasks are retried again in 5m. |
`.onDeadExecution(DeadExecutionHandler)` | `ReviveDeadExecution` | What to do when a dead execution is detected, i.e. an execution with a stale heartbeat timestamp. By default, dead executions are rescheduled to `now()`. |
`.initialData(T initialData)` | `null` | The data to use the first time a recurring task is scheduled. |
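For example, a one-time task with a bounded retry policy could be built as follows. This is a sketch, assuming the bundled `FailureHandler.MaxRetriesFailureHandler`/`FailureHandler.OnFailureRetryLater` and `DeadExecutionHandler.ReviveDeadExecution` implementations used in the retry examples above:

```java
OneTimeTask<Void> sendInvoice = Tasks.oneTime("send-invoice", Void.class)
        // Give up after 5 failed attempts, waiting 10 minutes between retries
        .onFailure(new FailureHandler.MaxRetriesFailureHandler<>(5,
                new FailureHandler.OnFailureRetryLater<>(Duration.ofMinutes(10))))
        // Reschedule executions whose scheduler died mid-run (this is also the default)
        .onDeadExecution(new DeadExecutionHandler.ReviveDeadExecution<>())
        .execute((inst, ctx) -> {
            // ... do the actual work
        });
```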
The library contains a number of Schedule-implementations for recurring tasks. See the class `Schedules`.
Schedule | Description |
---|---|
`.daily(LocalTime ...)` | Runs every day at the specified times. Optionally, a time zone can be specified. |
`.fixedDelay(Duration)` | Next execution-time is `Duration` after the last completed execution. Note: this Schedule schedules the initial execution to `Instant.now()` when used in `startTasks(...)`. |
`.cron(String)` | Spring-style cron-expression (v5.3+). The pattern `-` is interpreted as a disabled schedule. |
Another option to configure schedules is reading string patterns with `Schedules.parse(String)`.
The currently available patterns are:
Pattern | Description |
---|---|
`FIXED_DELAY\|Ns` | Same as `.fixedDelay(Duration)` with the duration set to N seconds. |
`DAILY\|12:30,15:30...(\|time_zone)` | Same as `.daily(LocalTime)` with an optional time zone (e.g. Europe/Rome, UTC). |
`-` | Disabled schedule. |
More details on the time zone formats can be found here.
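For illustration, a few schedules created in code. A sketch: the times and cron expression are arbitrary examples, and the zone-first overload of `daily` is assumed:

```java
// Every day at 08:00 and 20:00, in a specific time zone
Schedule morningAndEvening = Schedules.daily(ZoneId.of("Europe/Rome"), LocalTime.of(8, 0), LocalTime.of(20, 0));

// 30 minutes after the last completed execution
Schedule every30Minutes = Schedules.fixedDelay(Duration.ofMinutes(30));

// Spring-style cron: every weekday at 06:15
Schedule weekdayMornings = Schedules.cron("0 15 6 * * MON-FRI");
```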
A `Schedule` can be marked as disabled. The scheduler will not schedule the initial executions for tasks with a disabled schedule,
and it will remove any existing executions for that task.
A task-instance may have some associated data in the field `task_data`. The scheduler uses a `Serializer` to read and write this
data to the database. By default, standard Java serialization is used, but a number of options are provided:

- `GsonSerializer`
- `JacksonSerializer`
- `KotlinSerializer`
For Java serialization, it is recommended to specify a `serialVersionUID` to be able to evolve the class representing the data. If it is not specified
and the class changes, deserialization will likely fail with an `InvalidClassException`. Should this happen, find the current auto-generated
`serialVersionUID` and set it explicitly. It will then be possible to make non-breaking changes to the class.
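As a minimal sketch, a data class prepared for Java serialization (mirroring the `MyTaskData` used in the earlier examples) might look like this:

```java
public class MyTaskData implements Serializable {
    // Fixed serialVersionUID so the class can evolve without breaking deserialization of stored task_data
    private static final long serialVersionUID = 1L;

    public final long id;

    public MyTaskData(long id) {
        this.id = id;
    }
}
```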
If you need to migrate from Java serialization to a `GsonSerializer`, configure the scheduler to use a `SerializerWithFallbackDeserializers`:

```java
.serializer(new SerializerWithFallbackDeserializers(new GsonSerializer(), new JavaSerializer()))
```
- bekk/db-scheduler-ui is an admin UI for the scheduler. It shows scheduled executions and supplies simple admin-operations such as "rerun failed execution now" and "delete execution".
- rocketbase-io/db-scheduler-log is an extension providing a history of executions, including failures and exceptions.
- piemjean/db-scheduler-mongo is an extension for running db-scheduler with a MongoDB database.
- osoykan/db-scheduler-additions adds MongoDB & Couchbase support on top of Kotlin and Coroutines. It also provides a Ktor plugin for db-scheduler-ui.
For Spring Boot applications, there is a starter `db-scheduler-spring-boot-starter` making the scheduler-wiring very simple. (See the full example project.)
Prerequisites:

- An existing Spring Boot application
- A working `DataSource` with the schema initialized. (In the example, HSQLDB is used and the schema is automatically applied.)

Getting started:

- Add the following Maven dependency. NOTE: This includes the db-scheduler dependency itself.

  ```xml
  <dependency>
      <groupId>com.github.kagkarlsson</groupId>
      <artifactId>db-scheduler-spring-boot-starter</artifactId>
      <version>14.0.3</version>
  </dependency>
  ```

- In your configuration, expose your `Task`s as Spring beans. If they are recurring, they will automatically be picked up and started (see the sketch below).
- If you want to expose `Scheduler` state as actuator health information, you need to enable the `db-scheduler` health indicator. See Spring Health Information.
- Run the app.
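A minimal sketch of such a configuration class (the class and bean names are illustrative; `MyTaskData` refers to the data class from the earlier example):

```java
@Configuration
public class TaskConfiguration {

    // Recurring tasks exposed as Spring beans are picked up and started automatically
    @Bean
    RecurringTask<Void> hourlyTask() {
        return Tasks.recurring("my-hourly-task", FixedDelay.ofHours(1))
                .execute((inst, ctx) -> System.out.println("Executed!"));
    }

    // One-time tasks only need to be known to the scheduler; instances are scheduled at runtime
    @Bean
    OneTimeTask<MyTaskData> myAdhocTask() {
        return Tasks.oneTime("my-typed-adhoc-task", MyTaskData.class)
                .execute((inst, ctx) -> System.out.println("Executed! Custom data, Id: " + inst.getData().id));
    }
}
```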
Configuration is mainly done via `application.properties`. Configuration of scheduler-name, serializer and executor-service is done by adding a bean of type `DbSchedulerCustomizer` to your Spring context.
```properties
# application.properties example showing default values
db-scheduler.enabled=true
db-scheduler.heartbeat-interval=5m
db-scheduler.polling-interval=10s
db-scheduler.polling-limit=
db-scheduler.table-name=scheduled_tasks
db-scheduler.immediate-execution-enabled=false
db-scheduler.scheduler-name=
db-scheduler.threads=10

# Ignored if a custom DbSchedulerStarter bean is defined
db-scheduler.delay-startup-until-context-ready=false

db-scheduler.polling-strategy=fetch
db-scheduler.polling-strategy-lower-limit-fraction-of-threads=0.5
db-scheduler.polling-strategy-upper-limit-fraction-of-threads=3.0

db-scheduler.shutdown-max-wait=30m
```
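A sketch of such a customizer bean, assuming the `DbSchedulerCustomizer` hooks for overriding scheduler-name and serializer (the fixed name and serializer choice are arbitrary examples):

```java
@Bean
DbSchedulerCustomizer customizer() {
    return new DbSchedulerCustomizer() {
        @Override
        public Optional<SchedulerName> schedulerName() {
            // Illustrative fixed name instead of the default <hostname>
            return Optional.of(new SchedulerName.Fixed("spring-boot-scheduler-1"));
        }

        @Override
        public Optional<Serializer> serializer() {
            return Optional.of(new JacksonSerializer());
        }
    };
}
```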
It is possible to use the `Scheduler` to interact with the persisted future executions. For situations where a full
`Scheduler`-instance is not needed, a simpler `SchedulerClient` can be created using its builder:

```java
SchedulerClient.Builder.create(dataSource, taskDefinitions).build()
```
It will allow for operations such as:

- Listing scheduled executions
- Rescheduling a specific execution
- Removing an old execution that has been retrying for too long
- ...
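A sketch using the one-time task defined earlier (the instance ids are illustrative):

```java
SchedulerClient client = SchedulerClient.Builder
        .create(dataSource, myAdhocTask)
        .build();

// Schedule a new execution 10 minutes from now
client.schedule(myAdhocTask.instance("1046", new MyTaskData(1002L)), Instant.now().plusSeconds(600));

// Move the execution to a new time...
client.reschedule(myAdhocTask.instance("1046"), Instant.now().plusSeconds(30));

// ...or remove it
client.cancel(myAdhocTask.instance("1046"));
```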
A single database table is used to track future task-executions. When a task-execution is due, db-scheduler picks it and executes it. When the execution is done, the `Task` is consulted to see what should be done. For example, a `RecurringTask` is typically rescheduled in the future based on its `Schedule`.
The scheduler uses optimistic locking or select-for-update (depending on polling strategy) to guarantee that one and only one scheduler-instance gets to pick and run a task-execution.
The term recurring task is used for tasks that should be run regularly, according to some schedule.
When the execution of a recurring task has finished, a `Schedule` is consulted to determine what the next time for
execution should be, and a future task-execution is created for that time (i.e. it is rescheduled).
The time chosen will be the nearest time according to the `Schedule`, but still in the future.
There are two types of recurring tasks: the regular static recurring task, where the `Schedule` is defined statically in the code, and
the dynamic recurring task, where the `Schedule` is defined at runtime and persisted in the database (still requiring only a single table).
The static recurring task is the most common one and suitable for regular background jobs, since the scheduler automatically schedules
an instance of the task if it is not present and also updates the next execution-time if the `Schedule` is updated.

To create the initial execution for a static recurring task, the scheduler has a method `startTasks(...)` that takes a list of tasks
that should be "started" if they do not already have an existing execution. The initial execution-time is determined by the `Schedule`.

If the task already has a future execution (i.e. has been started at least once before), but an updated `Schedule`
now indicates another execution-time, the existing execution will be rescheduled to the new execution-time (with the exception of
non-deterministic schedules such as `FixedDelay`, where the new execution-time is further into the future).

Create using `Tasks.recurring(..)`.
The dynamic recurring task is a later addition to db-scheduler, added to support use-cases where there is a need for multiple instances
of the same type of task (i.e. the same implementation) with different schedules. The `Schedule` is persisted in the `task_data` alongside any regular data.

Unlike the static recurring task, the dynamic one will not automatically schedule instances of the task. It is up to the user to create instances, and to
update the schedule of existing ones if necessary (using the `SchedulerClient` interface).
See the example RecurringTaskWithPersistentScheduleMain.java for more details.

Create using `Tasks.recurringWithPersistentSchedule(..)`.
The term one-time task is used for tasks that have a single execution-time.
In addition to encoding data in the `instanceId` of a task-execution, it is possible to store arbitrary binary data in a separate field for use at execution-time. By default, Java serialization is used to marshal/unmarshal the data.

Create using `Tasks.oneTime(..)`.
For tasks not fitting the above categories, it is possible to fully customize the behavior of the tasks using `Tasks.custom(..)`.
Use-cases might be:
- Tasks that should be either rescheduled or removed based on output from the actual execution
- ..
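A sketch of a custom task that decides at execution-time whether to remove or reschedule itself, assuming the `CompletionHandler.OnCompleteRemove`/`OnCompleteReschedule` variants (the task name and the `externalJobIsDone` helper are hypothetical):

```java
CustomTask<Void> pollForResult = Tasks.custom("poll-for-result", Void.class)
        .execute((inst, ctx) -> {
            if (externalJobIsDone(inst.getId())) {
                // Done: remove the execution
                return new CompletionHandler.OnCompleteRemove<>();
            }
            // Not done yet: check again in a minute
            return new CompletionHandler.OnCompleteReschedule<>(Schedules.fixedDelay(Duration.ofMinutes(1)));
        });
```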
During execution, the scheduler regularly updates a heartbeat-time for the task-execution. If an execution is marked as executing, but is not receiving updates to the heartbeat-time, it will be considered a dead execution after time X. That may for example happen if the JVM running the scheduler suddenly exits.
When a dead execution is found, the `Task` is consulted to see what should be done. A dead `RecurringTask` is typically rescheduled to `now()`.
While db-scheduler initially was targeted at low-to-medium throughput use-cases, it handles high-throughput use-cases (1000+ executions/second) quite well due to the fact that its data-model is very simple, consisting of a single table of executions. To understand how it will perform, it is useful to consider the SQL statements it runs per batch of executions.
The original and default polling strategy, `fetch-and-lock-on-execute`, will do the following:

- `select` a batch of due executions
- For every execution, on execute, try to `update` the execution to `picked=true` for this scheduler-instance. This may miss due to competing schedulers.
- If the execution was picked, `update` or `delete` the record according to the handlers when the execution is done.

In sum per batch: 1 select, 2 * batch-size updates (excluding misses)
In v10, a new polling strategy (`lock-and-fetch`) was added. It utilizes the fact that most databases now have support for `SKIP LOCKED` in `SELECT FOR UPDATE` statements (see the 2ndquadrant blog).
Using such a strategy, it is possible to fetch executions pre-locked, and thus get away with one statement less:

- `select for update .. skip locked` a batch of due executions. These will already be picked by the scheduler-instance.
- When the execution is done, `update` or `delete` the record according to the handlers.

In sum per batch: 1 select-and-update, 1 * batch-size updates (no misses)
To get an idea of what to expect from db-scheduler, see results from the tests run in GCP below. Tests were run with a few different configurations, but each using 4 competing scheduler-instances running on separate VMs. TPS is the approx. transactions per second as shown in GCP.
Setup | Throughput fetch (ex/s) | TPS fetch (estimates) | Throughput lock-and-fetch (ex/s) | TPS lock-and-fetch (estimates) |
---|---|---|---|---|
Postgres 4core 25gb ram, 4xVMs(2-core) | | | | |
20 threads, lower 4.0, upper 20.0 | 2000 | 9000 | 10600 | 11500 |
100 threads, lower 2.0, upper 6.0 | 2560 | 11000 | 11200 | 11200 |
Postgres 8core 50gb ram, 4xVMs(4-core) | | | | |
50 threads, lower 0.5, upper 4.0 | 4000 | 22000 | 11840 | 10300 |
Observations for these tests:

- For `fetch-and-lock-on-execute`:
  - TPS ≈ 4-5 * execution-throughput. A bit higher than the best-case 2 * execution-throughput, likely due to the inefficiency of missed executions.
  - Throughput did scale with Postgres instance-size, from 2000 executions/s on 4 cores to 4000 executions/s on 8 cores.
- For `lock-and-fetch`:
  - TPS ≈ 1 * execution-throughput, as expected.
  - Seems to consistently handle 10k executions/s for these configurations.
  - Throughput did not scale with Postgres instance-size (4 vs 8 cores), so the bottleneck is somewhere else.
Currently, the polling strategy `lock-and-fetch` is implemented only for Postgres. Contributions adding support for more databases are welcome.
There are a number of users that are using db-scheduler for high throughput use-cases. See for example:
- There are no guarantees that all instants in a schedule for a `RecurringTask` will be executed. The `Schedule` is consulted after the previous task-execution finishes, and the closest time in the future will be selected as the next execution-time. A new type of task may be added in the future to provide such functionality.

- The methods on `SchedulerClient` (`schedule`, `cancel`, `reschedule`) will run using a new `Connection` from the `DataSource` provided. To have the action be part of a transaction, it must be taken care of by the `DataSource` provided, for example using something like Spring's `TransactionAwareDataSourceProxy` (see the sketch after this list).

- Currently, the precision of db-scheduler depends on the `pollingInterval` (default 10s), which specifies how often to look in the table for due executions. If you know what you are doing, the scheduler may be instructed at runtime to "look early" via `scheduler.triggerCheckForDueExecutions()`. (See also `enableImmediateExecution()` on the `Builder`.)
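For example, with Spring the wrapping might look like this. A sketch: `TransactionAwareDataSourceProxy` is Spring's class, while the surrounding wiring and the task/instance are the illustrative ones from earlier examples:

```java
DataSource txAwareDataSource = new TransactionAwareDataSourceProxy(dataSource);

SchedulerClient client = SchedulerClient.Builder
        .create(txAwareDataSource, myAdhocTask)
        .build();

// When called inside a Spring-managed transaction, the insert of this execution
// commits or rolls back together with the rest of the transaction
client.schedule(myAdhocTask.instance("1047", new MyTaskData(1003L)), Instant.now().plusSeconds(5));
```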
See releases for release-notes.
Upgrading to 8.x

- Custom Schedules must implement a method `boolean isDeterministic()` to indicate whether they will always produce the same instants or not.

Upgrading to 4.x

- Add the column `consecutive_failures` to the database schema. See the table definitions for postgresql, oracle or mysql. `null` is handled as 0, so there is no need to update existing records.

Upgrading to 3.x

- No schema changes
- Task creation is preferably done through the builders in the `Tasks` class

Upgrading to 2.x

- Add the column `task_data` to the database schema. See the table definitions for postgresql, oracle or mysql.
Prerequisites

- Java 8+
- Maven

Follow these steps:

- Clone the repository:

  ```
  git clone https://github.com/kagkarlsson/db-scheduler
  cd db-scheduler
  ```

- Build using Maven (skip tests by adding `-DskipTests=true`):

  ```
  mvn package
  ```

Recommended spec

Some users have experienced intermittent test failures when running on single-core VMs. Therefore, it is recommended to use a minimum of:

- 2 cores
- 2GB RAM
The goal of db-scheduler is to be non-invasive and simple to use, while still solving the persistence problem and the cluster-coordination problem.
It was originally targeted at applications with modest database schemas, to which adding 11 tables would feel like overkill.
Update: Also, as of now (2024), Quartz does not seem to be actively maintained either.
KISS. It's the most common type of shared state applications have.
Please create an issue with the feature request and we can discuss it there. If you are impatient (or feel like contributing), pull requests are most welcome :)
Yes. It is used in production at a number of companies, and has so far run smoothly.