Tempus is a minimalist, blazing-fast, and scalable scheduler for time-based job execution, built for efficiency and simplicity.
- ⚡ Reliable Job Execution: Execute scheduled jobs with built-in retry mechanisms and failure handling
- 🌐 Multi-Protocol Support: Support for both HTTP webhooks and Kafka message publishing
- 🔗 RESTful API: Complete CRUD operations for job management via HTTP API
- 💾 Database Persistence: PostgreSQL integration with Sea-ORM for reliable job storage
- 📅 Job Rescheduling: Update job execution times dynamically via API
- 🚀 Concurrent Processing: Multi-threaded job processing with configurable concurrency limits
- 🔄 Retry Logic: Configurable retry attempts with exponential backoff for failed jobs
- 📊 Job Status Tracking: Complete job lifecycle management (Scheduled, Processing, Completed, Failed, Deleted)
- 🛑 Graceful Shutdown: Signal handling for clean shutdown with running job completion
- ⚙️ Configuration Management: Environment-based configuration with sensible defaults
- 📝 Structured Logging: Comprehensive logging for monitoring and debugging
- 📊 Prometheus Metrics: Built-in metrics collection and export for monitoring and observability
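Tempus's exact backoff formula isn't documented here, but the "exponential backoff" behavior can be sketched as follows (illustrative only: the function name and the doubling formula are assumptions, with `base_delay_minutes` playing the role of `ENGINE_BASE_DELAY_MINUTES`):

```rust
// Illustrative exponential backoff: the wait doubles with each failed attempt.
// This is a sketch of the concept, not Tempus's actual implementation.
fn retry_delay_minutes(base_delay_minutes: u64, attempt: u32) -> u64 {
    base_delay_minutes * 2u64.pow(attempt)
}

fn main() {
    // With the defaults (base delay 2 minutes, 3 retry attempts):
    for attempt in 0..3 {
        println!(
            "retry {}: wait {} minutes",
            attempt + 1,
            retry_delay_minutes(2, attempt)
        );
    }
}
```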
Tempus is built using a clean hexagonal architecture with clear separation of concerns:
- Domain Layer: Core business logic and entities
- Infrastructure Layer: Database persistence, HTTP clients, and Kafka integration
- API Layer: RESTful endpoints for job management
- Engine Layer: Job processing engine with concurrent execution
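As a rough orientation, these layers might map onto a source tree like the following (a hypothetical layout for illustration, not the repository's actual directory names):

```
src/
├── domain/          # core entities and business rules
├── infrastructure/  # Sea-ORM repositories, HTTP client, Kafka producer
├── api/             # RESTful handlers and routing
└── engine/          # concurrent job processing loop
```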
- Rust 2024 edition
- PostgreSQL database
- Kafka cluster (optional, for Kafka jobs)
- Clone the repository
- Set up your environment variables (see Configuration section)
- Run database migrations:

  ```bash
  cd migration && cargo run
  ```

- Build the project:

  ```bash
  cargo build --release
  ```
Start the scheduler engine:

```bash
cargo run --bin tempus
```

Start the API server:

```bash
cargo run --bin tempus-api
```

The engine will expose metrics on http://localhost:3001/metrics and the API will be available on http://localhost:3000.
HTTP Job:

```bash
curl -X POST http://localhost:3000/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "type": "http",
    "target": "https://api.example.com/webhook",
    "time": "2024-01-01T12:00:00Z",
    "payload": {
      "message": "Hello World"
    }
  }'
```

Kafka Job:

```bash
curl -X POST http://localhost:3000/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "type": "kafka",
    "target": "my-topic",
    "time": "2024-01-01T12:00:00Z",
    "payload": {
      "event": "user.created",
      "userId": 123
    }
  }'
```

Reschedule a job:

```bash
curl -X PATCH http://localhost:3000/jobs/{job_id}/time \
  -H "Content-Type: application/json" \
  -d '{
    "time": "2024-01-02T12:00:00Z"
  }'
```

Delete a job:

```bash
curl -X DELETE http://localhost:3000/jobs/{job_id}
```

Tempus provides comprehensive Prometheus metrics for monitoring job execution and system performance. All metrics are exposed by the engine on port 3001.
- jobs_processed_total{status}: Counter of processed jobs by status (success, failure, retry)
- jobs_duration_seconds: Histogram of job execution duration
- jobs_http_requests_total{status_code}: Counter of HTTP requests made by jobs
- jobs_kafka_messages_total: Counter of Kafka messages published
- current_processing_jobs: Gauge of currently processing jobs
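With these series, typical Prometheus queries might look like the following (PromQL; the queries are illustrative, and the `_bucket` series assumes the standard histogram exposition for jobs_duration_seconds):

```promql
# Jobs failing per second over the last 5 minutes
rate(jobs_processed_total{status="failure"}[5m])

# 95th-percentile job execution duration
histogram_quantile(0.95, rate(jobs_duration_seconds_bucket[5m]))
```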
```bash
# Get all metrics from the engine
curl http://localhost:3001/metrics

# Check engine health
curl http://localhost:3001/health
```

Add the following to your prometheus.yml:
```yaml
scrape_configs:
  - job_name: 'tempus-engine'
    static_configs:
      - targets: ['localhost:3001']
    scrape_interval: 5s
    metrics_path: /metrics
```

Tempus uses environment variables for configuration. You can set the following variables:
- DATABASE_URL: PostgreSQL connection string
- DATABASE_MAX_CONNECTIONS: Maximum database connections (default: 100)
- DATABASE_MIN_CONNECTIONS: Minimum database connections (default: 30)
- DATABASE_CONNECT_TIMEOUT_SECS: Connection timeout in seconds (default: 8)
- DATABASE_ACQUIRE_TIMEOUT_SECS: Connection acquire timeout (default: 8)
- DATABASE_IDLE_TIMEOUT_SECS: Connection idle timeout (default: 60)
- DATABASE_MAX_LIFETIME_SECS: Connection max lifetime (default: 60)

- ENGINE_MAX_CONCURRENT_JOBS: Maximum concurrent job processing (default: 10)
- ENGINE_RETRY_ATTEMPTS: Number of retry attempts for failed jobs (default: 3)
- ENGINE_BASE_DELAY_MINUTES: Base delay between retries in minutes (default: 2)

- HTTP_PORT: API server port (default: 3000)
- HTTP_POOL_IDLE_TIMEOUT_SECS: HTTP client pool idle timeout (default: 30)
- HTTP_REQUEST_TIMEOUT_SECS: HTTP request timeout (default: 30)

- KAFKA_BOOTSTRAP_SERVERS: Kafka bootstrap servers (default: localhost:9092)
- KAFKA_DEFAULT_TOPIC: Default topic for Kafka jobs (default: tempus-events)
- KAFKA_PRODUCER_TIMEOUT_SECS: Producer timeout in seconds (default: 30)
- KAFKA_PRODUCER_RETRIES: Number of producer retries (default: 5)
- KAFKA_BATCH_SIZE: Producer batch size (default: 16384)
- KAFKA_COMPRESSION_TYPE: Compression type (default: snappy)
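Taken together, a minimal .env for local development might look like the following (the DATABASE_URL credentials and database name are placeholders, not project defaults; the other values repeat the defaults listed above):

```bash
DATABASE_URL=postgres://postgres:postgres@localhost:5432/tempus
ENGINE_MAX_CONCURRENT_JOBS=10
ENGINE_RETRY_ATTEMPTS=3
ENGINE_BASE_DELAY_MINUTES=2
HTTP_PORT=3000
KAFKA_BOOTSTRAP_SERVERS=localhost:9092
KAFKA_DEFAULT_TOPIC=tempus-events
```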
```bash
cargo test
```

To create a new migration:

```bash
cd migration
cargo run -- generate MIGRATION_NAME
```

To run migrations:

```bash
cd migration
cargo run
```

The project includes Bruno API collection files in the bruno/ directory for testing the API endpoints.
⚠️ Note: Tempus is under active development. APIs and features may change as the project evolves.
