
Cron Expression: Every 2 Hours (0 */2 * * *)

CronOS Team
Tags: cron, scheduling, every-2-hours, periodic-tasks, tutorial

Need to generate a cron expression?

Use CronOS to generate any cron expression you wish with natural language. Simply describe what you need, and we'll create the perfect cron expression for you. It's completely free!

Generate Cron Expression

Cron Expression: Every 2 Hours (0 */2 * * *)

The cron expression 0 */2 * * * executes a task every 2 hours at the top of the hour (minute 0), making it ideal for periodic backups, data synchronization, and less frequent maintenance tasks.

Expression Breakdown

bash
0 */2 * * *
│  │  │ │ │
│  │  │ │ └─── Day of week: * (every day)
│  │  │ └───── Month: * (every month)
│  │  └─────── Day of month: * (every day)
│  └────────── Hour: */2 (every 2 hours)
└───────────── Minute: 0 (at minute 0)

Field Values

| Field        | Value | Meaning                                                    |
|--------------|-------|------------------------------------------------------------|
| Minute       | 0     | At minute 0 (top of the hour)                              |
| Hour         | */2   | Every 2 hours (0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22) |
| Day of Month | *     | Every day (1-31)                                           |
| Month        | *     | Every month (1-12)                                         |
| Day of Week  | *     | Every day of week (0-7)                                    |
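As a quick sanity check, the five fields can be pulled out of the expression programmatically (a minimal sketch; the field names are labels added here for illustration):

```python
# Split a five-field cron expression into labeled fields.
expr = "0 */2 * * *"
names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
fields = dict(zip(names, expr.split()))
print(fields)
```

This makes the positional nature of the syntax explicit: the same `*` means different things depending on which of the five slots it occupies.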

Step Value Syntax

The /2 in the hour field is a step value that means "every 2nd hour starting from 0":

  • Runs at: 00:00, 02:00, 04:00, 06:00, 08:00, 10:00, 12:00, 14:00, 16:00, 18:00, 20:00, 22:00
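The expansion of `*/2` can be mirrored with Python's `range` over the hour field's 0-23 domain (a sketch of the semantics, not cron's actual implementation):

```python
# Expand the */2 step value over the hour field's 0-23 range.
step = 2
hours = list(range(0, 24, step))
run_times = [f"{h:02d}:00" for h in hours]
print(run_times)
```

Changing `step` to 3 or 4 reproduces the `*/3` and `*/4` schedules in the same way.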

Common Use Cases

1. Periodic Backups

bash
0 */2 * * * /usr/local/bin/backup.sh

Create backups or snapshots of databases and critical files every 2 hours.

2. Data Synchronization

bash
0 */2 * * * /usr/bin/python3 /scripts/sync-data.py

Sync data between systems, databases, or external services.

3. Cache Refresh

bash
0 */2 * * * /usr/bin/python3 /scripts/refresh-cache.py

Refresh cached data, computed statistics, or API responses.

4. Log Aggregation

bash
0 */2 * * * /usr/local/bin/aggregate-logs.sh

Aggregate and process log files from multiple sources.

5. Health Monitoring

bash
0 */2 * * * /usr/local/bin/system-health-check.sh

Monitor system health, resource usage, or service availability.

6. Report Generation

bash
0 */2 * * * /usr/bin/python3 /scripts/generate-report.py

Generate periodic reports or analytics summaries.

Execution Frequency

This expression runs 12 times per day at:

  • 00:00, 02:00, 04:00, 06:00, 08:00, 10:00, 12:00, 14:00, 16:00, 18:00, 20:00, 22:00

Example Implementations

Backup Script

bash
#!/bin/bash
# /usr/local/bin/backup.sh

BACKUP_DIR="/var/backups/2hourly"
SOURCE_DIR="/var/data"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backups.log"

mkdir -p "$BACKUP_DIR"

# Create backup (variables quoted to survive paths with spaces)
tar -czf "$BACKUP_DIR/backup_$TIMESTAMP.tar.gz" \
    -C "$(dirname "$SOURCE_DIR")" \
    "$(basename "$SOURCE_DIR")" >> "$LOG_FILE" 2>&1

# Database backup (if using PostgreSQL)
# pg_dump -U dbuser app_db | gzip > "$BACKUP_DIR/db_backup_$TIMESTAMP.sql.gz"

# Clean up backups older than 7 days
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +7 -delete

echo "$(date): 2-hourly backup completed" >> "$LOG_FILE"

Python Data Synchronization

python
# sync-data.py
import requests
import json
from datetime import datetime
import sqlite3

def sync_data():
    try:
        # Fetch from external API
        response = requests.get(
            'https://api.external.com/data',
            timeout=120,
            headers={'Authorization': 'Bearer YOUR_TOKEN'}
        )
        response.raise_for_status()
        data = response.json()
        
        # Store in local database
        conn = sqlite3.connect('/var/data/app.db')
        cursor = conn.cursor()
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS synced_data (
                id TEXT PRIMARY KEY,
                data TEXT,
                updated_at TIMESTAMP
            )
        ''')
        
        for item in data:
            cursor.execute('''
                INSERT OR REPLACE INTO synced_data 
                (id, data, updated_at) 
                VALUES (?, ?, ?)
            ''', (item['id'], json.dumps(item), datetime.now()))
        
        conn.commit()
        conn.close()
        
        print(f"{datetime.now()}: Synced {len(data)} records")
    except Exception as e:
        print(f"{datetime.now()}: Sync failed: {e}")

if __name__ == '__main__':
    sync_data()

Cache Refresh Script

python
# refresh-cache.py
import redis
import requests
from datetime import datetime
import json

def refresh_cache():
    r = redis.Redis(host='localhost', port=6379, db=0)
    
    try:
        # Fetch fresh data
        response = requests.get('https://api.example.com/data', timeout=60)
        response.raise_for_status()
        data = response.json()
        
        # Update cache with 3 hour TTL
        r.setex('cached_data', 10800, json.dumps(data))
        
        # Cache individual items
        for item in data:
            r.setex(f"item:{item['id']}", 10800, json.dumps(item))
        
        print(f"{datetime.now()}: Cache refreshed with {len(data)} items")
    except Exception as e:
        print(f"{datetime.now()}: Cache refresh failed: {e}")

if __name__ == '__main__':
    refresh_cache()

Log Aggregation Script

bash
#!/bin/bash
# /usr/local/bin/aggregate-logs.sh

LOG_DIRS=("/var/log/app1" "/var/log/app2" "/var/log/app3")
AGGREGATE_DIR="/var/log/aggregated"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$AGGREGATE_DIR"

# Aggregate logs from last 2 hours
for log_dir in "${LOG_DIRS[@]}"; do
    if [ -d "$log_dir" ]; then
        find "$log_dir" -name "*.log" -mmin -120 -exec cat {} \; >> \
            "$AGGREGATE_DIR/aggregated_${TIMESTAMP}.log"
    fi
done

# Compress old aggregated logs
find "$AGGREGATE_DIR" -name "aggregated_*.log" -mtime +7 -exec gzip {} \;

# Remove compressed logs older than 30 days
find "$AGGREGATE_DIR" -name "aggregated_*.log.gz" -mtime +30 -delete

echo "$(date): Log aggregation completed"

Best Practices

  1. Execution Time: Keep tasks well under the 2-hour window (roughly 110 minutes at most) so consecutive runs never overlap
  2. Locking: Use file locks or distributed locks to prevent concurrent execution
  3. Error Handling: Implement comprehensive error handling and logging
  4. Idempotency: Design tasks to be safely re-runnable
  5. Resource Management: Monitor CPU, memory, and I/O usage
  6. Backup Retention: Set an explicit retention policy (e.g., keep 2-hourly backups for 7 days) to cap storage growth
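The locking advice above can be sketched with a POSIX advisory file lock (assumes Linux/macOS; the lock path `/tmp/backup.lock` is illustrative):

```python
import fcntl

def run_exclusively(lock_path, task):
    """Run task() only if no other instance holds the lock."""
    with open(lock_path, "w") as lock_file:
        try:
            # Non-blocking exclusive lock; raises if another process holds it.
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("Another instance is still running; skipping this cycle.")
            return False
        task()
        return True  # lock is released when the file is closed

if __name__ == "__main__":
    run_exclusively("/tmp/backup.lock", lambda: print("backup done"))
```

Because the lock is released automatically when the process exits, a crashed run never leaves a stale lock behind, which is the main advantage over a plain "PID file exists" check.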

When to Use

Good for:

  • Periodic backups
  • Data synchronization
  • Cache refresh operations
  • Log aggregation
  • Health monitoring
  • Report generation
  • Less frequent maintenance tasks

Avoid for:

  • Real-time critical operations
  • Tasks requiring immediate execution
  • Very long-running processes (over 110 minutes)
  • Operations needing sub-2-hour precision

Comparison with Other Intervals

| Interval      | Expression  | Runs/Day | Best For             |
|---------------|-------------|----------|----------------------|
| Every hour    | 0 * * * *   | 24       | More frequent tasks  |
| Every 2 hours | 0 */2 * * * | 12       | Periodic tasks       |
| Every 3 hours | 0 */3 * * * | 8        | Less frequent tasks  |
| Every 4 hours | 0 */4 * * * | 6        | Even less frequent   |
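The Runs/Day column follows directly from the step value: cron fires at every multiple of the step within the 0-23 hour range, so the daily count is just the size of that range (a quick check, assuming the step divides 24 evenly):

```python
# Runs per day for the hourly step value N in "0 */N * * *".
runs_per_day = {step: len(range(0, 24, step)) for step in (1, 2, 3, 4)}
for step, runs in runs_per_day.items():
    print(f"0 */{step} * * *  ->  {runs} runs/day")
```

Note that a step that does not divide 24 evenly (e.g., `*/5`) produces uneven gaps across midnight, which is why 2, 3, 4, 6, and 12 are the most common hourly steps.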

Real-World Example

A typical setup for periodic backups and data sync:

bash
# Periodic backup
0 */2 * * * /usr/local/bin/backup.sh

# Sync external data
0 */2 * * * /usr/bin/python3 /scripts/sync-data.py

# Refresh cache
0 */2 * * * /usr/bin/python3 /scripts/refresh-cache.py

Conclusion

The 0 */2 * * * expression is ideal for tasks that need regular execution but can tolerate a 2-hour interval. It's perfect for backups, data synchronization, and maintenance operations that don't require hourly execution, helping to balance system resource usage with operational requirements.
