
Cron Expression: Every 3 Hours (0 */3 * * *)

CronOS Team
cron · scheduling · every-3-hours · periodic-tasks · tutorial

Need to generate a cron expression?

Use CronOS to generate any cron expression you wish with natural language. Simply describe what you need, and we'll create the perfect cron expression for you. It's completely free!

Generate Cron Expression

Cron Expression: Every 3 Hours (0 */3 * * *)

The cron expression 0 */3 * * * executes a task every 3 hours at the top of the hour (minute 0), making it suitable for periodic backups, data synchronization, and less frequent maintenance operations.

Expression Breakdown

bash
0 */3 * * *
│  │  │ │ │
│  │  │ │ └─── Day of week: * (every day)
│  │  │ └───── Month: * (every month)
│  │  └─────── Day of month: * (every day)
│  └────────── Hour: */3 (every 3 hours)
└───────────── Minute: 0 (at minute 0)

Field Values

Field         Value  Meaning
Minute        0      At minute 0 (top of the hour)
Hour          */3    Every 3 hours (0, 3, 6, 9, 12, 15, 18, 21)
Day of Month  *      Every day (1-31)
Month         *      Every month (1-12)
Day of Week   *      Every day of week (0-7)

Step Value Syntax

The */3 in the hour field is a step value that means "every 3rd hour, starting from hour 0":

  • Runs at: 00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, 21:00
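The expansion of a step value is plain range arithmetic: take the field's full range and keep every Nth value. A minimal sketch in Python (`expand_step` is a hypothetical helper, not part of any cron library):

```python
def expand_step(step: int, low: int = 0, high: int = 23) -> list[int]:
    """Return every value in [low, high] selected by a */step pattern."""
    return list(range(low, high + 1, step))

# The hour field of 0 */3 * * *:
print(expand_step(3))  # → [0, 3, 6, 9, 12, 15, 18, 21]
```

The same rule applies to any field: `*/15` in the minute field expands over 0-59 to 0, 15, 30, 45.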

Common Use Cases

1. Periodic Backups

bash
0 */3 * * * /usr/local/bin/backup.sh

Create backups or snapshots of databases and critical files every 3 hours.

2. Data Synchronization

bash
0 */3 * * * /usr/bin/python3 /scripts/sync-data.py

Sync data between systems, databases, or external services.

3. Cache Refresh

bash
0 */3 * * * /usr/bin/python3 /scripts/refresh-cache.py

Refresh cached data, computed statistics, or API responses.

4. Health Monitoring

bash
0 */3 * * * /usr/local/bin/system-health-check.sh

Monitor system health, resource usage, or service availability.

5. Report Generation

bash
0 */3 * * * /usr/bin/python3 /scripts/generate-report.py

Generate periodic reports or analytics summaries.

6. Log Cleanup

bash
0 */3 * * * /usr/local/bin/cleanup-logs.sh

Clean up or archive old log files to manage disk space.

Execution Frequency

This expression runs 8 times per day at:

  • 00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, 21:00
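Because the schedule is pure hour arithmetic, the next fire time can be computed without a cron library. A minimal sketch using only the standard-library `datetime` module (the function name is ours, for illustration):

```python
from datetime import datetime, timedelta

def next_three_hourly_run(now: datetime) -> datetime:
    """Next fire time of '0 */3 * * *' strictly after `now`."""
    candidate = now.replace(minute=0, second=0, microsecond=0)
    # Advance hour by hour until we land on a future hour divisible by 3.
    while candidate <= now or candidate.hour % 3 != 0:
        candidate += timedelta(hours=1)
    return candidate

print(next_three_hourly_run(datetime(2024, 1, 1, 2, 15)))  # → 2024-01-01 03:00:00
```

Note that a run scheduled late in the day (e.g. after 21:00) rolls over to 00:00 the next day.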

Example Implementations

Backup Script

bash
#!/bin/bash
# /usr/local/bin/backup.sh

BACKUP_DIR="/var/backups/3hourly"
SOURCE_DIR="/var/data"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backups.log"

mkdir -p "$BACKUP_DIR"

# Create backup (variables quoted so paths with spaces don't break the command)
tar -czf "$BACKUP_DIR/backup_$TIMESTAMP.tar.gz" \
    -C "$(dirname "$SOURCE_DIR")" \
    "$(basename "$SOURCE_DIR")" >> "$LOG_FILE" 2>&1

# Database backup (if using PostgreSQL)
# pg_dump -U dbuser app_db | gzip > "$BACKUP_DIR/db_backup_$TIMESTAMP.sql.gz"

# Clean up backups older than 14 days
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +14 -delete
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +14 -delete

echo "$(date): 3-hourly backup completed" >> "$LOG_FILE"

Python Data Synchronization

python
# sync-data.py
import requests
import json
from datetime import datetime
import sqlite3

def sync_data():
    try:
        # Fetch from external API
        response = requests.get(
            'https://api.external.com/data',
            timeout=180,
            headers={'Authorization': 'Bearer YOUR_TOKEN'}
        )
        response.raise_for_status()
        data = response.json()
        
        # Store in local database
        conn = sqlite3.connect('/var/data/app.db')
        cursor = conn.cursor()
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS synced_data (
                id TEXT PRIMARY KEY,
                data TEXT,
                updated_at TIMESTAMP
            )
        ''')
        
        for item in data:
            cursor.execute('''
                INSERT OR REPLACE INTO synced_data 
                (id, data, updated_at) 
                VALUES (?, ?, ?)
            ''', (item['id'], json.dumps(item), datetime.now()))
        
        conn.commit()
        conn.close()
        
        print(f"{datetime.now()}: Synced {len(data)} records")
    except Exception as e:
        print(f"{datetime.now()}: Sync failed: {e}")

if __name__ == '__main__':
    sync_data()

Cache Refresh Script

python
# refresh-cache.py
import redis
import requests
from datetime import datetime
import json

def refresh_cache():
    r = redis.Redis(host='localhost', port=6379, db=0)
    
    try:
        # Fetch fresh data
        response = requests.get('https://api.example.com/data', timeout=60)
        response.raise_for_status()
        data = response.json()
        
        # Update cache with 4 hour TTL
        r.setex('cached_data', 14400, json.dumps(data))
        
        # Cache individual items
        for item in data:
            r.setex(f"item:{item['id']}", 14400, json.dumps(item))
        
        print(f"{datetime.now()}: Cache refreshed with {len(data)} items")
    except Exception as e:
        print(f"{datetime.now()}: Cache refresh failed: {e}")

if __name__ == '__main__':
    refresh_cache()

Health Check Script

bash
#!/bin/bash
# /usr/local/bin/system-health-check.sh

LOG_FILE="/var/log/health-checks.log"
ALERT_EMAIL="admin@example.com"

check_disk_usage() {
    USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
    if [ "$USAGE" -gt 85 ]; then
        echo "$(date): WARNING - Disk usage at ${USAGE}%" >> "$LOG_FILE"
        return 1
    fi
    return 0
}

check_memory() {
    MEM_USAGE=$(free | grep Mem | awk '{printf "%.0f", $3/$2 * 100}')
    if [ "$MEM_USAGE" -gt 90 ]; then
        echo "$(date): WARNING - Memory usage at ${MEM_USAGE}%" >> "$LOG_FILE"
        return 1
    fi
    return 0
}

check_services() {
    SERVICES=("nginx" "postgresql" "redis")
    for service in "${SERVICES[@]}"; do
        if ! systemctl is-active --quiet "$service"; then
            echo "$(date): ERROR - Service $service is not running" >> "$LOG_FILE"
            return 1
        fi
    done
    return 0
}

# Run all checks, remembering whether any failed.
# (Testing $? once at the end would only reflect the last check.)
FAILED=0
check_disk_usage || FAILED=1
check_memory || FAILED=1
check_services || FAILED=1

# Send alert if any check failed
if [ "$FAILED" -ne 0 ]; then
    mail -s "System Health Alert" "$ALERT_EMAIL" < "$LOG_FILE"
fi

Best Practices

  1. Execution Time: Ensure tasks finish well within the 3-hour window (under 180 minutes) so consecutive runs never overlap
  2. Locking: Use file locks or distributed locks to prevent concurrent execution
  3. Error Handling: Implement comprehensive error handling and logging
  4. Idempotency: Design tasks to be safely re-runnable
  5. Resource Management: Monitor CPU, memory, and I/O usage
  6. Backup Retention: Plan appropriate backup retention policies
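The locking practice (#2 above) can be sketched with a simple file lock so that a slow run causes the next one to skip rather than overlap. A minimal Unix-only sketch using the standard-library fcntl module (the lock-file path is a hypothetical example):

```python
import fcntl
import sys

def acquire_lock(path: str):
    """Try to take an exclusive, non-blocking lock on `path`.

    Returns the open file handle (keep it open for the lock's lifetime),
    or None if another process already holds the lock."""
    handle = open(path, "w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return handle
    except BlockingIOError:
        handle.close()
        return None

# Usage at the top of a cron job script (path is a placeholder):
# lock = acquire_lock("/tmp/sync-data.lock")
# if lock is None:
#     sys.exit(0)  # previous run still in progress; skip this one
```

The lock is released automatically when the process exits, so a crashed run cannot leave the job permanently blocked. In shell scripts, the `flock(1)` utility achieves the same effect.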

When to Use

Good for:

  • Periodic backups
  • Data synchronization
  • Cache refresh operations
  • Health monitoring
  • Report generation
  • Log cleanup
  • Less frequent maintenance tasks

Avoid for:

  • Real-time critical operations
  • Tasks requiring immediate execution
  • Very long-running processes whose runtime approaches the 3-hour window
  • Operations needing sub-3-hour precision

Comparison with Other Intervals

Interval       Expression   Runs/Day  Best For
Every 2 hours  0 */2 * * *  12        More frequent tasks
Every 3 hours  0 */3 * * *  8         Periodic tasks
Every 4 hours  0 */4 * * *  6         Less frequent tasks
Every 6 hours  0 */6 * * *  4        Even less frequent tasks

Real-World Example

A typical setup for periodic maintenance:

bash
# Periodic backup
0 */3 * * * /usr/local/bin/backup.sh

# Sync external data
0 */3 * * * /usr/bin/python3 /scripts/sync-data.py

# Health check
0 */3 * * * /usr/local/bin/system-health-check.sh

Conclusion

The 0 */3 * * * expression is suitable for tasks that need regular execution but can tolerate a 3-hour interval. It's perfect for backups, data synchronization, and maintenance operations that don't require more frequent execution, helping to reduce system resource usage while maintaining reasonable data freshness.
