
Cron Expression: Every Hour (0 * * * *)

CronOS Team
cron, scheduling, every-hour, hourly-tasks, tutorial

Need to generate a cron expression?

Use CronOS to generate any cron expression you wish with natural language. Simply describe what you need, and we'll create the perfect cron expression for you. It's completely free!

Generate Cron Expression

Cron Expression: Every Hour (0 * * * *)

The cron expression 0 * * * * executes a task every hour at the top of the hour (minute 0), making it one of the most commonly used patterns for hourly reports, backups, and maintenance operations.
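Because the expression fires whenever the minute is 0, the next run is always the next top of the hour. A minimal sketch using only the standard library (the `next_hourly_runs` helper is hypothetical, not part of any cron tool):

```python
# Sketch: compute the next firing times of "0 * * * *" without a cron
# daemon. The expression fires whenever the minute is 0, so the next run
# is simply the next top of the hour.
from datetime import datetime, timedelta

def next_hourly_runs(now: datetime, count: int = 3):
    """Return the next `count` times that `0 * * * *` would fire after `now`."""
    # Round up to the next top of the hour
    first = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return [first + timedelta(hours=i) for i in range(count)]

runs = next_hourly_runs(datetime(2024, 5, 1, 14, 37))
print([r.strftime("%H:%M") for r in runs])  # ['15:00', '16:00', '17:00']
```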

Expression Breakdown

bash
0 * * * *
│ │ │ │ │
│ │ │ │ └─── Day of week: * (every day)
│ │ │ └───── Month: * (every month)
│ │ └─────── Day of month: * (every day)
│ └───────── Hour: * (every hour)
└─────────── Minute: 0 (at minute 0)

Field Values

Field         Value  Meaning
Minute        0      At minute 0 (top of the hour)
Hour          *      Every hour (0-23)
Day of Month  *      Every day (1-31)
Month         *      Every month (1-12)
Day of Week   *      Every day of the week (0-7; both 0 and 7 mean Sunday)
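The field ranges in the table can be checked with a small parser sketch. The `parse_cron` helper is hypothetical, and step (`*/n`) and range (`a-b`) syntax are deliberately omitted:

```python
# Sketch: split a five-field cron expression into named fields and check
# each literal value against its valid range. "*" always matches.
FIELDS = [
    ("minute", 0, 59),
    ("hour", 0, 23),
    ("day_of_month", 1, 31),
    ("month", 1, 12),
    ("day_of_week", 0, 7),  # both 0 and 7 mean Sunday
]

def parse_cron(expr: str) -> dict:
    parts = expr.split()
    if len(parts) != 5:
        raise ValueError("expected 5 fields")
    parsed = {}
    for value, (name, lo, hi) in zip(parts, FIELDS):
        if value != "*":
            n = int(value)  # steps/ranges omitted in this sketch
            if not lo <= n <= hi:
                raise ValueError(f"{name}={n} outside {lo}-{hi}")
        parsed[name] = value
    return parsed

print(parse_cron("0 * * * *"))
# {'minute': '0', 'hour': '*', 'day_of_month': '*', 'month': '*', 'day_of_week': '*'}
```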

Execution Times

This expression runs 24 times per day at:

  • 00:00, 01:00, 02:00, 03:00, ..., 23:00

Common Use Cases

1. Hourly Reports

bash
0 * * * * /usr/bin/python3 /scripts/generate-hourly-report.py

Generate hourly analytics reports, summaries, or data aggregations.

2. Database Backups

bash
0 * * * * /usr/local/bin/hourly-backup.sh

Create hourly backups or snapshots of databases and critical files.

3. Log Rotation

bash
0 * * * * /usr/local/bin/rotate-logs.sh

Rotate, compress, or archive log files on an hourly basis.

4. Cache Refresh

bash
0 * * * * /usr/bin/python3 /scripts/refresh-cache.py

Refresh cached data, computed statistics, or API responses.

5. Health Monitoring

bash
0 * * * * /usr/local/bin/system-health-check.sh

Monitor system health, resource usage, or service availability.

6. Data Synchronization

bash
0 * * * * /usr/bin/python3 /scripts/sync-data.py

Sync data between systems, databases, or external services.

Example Implementations

Hourly Report Generation

bash
#!/bin/bash
# /usr/local/bin/generate-hourly-report.sh

LOCK_FILE="/tmp/hourly-report.lock"
LOG_FILE="/var/log/reports.log"

if [ -f "$LOCK_FILE" ]; then
    echo "$(date): Report generation already running" >> "$LOG_FILE"
    exit 0
fi

touch "$LOCK_FILE"
# Remove the lock on exit even if the job fails or is interrupted,
# so a crash can't leave a stale lock that blocks every later run
trap 'rm -f "$LOCK_FILE"' EXIT

/usr/bin/python3 /scripts/generate-hourly-report.py >> "$LOG_FILE" 2>&1

Python Hourly Report

python
# generate-hourly-report.py
import json
from datetime import datetime, timedelta
import sqlite3

def generate_hourly_report():
    conn = sqlite3.connect('/var/data/app.db')
    cursor = conn.cursor()
    
    # Get data from the last hour
    since = datetime.now() - timedelta(hours=1)
    
    # Aggregate metrics. Pass an ISO-formatted string: sqlite3's implicit
    # datetime adapter is deprecated, and ISO strings compare correctly
    # against ISO timestamps stored as text.
    cursor.execute('''
        SELECT 
            COUNT(*) as total_requests,
            AVG(response_time) as avg_response_time,
            COUNT(CASE WHEN status_code >= 400 THEN 1 END) as errors,
            COUNT(CASE WHEN status_code >= 500 THEN 1 END) as server_errors
        FROM requests
        WHERE timestamp >= ?
    ''', (since.isoformat(),))
    
    metrics = cursor.fetchone()
    
    report = {
        'timestamp': datetime.now().isoformat(),
        'period': '1_hour',
        'total_requests': metrics[0],
        'avg_response_time': round(metrics[1], 2) if metrics[1] else 0,
        'errors': metrics[2],
        'server_errors': metrics[3],
        'error_rate': round((metrics[2] / metrics[0] * 100), 2) if metrics[0] > 0 else 0
    }
    
    # Save report
    report_file = f'/var/reports/hourly_report_{datetime.now().strftime("%Y%m%d_%H%M%S")}.json'
    with open(report_file, 'w') as f:
        json.dump(report, f, indent=2)
    
    print(f"{datetime.now()}: Hourly report generated: {report}")
    conn.close()

if __name__ == '__main__':
    generate_hourly_report()

Hourly Backup Script

bash
#!/bin/bash
# /usr/local/bin/hourly-backup.sh

BACKUP_DIR="/var/backups/hourly"
SOURCE_DIR="/var/data"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backups.log"

mkdir -p "$BACKUP_DIR"

# Create backup (quote all expansions so paths with spaces don't break)
tar -czf "$BACKUP_DIR/backup_$TIMESTAMP.tar.gz" \
    -C "$(dirname "$SOURCE_DIR")" \
    "$(basename "$SOURCE_DIR")" >> "$LOG_FILE" 2>&1

# Database backup (if using PostgreSQL)
# pg_dump -U dbuser app_db | gzip > "$BACKUP_DIR/db_backup_$TIMESTAMP.sql.gz"

# Clean up backups older than 7 days
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +7 -delete
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +7 -delete

echo "$(date): Hourly backup completed" >> "$LOG_FILE"

Log Rotation Script

bash
#!/bin/bash
# /usr/local/bin/rotate-logs.sh

LOG_DIR="/var/log/app"
ARCHIVE_DIR="/var/log/archive"
RETENTION_DAYS=30

mkdir -p "$ARCHIVE_DIR"

# Rotate logs
for logfile in "$LOG_DIR"/*.log; do
    if [ -f "$logfile" ]; then
        filename=$(basename "$logfile")
        archive_path="$ARCHIVE_DIR/$filename.$(date +%Y%m%d_%H%M%S).gz"
        
        # Compress and archive
        gzip -c "$logfile" > "$archive_path"
        
        # Truncate the original log in place, so the application's
        # open file handle keeps working
        : > "$logfile"
        
        echo "$(date): Rotated $filename"
    fi
done

# Clean up old archives
find "$ARCHIVE_DIR" -name "*.gz" -mtime +"$RETENTION_DAYS" -delete

Node.js Cache Refresh

javascript
// refresh-cache.js
const redis = require('redis');
const axios = require('axios');

const client = redis.createClient({ url: 'redis://localhost:6379' });

async function refreshCache() {
  try {
    // node-redis v4+ requires an explicit connect() before any command
    await client.connect();

    // Fetch fresh data
    const response = await axios.get('https://api.example.com/data', {
      timeout: 30000
    });
    
    const data = response.data;
    
    // Update cache with a 2-hour TTL
    await client.setEx('cached_data', 7200, JSON.stringify(data));
    
    // Cache individual items
    for (const item of data) {
      await client.setEx(
        `item:${item.id}`,
        7200,
        JSON.stringify(item)
      );
    }
    
    console.log(`${new Date().toISOString()}: Cache refreshed with ${data.length} items`);
  } catch (error) {
    console.error(`${new Date().toISOString()}: Cache refresh failed:`, error.message);
  } finally {
    await client.quit();
  }
}

refreshCache();

Best Practices

  1. Execution Time: Keep tasks well under 60 minutes so consecutive runs never overlap
  2. Locking: Use file locks or distributed locks to prevent concurrent execution
  3. Error Handling: Implement comprehensive error handling and logging
  4. Idempotency: Design tasks to be safely re-runnable
  5. Resource Management: Monitor CPU, memory, and I/O usage
  6. Timing: Tasks run at :00, so plan for potential system load
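The locking advice above can be sketched with the standard library's `fcntl` module (Linux/macOS). Unlike a plain lock file, an OS-level `flock` is released automatically when the process exits, so a crashed run can never leave a stale lock behind. The `run_exclusively` helper and lock path are hypothetical:

```python
# Sketch: exclusive, non-blocking flock on a lock file so only one copy
# of an hourly job runs at a time. The kernel drops the lock when the
# file is closed, even if the job crashes.
import fcntl

LOCK_PATH = "/tmp/hourly-job.lock"  # hypothetical path

def run_exclusively(job) -> bool:
    """Run `job()` only if no other holder of LOCK_PATH is active."""
    with open(LOCK_PATH, "w") as lock:
        try:
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("previous run still in progress; skipping")
            return False
        job()
        return True  # lock released automatically when the file closes

run_exclusively(lambda: print("doing hourly work"))
```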

When to Use

Good for:

  • Hourly reports and analytics
  • Database backups
  • Log rotation
  • Cache refresh operations
  • Health monitoring
  • Data synchronization
  • Periodic maintenance tasks

Avoid for:

  • Real-time critical operations
  • Tasks requiring immediate execution
  • Very long-running processes (over 50 minutes)
  • Operations needing sub-hourly precision

Comparison with Other Intervals

Interval          Expression    Runs/Day  Best For
Every 30 minutes  */30 * * * *  48        More frequent tasks
Every hour        0 * * * *     24        Hourly tasks
Every 2 hours     0 */2 * * *   12        Less frequent tasks
Every 3 hours     0 */3 * * *   8         Even less frequent tasks
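The "Runs/Day" column follows directly from the minute and hour fields. A small sketch of that arithmetic (the `runs_per_day` helper is hypothetical and only handles "*", a single literal, or "*/n" step syntax):

```python
# Sketch: derive runs per day from the minute and hour fields of a
# five-field cron expression.
def runs_per_day(expr: str) -> int:
    minute, hour = expr.split()[:2]

    def matches(field: str, size: int) -> int:
        """How many values in 0..size-1 the field matches."""
        if field == "*":
            return size
        if field.startswith("*/"):
            step = int(field[2:])
            return len(range(0, size, step))
        return 1  # single literal value

    return matches(minute, 60) * matches(hour, 24)

print(runs_per_day("0 * * * *"))     # 24
print(runs_per_day("*/30 * * * *"))  # 48
print(runs_per_day("0 */2 * * *"))   # 12
```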

Real-World Example

A typical production setup:

bash
# Generate hourly reports
0 * * * * /usr/bin/python3 /scripts/generate-hourly-report.py

# Hourly backup
0 * * * * /usr/local/bin/hourly-backup.sh

# Rotate logs
0 * * * * /usr/local/bin/rotate-logs.sh

Conclusion

The 0 * * * * expression is one of the most commonly used cron patterns for hourly tasks. It's perfect for reports, backups, and maintenance operations that need to run regularly but don't require real-time execution. The predictable timing (top of every hour) makes it easy to schedule and monitor.
