
Cron Expression: Every 30 Minutes (*/30 * * * *)

CronOS Team
Tags: cron, scheduling, every-30-minutes, periodic-tasks, tutorial

Need to generate a cron expression?

Use CronOS to generate any cron expression you wish with natural language. Simply describe what you need, and we'll create the perfect cron expression for you. It's completely free!

Generate Cron Expression


The cron expression */30 * * * * executes a task every 30 minutes, providing a practical interval for less frequent maintenance tasks, backups, and data synchronization.

Expression Breakdown

bash
*/30 * * * *
│    │ │ │ │
│    │ │ │ └─── Day of week: * (every day)
│    │ │ └───── Month: * (every month)
│    │ └─────── Day of month: * (every day)
│    └───────── Hour: * (every hour)
└────────────── Minute: */30 (every 30 minutes)

Field Values

Field          Value   Meaning
Minute         */30    Every 30th minute (0 and 30)
Hour           *       Every hour (0-23)
Day of Month   *       Every day (1-31)
Month          *       Every month (1-12)
Day of Week    *       Every day of week (0-7; 0 and 7 are both Sunday)

Step Value Syntax

The */30 is a step value that means "every 30th minute, starting from minute 0":

  • Runs at: 00:00, 00:30
  • Then repeats every hour: 01:00, 01:30, 02:00, 02:30, and so on
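The step-value expansion above can be sketched in a few lines of Python. The function name is illustrative, not part of any cron library:

```python
# Illustrative sketch: expand a "*/N" minute field into the minutes it matches.
def expand_step_minutes(field: str) -> list:
    """Return the minutes (0-59) matched by a '*/N' minute field."""
    step = int(field.split("/")[1])
    return list(range(0, 60, step))

print(expand_step_minutes("*/30"))  # [0, 30]
print(expand_step_minutes("*/15"))  # [0, 15, 30, 45]
```

The same expansion applies to any field: the step walks the field's full range starting from its minimum value.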

Common Use Cases

1. Incremental Backups

bash
*/30 * * * * /usr/local/bin/incremental-backup.sh

Create incremental backups or snapshots of databases and files.

2. Data Synchronization

bash
*/30 * * * * /usr/bin/python3 /scripts/sync-data.py

Sync data between systems, databases, or external services.

3. Log Aggregation

bash
*/30 * * * * /usr/local/bin/aggregate-logs.sh

Aggregate and process log files from multiple sources.

4. Cache Refresh

bash
*/30 * * * * /usr/bin/python3 /scripts/refresh-cache.py

Refresh cached data, computed statistics, or API responses.

5. Health Monitoring

bash
*/30 * * * * /usr/local/bin/system-health-check.sh

Monitor system health, service availability, or resource usage.

Execution Frequency

This expression runs 2 times per hour (48 times per day) at:

  • :00 and :30 of every hour
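The full daily schedule can be enumerated to confirm the 48-run count. A minimal sketch, using an arbitrary date purely as a starting point:

```python
from datetime import datetime, timedelta

# Enumerate every run time of */30 * * * * over one day.
start = datetime(2024, 1, 1)  # arbitrary midnight starting point
runs = [start + timedelta(minutes=30 * i) for i in range(48)]

print(len(runs))                      # 48 runs per day
print(runs[0].strftime("%H:%M"))      # 00:00
print(runs[-1].strftime("%H:%M"))     # 23:30
```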

Example Implementations

Incremental Backup Script

bash
#!/bin/bash
# /usr/local/bin/incremental-backup.sh

BACKUP_DIR="/var/backups/incremental"
SOURCE_DIR="/var/data"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
LOG_FILE="/var/log/backups.log"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Create incremental backup; unchanged files are hard-linked
# against the previous snapshot via --link-dest
rsync -av --link-dest="$BACKUP_DIR/latest" \
    "$SOURCE_DIR/" \
    "$BACKUP_DIR/backup_$TIMESTAMP/" >> "$LOG_FILE" 2>&1

# Update latest symlink
rm -f "$BACKUP_DIR/latest"
ln -s "backup_$TIMESTAMP" "$BACKUP_DIR/latest"

# Clean up snapshots older than 7 days (match only backup_* so the
# parent directory itself is never removed)
find "$BACKUP_DIR" -maxdepth 1 -type d -name "backup_*" -mtime +7 -exec rm -rf {} \;

echo "$(date): Incremental backup completed" >> "$LOG_FILE"

Python Data Synchronization

python
# sync-data.py
import requests
import json
from datetime import datetime
import sqlite3

def sync_data():
    try:
        # Fetch from source API
        response = requests.get(
            'https://api.source.com/data',
            timeout=60,
            headers={'Authorization': 'Bearer YOUR_TOKEN'}
        )
        response.raise_for_status()
        data = response.json()
        
        # Store in local database
        conn = sqlite3.connect('/var/data/app.db')
        cursor = conn.cursor()
        
        # Create table if not exists
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS synced_data (
                id TEXT PRIMARY KEY,
                data TEXT,
                updated_at TIMESTAMP
            )
        ''')
        
        # Upsert data
        for item in data:
            cursor.execute('''
                INSERT OR REPLACE INTO synced_data 
                (id, data, updated_at) 
                VALUES (?, ?, ?)
            ''', (item['id'], json.dumps(item), datetime.now()))
        
        conn.commit()
        conn.close()
        
        print(f"{datetime.now()}: Synced {len(data)} records")
    except Exception as e:
        print(f"{datetime.now()}: Sync failed: {e}")

if __name__ == '__main__':
    sync_data()

Log Aggregation Script

bash
#!/bin/bash
# /usr/local/bin/aggregate-logs.sh

LOG_DIRS=("/var/log/app1" "/var/log/app2" "/var/log/app3")
AGGREGATE_DIR="/var/log/aggregated"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$AGGREGATE_DIR"

# Aggregate logs from last 30 minutes
for log_dir in "${LOG_DIRS[@]}"; do
    if [ -d "$log_dir" ]; then
        find "$log_dir" -name "*.log" -mmin -30 -exec cat {} \; >> \
            "$AGGREGATE_DIR/aggregated_${TIMESTAMP}.log"
    fi
done

# Compress aggregated logs older than 7 days
find "$AGGREGATE_DIR" -name "aggregated_*.log" -mtime +7 -exec gzip {} \;

# Remove compressed logs older than 30 days
find "$AGGREGATE_DIR" -name "aggregated_*.log.gz" -mtime +30 -delete

echo "$(date): Log aggregation completed"

Node.js Cache Refresh

javascript
// refresh-cache.js (node-redis v4+ API)
const { createClient } = require('redis');
const axios = require('axios');

const client = createClient({ url: 'redis://localhost:6379' });

async function refreshCache() {
  try {
    await client.connect();

    // Fetch fresh data
    const response = await axios.get('https://api.example.com/data', {
      timeout: 30000
    });

    const data = response.data;

    // Update cache with 1 hour TTL
    await client.setEx('cached_data', 3600, JSON.stringify(data));

    // Also cache individual items
    for (const item of data) {
      await client.setEx(
        `item:${item.id}`,
        3600,
        JSON.stringify(item)
      );
    }

    console.log(`${new Date().toISOString()}: Cache refreshed with ${data.length} items`);
  } catch (error) {
    console.error(`${new Date().toISOString()}: Cache refresh failed:`, error.message);
  } finally {
    await client.quit();
  }
}

refreshCache();

Best Practices

  1. Execution Time: Tasks should finish well under 30 minutes so consecutive runs never overlap
  2. Locking: Use file locks or distributed locks for critical operations
  3. Error Handling: Implement retry logic and comprehensive logging
  4. Idempotency: Ensure tasks can be safely re-run
  5. Resource Management: Monitor and limit resource consumption

When to Use

Good for:

  • Incremental backups
  • Data synchronization
  • Log aggregation
  • Cache refresh operations
  • Periodic health checks
  • Background job processing
  • Less frequent maintenance tasks

Avoid for:

  • Real-time critical operations
  • Tasks requiring immediate execution
  • Very long-running processes (over 25 minutes)
  • Operations needing sub-30-minute precision

Comparison with Other Intervals

Interval           Expression     Runs/Day   Best For
Every 15 minutes   */15 * * * *   96         More frequent tasks
Every 30 minutes   */30 * * * *   48         Standard intervals
Every hour         0 * * * *      24         Hourly tasks
Every 2 hours      0 */2 * * *    12         Less frequent tasks
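The runs-per-day figures for these intervals follow from simple arithmetic, sketched here for verification:

```python
# Runs per day for a fixed minutes-between-runs interval.
def runs_per_day(minutes_between: int) -> int:
    return 24 * 60 // minutes_between

for label, mins in [("*/15", 15), ("*/30", 30), ("hourly", 60), ("every 2h", 120)]:
    print(f"{label}: {runs_per_day(mins)} runs/day")
# */15: 96 runs/day, */30: 48, hourly: 24, every 2h: 12
```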

Real-World Example

A typical setup for data synchronization and backups:

bash
# Incremental backup
*/30 * * * * /usr/local/bin/incremental-backup.sh

# Sync external data
*/30 * * * * /usr/bin/python3 /scripts/sync-data.py

# Aggregate logs
*/30 * * * * /usr/local/bin/aggregate-logs.sh

Conclusion

The */30 * * * * expression is ideal for tasks that need regular execution but don't require real-time updates. It's perfect for backups, data synchronization, and maintenance operations that can tolerate a 30-minute delay, while still keeping systems reasonably up-to-date.
