The backup scripts support multiple compression methods with configurable levels and parallel processing options.
```bash
# In docker-stack-backup.sh or docker-stack-backup-manual.sh

# Compression Configuration
COMPRESSION_METHOD="gzip"   # gzip, bzip2, xz, zstd, none
COMPRESSION_LEVEL=6         # 1-9 (1=fast, 9=best compression)
USE_PARALLEL=false          # Enable multi-threaded compression
PARALLEL_THREADS=0          # 0=auto, or specify (e.g., 4)
EXCLUDE_PATTERNS=(          # Skip files/directories
  # "*/cache/*"
  # "*/tmp/*"
  # "*.log"
)
```

### gzip

Best for: General use, maximum compatibility
```bash
COMPRESSION_METHOD="gzip"
COMPRESSION_LEVEL=6
```

Characteristics:
- File extension: `.tar.gz`
- Speed: Fast
- Compression ratio: Good
- CPU usage: Low
- Compatibility: Universal
- Parallel tool: `pigz` (install: `apt-get install pigz`)
When to use:
- Default choice for most situations
- Need wide compatibility
- Moderate-sized appdata (< 50GB)
Compression levels:
- `1`: Fastest, larger files (~3-5x smaller than original)
- `6`: Default, good balance (~5-8x smaller than original)
- `9`: Best compression, slower (~6-10x smaller than original)
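The level number maps directly onto the gzip CLI; a quick way to see the tradeoff on a sample is to pass the level through tar's `--use-compress-program` (`-I`). All paths here are temporary and purely illustrative:

```bash
# Build a small, highly compressible sample in a temp directory
src=$(mktemp -d)
head -c 1000000 /dev/zero > "$src/sample.bin"

# tar has no level flag of its own; pass gzip options via -I
tar -I 'gzip -1' -cf /tmp/level1.tar.gz -C "$src" .
tar -I 'gzip -9' -cf /tmp/level9.tar.gz -C "$src" .

# Compare sizes (level 9 should be no larger than level 1)
ls -l /tmp/level1.tar.gz /tmp/level9.tar.gz
```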
### bzip2

Best for: Better compression than gzip, still widely supported
```bash
COMPRESSION_METHOD="bzip2"
COMPRESSION_LEVEL=9
```

Characteristics:
- File extension: `.tar.bz2`
- Speed: Slower than gzip
- Compression ratio: Better than gzip
- CPU usage: Moderate
- Compatibility: Very good
- Parallel tool: `pbzip2` (install: `apt-get install pbzip2`)
When to use:
- Storage space is limited
- Willing to trade time for space
- Text-heavy appdata (logs, configs)
Typical results:
- 10-20% smaller than gzip at same level
- 2-3x slower than gzip
### xz

Best for: Maximum compression, long-term archival
```bash
COMPRESSION_METHOD="xz"
COMPRESSION_LEVEL=6
```

Characteristics:
- File extension: `.tar.xz`
- Speed: Slowest
- Compression ratio: Best
- CPU usage: High
- Memory usage: Can be very high
- Compatibility: Good (requires xz-utils)
- Parallel tool: `pxz` (install: `apt-get install pxz`)
When to use:
- Archival backups (long-term storage)
- Storage space at premium
- Backup window is not critical
- Large databases or media files
Typical results:
- 20-40% smaller than gzip
- 5-10x slower than gzip
- Can use 700MB+ RAM per thread
Warning: Level 9 uses massive amounts of RAM (up to 674MB per thread)
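If RAM is a concern, xz itself can cap its memory use: `--memlimit-compress` is a standard xz option that scales the preset down to fit the limit (and prints a warning) instead of failing. The 75% figure below is just an example:

```bash
# Archive a small sample while capping xz compressor memory at 75% of RAM
src=$(mktemp -d)
echo "sample config" > "$src/app.conf"

tar -I 'xz -6 --memlimit-compress=75%' -cf /tmp/capped.tar.xz -C "$src" .
tar -tJf /tmp/capped.tar.xz
```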
### zstd

Best for: Balance of speed and compression with modern features
```bash
COMPRESSION_METHOD="zstd"
COMPRESSION_LEVEL=3
```

Characteristics:
- File extension: `.tar.zst`
- Speed: Fast (faster than gzip)
- Compression ratio: Good (similar to gzip)
- CPU usage: Moderate
- Compatibility: Requires the `zstd` package
- Parallel: Built-in multi-threading
When to use:
- Modern systems with zstd support
- Want speed with good compression
- Large backups with tight windows
Typical results:
- Similar compression to gzip
- 2-3x faster than gzip
- Excellent scaling on multi-core
Note: Not as universally supported as gzip/bzip2
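Because zstd threads internally, no wrapper tool is needed; an invocation could look like this sketch (`-T0` asks zstd to use all cores):

```bash
src=$(mktemp -d)
echo "payload" > "$src/data.txt"

# Level 3 with built-in multi-threading across all cores
tar -I 'zstd -3 -T0' -cf /tmp/example.tar.zst -C "$src" .

# GNU tar passes -d to the compress program when reading, so listing works too
tar -I zstd -tf /tmp/example.tar.zst
```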
### none (uncompressed)

Best for: Speed over everything, or pre-compressed data

```bash
COMPRESSION_METHOD="none"
```

Characteristics:
- File extension: `.tar`
- Speed: Fastest
- Compression ratio: None (1:1)
- CPU usage: Minimal
When to use:
- Data already compressed (videos, images)
- Network backup over fast LAN
- Compression will happen elsewhere
- Maximum speed required
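With `none`, the archive is a plain tar; a minimal illustration (throwaway paths):

```bash
# Already-compressed data (video, images) gains little from recompression
src=$(mktemp -d)
echo "pretend this is H.264 video" > "$src/movie.mkv"

tar -cf /tmp/plain.tar -C "$src" .   # no compression: fastest, size ~= input
tar -tf /tmp/plain.tar
```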
### Parallel Compression

Parallel compression significantly speeds up backups on multi-core systems.
```bash
USE_PARALLEL=true
PARALLEL_THREADS=4   # Or 0 for auto-detect
```

Install the parallel tools:

```bash
# For gzip (pigz)
apt-get install pigz

# For bzip2 (pbzip2)
apt-get install pbzip2

# For xz (pxz)
apt-get install pxz

# zstd has built-in parallel support
apt-get install zstd
```

Example: 10GB appdata, 4-core CPU
| Method | Standard | Parallel | Speedup |
|---|---|---|---|
| gzip | 120s | 35s | 3.4x |
| bzip2 | 280s | 75s | 3.7x |
| xz | 450s | 125s | 3.6x |
| zstd | 90s | 25s | 3.6x |
CPU Usage:
- Standard: ~100% (single core)
- Parallel: ~400% (4 cores)
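Under the hood this amounts to piping tar's output through the parallel tool; a minimal sketch of the pigz path with thread auto-detection (variable names are illustrative, not lifted from the scripts):

```bash
PARALLEL_THREADS=0   # 0 = auto-detect
threads=$PARALLEL_THREADS
[ "$threads" -eq 0 ] && threads=$(nproc)

src=$(mktemp -d)
echo "data" > "$src/file.txt"

if command -v pigz >/dev/null 2>&1; then
  # pigz splits the stream into blocks and compresses them on $threads cores
  tar -cf - -C "$src" . | pigz -p "$threads" -6 > /tmp/parallel.tar.gz
else
  # Fallback: single-threaded gzip if pigz is not installed
  tar -czf /tmp/parallel.tar.gz -C "$src" .
fi
```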
Setting `PARALLEL_THREADS=0` makes the script use all available cores automatically:

```bash
# Detection happens internally
nproc   # Returns number of processors
```

### Exclude Patterns

Skip files/directories that don't need backup:
```bash
EXCLUDE_PATTERNS=(
  "*/cache/*"        # Cache directories
  "*/tmp/*"          # Temporary files
  "*.log"            # Log files
  "*/Trash/*"        # Trash folders
  "*/.Trash-*/*"     # Linux trash
  "*/thumbnails/*"   # Thumbnail caches
  "*/__pycache__/*"  # Python cache
  "*/node_modules/*" # Node.js modules (if applicable)
)
```

Pattern syntax:
- `*` matches any characters
- `*/cache/*` matches any `cache` directory
- `*.log` matches files ending in `.log`
- Patterns are relative to the appdata directory
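A common way to turn the array into tar arguments is one `--exclude` flag per pattern; a sketch of that wiring (the actual scripts may do it differently):

```bash
EXCLUDE_PATTERNS=( "*/cache/*" "*.log" )

# Expand the array into repeated --exclude flags
exclude_args=()
for pattern in "${EXCLUDE_PATTERNS[@]}"; do
  exclude_args+=( --exclude="$pattern" )
done

# Demonstrate on a throwaway tree
src=$(mktemp -d)
mkdir -p "$src/app/cache"
echo keep > "$src/app/config.yml"
echo skip > "$src/app/debug.log"
echo skip > "$src/app/cache/item"

tar "${exclude_args[@]}" -czf /tmp/filtered.tar.gz -C "$src" .
tar -tzf /tmp/filtered.tar.gz   # config.yml present; .log and cache contents absent
```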
Benefits:
- Faster backups
- Smaller archives
- Shorter restore times
- Skip regeneratable data
### Example Configurations

Speed-optimized:

```bash
COMPRESSION_METHOD="gzip"
COMPRESSION_LEVEL=1   # Speed over size
USE_PARALLEL=true
PARALLEL_THREADS=0    # Use all cores
EXCLUDE_PATTERNS=(
  "*/cache/*"
  "*/tmp/*"
  "*.log"
)
```

Balanced:

```bash
COMPRESSION_METHOD="gzip"
COMPRESSION_LEVEL=6   # Default balance
USE_PARALLEL=true
PARALLEL_THREADS=4
EXCLUDE_PATTERNS=(
  "*/cache/*"
  "*/tmp/*"
)
```

Maximum compression:

```bash
COMPRESSION_METHOD="xz"
COMPRESSION_LEVEL=9   # Best compression
USE_PARALLEL=true
PARALLEL_THREADS=4
EXCLUDE_PATTERNS=(
  "*/cache/*"
  "*/tmp/*"
  "*.log"
  "*/thumbnails/*"
)
```

Fast with good compression (zstd):

```bash
COMPRESSION_METHOD="zstd"
COMPRESSION_LEVEL=3
USE_PARALLEL=true   # Built-in
PARALLEL_THREADS=0
EXCLUDE_PATTERNS=(
  "*/cache/*"
)
```

No compression:

```bash
COMPRESSION_METHOD="none"
# Fast local backups, compress later if needed
```

### Recommended Settings by Schedule

Daily automated backups:
- Method: `gzip` or `zstd`
- Level: 3-6
- Parallel: `true`

Weekly full backups:
- Method: `gzip` or `bzip2`
- Level: 6-9
- Parallel: `true`

Monthly archival:
- Method: `xz`
- Level: 6-9
- Parallel: `true` (if RAM allows)

Pre-migration backup:
- Method: `gzip`
- Level: 6
- Parallel: `true`
< 5 minutes available:

```bash
COMPRESSION_METHOD="zstd"
COMPRESSION_LEVEL=1
USE_PARALLEL=true
```

5-30 minutes available:

```bash
COMPRESSION_METHOD="gzip"
COMPRESSION_LEVEL=6
USE_PARALLEL=true
```

Hours available (overnight):

```bash
COMPRESSION_METHOD="xz"
COMPRESSION_LEVEL=9
USE_PARALLEL=true
```

Storage abundant:

```bash
COMPRESSION_METHOD="gzip"
COMPRESSION_LEVEL=3
```

Storage limited:

```bash
COMPRESSION_METHOD="xz"
COMPRESSION_LEVEL=9
```

Network backup (bandwidth limited):

```bash
COMPRESSION_METHOD="xz"
COMPRESSION_LEVEL=6
# Smaller files = faster transfer
```

### Benchmarking

Test different settings to find the optimal balance:
```bash
# Create a test backup of one stack
cd /mnt/datastor/appdata

# Test gzip levels (tar has no level flag; pass it via --use-compress-program)
time tar -I 'gzip -1' -cf test-1.tar.gz stack-name   # Level 1
time tar -I 'gzip -6' -cf test-6.tar.gz stack-name   # Level 6
time tar -I 'gzip -9' -cf test-9.tar.gz stack-name   # Level 9

# Compare the size vs time tradeoff
ls -lh test-*.tar.gz
du -h test-*.tar.gz

# Cleanup
rm test-*.tar.gz
```

### Troubleshooting

Backup too slow:
- Lower `COMPRESSION_LEVEL`
- Enable `USE_PARALLEL=true`
- Switch to a faster method (`zstd` or `gzip`)
- Check CPU usage with `top`
Out of memory:
- Lower `COMPRESSION_LEVEL` (especially for xz)
- Reduce `PARALLEL_THREADS`
- Switch to a lower-memory method (`gzip`)
Missing compression tools:

```bash
# Install missing tools
apt-get install pigz pbzip2 pxz zstd

# Verify installation
which pigz pbzip2 pxz zstd
```

Restore compatibility:
- Some archives require specific compression utilities to extract
- Ensure the target system has the appropriate tools installed
- Use `gzip` for maximum compatibility
Verifying exclude patterns:

```bash
# Test exclude patterns
tar -czf test.tar.gz --exclude="*.log" --exclude="*/cache/*" /path/to/test

# List contents to verify
tar -tzf test.tar.gz | grep -E '\.log|/cache/'   # Should print nothing
```

### Recommendations

Start with the defaults:

```bash
COMPRESSION_METHOD="gzip"
COMPRESSION_LEVEL=6
USE_PARALLEL=true
PARALLEL_THREADS=0
```

Then optimize based on:
- Backup window duration
- Available storage
- CPU/RAM resources
- Restore time requirements
Monitor and adjust:
- Check backup logs for timing
- Measure archive sizes
- Test restore speed
- Adjust as needed
Best practice:
- Same compression across all hosts
- Document your settings
- Test restores regularly
- Balance speed, size, and compatibility
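A restore test does not need to touch production data; extracting into a scratch directory and diffing against the source is enough. A sketch with throwaway paths:

```bash
# Make a sample "appdata" tree and back it up
src=$(mktemp -d)
echo "v=1" > "$src/settings.conf"
tar -czf /tmp/backup.tar.gz -C "$src" .

# Restore into a scratch directory and compare with the original
restore=$(mktemp -d)
tar -xzf /tmp/backup.tar.gz -C "$restore"
diff -r "$src" "$restore" && echo "restore OK"
```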