⚡ Quick Reference - Recommended Settings

  • Learning Rate: 2e-4
  • Training Steps: 1000-1500
  • Noise Offset: 0.1
  • LoRA Rank: 16
  • Alpha Value: 32
  • Batch Size: 1

🎯 Learning Rate Guide

The learning rate controls how quickly your model adapts to the training data. If it is too high, training becomes unstable; if it is too low, the model learns poorly. The table below maps common values to use cases; a short code sketch follows it.

Learning Rate | Use Case                   | Risk Level
5e-5          | Fine-tuning existing LoRAs | Conservative
1e-4          | Photorealistic characters  | Safe
3e-4          | Anime/cartoon styles       | Aggressive
5e-4          | Quick experiments only     | Risky
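
In a script-based trainer, the value you pick here usually ends up as the lr argument of the optimizer that updates the LoRA weights. A minimal PyTorch sketch; the Linear layer is a hypothetical stand-in for the injected LoRA matrices:

    import torch

    # Minimal sketch: the learning rate from the table above is what the optimizer
    # for the LoRA weights receives. The Linear layer stands in for the injected
    # LoRA matrices in a real trainer.
    lora_weights = torch.nn.Linear(64, 64, bias=False)
    optimizer = torch.optim.AdamW(lora_weights.parameters(), lr=1e-4)  # "Safe" row above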

🔧 Noise Offset Settings

Noise offset adds controlled randomness during training, improving contrast and preventing overfitting to specific lighting conditions.

Noise Offset | Best For                     | Effect
0.0          | Clean, controlled images     | No enhancement
0.05         | Photorealistic portraits     | Subtle improvement
0.15         | Anime/cartoon characters     | Enhanced vibrancy
0.2+         | Experimental/artistic styles | High contrast
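
For context, this is roughly how trainers commonly apply a noise offset: a small per-sample, per-channel constant is added to the sampled noise, which pushes the model to learn overall brightness and contrast shifts. A simplified sketch (shapes are illustrative and the scheduler's own noise scaling is omitted):

    import torch

    latents = torch.randn(4, 4, 64, 64)   # stand-in for VAE-encoded training images
    noise = torch.randn_like(latents)
    noise_offset = 0.1                    # value picked from the table above
    noise = noise + noise_offset * torch.randn(latents.shape[0], latents.shape[1], 1, 1)
    noisy_latents = latents + noise       # real trainers also scale by the scheduler's alphas/sigmas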

🎨 Training Presets

📸 Photorealistic Portrait

Optimal for training realistic human characters and portraits with natural lighting.

learning_rate: 1e-4
training_steps: 1200
noise_offset: 0.05
lora_rank: 16
alpha: 32
precision: fp16

🎌 Anime Character

Perfect for anime, manga, and cartoon-style characters with vibrant colors.

learning_rate: 3e-4
training_steps: 1000
noise_offset: 0.15
lora_rank: 24
alpha: 48
precision: fp16

Quick Test

Fast training for quick experiments and proof-of-concept testing.

learning_rate: 2e-4
training_steps: 500
noise_offset: 0.1
lora_rank: 8
alpha: 16
precision: fp8

🎭 Artistic Style

For stylized art, illustrations, and creative character interpretations.

learning_rate: 2e-4
training_steps: 1500
noise_offset: 0.2
lora_rank: 32
alpha: 64
precision: fp16

💾 Low VRAM (8GB)

Optimized settings for GPUs with limited VRAM while maintaining quality.

learning_rate: 2e-4
training_steps: 1000
noise_offset: 0.1
lora_rank: 12
alpha: 24
precision: fp8
use_gguf: true

🏆 Maximum Quality

High-end settings for the best possible results with powerful hardware.

learning_rate: 1.5e-4
training_steps: 2000
noise_offset: 0.1
lora_rank: 64
alpha: 128
precision: fp16
gradient_accumulation: 2
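
If you drive training from a script, the presets above are easy to keep as plain data and select by name. A hedged sketch in Python; the keys simply mirror the listings and are not any particular trainer's argument names:

    # Presets from this page as a dictionary (illustrative only; rename keys to
    # match whatever trainer you actually use).
    PRESETS = {
        "photorealistic_portrait": {"learning_rate": 1e-4, "training_steps": 1200,
                                    "noise_offset": 0.05, "lora_rank": 16, "alpha": 32,
                                    "precision": "fp16"},
        "anime_character": {"learning_rate": 3e-4, "training_steps": 1000,
                            "noise_offset": 0.15, "lora_rank": 24, "alpha": 48,
                            "precision": "fp16"},
        "quick_test": {"learning_rate": 2e-4, "training_steps": 500,
                       "noise_offset": 0.1, "lora_rank": 8, "alpha": 16,
                       "precision": "fp8"},
        "artistic_style": {"learning_rate": 2e-4, "training_steps": 1500,
                           "noise_offset": 0.2, "lora_rank": 32, "alpha": 64,
                           "precision": "fp16"},
        "low_vram_8gb": {"learning_rate": 2e-4, "training_steps": 1000,
                         "noise_offset": 0.1, "lora_rank": 12, "alpha": 24,
                         "precision": "fp8", "use_gguf": True},
        "maximum_quality": {"learning_rate": 1.5e-4, "training_steps": 2000,
                            "noise_offset": 0.1, "lora_rank": 64, "alpha": 128,
                            "precision": "fp16", "gradient_accumulation": 2},
    }

    config = PRESETS["photorealistic_portrait"]  # pick one and feed its values to your trainer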

🔧 Advanced Optimization Tips

Memory Optimization

  • Use FP8 precision for 8GB GPUs
  • Enable gradient checkpointing
  • Reduce batch size to 1
  • Use GGUF quantized models

Quality Enhancement

  • Increase LoRA rank for fine details
  • Adjust noise offset based on style
  • Use proper image preprocessing
  • Monitor training loss curves

Speed Optimization

  • Reduce training steps for experiments
  • Use lower LoRA rank for speed
  • Enable mixed precision training
  • Optimize CUDA settings

Consistency Tips

  • Keep alpha = 2 × rank
  • Use consistent image sizes
  • Maintain proper caption quality
  • Test different learning rates

🧠 Understanding Parameter Relationships

Learning Rate × Training Steps

Higher learning rates require fewer steps but risk instability. Lower rates need more steps for proper convergence.

Optimal Steps = Base Steps × (2e-4 / Your Learning Rate)

LoRA Rank × Alpha Ratio

Alpha should typically be 2× the rank value for balanced training dynamics and stable convergence.

Alpha = LoRA Rank × 2 (standard ratio)

Noise Offset × Style Type

Photorealistic images benefit from lower noise offset, while stylized art can handle higher values.

Photorealistic: 0.05-0.1 | Artistic: 0.1-0.2
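
The three rules of thumb above are easy to encode as small helpers. A sketch; the numbers are just the heuristics from this section, not hard limits:

    def optimal_steps(base_steps: int = 1000, learning_rate: float = 2e-4) -> int:
        """Scale the step count when deviating from the 2e-4 baseline."""
        return round(base_steps * (2e-4 / learning_rate))

    def alpha_for_rank(lora_rank: int) -> int:
        """Standard ratio: alpha = 2 x rank."""
        return lora_rank * 2

    def suggested_noise_offset(style: str) -> float:
        """Lower offsets for photorealism, higher for stylized art."""
        return 0.05 if style == "photorealistic" else 0.15

    print(optimal_steps(1000, 1e-4))        # 2000 steps at the gentler 1e-4 rate
    print(alpha_for_rank(16))               # 32
    print(suggested_noise_offset("anime"))  # 0.15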

⚙️ Advanced Parameter Configuration

📊 Training Scheduler

  • cosine_with_restarts: Best for most training scenarios (sketched after this list)
  • linear: Simple, predictable learning-rate decay
  • polynomial: Smooth transitions, good for fine-tuning
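
One concrete way to wire this up: diffusers (and transformers) expose these schedules through get_scheduler. A minimal sketch with a stand-in parameter group:

    import torch
    from diffusers.optimization import get_scheduler

    # Attach the "cosine_with_restarts" schedule from the list above to an optimizer.
    # The Linear layer stands in for the LoRA parameters.
    params = torch.nn.Linear(8, 8).parameters()
    optimizer = torch.optim.AdamW(params, lr=2e-4)
    lr_scheduler = get_scheduler(
        "cosine_with_restarts",
        optimizer=optimizer,
        num_warmup_steps=50,
        num_training_steps=1200,
    )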

🚀 Optimizer Settings

  • AdamW: Standard choice, weight decay included
  • AdaFactor: Memory efficient, good for large models
  • 8bit AdamW: Reduces VRAM usage significantly (see the sketch after this list)
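
The 8-bit AdamW variant comes from the optional bitsandbytes package. A small sketch with a plain-AdamW fallback if it is not installed:

    import torch

    try:
        import bitsandbytes as bnb
        optimizer_cls = bnb.optim.AdamW8bit   # stores optimizer state in 8 bits
    except ImportError:
        optimizer_cls = torch.optim.AdamW     # plain AdamW fallback

    lora_weights = torch.nn.Linear(8, 8)      # stand-in for the LoRA parameters
    optimizer = optimizer_cls(lora_weights.parameters(), lr=2e-4, weight_decay=1e-2)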

🎨 Quality Enhancement

  • gradient_clipping: Prevents training instability
  • min_snr_gamma: Improves training stability
  • v_prediction: Alternative parameterization

💾 Memory Optimization

  • gradient_checkpointing: Trade compute for memory (see the sketch after this list)
  • xformers: Efficient attention computation
  • cpu_offload: Move layers to CPU when not needed
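
For diffusers-based trainers, the first two options map directly to model methods. A hedged example; the model id below is only an example base checkpoint, and xformers has to be installed separately:

    from diffusers import UNet2DConditionModel

    # "stabilityai/stable-diffusion-2-1" is just an example base model id.
    unet = UNet2DConditionModel.from_pretrained(
        "stabilityai/stable-diffusion-2-1", subfolder="unet"
    )
    unet.enable_gradient_checkpointing()               # trade compute for memory
    unet.enable_xformers_memory_efficient_attention()  # needs the xformers package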

🧠 Memory Requirements Calculator

Estimate the VRAM your training configuration will need and find settings that fit your GPU. Covers the RTX 3060, 3070, 4080, and more.


📊 Parameter Impact Analysis

Deep dive into how each parameter affects your training results and final quality:

🎯 Learning Rate Deep Dive

Too High (>5e-4)
  • ⚠️ Training instability and loss spikes
  • 📉 Poor convergence and quality degradation
  • 🔄 May require restarting training

Optimal (1e-4 to 3e-4)
  • ✅ Smooth, stable training progression
  • 🎨 Good feature learning and retention
  • 📈 Predictable quality improvements

Too Low (<5e-5)
  • 🐌 Extremely slow learning progression
  • ⏱️ May not converge within a reasonable time
  • 🔄 Requires significantly more steps

📐 LoRA Rank Analysis

Low (4-8)
  • ⚡ Lightning-fast training (2-5 min)
  • 💾 Small file sizes (50-100MB)
  • 💰 Lower computational requirements
  • ⚠️ May miss fine facial details
  • 🔍 Less character consistency

High (64+)
  • 🔍 Maximum detail capture
  • 🏆 Best possible quality
  • 🎭 Perfect for complex characters
  • ⏱️ Longer training times (15-30 min)
  • 💾 Larger file sizes (500MB+)
  • 💻 Requires more powerful hardware

📅 Training Steps Strategy

Steps     | Phase                        | Result
100-300   | 🚀 Quick concept validation  | 🟡 Basic feature recognition
500-800   | ⚙️ Feature development phase | 🟠 Good character recognition
1000-1500 | 🏆 Optimal training zone     | 🟢 High quality, consistent results
2000+     | ⚠️ Diminishing returns zone  | 🟡 Risk of overfitting

🔧 Common Issues & Solutions

🔥 Training Loss Spikes

Symptoms:
  • Loss suddenly increases mid-training
  • Generated images become distorted
  • Training becomes unstable
Solutions:
  • Reduce learning rate by 50%
  • Enable gradient clipping (see the sketch after this list)
  • Check for corrupted training data
  • Use cosine scheduler with restarts
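
The gradient-clipping fix above boils down to one call made right before each optimizer step. A minimal sketch with a stand-in module for the LoRA weights:

    import torch

    # Cap the global gradient norm each step so a single bad batch cannot blow up training.
    model = torch.nn.Linear(16, 16)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

    loss = model(torch.randn(4, 16)).pow(2).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()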

🐌 Slow Convergence

Symptoms:
  • Loss decreases very slowly
  • No visible improvement after 1000 steps
  • Generated images remain generic
Solutions:
  • Increase learning rate (try 3e-4)
  • Increase LoRA rank to 32 or 64
  • Verify image quality and preprocessing
  • Check caption accuracy

💾 CUDA Out of Memory

Symptoms:
  • Training stops with CUDA OOM error
  • Unable to load models
  • System becomes unresponsive
Solutions:
  • Switch to FP8 precision models
  • Reduce LoRA rank to 8 or 12
  • Enable gradient checkpointing
  • Use GGUF quantized models
  • Full CUDA Fix Guide →

🎭 Poor Character Consistency

Symptoms:
  • Character features vary between generations
  • Some angles look completely different
  • Clothing or hair changes randomly
Solutions:
  • Increase LoRA rank to 32+
  • Train for more steps (1500-2000)
  • Use a higher-quality reference image
  • Adjust noise offset to 0.1-0.15

❓ Frequently Asked Questions

Q: Which setting should I focus on first?

The learning rate is the most critical parameter. Start with 2e-4 and adjust based on your results. Too high causes instability, too low prevents proper learning. Everything else can be fine-tuned later.

Q: How do I know whether my settings are working?

Watch the training loss - it should decrease steadily without major spikes. Generate test images every 200-300 steps. Good settings show gradual improvement in character recognition and consistency.

Q: Can I change parameters after training has started?

Generally no - most parameters are set at the beginning. However, you can resume training from a checkpoint with different learning rates or steps. It's better to start fresh with corrected settings.

Q: Why do anime and photorealistic characters need different settings?

Anime/cartoon styles have simpler features and clearer boundaries, allowing for higher learning rates (3e-4) and noise offset (0.15). Photorealistic faces have subtle details requiring gentler settings (1e-4 learning rate, 0.05 noise offset).

Q: Should I use the same settings for every character?

Start with the same base settings, but adjust based on character complexity. Simple characters may work with lower LoRA ranks (16), while complex characters with intricate details benefit from higher ranks (32-64).

Q: Which settings should I use for my GPU's VRAM?

8GB: Use FP8 models, rank 12-16, basic settings. 12GB: FP16 models, rank 16-32, most settings work. 16GB+: Any settings, rank 64+, maximum quality. Use our VRAM calculator for precise requirements.