Active Learning Application

Active Learning is an intelligent approach to optimization that learns from previous evaluations to decide where to sample next. Instead of random sampling, it uses Bayesian optimization to make smart decisions about which designs to test.

Key Concepts:

🎯 Surrogate Model (Gaussian Process)

  • Purpose: Approximates the expensive function
  • Advantage: Provides both predictions AND uncertainty estimates
  • Learning: Updates beliefs as new data arrives
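The surrogate idea above can be sketched in a few lines. This is a minimal illustration using scikit-learn's `GaussianProcessRegressor`; the toy function `f` is an assumption standing in for an expensive simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "expensive" function (stands in for a real simulation)
def f(x):
    return np.sin(3 * x) + 0.5 * x

# Fit a GP surrogate to a handful of observations
X_train = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y_train = f(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# The surrogate returns a prediction AND an uncertainty estimate
mean, std = gp.predict(np.array([[0.75]]), return_std=True)
```

Near the training points the predictive standard deviation collapses toward zero; far from them it grows, which is exactly the signal the acquisition function exploits.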

🔍 Acquisition Function

  • Purpose: Balances exploration vs exploitation
  • Exploration: Sample uncertain regions (high predictive variance)
  • Exploitation: Focus on promising regions (low predicted mean, when minimizing)
  • Smart Sampling: Automatically finds the best trade-off
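The trade-off can be made concrete with a confidence-bound acquisition. Below is a sketch for a minimization problem; the candidate means/uncertainties and the trade-off parameter `kappa` are made-up values for illustration.

```python
import numpy as np

# Hypothetical surrogate outputs at three candidate points:
# a well-explored good region, a moderate region, and an unexplored region.
mean = np.array([0.2, 0.5, 0.6])   # predicted objective (minimizing)
std  = np.array([0.01, 0.1, 0.8])  # predictive uncertainty

# Lower confidence bound for minimization: mean - kappa * std.
# kappa tunes the exploration/exploitation trade-off (assumed value).
kappa = 2.0
lcb = mean - kappa * std
next_point = int(np.argmin(lcb))   # with kappa=2, the uncertain point wins
```

With `kappa = 0` the rule is pure exploitation (pick the lowest mean); larger `kappa` shifts weight toward uncertain regions.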

⚡ Efficiency Benefits

  • Fewer Evaluations: Find optima with minimal function calls
  • Global Search: Avoids getting stuck in local optima
  • Uncertainty Quantification: Know how confident predictions are
  • Adaptive: Learns the function's complexity automatically

Available Applications:

  • Single-Objective: Find the best design for one goal
  • Multi-Objective: Balance competing objectives (Pareto optimization)

What is Single-Objective Optimization?

Find the best design that minimizes (or maximizes) one specific goal. This is the most common optimization scenario in engineering.

How Bayesian Optimization Works:

🔄 The Iterative Process:

  1. Start: Begin with a few random samples
  2. Learn: Fit a Gaussian Process to the data
  3. Decide: Use acquisition function to pick next sample
  4. Evaluate: Run expensive simulation at chosen point
  5. Repeat: Continue until budget exhausted
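The five steps above can be sketched as a single loop. This is a simplified version assuming scikit-learn's GP, a lower-confidence-bound acquisition evaluated on a grid, and a toy quadratic objective; a real application would swap in the expensive simulation and a proper acquisition optimizer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive(x):
    # Stand-in for an expensive simulation; true minimum at x = 0.3
    return (x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))                 # 1. Start: a few random samples
y = expensive(X).ravel()
grid = np.linspace(0, 1, 201).reshape(-1, 1)       # candidate points

for _ in range(10):                                # 5. Repeat until budget exhausted
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6, normalize_y=True)
    gp.fit(X, y)                                   # 2. Learn: fit the GP
    mu, sd = gp.predict(grid, return_std=True)
    lcb = mu - 2.0 * sd                            # 3. Decide: LCB acquisition
    x_next = grid[np.argmin(lcb)].reshape(1, 1)
    y_next = expensive(x_next).ravel()             # 4. Evaluate at chosen point
    X = np.vstack([X, x_next])
    y = np.concatenate([y, y_next])

best_x = float(X[np.argmin(y), 0])                 # incumbent best design
```

The small `alpha` jitter keeps the kernel matrix well-conditioned if the loop revisits nearly identical points.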

🧠 Acquisition Functions:

  • EI (Expected Improvement): Balances improvement potential with uncertainty
  • UCB (Upper Confidence Bound): Optimistic exploration strategy
  • PI (Probability of Improvement): Focuses on probability of beating current best
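All three criteria have closed forms under a Gaussian posterior. Here is a sketch for a minimization problem; the exploration parameters `xi` and `kappa` are assumed defaults, not values prescribed by this application.

```python
import numpy as np
from scipy.stats import norm

def acquisitions(mu, sigma, best, xi=0.01, kappa=2.0):
    """Closed-form acquisition values for minimization.

    mu, sigma : GP posterior mean and std at candidate points
    best      : lowest objective value observed so far
    xi, kappa : exploration parameters (assumed defaults)
    """
    sigma = np.maximum(sigma, 1e-12)  # avoid division by zero
    z = (best - mu - xi) / sigma
    ei = (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)  # Expected Improvement
    pi = norm.cdf(z)                                           # Probability of Improvement
    lcb = mu - kappa * sigma                                   # (lower) confidence bound
    return ei, pi, lcb
```

At equal predicted means, a more uncertain candidate gets higher EI and a lower (more optimistic) confidence bound, which is what drives exploration.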

Interactive Features:

  • Live Visualization: Watch the optimization progress
  • Model Evolution: See how the surrogate learns
  • Acquisition Plots: Understand sampling decisions
  • Convergence Tracking: Monitor optimization progress

💡 Pro Tips:

  • Start with EI for most problems
  • Use UCB for more exploration
  • Higher noise requires more exploration
  • More iterations = better results (up to a point)