Active Learning Application
Active Learning is an intelligent approach to optimization that learns from previous evaluations to decide where to sample next. Instead of random sampling, it uses Bayesian optimization to make smart decisions about which designs to test.
Key Concepts:
🎯 Surrogate Model (Gaussian Process)
- Purpose: Approximates the expensive function
- Advantage: Provides both predictions AND uncertainty estimates
- Learning: Updates beliefs as new data arrives
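As a concrete illustration, the sketch below fits a Gaussian Process surrogate to a handful of samples and queries it for both a mean prediction and an uncertainty estimate. It is a minimal example using scikit-learn's GaussianProcessRegressor with an illustrative stand-in objective; the app's actual GP backend and kernel choice may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Stand-in for the expensive simulation (hypothetical 1D test function).
def expensive_objective(x):
    return np.sin(3 * x) + 0.1 * x**2

# A few initial samples of the design space.
X_train = np.array([[0.5], [2.0], [4.0]])
y_train = expensive_objective(X_train).ravel()

# Fit the surrogate: a GP with a Matern kernel.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# Predictions come with uncertainty estimates (standard deviations),
# which is what the acquisition function exploits.
X_query = np.linspace(0, 5, 100).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)
```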
🔍 Acquisition Function
- Purpose: Balances exploration vs exploitation
- Exploration: Try uncertain regions (high variance)
- Exploitation: Focus on promising regions (low predicted mean, when minimizing)
- Smart Sampling: Automatically finds the best trade-off
⚡ Efficiency Benefits
- Fewer Evaluations: Find optima with minimal function calls
- Global Search: Avoids getting stuck in local optima
- Uncertainty Quantification: Know how confident predictions are
- Adaptive: Learns the function's complexity automatically
Available Applications:
- Single-Objective: Find the best design for one goal
- Multi-Objective: Balance competing objectives (Pareto optimization)
What is Single-Objective Optimization?
Find the best design that minimizes (or maximizes) one specific goal. This is the most common optimization scenario in engineering.
How Bayesian Optimization Works:
🔄 The Iterative Process:
- Start: Begin with a few random samples
- Learn: Fit a Gaussian Process to the data
- Decide: Use acquisition function to pick next sample
- Evaluate: Run expensive simulation at chosen point
- Repeat: Continue until budget exhausted
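A minimal end-to-end sketch of this loop is shown below, using a scikit-learn GP surrogate and the Expected Improvement acquisition evaluated on a dense grid of candidates. The test function, bounds, and budget are illustrative assumptions, not the app's actual settings.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # stand-in for the expensive simulation
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
bounds = (0.0, 5.0)
X = rng.uniform(*bounds, size=(3, 1))  # Start: a few random samples
y = objective(X).ravel()

for _ in range(15):                    # Repeat: until the budget is exhausted
    # Learn: fit a Gaussian Process to the data collected so far
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

    # Decide: pick the candidate that maximizes Expected Improvement
    candidates = np.linspace(*bounds, 500).reshape(-1, 1)
    mean, std = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mean) / np.maximum(std, 1e-12)
    ei = (best - mean) * norm.cdf(z) + std * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, -1)

    # Evaluate: run the expensive function at the chosen point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("Best design:", X[np.argmin(y)], "objective:", y.min())
```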
🧠 Acquisition Functions:
- EI (Expected Improvement): Balances improvement potential with uncertainty
- UCB (Upper Confidence Bound): Optimistic exploration strategy
- PI (Probability of Improvement): Focuses on probability of beating current best
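The sketch below writes the three acquisition functions as standalone helpers operating on the GP's posterior mean and standard deviation, assuming minimization. The kappa and xi parameters are illustrative defaults, not the app's configured values.

```python
import numpy as np
from scipy.stats import norm

# All three take the GP posterior mean/std at candidate points plus the
# current best observed value, and assume minimization (smaller is better).

def expected_improvement(mean, std, best):
    z = (best - mean) / np.maximum(std, 1e-12)
    return (best - mean) * norm.cdf(z) + std * norm.pdf(z)

def upper_confidence_bound(mean, std, kappa=2.0):
    # Written as a lower bound for minimization: smaller values are more
    # promising; larger kappa puts more weight on uncertain regions.
    return mean - kappa * std

def probability_of_improvement(mean, std, best, xi=0.01):
    # xi is a small margin that discourages negligible improvements.
    return norm.cdf((best - mean - xi) / np.maximum(std, 1e-12))
```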
Interactive Features:
- Live Visualization: Watch the optimization progress
- Model Evolution: See how the surrogate learns
- Acquisition Plots: Understand sampling decisions
- Convergence Tracking: Monitor optimization progress
💡 Pro Tips:
- Start with EI for most problems
- Use UCB for more exploration
- Higher noise requires more exploration
- More iterations = better results (up to a point)
[Interactive controls: Benchmark problem, Fidelity level (per problem); numeric settings not reproduced here]
What is Multi-Objective Optimization?
Real engineering problems often involve competing objectives that can't all be optimized simultaneously. Multi-objective optimization finds the Pareto front - the set of optimal trade-offs between objectives.
Key Concepts:
🏆 Pareto Optimality
- Definition: A solution is Pareto optimal if no other solution is at least as good in EVERY objective and strictly better in at least one
- Trade-offs: Improving one objective worsens another
- Pareto Front: The curve/surface of all Pareto optimal solutions
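A minimal sketch of these ideas, assuming both objectives are minimized: a dominance check plus a helper that extracts the non-dominated (Pareto) set from an array of evaluated objective vectors. The function and data names are illustrative.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is at least as good in every objective and strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(Y):
    """Return the non-dominated rows of an (n_points, n_objectives) array."""
    keep = [not any(dominates(other, y) for j, other in enumerate(Y) if j != i)
            for i, y in enumerate(Y)]
    return Y[np.array(keep)]

Y = np.array([[1.0, 5.0], [2.0, 3.0], [2.5, 3.5], [4.0, 1.0]])
print(pareto_front(Y))   # [2.5, 3.5] is dominated by [2.0, 3.0] and drops out
```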
📊 Hypervolume Metric
- Purpose: Measures the volume of objective space dominated by the front, relative to a reference point
- Higher is Better: Larger hypervolume = better Pareto front
- Convergence: Tracks optimization progress over time
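For two objectives, the hypervolume is simply the area dominated by the front, measured against the reference point. The sketch below computes it directly for a minimization problem; production code would normally call a library routine, so treat this as illustrative.

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Area dominated by a 2D front (minimization), bounded by the reference
    point ref. Only points strictly better than ref in both objectives count."""
    pts = sorted((p for p in np.asarray(front) if np.all(p < ref)),
                 key=lambda p: p[0])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                        # dominated points add nothing
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0]])
print(hypervolume_2d(front, ref=[6.0, 6.0]))  # 5 + 8 + 4 = 17
```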
🎯 Multi-Objective Acquisition Functions
- EHI (Expected Hypervolume Improvement): Maximizes expected hypervolume gain
- POI (Probability of Improvement): Probability of improving hypervolume
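Continuing the sketch above, a simple Monte Carlo estimate of EHI for one candidate design draws joint samples of the two objectives from the surrogate's posterior, computes the hypervolume gain each draw would produce, and averages the gains. The sampling details and names here are assumptions, and the code reuses hypervolume_2d from the previous sketch.

```python
import numpy as np

def expected_hypervolume_improvement(post_samples, front, ref):
    """Monte Carlo EHI for a single candidate design (minimization).

    post_samples: (n_draws, 2) joint posterior draws of the two objectives
    at the candidate point (e.g. drawn from the GP surrogates).
    Reuses hypervolume_2d from the sketch above.
    """
    base = hypervolume_2d(front, ref)
    gains = [hypervolume_2d(np.vstack([front, s]), ref) - base
             for s in post_samples]
    return float(np.mean(gains))

# Illustrative draws for one candidate: some fall beyond the current front.
draws = np.array([[1.5, 2.0], [3.0, 4.0], [0.5, 0.5]])
front = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0]])
print(expected_hypervolume_improvement(draws, front, ref=[6.0, 6.0]))
```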
Visualization Features:
- 3D Objective Surfaces: See how objectives vary with parameters
- Pareto Front Evolution: Watch the front improve over iterations
- Hypervolume Convergence: Track optimization progress
- Interactive Exploration: Examine different trade-offs
💡 Pro Tips:
- Reference Point: Set it slightly worse than the worst acceptable value in each objective, since hypervolume is measured relative to it
- EHI vs POI: EHI usually performs better
- More Objectives: Harder to visualize but same principles apply
- Decision Making: Use Pareto front to make informed trade-offs
[Interactive controls: Multi-objective Problem, Fidelity level; numeric settings not reproduced here]