Active Learning Application
Active Learning is a sample-efficient approach to optimization that uses previous evaluations to decide where to sample next. Instead of sampling at random, it applies Bayesian optimization to choose which designs to evaluate.
Key Concepts:
🎯 Surrogate Model (Gaussian Process)
- Purpose: Approximates the expensive function
- Advantage: Provides both predictions AND uncertainty estimates
- Learning: Updates beliefs as new data arrives
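The surrogate idea above can be sketched in a few lines. This is a minimal illustration using scikit-learn's `GaussianProcessRegressor`; the toy `expensive` function and kernel choice are assumptions for the sketch, not the app's internals.

```python
# Minimal GP surrogate sketch: fit on a few evaluated designs, then get
# both a prediction and an uncertainty estimate at a new point.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive(x):                     # toy stand-in for a costly simulation
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.1], [0.6], [1.4], [2.0]])   # a few evaluated designs
y = expensive(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)                          # beliefs update as data arrives

# The surrogate returns a mean prediction AND a standard deviation
mu, sigma = gp.predict(np.array([[1.0]]), return_std=True)
```

Refitting after each new evaluation is what makes the surrogate "learn": the predictive standard deviation shrinks near observed designs and stays large in unexplored regions.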
🔍 Acquisition Function
- Purpose: Balances exploration vs exploitation
- Exploration: Try uncertain regions (high variance)
- Exploitation: Focus on promising regions (low mean)
- Smart Sampling: Automatically finds the best trade-off
⚡ Efficiency Benefits
- Fewer Evaluations: Find optima with minimal function calls
- Global Search: Avoids getting stuck in local optima
- Uncertainty Quantification: Know how confident predictions are
- Adaptive: Learns the function's complexity automatically
Available Applications:
- Single-Objective: Find the best design for one goal
- Multi-Objective: Balance competing objectives (Pareto optimization)
What is Single-Objective Optimization?
Find the best design that minimizes (or maximizes) one specific goal. This is the most common optimization scenario in engineering.
How Bayesian Optimization Works:
🔄 The Iterative Process:
- Start: Begin with a few random samples
- Learn: Fit a Gaussian Process to the data
- Decide: Use acquisition function to pick next sample
- Evaluate: Run expensive simulation at chosen point
- Repeat: Continue until budget exhausted
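The loop above can be sketched as follows. For brevity this sketch uses a simple lower-confidence-bound acquisition over a candidate grid in place of EI; the toy objective and all settings are illustrative assumptions, not the app's implementation.

```python
# Sketch of the iterative Bayesian optimization loop (minimization).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive(x):                         # toy stand-in for a simulation
    return (x - 0.7) ** 2 + 0.1 * np.sin(20 * x)

X = rng.uniform(0, 1, size=(3, 1))        # Start: a few random samples
y = expensive(X).ravel()

grid = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(10):                       # Repeat: until budget exhausted
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True, alpha=1e-6)
    gp.fit(X, y)                          # Learn: fit the GP to the data
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - 2.0 * sigma)]  # Decide: minimize LCB
    y_next = expensive(x_next)            # Evaluate: run the simulation
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, y_next)

best = X[np.argmin(y)].item()             # best design found so far
```

The `mu - 2.0 * sigma` term is where exploration enters: points with high uncertainty get a bonus even if their predicted mean is mediocre.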
🧠 Acquisition Function (EI):
- Expected Improvement, EI(x) = E[max(0, f_best − f(x) − ξ)], balances the predicted improvement over the best observed value against the model's uncertainty; ξ controls how strongly exploration is encouraged.
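Under a Gaussian posterior, EI has a standard closed form, sketched below for minimization (the default `xi` value is an illustrative assumption).

```python
# Closed-form Expected Improvement for minimization:
#   EI(x) = (f_best - mu - xi) * Phi(z) + sigma * phi(z),
#   z = (f_best - mu - xi) / sigma
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    sigma = np.maximum(sigma, 1e-12)   # guard against division by zero
    imp = f_best - mu - xi             # predicted improvement potential
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)
```

Note how both terms matter: a point with a good mean but no uncertainty, or high uncertainty but a poor mean, can lose to a point with a moderate amount of each.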
Interactive Features:
- Live Visualization: Watch the optimization progress
- Model Evolution: See how the surrogate learns
- Acquisition Plots: Understand sampling decisions
- Convergence Tracking: Monitor optimization progress
💡 Pro Tips:
- EI already balances exploration vs. exploitation
- Higher observation noise benefits from a slightly larger ξ
- More iterations = better results (up to a point)
What is Multi-Objective Optimization?
Real engineering problems often involve competing objectives that can't all be optimized simultaneously. Multi-objective optimization finds the Pareto front - the set of optimal trade-offs between objectives.
Key Concepts:
🏆 Pareto Optimality
- Definition: A solution is Pareto optimal if no other solution is at least as good in every objective and strictly better in at least one
- Trade-offs: Improving one objective worsens another
- Pareto Front: The curve/surface of all Pareto optimal solutions
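The dominance check behind these definitions is simple to write down. A minimal sketch, assuming both objectives are minimized:

```python
# Extract the non-dominated (Pareto optimal) subset of a point set.
import numpy as np

def pareto_front(points):
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some q is at least as good everywhere
        # and strictly better somewhere
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]
```

For example, among (1, 4), (2, 2), (4, 1), (3, 3), the point (3, 3) is dominated by (2, 2) and drops out; the other three are mutual trade-offs and form the front.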
📊 Hypervolume Metric
- Purpose: Measures the volume of objective space dominated by the Pareto front, up to a fixed reference point
- Higher is Better: Larger hypervolume = better Pareto front
- Convergence: Tracks optimization progress over time
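In two objectives the hypervolume is just an area and can be computed by summing rectangles. A minimal sketch, assuming minimization and a front whose points all lie below the reference point:

```python
# 2-D hypervolume of a Pareto front (minimization): the area between
# the front and the reference point that the front dominates.
import numpy as np

def hypervolume_2d(front, ref):
    front = np.asarray(front, dtype=float)
    front = front[np.argsort(front[:, 0])]      # sort by first objective
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        area += (ref[0] - f1) * (prev_f2 - f2)  # stack of rectangles
        prev_f2 = f2
    return area
```

For the front (1, 3), (2, 2), (3, 1) with reference point (4, 4) this gives 3 + 2 + 1 = 6; adding any new non-dominated point can only grow the area, which is why the metric tracks progress.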
🎯 Multi-Objective Acquisition Functions
- EHI (Expected Hypervolume Improvement): Maximizes expected hypervolume gain
Visualization Features:
- 3D Objective Surfaces: See how objectives vary with parameters
- Pareto Front Evolution: Watch the front improve over iterations
- Hypervolume Convergence: Track optimization progress
- Interactive Exploration: Examine different trade-offs
💡 Pro Tips:
- Reference Point: Choose a point slightly worse than the worst acceptable value in each objective; the hypervolume is measured relative to it
- More Objectives: Harder to visualize but same principles apply
- Decision Making: Use Pareto front to make informed trade-offs
Real Use-Case: Propeller Optimization
In this tab we use pre-trained surrogate models of a propeller to perform multi-objective active learning. The objectives are:
- Objective 1: Power (scaled by 1/1000)
- Objective 2: Cavitation objective
The underlying CFD or experimental model is not called directly here; instead, we use surrogates (Gaussian Process / Kriging) that you provide as .joblib files.
The acquisition strategy is based on hypervolume improvement: at each iteration, we choose a new propeller design that is expected to maximally improve the Pareto front in terms of dominated volume.
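A simplified sketch of that selection step is shown below. The two surrogate functions are toy stand-ins for the pre-trained .joblib models, the reference point and current front are made up, and the selection scores candidates by the surrogates' posterior means only; the full acquisition additionally integrates the hypervolume gain over each surrogate's predictive uncertainty.

```python
# Simplified hypervolume-improvement selection for two minimized objectives.
import numpy as np

def hv2d(front, ref):                       # 2-D hypervolume (minimization)
    area, prev = 0.0, ref[1]
    for f1, f2 in sorted(front):
        area += (ref[0] - f1) * max(prev - f2, 0.0)  # dominated points add 0
        prev = min(prev, f2)
    return area

# Hypothetical surrogate posterior means for the two objectives
def power_surrogate(x):      return (x - 0.3) ** 2 + 0.2
def cavitation_surrogate(x): return (x - 0.8) ** 2 + 0.1

ref = (2.0, 2.0)                            # reference point
front = [(0.5, 0.9)]                        # current Pareto front
candidates = np.linspace(0.0, 1.0, 101)     # candidate propeller designs

# Pick the design whose predicted objectives most increase the
# dominated hypervolume when added to the current front
gains = [
    hv2d(front + [(power_surrogate(x), cavitation_surrogate(x))], ref)
    - hv2d(front, ref)
    for x in candidates
]
x_next = candidates[int(np.argmax(gains))]
```

The selected design is then evaluated with the surrogates, appended to the data set, and the front and hypervolume are updated for the next iteration.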