SVR / Regressor Layer
Support Vector Regression (SVR): Advanced regression with kernels.
Mathematical formulation:
f(x) = Σᵢ (αᵢ - αᵢ*) K(xᵢ, x) + b
where:
- αᵢ, αᵢ* are Lagrange multipliers
- xᵢ are the support vectors
- K(x,y) is the kernel function
- b is the bias term
Key characteristics:
- ε-insensitive loss function
- Kernel-based learning
- Robust to outliers
- Non-linear modeling
Advantages:
- High prediction accuracy
- Good generalization
- Handles non-linearity
- Robust performance
Common applications:
- Financial forecasting
- Function approximation
- System modeling
- Time series prediction
Outputs:
- Predicted Table: Results with predictions
- Validation Results: Cross-validation metrics
- Test Metric: Hold-out performance
- Feature Importances: Support vector weights
Note: Computational complexity is O(n²). For large datasets (>10,000 samples), consider LinearSVR or SGDRegressor instead.
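The parameters documented below match those of scikit-learn's sklearn.svm.SVR; assuming that backend (not confirmed by this page), a minimal end-to-end sketch of the layer's behavior looks like this:

```python
# Minimal SVR sketch, assuming a scikit-learn-style backend.
# Scaling is included because SVR is sensitive to feature magnitudes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))              # toy features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy target

# Defaults documented below: C=1.0, RBF kernel, epsilon=0.1, gamma='scale'
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X, y)
print(model.predict(X[:5]))
```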
SelectFeatures
[column, ...]
Feature column selection for SVR:
Requirements:
Data properties:
- Numeric values only
- No missing values
- Finite numbers
- Comparable scales
Preprocessing needs:
- Standardization/scaling crucial
- Outlier handling
- Feature correlation check
- Missing value treatment
SVR considerations:
- Feature relevance
- Kernel compatibility
- Dimensionality impact
- Computational cost
Best practices:
- Scale to [-1, 1] or [0, 1]
- Remove redundant features
- Handle outliers
- Check correlations
Note: If empty, uses all numeric columns except target
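A sketch of the recommended preprocessing, selecting feature columns and scaling them to [-1, 1]; the column names and data here are hypothetical:

```python
# Select numeric feature columns and scale to [-1, 1] before SVR.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"x1": [1.0, 5.0, 9.0],
                   "x2": [100.0, 250.0, 400.0],
                   "target": [0.2, 0.5, 0.9]})
features = ["x1", "x2"]   # empty selection would mean: all numeric columns except target
scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaled = scaler.fit_transform(df[features])
print(X_scaled)
```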
SelectTarget
column
Target column specification for SVR:
Requirements:
Data type:
- Numeric continuous
- No missing values
- Finite numbers
- Real-valued
Statistical properties:
- Scale consideration
- Distribution check
- Outlier presence
- Noise level
SVR specifics:
- ε-tube compatibility
- Error tolerance
- Prediction range
- Loss function scale
Preprocessing:
- Scaling recommended
- Outlier treatment
- Transform if needed
- Range normalization
Note: Must be a single numeric column
Params
oneof
Default SVR configuration:
Model structure:
- C = 1.0 (regularization strength)
- RBF kernel (versatile default)
- ε = 0.1 (error tube width)
Kernel settings:
- γ = 'scale' (adaptive radius)
- degree = 3 (polynomial)
- coef₀ = 0.0 (independent term)
Optimization parameters:
- Cache = 200MB (memory usage)
- tol = 0.001 (convergence)
- Shrinking = true (speed)
- max_iter = -1 (unlimited)
Best suited for:
- Initial modeling
- Medium-sized datasets
- Unknown patterns
- General regression tasks
Customizable SVR parameters:
Parameter categories:
Model complexity:
- Regularization (C)
- Kernel selection
- Error tolerance (ε)
Kernel configuration:
- Function type
- Shape parameters
- Feature mapping
Optimization control:
- Memory usage
- Convergence criteria
- Algorithm behavior
Trade-offs:
- Accuracy vs complexity
- Speed vs precision
- Memory vs computation
CFactor
f64
Regularization parameter (C):
Impact on model:
- Large C: High variance, low bias
- Small C: Low variance, high bias
Selection guide:
- Small: 0.1-1.0 (more regularization)
- Medium: 1.0-10.0 (balanced)
- Large: 10.0-100.0 (less regularization)
Considerations:
- Noise level
- Training size
- Outlier presence
- Model complexity
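An illustrative sweep over C (assuming a scikit-learn backend): larger C fits the training data more tightly, smaller C regularizes more strongly.

```python
# Effect of C on training fit: larger C chases the data harder.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=150)

for C in [0.1, 1.0, 10.0, 100.0]:
    svr = SVR(C=C).fit(X, y)
    print(f"C={C:6.1f}  train R^2={svr.score(X, y):.3f}")
```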
Kernel
enum
Kernel functions for SVR:
Linear: K(x,y) = x·y
- Fastest computation
- Linear relationships
- High-dimensional data
Polynomial: K(x,y) = (γx·y + coef₀)^degree
- Feature interactions
- Degree controls complexity
- Useful for normalized data
RBF: K(x,y) = exp(-γ||x-y||²)
- Most versatile kernel
- Infinite dimensions
- Local influence
Sigmoid: K(x,y) = tanh(γx·y + coef₀)
- Neural network relation
- S-shaped responses
- Binary patterns
Selection impact:
- Model complexity
- Training time
- Prediction accuracy
- Generalization ability
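A quick way to feel this selection impact is to cross-validate each kernel on the same data; a sketch assuming a scikit-learn backend:

```python
# Comparing the four kernels on the same toy regression problem.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(200, 3))
y = X[:, 0] ** 2 + X[:, 1] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    scores = cross_val_score(SVR(kernel=kernel), X, y, cv=5, scoring="r2")
    print(f"{kernel:8s} mean R^2 = {scores.mean():.3f}")
```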
Linear kernel function:
Formula: K(x,y) = x·y
Properties:
- Simplest kernel
- Fast computation
- Memory efficient
- Linear separation
Best for:
- High-dimensional data
- Text classification
- Sparse features
- Linear relationships
Polynomial kernel function:
Formula: K(x,y) = (γx·y + coef₀)^degree
Properties:
- Feature interactions
- Controlled complexity
- Bounded response
- Global influence
Best for:
- Feature combinations
- Normalized data
- Moderate non-linearity
- Pattern recognition
Radial Basis Function kernel:
Formula: K(x,y) = exp(-γ||x-y||²)
Properties:
- Infinite dimensions
- Local sensitivity
- Universal approximator
- Distance-based
Best for:
- Unknown relationships
- Non-linear patterns
- Continuous features
- General-purpose use
Sigmoid kernel function:
Formula: K(x,y) = tanh(γx·y + coef₀)
Properties:
- Neural network relation
- S-shaped response
- Non-monotonic
- Bounded output
Best for:
- Neural network alternative
- Binary patterns
- Signal processing
- Specific non-linearities
Degree
u32
Polynomial kernel degree:
Effect on learning:
- Controls feature interactions
- Affects model complexity
- Impacts training time
Common values:
- 2: Quadratic relationships
- 3: Cubic patterns (default)
- 4+: Higher-order interactions
Note: Only used with polynomial kernel
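A small sweep over degree with the polynomial kernel (hypothetical data, scikit-learn assumed):

```python
# Higher degrees admit higher-order feature interactions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(120, 1))
y = X[:, 0] ** 3 - X[:, 0] + 0.05 * rng.normal(size=120)

for degree in [2, 3, 4]:
    svr = SVR(kernel="poly", degree=degree, C=10.0).fit(X, y)
    print(f"degree={degree}  train R^2={svr.score(X, y):.3f}")
```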
Gamma
enum
Kernel coefficient strategies:
Role in kernels:
- RBF: Controls radius of influence
- Polynomial: Scales inner product
- Sigmoid: Affects slope
Selection guidelines:
- Small γ: Large influence radius
- Large γ: Small influence radius
- Optimal γ: Data-dependent
Impact on learning:
- Feature importance
- Model complexity
- Training stability
- Generalization
Scale-dependent gamma:
Formula: γ = 1 / (n_features · Var(X))
Properties:
- Data-adaptive scaling
- Variance-aware
- Feature-normalized
- Robust behavior
Advantages:
- Handles different scales
- Modern default choice
- Automatic adaptation
- Stable performance
Feature-based gamma:
Formula: γ = 1 / n_features
Properties:
- Dimension-based scaling
- Simple heuristic
- Scale-independent
- Legacy default
Best for:
- Normalized features
- Similar scales
- Historical compatibility
- Quick baselines
User-defined gamma:
Configuration:
- Manual value setting
- Expert knowledge input
- Problem-specific tuning
- Fine control
Use cases:
- Known domain requirements
- Cross-validation results
- Performance optimization
- Research experiments
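The 'scale' and 'auto' strategies above reduce to simple formulas; a sketch of computing both, assuming the scikit-learn convention that this page's wording matches:

```python
# gamma='scale' -> 1 / (n_features * Var(X)); gamma='auto' -> 1 / n_features
import numpy as np

X = np.array([[0.0, 10.0], [1.0, 20.0], [2.0, 30.0]])
gamma_scale = 1.0 / (X.shape[1] * X.var())   # variance-aware, modern default
gamma_auto = 1.0 / X.shape[1]                # dimension-only, legacy default
print(gamma_scale, gamma_auto)
```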
GammaF
f64
Custom gamma value:
Impact on kernels:
- RBF: Influence radius
- Poly: Feature scaling
- Sigmoid: Response slope
Typical ranges:
- Small: 0.0001-0.001 (wide influence)
- Medium: 0.001-0.1 (balanced)
- Large: 0.1-1.0 (narrow influence)
Note: Only used when Gamma='Custom'
Coef0
f64
Independent term in kernel:
Usage in kernels:
- Polynomial: Offset term
- Sigmoid: Threshold
Impact:
- Controls model flexibility
- Affects feature interactions
- Influences decision boundary
Common range: [-1.0, 1.0]
Shrinking
bool
Whether to use the shrinking heuristic.
When enabled:
- Removes bounded variables
- Speeds up optimization
- Reduces memory usage
Trade-offs:
- Speed vs precision
- Memory vs accuracy
- Training efficiency
Epsilon
f64
ε-tube width parameter:
Definition:
- Defines insensitive region
- Controls prediction precision
- Affects support vector count
Selection guide:
- Small ε: High precision, more SVs
- Large ε: Lower precision, fewer SVs
Typical range: [0.01, 0.5]
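The ε/support-vector trade-off is easy to observe directly: wider tubes ignore more residuals, so fewer training points become support vectors. A sketch assuming a scikit-learn backend:

```python
# Support vector count shrinks as the epsilon-tube widens.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

for eps in [0.01, 0.1, 0.5]:
    svr = SVR(epsilon=eps).fit(X, y)
    print(f"epsilon={eps:5.2f}  support vectors={len(svr.support_)}")
```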
Tol
f64
Optimization tolerance:
Controls:
- Convergence precision
- Training duration
- Solution accuracy
Common values:
- Strict: 1e-4 or smaller
- Standard: 1e-3 (default)
- Relaxed: 1e-2 or larger
CacheSize
u64
Kernel cache size (MB):
Impact:
- Training speed
- Memory usage
- Computation efficiency
Guidelines:
- Small: 50-100MB
- Medium: 200MB (default)
- Large: 500MB+
Trade-off: Speed vs memory
MaxIter
i64
Maximum iterations limit:
Settings:
- -1: No limit (default)
- >0: Maximum iterations
Purpose:
- Controls training time
- Prevents endless loops
- Resource management
Note: May affect convergence
SVR hyperparameter optimization:
Search space organization:
Model complexity:
- Regularization ranges
- Kernel selection
- Error tolerance
Kernel configuration:
- Function types
- Shape parameters
- Scale factors
Optimization settings:
- Convergence criteria
- Resource limits
- Algorithm behavior
Best practices:
- Start with coarse grid
- Refine promising regions
- Consider computation cost
- Monitor resource usage
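A coarse-to-fine search over the documented parameters, sketched with scikit-learn's GridSearchCV as an assumed search backend:

```python
# Coarse grid first; refine a narrow grid around the best values afterwards.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(150, 2))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=150)

coarse_grid = {
    "C": [0.1, 1.0, 10.0, 100.0],   # logarithmic scale for wide exploration
    "kernel": ["rbf", "linear"],
    "epsilon": [0.05, 0.1, 0.2],
}
search = GridSearchCV(SVR(), coarse_grid, cv=3, scoring="r2")
search.fit(X, y)
print(search.best_params_)
```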
CFactor
[f64, ...]
Regularization parameter search:
Common search patterns:
Logarithmic scale:
- [0.1, 1.0, 10.0, 100.0]
- Wide exploration
Fine-tuning:
- [0.8, 1.0, 1.2]
- Around promising value
Problem-specific:
- Based on data scale
- Noise sensitivity
- Model complexity needs
Kernel
[enum, ...]
Kernel functions for SVR:
Linear: K(x,y) = x·y
- Fastest computation
- Linear relationships
- High-dimensional data
Polynomial: K(x,y) = (γx·y + coef₀)^degree
- Feature interactions
- Degree controls complexity
- Useful for normalized data
RBF: K(x,y) = exp(-γ||x-y||²)
- Most versatile kernel
- Infinite dimensions
- Local influence
Sigmoid: K(x,y) = tanh(γx·y + coef₀)
- Neural network relation
- S-shaped responses
- Binary patterns
Selection impact:
- Model complexity
- Training time
- Prediction accuracy
- Generalization ability
Linear kernel function:
Formula: K(x,y) = x·y
Properties:
- Simplest kernel
- Fast computation
- Memory efficient
- Linear separation
Best for:
- High-dimensional data
- Text classification
- Sparse features
- Linear relationships
Polynomial kernel function:
Formula: K(x,y) = (γx·y + coef₀)^degree
Properties:
- Feature interactions
- Controlled complexity
- Bounded response
- Global influence
Best for:
- Feature combinations
- Normalized data
- Moderate non-linearity
- Pattern recognition
Radial Basis Function kernel:
Formula: K(x,y) = exp(-γ||x-y||²)
Properties:
- Infinite dimensions
- Local sensitivity
- Universal approximator
- Distance-based
Best for:
- Unknown relationships
- Non-linear patterns
- Continuous features
- General-purpose use
Sigmoid kernel function:
Formula: K(x,y) = tanh(γx·y + coef₀)
Properties:
- Neural network relation
- S-shaped response
- Non-monotonic
- Bounded output
Best for:
- Neural network alternative
- Binary patterns
- Signal processing
- Specific non-linearities
Degree
[u32, ...]
Polynomial degree search:
Search spaces:
Standard range:
- [2, 3, 4]
- Common patterns
Extended:
- [2, 3, 4, 5]
- Higher complexity
Specific:
- Based on domain
- Known relationships
Note: For polynomial kernel
Gamma
[enum, ...]
Kernel coefficient strategies:
Role in kernels:
- RBF: Controls radius of influence
- Polynomial: Scales inner product
- Sigmoid: Affects slope
Selection guidelines:
- Small γ: Large influence radius
- Large γ: Small influence radius
- Optimal γ: Data-dependent
Impact on learning:
- Feature importance
- Model complexity
- Training stability
- Generalization
Scale-dependent gamma:
Formula: γ = 1 / (n_features · Var(X))
Properties:
- Data-adaptive scaling
- Variance-aware
- Feature-normalized
- Robust behavior
Advantages:
- Handles different scales
- Modern default choice
- Automatic adaptation
- Stable performance
Feature-based gamma:
Formula: γ = 1 / n_features
Properties:
- Dimension-based scaling
- Simple heuristic
- Scale-independent
- Legacy default
Best for:
- Normalized features
- Similar scales
- Historical compatibility
- Quick baselines
User-defined gamma:
Configuration:
- Manual value setting
- Expert knowledge input
- Problem-specific tuning
- Fine control
Use cases:
- Known domain requirements
- Cross-validation results
- Performance optimization
- Research experiments
GammaF
[f64, ...]
Custom gamma value search:
Search patterns:
Log scale:
- [0.0001, 0.001, 0.01, 0.1]
- Wide exploration
Fine-grained:
- Around best gamma
- Narrow range
Data-driven:
- Based on feature scales
- Problem characteristics
Coef0
[f64, ...]
Independent term search:
Search spaces:
Standard:
- [0.0, 0.5, 1.0]
- Basic range
Extended:
- [-1.0, 0.0, 1.0]
- Full range
Fine-tuning:
- Around best value
- Small increments
Note: For polynomial and sigmoid kernels only
Shrinking
[bool, ...]
Shrinking heuristic search:
Options:
Default: [true]
- Enable optimization
- Faster training
Compare: [true, false]
- Performance impact
- Speed vs precision
Specific:
- Based on dataset
- Resource constraints
Epsilon
[f64, ...]
ε-tube width search:
Search ranges:
Standard:
- [0.05, 0.1, 0.2]
- Common values
Precision focus:
- [0.01, 0.05, 0.1]
- Higher accuracy
Wide range:
- [0.01, 0.1, 0.5]
- Explore tolerance
Tol
[f64, ...]
Convergence tolerance search:
Search patterns:
Standard:
- [1e-4, 1e-3, 1e-2]
- Common range
High precision:
- [1e-5, 1e-4, 1e-3]
- Exact solutions
Quick convergence:
- [1e-3, 1e-2]
- Faster training
CacheSize
f64
Kernel cache memory size (MB):
Guidelines:
Small: 50-100MB
- Limited memory
- Smaller datasets
Medium: 200MB
- Balanced choice
- Default setting
Large: 500MB+
- Fast training
- Large datasets
MaxIter
[i64, ...]
Maximum iterations search:
Search options:
Unlimited: [-1]
- Full convergence
- No time limit
Limited: [1000, 2000, 5000]
- Time constrained
- Resource managed
Quick runs: [500, 1000]
- Fast iterations
- Initial testing
RefitScore
enum
Regression model evaluation metrics:
Purpose:
- Model performance evaluation
- Error measurement
- Quality assessment
- Model comparison
Selection criteria:
- Error distribution
- Scale sensitivity
- Domain requirements
- Business objectives
Model's native scoring method:
- Typically R² score
- Model-specific implementation
- Standard evaluation
- Quick assessment
Coefficient of determination (R²):
Formula: R² = 1 - Σ(yᵢ - ŷᵢ)² / Σ(yᵢ - ȳ)²
Properties:
- Range: (-∞, 1]
- 1: Perfect prediction
- 0: Constant model
- Negative: Worse than mean
Best for:
- General performance
- Variance explanation
- Model comparison
- Standard reporting
Explained variance score:
Formula: EV = 1 - Var(y - ŷ) / Var(y)
Properties:
- Range: (-∞, 1]
- Accounts for bias
- Variance focus
- Similar to R²
Best for:
- Variance analysis
- Bias assessment
- Model stability
Maximum absolute error:
Formula: maxᵢ |yᵢ - ŷᵢ|
Properties:
- Worst case error
- Original scale
- Sensitive to outliers
- Upper error bound
Best for:
- Critical applications
- Error bounds
- Safety margins
- Risk assessment
Negative mean absolute error:
Formula: -(1/n) Σ|yᵢ - ŷᵢ|
Properties:
- Linear error scale
- Robust to outliers
- Original units
- Negated for optimization
Best for:
- Robust evaluation
- Interpretable errors
- Outlier presence
Negative mean squared error:
Formula: -(1/n) Σ(yᵢ - ŷᵢ)²
Properties:
- Squared error scale
- Outlier sensitive
- Squared units
- Negated for optimization
Best for:
- Standard optimization
- Large error penalty
- Statistical analysis
Negative root mean squared error:
Formula: -√((1/n) Σ(yᵢ - ŷᵢ)²)
Properties:
- Original scale
- Outlier sensitive
- Interpretable units
- Negated for optimization
Best for:
- Standard reporting
- Interpretable errors
- Model comparison
Negative mean squared logarithmic error:
Formula: -(1/n) Σ(ln(1 + yᵢ) - ln(1 + ŷᵢ))²
Properties:
- Relative error scale
- For positive values
- Sensitive to ratios
- Negated for optimization
Best for:
- Exponential growth
- Relative differences
- Positive predictions
Negative median absolute error:
Formula: -median(|yᵢ - ŷᵢ|)
Properties:
- Highly robust
- Original scale
- Outlier resistant
- Negated for optimization
Best for:
- Robust evaluation
- Heavy-tailed errors
- Outlier presence
Negative Poisson deviance:
Formula: -(2/n) Σ(yᵢ ln(yᵢ/ŷᵢ) - yᵢ + ŷᵢ)
Properties:
- For count data
- Non-negative values
- Poisson assumption
- Negated for optimization
Best for:
- Count prediction
- Event frequency
- Rate modeling
Negative Gamma deviance:
Formula: -(2/n) Σ(ln(ŷᵢ/yᵢ) + yᵢ/ŷᵢ - 1)
Properties:
- For positive continuous data
- Constant CV assumption
- Relative errors
- Negated for optimization
Best for:
- Positive continuous data
- Multiplicative errors
- Financial modeling
Negative mean absolute percentage error:
Formula: -(1/n) Σ|yᵢ - ŷᵢ| / |yᵢ|
Properties:
- Percentage scale
- Scale independent
- For non-zero targets
- Negated for optimization
Best for:
- Relative performance
- Scale-free comparison
- Business metrics
D² score with absolute error:
Formula: D² = 1 - Σ|yᵢ - ŷᵢ| / Σ|yᵢ - median(y)|
Properties:
- Range: (-∞, 1]
- Robust version of R²
- Linear error scale
- Outlier resistant
Best for:
- Robust evaluation
- Non-normal errors
- Alternative to R²
D² score with pinball loss:
Properties:
- Quantile focus
- Asymmetric errors
- Risk assessment
- Distribution modeling
Best for:
- Quantile regression
- Risk analysis
- Asymmetric costs
- Distribution tails
D² score with Tweedie deviance:
Properties:
- Compound Poisson-Gamma
- Flexible dispersion
- Mixed distributions
- Insurance modeling
Best for:
- Insurance claims
- Mixed continuous-discrete
- Compound distributions
- Specialized modeling
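These refit scores map directly onto standard regression metrics; a sketch of computing a few of them, assuming scikit-learn's metrics module as the backend (the "negated for optimization" convention means larger is better):

```python
# Computing several of the metrics above on toy predictions.
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_squared_error,
                             median_absolute_error, r2_score)

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.3, 2.9, 6.4])

print("R^2   :", r2_score(y_true, y_pred))
print("-MAE  :", -mean_absolute_error(y_true, y_pred))   # negated for maximization
print("-MSE  :", -mean_squared_error(y_true, y_pred))
print("-MedAE:", -median_absolute_error(y_true, y_pred))
```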
Split
oneof
Standard train-test split configuration optimized for general regression tasks.
Configuration:
- Test size: 20% (0.2)
- Random seed: 98
- Shuffling: Enabled
- Stratification: Based on target distribution
Advantages:
- Preserves class distribution
- Provides reliable validation
- Suitable for most datasets
Best for:
- Medium to large datasets
- Independent observations
- Initial model evaluation
Splitting uses the ShuffleSplit strategy or the StratifiedShuffleSplit strategy, depending on the stratified field. Note: If shuffle is false, then stratified must be false.
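The default configuration above (20% test, seed 98, shuffled), sketched with scikit-learn's train_test_split as an assumed backend; stratification is omitted here since the target is continuous:

```python
# Reproducing the default split: 80/20, shuffled, fixed seed.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10, dtype=float)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=98)
print(len(X_tr), len(X_te))  # 8 train, 2 test
```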
Configurable train-test split parameters for specialized requirements. Allows fine-tuning of data division strategy for specific use cases or constraints.
Use cases:
- Time series data
- Grouped observations
- Specific train/test ratios
- Custom validation schemes
RandomState
u64
Random seed for reproducible splits. Ensures:
- Consistent train/test sets
- Reproducible experiments
- Comparable model evaluations
Same seed guarantees identical splits across runs.
Shuffle
bool
Data shuffling before splitting. Effects:
- true: Randomizes order, better for i.i.d. data
- false: Maintains order, important for time series
When to disable:
- Time dependent data
- Sequential patterns
- Grouped observations
TrainSize
f64
Proportion of data for training. Considerations:
- Larger (e.g., 0.8-0.9): Better model learning
- Smaller (e.g., 0.5-0.7): Better validation
Common splits:
- 0.8: Standard (80/20 split)
- 0.7: More validation emphasis
- 0.9: More training emphasis
Stratified
bool
Maintain class distribution in splits. Important when:
- Classes are imbalanced
- Small classes present
- Representative splits needed
Requirements:
- Classification tasks only
- Cannot use with shuffle=false
- Sufficient samples per class
Cv
oneof
Standard cross-validation configuration using stratified 3-fold splitting.
Configuration:
- Folds: 3
- Method: StratifiedKFold
- Stratification: Preserves class proportions
Advantages:
- Balanced evaluation
- Reasonable computation time
- Good for medium-sized datasets
Limitations:
- May be insufficient for small datasets
- Higher variance than larger fold counts
- May miss some data patterns
Configurable stratified k-fold cross-validation for specific validation requirements.
Features:
- Adjustable fold count, with NFolds determining the number of splits
- Stratified sampling
- Preserved class distributions
Use cases:
- Small datasets (more folds)
- Large datasets (fewer folds)
- Detailed model evaluation
- Robust performance estimation
NFolds
u32
Number of cross-validation folds. Guidelines:
- 3-5: Large datasets, faster training
- 5-10: Standard choice, good balance
- 10+: Small datasets, thorough evaluation
Trade-offs:
- More folds: Better evaluation, slower training
- Fewer folds: Faster training, higher variance
Must be at least 2.
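The fold-count trade-off is easy to inspect empirically; a sketch assuming a scikit-learn backend (plain KFold, the usual choice for a continuous target):

```python
# More folds: more thorough but slower; fewer folds: faster, noisier.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(100, 2))
y = X[:, 0] + 0.1 * rng.normal(size=100)

for n_folds in [3, 5, 10]:
    cv = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    scores = cross_val_score(SVR(), X, y, cv=cv, scoring="r2")
    print(f"{n_folds:2d} folds: mean R^2 = {scores.mean():.3f}")
```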
K-fold cross-validation without stratification. Divides data into k consecutive folds for iterative validation.
Process:
- Splits data into k equal parts
- Each fold serves as validation once
- Remaining k-1 folds form training set
Use cases:
- Regression problems
- Large, balanced datasets
- When stratification unnecessary
- Continuous target variables
Limitations:
- May not preserve class distributions
- Less suitable for imbalanced data
- Can create biased splits with ordered data
NSplits
u32
Number of folds for cross-validation. Recommended values:
- 5: Standard choice (default)
- 3: Large datasets/quick evaluation
- 10: Thorough evaluation/smaller datasets
Trade-offs:
- Higher values: More thorough, computationally expensive
- Lower values: Faster, potentially higher variance
Must be at least 2 for valid cross-validation.
RandomState
u64
Random seed for fold generation when shuffling. Important for:
- Reproducible results
- Consistent fold assignments
- Benchmark comparisons
- Debugging and validation
Set specific value for reproducibility across runs.
Shuffle
bool
Whether to shuffle data before splitting into folds. Effects:
- true: Randomized fold composition (recommended)
- false: Sequential splitting
Enable when:
- Data may have ordering
- Better fold independence needed
Disable for:
- Time series data
- Ordered observations
Stratified K-fold cross-validation maintaining class proportions across folds.
Key features:
- Preserves class distribution in each fold
- Handles imbalanced datasets
- Ensures representative splits
Best for:
- Classification problems
- Imbalanced class distributions
- When class proportions matter
Requirements:
- Classification tasks only
- Sufficient samples per class
- Categorical target variable
NSplits
u32
Number of stratified folds. Typical values:
- 5: Standard for most cases
- 3: Quick evaluation/large datasets
- 10: Detailed evaluation/smaller datasets
Considerations:
- Must allow sufficient samples per class per fold
- Balance between stability and computation time
- Consider smallest class size when choosing
RandomState
u64
Seed for reproducible stratified splits. Ensures:
- Consistent fold assignments
- Reproducible results
- Comparable experiments
- Systematic validation
Fixed seed guarantees identical stratified splits.
Shuffle
bool
Data shuffling before stratified splitting. Impact:
- true: Randomizes while maintaining stratification
- false: Maintains data order within strata
Use cases:
- true: Independent observations
- false: Grouped or sequential data
Class proportions maintained regardless of setting.
Random permutation cross-validator with independent sampling.
Characteristics:
- Random sampling for each split
- Independent train/test sets
- More flexible than K-fold
- Can have overlapping test sets
Advantages:
- Control over test size
- Fresh splits each iteration
- Good for large datasets
Limitations:
- Some samples might never be tested
- Others might be tested multiple times
- No guarantee of complete coverage
NSplits
u32
Number of random splits to perform. Common values:
- 5: Standard evaluation
- 10: More thorough assessment
- 3: Quick estimates
Trade-offs:
- More splits: Better estimation, longer runtime
- Fewer splits: Faster, less stable estimates
Balance between computation and stability.
RandomState
u64
Random seed for reproducible shuffling. Controls:
- Split randomization
- Sample selection
- Result reproducibility
Important for:
- Debugging
- Comparative studies
- Result verification
TestSize
f64
Proportion of samples for test set. Common ratios:
- 0.2: Standard (80/20 split)
- 0.25: More validation emphasis
- 0.1: More training data
Considerations:
- Dataset size
- Model complexity
- Validation requirements
It must be between 0.0 and 1.0.
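A sketch of this strategy with scikit-learn's ShuffleSplit as an assumed backend, showing why coverage is not guaranteed (a sample may land in several test sets or in none):

```python
# Independent random splits; test sets may overlap across iterations.
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(10).reshape(10, 1)
ss = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)
for i, (train_idx, test_idx) in enumerate(ss.split(X)):
    print(f"split {i}: test indices = {test_idx}")
```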
Stratified random permutation cross-validator combining shuffle-split with stratification.
Features:
- Maintains class proportions
- Random sampling within strata
- Independent splits
- Flexible test size
Ideal for:
- Imbalanced datasets
- Large-scale problems
- When class distributions matter
- Flexible validation schemes
NSplits
u32
Number of stratified random splits. Recommended values:
- 5: Standard evaluation
- 10: Detailed analysis
- 3: Quick assessment
Consider:
- Sample size per class
- Computational resources
- Stability requirements
RandomState
u64
Seed for reproducible stratified sampling. Ensures:
- Consistent class proportions
- Reproducible splits
- Comparable experiments
Critical for:
- Benchmarking
- Research studies
- Quality assurance
TestSize
f64
Fraction of samples for stratified test set. Common splits:
- 0.2: Balanced evaluation
- 0.3: More thorough testing
- 0.15: Preserve training size
Consider:
- Minority class size
- Overall dataset size
- Validation objectives
It must be between 0.0 and 1.0.
Time Series cross-validator. Provides train/test indices to split time series data samples that are observed at fixed time intervals, in train/test sets. It is a variation of k-fold which returns the first k folds as the train set and the (k+1)-th fold as the test set. Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them. Also, it adds all surplus data to the first training partition, which is always used to train the model.
Key features:
- Maintains temporal dependence
- Expanding window approach
- Forward-chaining splits
- No future data leakage
Use cases:
- Sequential data
- Financial forecasting
- Temporal predictions
- Time-dependent patterns
Note: Training sets are supersets of previous iterations.
NSplits
u32
Number of temporal splits. Typical values:
- 5: Standard forward chaining
- 3: Limited historical data
- 10: Long time series
Impact:
- Affects training window growth
- Determines validation points
- Influences computational load
MaxTrainSize
u64
Maximum size of training set. Should be strictly less than the number of samples. Applications:
- 0: Use all available past data
- >0: Rolling window of fixed size
Use cases:
- Limit historical relevance
- Control computational cost
- Handle concept drift
- Memory constraints
TestSize
u64
Number of samples in each test set. When 0:
- Auto-calculated as n_samples/(n_splits+1)
- Ensures equal-sized test sets
Considerations:
- Forecast horizon
- Validation requirements
- Available future data
Gap
u64
Number of samples to exclude from the end of each train set before the test set, forming a gap between train and test sets. Uses:
- Avoid data leakage
- Model forecast lag
- Buffer periods
Common scenarios:
- 0: Continuous prediction
- >0: Forward gap for realistic evaluation
- Match business forecasting needs
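A sketch of the expanding-window behavior with a fixed test size and a one-sample gap, assuming scikit-learn's TimeSeriesSplit as the backend:

```python
# Forward-chaining splits: training windows grow, tests stay in the future,
# and the gap keeps a buffer between train and test to avoid leakage.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(12, 1)
tscv = TimeSeriesSplit(n_splits=3, test_size=2, gap=1)
for train_idx, test_idx in tscv.split(X):
    print(f"train={train_idx}  test={test_idx}")
```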