Executive Summary

Key Metrics

R-squared: 0.8339
RMSE: 5.0352
Variables selected: 4
Total predictors: 6

Key Findings

Model Quality: Good fit (R² > 0.7)
Alpha (Mixing): 0.5 (Elastic Net)
Variables Selected: 4 of 6 predictors
Variables Excluded: 2 predictors set to 0
R-Squared: 83.4%
RMSE: 5.035
Optimal Lambda: 1.0006

Summary

Bottom Line: Elastic Net regression (alpha=0.5) identified 4 of 6 predictor variables as relevant, achieving R-squared = 83.4% and RMSE = 5.035.

Variable Selection:
• 4 predictors have non-zero coefficients — these are the important predictors
• 2 predictors were shrunk to zero — excluded from the model
• Alpha=0.5 blends L1 (LASSO) + L2 (Ridge) — handles correlated predictors better than pure LASSO
• Lambda selection method: lambda.1se (= 1.0006)

Model Performance:
• R-squared: 83.4% of variance in the outcome explained
• RMSE: 5.035 average prediction error
• Deviance explained: 83.4%

Recommendation: Focus resources on the 4 selected predictors. Review the Alpha Sensitivity chart to confirm alpha=0.5 is appropriate for your data. If predictors are highly correlated, lower alpha (closer to 0) may improve stability.
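A minimal sketch of the fitting procedure described above, written in Python with scikit-learn rather than the original R/glmnet pipeline; the data are synthetic, shaped like the report's 300 x 6 design, and the true coefficients are hypothetical:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
# Hypothetical ground truth: two of six predictors contribute nothing.
beta_true = np.array([2.85, 1.36, -1.6, 0.0, 0.0, 0.26])
y = X @ beta_true + rng.normal(scale=2.0, size=300)

# cv=10 mirrors the report's 10-fold cross-validation; scikit-learn's
# l1_ratio is glmnet's alpha (the L1/L2 mixing weight).
model = ElasticNetCV(l1_ratio=0.5, cv=10, random_state=0).fit(X, y)
r2 = model.score(X, y)
n_selected = int(np.sum(model.coef_ != 0))
print(f"R^2 = {r2:.3f}; {n_selected} of 6 predictors have non-zero coefficients")
```

Note that naming differs between libraries: glmnet's lambda is scikit-learn's alpha, and glmnet's alpha is scikit-learn's l1_ratio.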

Interpretation

Purpose

This executive summary evaluates the Elastic Net regression model's ability to identify predictive variables and forecast outcomes. The analysis demonstrates how regularization techniques balance model complexity with predictive accuracy—critical for understanding whether the selected variables justify business investment and whether the model is ready for deployment.

Key Findings

  • R-Squared: 0.834 — The model explains 83.4% of outcome variance, indicating strong predictive power across the 300 observations with no data loss
  • Variable Reduction: 4 of 6 predictors selected — Elastic Net eliminated 2 predictors entirely, reducing model complexity while maintaining performance
  • RMSE: 5.035 — Average prediction error is modest relative to the outcome range (5.9–72.52), suggesting reliable point estimates
  • Alpha Stability: 0.5 (Elastic Net) — Performance remains consistent across alpha values (0.0–1.0), with R² hovering at 0.83 and variable selection stable at 4 predictors
  • Lambda Selection: 1.001 (1se method) — Conservative regularization choice prioritizes stability over minimal error, reducing overfitting risk
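The lambda.1se rule cited above can be sketched directly (an illustration of the standard rule with made-up CV numbers, not the report's actual cross-validation output): among all lambdas whose mean CV error lies within one standard error of the minimum, pick the largest, trading a sliver of fit for a simpler model.

```python
import numpy as np

# Hypothetical CV results: lambdas in decreasing order, with the mean
# and standard error of the CV error at each lambda.
lambdas    = np.array([2.0, 1.5, 1.0, 0.5, 0.1])
mean_error = np.array([30.0, 27.0, 26.0, 25.4, 25.3])
std_error  = np.array([1.2, 1.1, 1.0, 0.9, 0.9])

i_min = int(np.argmin(mean_error))               # lambda.min index
threshold = mean_error[i_min] + std_error[i_min] # min error + 1 SE
# Largest lambda (strongest regularization) still under the threshold.
ok = np.where(mean_error <= threshold)[0]
lambda_1se = float(lambdas[ok].max())
print(lambda_1se)
```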

Interpretation

The model achieves strong explanatory power while successfully performing automatic feature selection. The four retained predictors, led by predictor_1 (coefficient 2.85), capture the bulk of the outcome variance; the two excluded predictors contribute no additional predictive value at the chosen regularization strength.

Overview

Analysis Overview

Analysis overview and configuration

Configuration

Analysis Type: Elastic Net
Company: Test Company
Objective: Identify which advertising channels drive sales using Elastic Net regression
Analysis Date: 2026-03-15
Processing ID: elastic_net_test_20260315_145209
Total Observations: 300

Module Parameters

alpha: 0.5
n_folds: 10
lambda_choice: lambda.1se
standardize: TRUE
Elastic Net analysis for Test Company

Interpretation

Purpose

This Elastic Net regression analysis identifies which of six advertising channels drive sales for Test Company. The model uses regularization (alpha=0.5) to balance feature selection with prediction accuracy, selecting the optimal complexity via 10-fold cross-validation. Understanding the model setup and performance validates whether the analysis reliably answers the business question.
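As context for the alpha and lambda parameters referenced throughout, the penalized objective (stated here from the standard glmnet formulation) is:

```latex
\min_{\beta_0,\,\beta}\;
\frac{1}{2n}\sum_{i=1}^{n}\left(y_i - \beta_0 - x_i^{\top}\beta\right)^2
+ \lambda\left[\alpha\,\lVert\beta\rVert_1
+ \frac{1-\alpha}{2}\,\lVert\beta\rVert_2^2\right]
```

with alpha = 0.5 weighting the L1 (sparsity-inducing) and L2 (shrinkage) penalties equally, and lambda = 1.0006 setting the overall penalty strength.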

Key Findings

  • R-Squared: 0.834 - The model explains 83.4% of sales variance, indicating strong predictive power for identifying channel drivers
  • Variables Selected: 4 of 6 - Elastic Net eliminated 2 channels (predictor_4, predictor_5) as non-drivers, reducing model complexity
  • RMSE: 5.04 / MAE: 4.11 - Average prediction error of ~4-5 units; residuals are normally distributed with mean zero, confirming unbiased predictions
  • Lambda Selection: 1.001 (lambda.1se) - Conservative regularization chosen to prevent overfitting while maintaining interpretability
  • Alpha Stability: 0.83 R² across all alpha values - Performance remains consistent whether using Ridge (α=0), Elastic Net (α=0.5), or LASSO (α=1), validating robustness
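The alpha-stability claim can be checked with a sweep like the following sketch, again in Python/scikit-learn rather than the original glmnet, with synthetic data standing in for Test Company's:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
beta_true = np.array([2.85, 1.36, -1.6, 0.0, 0.0, 0.26])  # coefficients as reported
y = X @ beta_true + rng.normal(scale=2.0, size=300)

results = []
for l1_ratio in (0.1, 0.5, 0.9):   # glmnet alpha = scikit-learn l1_ratio
    m = ElasticNetCV(l1_ratio=l1_ratio, cv=10, random_state=1).fit(X, y)
    results.append((l1_ratio, m.score(X, y), int(np.sum(m.coef_ != 0))))

for a, r2, k in results:
    print(f"alpha={a}: R^2={r2:.3f}, {k} of 6 predictors non-zero")
```

Stable R² and a stable non-zero count across the sweep support the robustness claim; large swings would argue for revisiting the mixing parameter.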

Interpretation

The model demonstrates reliable identification of advertising channel drivers: fit is strong, performance is stable across alpha values, and the split between the four retained channels and the two eliminated ones (predictor_4, predictor_5) is unambiguous.

Data Preparation

Data Quality

Data preprocessing and column mapping

Initial Rows: 300
Final Rows: 300
Rows Removed: 0
Retention Rate: 100%

Processed 300 observations, retained 300 (100.0%) after cleaning

Interpretation

Purpose

This section documents the data preprocessing pipeline for the Elastic Net regression analysis. It shows that all 300 observations were retained without any rows removed during cleaning, indicating either pristine input data or minimal preprocessing requirements. Understanding data retention is critical for assessing whether the final model is trained on a representative sample and whether any systematic data loss could bias the regression results.

Key Findings

  • Initial Rows: 300 observations with no exclusions during preprocessing
  • Retention Rate: 100% — all records passed quality checks and were included in model training
  • Rows Removed: 0 — no filtering, outlier removal, or missing value imputation was necessary
  • Train/Test Split: Not explicitly documented, though cross-validation (10-fold CV) was applied during model fitting
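A retention check of this kind reduces to a few lines. The sketch below uses pandas with hypothetical column names, since the report does not document the preprocessing code itself:

```python
import numpy as np
import pandas as pd

# Hypothetical clean input resembling the report's 300-row dataset.
df = pd.DataFrame({
    "sales": np.arange(300, dtype=float),
    "predictor_1": np.arange(300, dtype=float),
})

initial_rows = len(df)
clean = df.dropna()          # no missing values here, so nothing is dropped
final_rows = len(clean)
retention = 100.0 * final_rows / initial_rows
print(f"Processed {initial_rows} observations, "
      f"retained {final_rows} ({retention:.1f}%)")
```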

Interpretation

The perfect retention rate suggests the input dataset was clean and complete, with no missing values, duplicates, or quality issues requiring remediation. This is favorable for model reliability, as the full sample size of 300 observations supports the Elastic Net's ability to estimate 6 predictors and achieve an R² of 0.834. However, the absence of documented train/test splits means model performance metrics (RMSE=5.035, MAE=4.106) reflect cross-validated estimates rather than holdout validation.

Context

The 10-fold cross-validation used to select lambda provides out-of-sample error estimates, so the reported performance metrics are less vulnerable to overfitting than in-sample figures would be; a dedicated holdout set would nonetheless strengthen the validation story.

Figure 4

Non-Zero Coefficients

Features selected by elastic net

Non-zero coefficients at the selected lambda — the variables chosen by Elastic Net

Interpretation

Purpose

This section identifies which variables drive the outcome and quantifies their individual impact. By showing non-zero coefficients at the optimal regularization level (lambda=1.001), it reveals the final set of predictors the model retained after balancing fit quality with simplicity. Understanding variable importance is essential for interpreting model behavior and identifying the key drivers of predictions.

Key Findings

  • Variables Selected: 4 of 6 predictors retained—predictor_1 (2.85), predictor_2 (1.36), predictor_3 (-1.6), and predictor_6 (0.26)
  • Strongest Predictor: predictor_1 with coefficient 2.85, indicating the largest positive effect on the outcome per unit increase
  • Negative Effect: predictor_3 (coefficient -1.6) is the only inverse relationship, decreasing the outcome
  • Excluded Variables: predictor_4 and predictor_5 were shrunk to zero by regularization, indicating they add no predictive value given the selected features
  • Regularization Balance: Alpha=0.5 (Elastic Net) retained correlated predictors that pure LASSO would eliminate, improving stability
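Recovering the selected/excluded split from a fitted coefficient vector is straightforward. This sketch reuses the coefficients reported above, with the predictor ordering assumed:

```python
import numpy as np

names = [f"predictor_{i}" for i in range(1, 7)]
# Coefficients at the selected lambda, as reported in this section.
coefs = np.array([2.85, 1.36, -1.6, 0.0, 0.0, 0.26])

selected = {n: float(c) for n, c in zip(names, coefs) if c != 0.0}
excluded = [n for n, c in zip(names, coefs) if c == 0.0]
print("selected:", selected)
print("excluded:", excluded)
```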

Interpretation

The model achieves R²=0.834 using only 4 of the 6 available predictors, confirming that predictor_4 and predictor_5 can be dropped without sacrificing explanatory power.

Figure 5

Actual vs Predicted

Actual vs predicted scatter plot showing model fit quality

Interpretation

Purpose

This section evaluates how accurately the Elastic Net model captures the relationship between predictors and the outcome variable. The metrics quantify prediction accuracy and explain how much variance in the target is accounted for by the selected features, which is essential for assessing whether the model is suitable for practical application.

Key Findings

  • R-Squared (0.834): The model explains 83.4% of variance in the outcome, indicating strong explanatory power with the 4 selected predictors capturing the majority of meaningful variation.
  • RMSE (5.035): Average prediction error is approximately 5 units on the outcome scale (mean=43.01), representing roughly 11.7% relative error.
  • MAE (4.106): Mean absolute error of 4.1 units; slightly lower than RMSE, indicating typical deviations of about 4 units with few extreme outliers.
  • Residual Distribution: Mean residual near zero (0.09) with symmetric distribution (skew=-0.06) shows no systematic bias in predictions across the outcome range.
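The error metrics quoted above reduce to simple formulas over the residuals. The sketch below computes them on synthetic actual/predicted arrays drawn over the report's outcome range (5.9 to 72.52):

```python
import numpy as np

rng = np.random.default_rng(2)
actual = rng.uniform(5.9, 72.52, size=300)          # outcome range from the report
predicted = actual + rng.normal(scale=5.0, size=300)  # hypothetical model output

resid = actual - predicted
rmse = float(np.sqrt(np.mean(resid ** 2)))           # penalizes large errors more
mae = float(np.mean(np.abs(resid)))                  # typical error magnitude
skew = float(np.mean(((resid - resid.mean()) / resid.std()) ** 3))
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  skew={skew:.3f}")
```

MAE is always at most RMSE; a gap between them of the size seen in the report (4.1 vs 5.0) is consistent with roughly normal residuals and few outliers.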

Interpretation

The model demonstrates solid predictive performance with predictions clustering near the 45-degree diagonal. The high R² combined with moderate error metrics suggests the Elastic Net successfully identified the 4 most influential predictors while regularization prevented overfitting. The balanced residual distribution confirms the errors are random rather than systematic, supporting use of the model for point prediction.
