DIMENSIONALITY MEETS ADAPTABILITY: PORTFOLIO OPTIMIZATION WITH EIGEN PORTFOLIO MANAGEMENT AND REINFORCEMENT LEARNING
A. F. Galindo-Manrique. EGADE Business School, Tecnológico de Monterrey, México. E-mail: alicia.galindo@tec.mx
N. P. Rojas. Instituto Tecnológico y de Estudios Superiores de Monterrey, Escuela de Negocios, Nuevo León, México. E-mail: nuriarojas@tec.mx
R. Caballero-Fernández. Instituto Tecnológico y de Estudios Superiores de Monterrey, Escuela de Negocios, Nuevo León, México. E-mail: rodrigocaballero@tec.mx
- Fuzzy Economic Review: Volume 30, Number 1, 2025
- DOI: 10.25102/fer.2025.01.01
Abstract
This study compares the performance of Eigen-Portfolio Management (EPM) and Reinforcement Learning (RL) in portfolio optimization across Large-Cap, Mid-Cap, and Small-Cap equities. EPM uses the eigenvectors and eigenvalues of the covariance matrix of asset returns to construct orthogonal portfolios, whereas RL applies adaptive learning through dynamic rebalancing guided by a Sharpe ratio–based reward function. The objective of this research was to evaluate both approaches against equal-weighted and market-cap benchmarks, using daily data from the iShares Core S&P 500 ETF (IVV) and its mid- and small-cap counterparts (2015–2025). The results show that EPM outperforms the equal-weighted benchmark in Large-Cap and Mid-Cap portfolios, delivering cumulative returns of 122.92% and 158.07% with Sharpe ratios of 2.18 and 1.73, respectively. Its performance weakens in Small-Cap portfolios, however, where higher idiosyncratic noise yields only a 3.61% return with a Sharpe ratio of 0.06. By contrast, RL underperforms the benchmark across all capitalization levels, achieving lower returns while mitigating volatility, which suggests a more risk-averse allocation pattern. The findings suggest that EPM’s effectiveness stems from its exploitation of latent covariance structures, whereas RL, though powerful in theory, still faces limitations when applied to real financial market data. Overall, the study shows that EPM outperforms RL techniques, specifically Deep Q-learning, in this setting.
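The EPM step described above — extracting eigenvectors of the return covariance matrix and using the leading one as portfolio weights — can be illustrated with a minimal sketch. The synthetic factor-model returns, the choice of a single leading eigen-portfolio, and the sum-to-one normalization are illustrative assumptions; the paper's actual data (IVV-family ETFs) and weighting conventions may differ.

```python
import numpy as np

# Synthetic daily returns with a common market factor (illustrative assumption:
# the study uses real ETF data, not simulated returns).
rng = np.random.default_rng(42)
n_days, n_assets = 500, 5
market = rng.normal(0.0005, 0.01, size=(n_days, 1))      # shared market factor
idio = rng.normal(0.0, 0.005, size=(n_days, n_assets))   # idiosyncratic noise
returns = market + idio

# Eigendecomposition of the covariance matrix of asset returns.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # re-sort by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# First eigen-portfolio: weights proportional to the leading eigenvector.
w = eigvecs[:, 0]
if w.sum() < 0:                               # resolve eigenvector sign ambiguity
    w = -w
w = w / w.sum()                               # normalize weights to sum to 1

print(np.round(w, 3))
```

Because the eigenvectors of a symmetric covariance matrix are mutually orthogonal, the second, third, etc. columns of `eigvecs` yield the remaining orthogonal eigen-portfolios, each capturing a successively smaller share of total return variance.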