The Microstructure, Strategic Evolution, and Macro-Paradigm Shift of Systematic Investing
- lx2158
- Feb 15
Updated: Aug 19
Abstract
This paper aims to deeply analyze multiple core aspects of the systematic investing landscape, from anomalies in market microstructure and the theoretical underpinnings and evolution of investment strategies, to the profound impact of macro-economic paradigm shifts on asset pricing. We begin by deconstructing the Cost of Carry Model in futures pricing theory, using the recent technical imbalance in the gold market as a case study to explore market anomalies driven by inventory, logistics, and geopolitical factors. Subsequently, we shift our focus to the alternative investment space, where the evolution of hedge fund fee structures becomes a focal point of our analysis. By contrasting the classic "2/20" model with the emerging "Multi-Strat Platform" and "Pass-Through" fee models, this paper reveals the complex interplay of industry transparency and interest alignment mechanisms amidst the trend of institutionalization.
The core of the paper is dedicated to the construction and evaluation of systematic strategies. We elaborate on the critical decision in the portfolio expansion path—the trade-off between increasing market breadth and deepening model diversity—and introduce correlation analysis as the theoretical basis for this decision. Building on this, the paper mathematically deconstructs the risk characteristics of leveraged ETFs, particularly the "Volatility Decay" phenomenon, and discusses the determination of optimal leverage using the Kelly Criterion. Addressing the validity of strategy backtesting, we compare the merits of ETF-based versus futures contract-based approaches, emphasizing the fundamental principle of "Test what you trade, and trade what you test," and quantify the composite impact of roll costs, liquidity, management fees, and the risk-free rate on total returns.
At the level of signal generation, this paper explores how to integrate traditional econometric models (such as linear regression) into modern forecasting frameworks, clarifying their intrinsic similarities with non-parametric scaling techniques like Z-Score normalization. We further analyze how, in an era of "factor saturation," the research frontier for large quantitative institutions has shifted from traditional signal mining to alternative data, market microstructure, and high-efficiency execution algorithms. The paper also delves into the non-normality of asset return distributions, particularly the value of Skewness in investment decision-making. Through simulation analysis, we challenge the conventional wisdom that "positive skew commands a high premium," arguing that in the pursuit of maximizing long-term Compound Annual Growth Rate (CAGR), the Sharpe Ratio remains the more critical optimization metric.
Finally, the paper broadens its scope to market dynamics across time scales. By interpreting the latest academic research, we paint a panoramic picture of market behavior from the minute to the decadal level, revealing the shifting dominance of Momentum and Mean Reversion at different frequencies. In the concluding section, we construct a macroeconomic analysis framework to prospectively assess the potential economic consequences of major political events, including tariffs, labor market shocks, supply chain restructuring, and regulatory uncertainty. We explore how these macro variables can reshape the pricing logic of global asset classes through inflation expectations and risk premium channels. The ultimate purpose of this paper is to provide systematic investors with a comprehensive analytical framework that is both theoretically deep and practically instructive in an increasingly complex and uncertain global financial environment.
Chapter I: Market Microstructure Anomalies and the Real-World Challenges to Futures Pricing Theory—A Case Study of the Gold Market
Theoretical models in financial markets are often built on assumptions of efficiency and the absence of friction. However, the complexity of the real world frequently challenges the boundaries of these theories. The recent abnormal fluctuations in the gold market provide an excellent case for an in-depth exploration of the micro-foundations of futures pricing and its performance when faced with physical, institutional, and geopolitical constraints.
1.1 The Cost of Carry Model in Futures Pricing
Theoretically, in a market with no arbitrage opportunities, a deterministic mathematical relationship exists between the price of a futures contract (F_{t,T}) and the spot price of its underlying asset (S_t). The core of this relationship is the Cost of Carry Model. For commodity assets that do not generate income (like gold), the cost of carry is primarily composed of two parts: financing costs and storage costs.
Its basic formula can be expressed as:
F_{t,T} = S_t · e^{(r + s − c)(T − t)}
Where:
F_{t,T} is the futures price observed at time t for a contract maturing at time T.
S_t is the spot price at time t.
r is the risk-free interest rate, representing the cost of financing the holding of the physical asset.
s is the per-unit storage cost rate (Storage Cost).
c is the per-unit convenience yield (Convenience Yield), typically considered close to zero for gold, which is primarily a financial asset rather than a consumable industrial commodity.
(T − t) is the time to maturity of the futures contract.
Within this framework, a situation where the futures price is higher than the spot price is known as "Contango." This implies that an arbitrageur holding the physical asset and selling a futures contract can lock in a return exceeding the risk-free rate, with the excess return compensating for storage costs. Theoretically, the annualized rate of this spread, (r+s−c), should fluctuate stably around the risk-free rate plus a small and relatively constant storage cost.
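This no-arbitrage relation can be inverted: given observed futures and spot prices, the implied annualized carry rate (r + s − c) falls out directly. A minimal sketch, with illustrative prices (the function name and the numbers are assumptions for demonstration, not market quotes):

```python
import math

def implied_carry_rate(futures_price, spot_price, years_to_maturity):
    """Solve F = S * exp((r + s - c) * (T - t)) for the annualized
    carry rate (r + s - c) implied by observed prices."""
    return math.log(futures_price / spot_price) / years_to_maturity

# Illustrative prices: spot gold at $2,900, a six-month future at $2,975.
rate = implied_carry_rate(2975.0, 2900.0, 0.5)   # roughly 5.1% annualized
```

When this implied rate detaches from the funds rate plus a small storage cost, as described in the next section, the excess is the anomaly to be explained.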
1.2 The Anomaly in the Gold Market: An Explosive Rise in Borrowing Costs
The peculiarity of the recent gold market lies in the fact that its contango has far exceeded theoretical expectations. Data indicates that the annualized borrowing cost (implied in the futures-spot spread) has surged from levels typically highly correlated with the federal funds rate (around 4.5%-5%) to astonishing heights of 10%, 11%, or even 12%. This phenomenon cannot be explained solely by the risk-free rate or conventional storage costs.
While storage cost is part of the cost of carry, it is inherently subject to economies of scale. For large institutions or central banks, the marginal cost of storing vast quantities of gold is minimal. The operating costs of a fixed, high-security vault (security personnel, insurance, maintenance, etc.), when amortized over tons of gold, result in a negligible unit cost rate. Therefore, the core factor driving the abnormal widening of the spread must be a structural change in another variable in the model—the financing cost r—or, a "friction cost" not fully captured by the standard model has risen sharply.
Here, "borrowing cost" or "financing cost" does not merely refer to an abstract risk-free rate; it more concretely reflects the actual cost of borrowing physical gold (for delivery or to meet short-term demand) in the physical market. When this cost skyrockets, it signifies a severe physical shortage or a "squeeze" in the market. The driving force behind this is the observed physical or legal transfer of ownership of "gold from London to the US."
This large-scale inventory transfer disrupts the balance of stock between warehouses in different geographical locations (such as LBMA-accredited vaults in London and COMEX-approved depositories in New York). If market participants anticipate that future demand for delivery in New York will far exceed the locally available inventory, they will be willing to pay a higher price to lock in forward delivery of gold, thereby driving up futures prices. This panic over future physical availability translates directly into a massive risk premium, which is incorporated into the cost of carry and manifests as a surge in borrowing costs.
1.3 Implications for Systematic Investors: The Distortion of the Carry Signal
For systematic investors, especially those whose models incorporate a "Carry" factor, this market anomaly presents a complex signal interpretation challenge.
The core idea of a Carry strategy is to be long assets with high carry (i.e., futures price is well below the expected future spot price, typically exhibiting Backwardation) and short assets with low carry (Contango). The logic is that holders of high-carry assets can earn a positive "Roll Yield." However, in the current extreme contango situation in the gold market, a traditional carry signal would unequivocally point to "shorting gold."
Yet, at the same time, the root causes driving this extreme contango—political uncertainty, panic demand for future physical gold—are themselves strong bullish signals. The upward momentum of the gold price is clearly positive, reflecting the market's flight to safe-haven assets.
Thus, systematic investors face a dilemma:
Price Momentum Signal: Bullish.
Carry Signal: Bearish.
This signal conflict highlights the limitations of relying solely on historical data and standardized factor definitions. A sophisticated systematic model needs to be able to identify and respond to such "model failure" risks caused by abrupt changes in market microstructure. This may require the introduction of more complex mechanisms, such as:
State-Space Models: To identify whether the market is in a "normal state" or a "squeeze state," and to assign different weights to factor signals under different states.
Alternative Data Integration: Incorporating non-price data such as warehouse inventory levels, cross-regional transport costs, and geopolitical risk indices into the model to more directly capture the root causes of the abnormal carry signal.
Signal Interaction Analysis: Systematically reducing reliance on one or both signals when momentum and carry exhibit extreme divergence, and shifting into a more cautious risk management mode.
In summary, the case of the gold market profoundly reveals that even classic, seemingly robust financial theories like the Cost of Carry Model can become fragile in the face of real-world frictions and irrational expectations. For systematic investors, understanding and modeling these "extra-theoretical" microstructural dynamics is key to whether their strategies can remain robust in complex market environments.
Chapter II: The Evolution of Fee Structures in Alternative Investments and the Transparency Dilemma
Since its inception, the fee structure of the hedge fund industry has been a focal point of investor attention and controversy. From the classic "Two-and-Twenty" model to today's increasingly complex arrangements, this evolutionary process not only reflects changes in the competitive landscape but also profoundly reveals the negotiation of interest alignment between investors and managers, as well as the growing challenge of transparency in the wave of institutionalization.
2.1 The Economic Logic and Decline of the Classic "Two-and-Twenty" Model
The traditional "Two-and-Twenty" model refers to a fee structure where the fund manager charges a 2% annual management fee on assets under management and a 20% performance fee (or incentive fee) on profits. This structure has its theoretical rationale:
Management Fee (2%): This covers the fund's fixed operating costs (research staff, technology, compliance, and administration), providing a stable revenue base independent of short-term performance.
Performance Fee (20%): This is a reward for the fund manager's ability to generate alpha. It aims to align the manager's interests with those of the investors. By sharing in the profits, managers are incentivized to pursue higher absolute returns, not just to outperform a benchmark.
However, over time, especially after the 2008 financial crisis, institutional investors (such as pension funds and sovereign wealth funds) significantly increased their allocations to alternative investments. Wielding their immense capital and sophisticated due diligence capabilities, they began to challenge the high fee structures. Criticisms primarily focused on the following points:
Beta masquerading as Alpha: Many funds charged performance fees on returns that were not derived from the manager's unique timing or selection skills (alpha), but were merely compensation for bearing market risk (beta). Investors argued that paying a 20% incentive fee for simple market exposure was unreasonable.
High-Water Mark and Asymmetry: Although performance fees typically include a high-water mark provision (meaning the fund's NAV must reach a new peak before a fee is charged), the fee structure is still inherently "asymmetric." The manager shares in the profits but does not directly bear the losses (other than the opportunity cost of future fees).
"Heads I win, tails you lose" nature of the Management Fee: Regardless of performance, the 2% management fee was seen as a substantial fixed cost, creating a significant drag on investors' total returns, especially in a low-interest-rate environment.
Under these pressures, the industry's average fee rates have steadily declined over the past decade. Many funds began offering "one-and-fifteen" (1/15) or even lower rates, or introduced more complex tiered fee structures.
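To make the mechanics above concrete, here is a small sketch of how a 2/20 structure with a high-water mark accrues fees year by year. The function and its conventions (management fee charged on post-gain NAV, high-water mark reset to the post-fee NAV) are simplifying assumptions; real fund documents vary on these details:

```python
def simulate_fees(gross_returns, mgmt_fee=0.02, perf_fee=0.20):
    """Accrue classic 2/20 fees year by year with a high-water mark:
    the incentive fee is charged only on post-management-fee NAV gains
    above the previous peak. (Conventions simplified for illustration.)"""
    nav, hwm = 1.0, 1.0
    fees = []
    for r in gross_returns:
        nav *= 1 + r
        fee = mgmt_fee * nav            # management fee accrues regardless
        nav -= fee
        if nav > hwm:                   # incentive only above the high-water mark
            incentive = perf_fee * (nav - hwm)
            fee += incentive
            nav -= incentive
            hwm = nav
        fees.append(fee)
    return nav, fees

# +10%, -5%, +8% gross: the incentive fee is paid only in year one,
# because the NAV never regains its year-one peak afterwards.
final_nav, annual_fees = simulate_fees([0.10, -0.05, 0.08])
```

Note the asymmetry the sketch exposes: the management fee is collected in the losing year as well, and the year-one incentive fee is never returned.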
2.2 The Rise of Multi-Strat Platforms and the Pass-Through Fee Model
In recent years, a new business model—the multi-strategy platform or "Pod Shop"—has rapidly emerged, bringing with it a more complex and opaque fee structure. In this model, a large platform hedge fund employs dozens or even hundreds of independent portfolio manager (PM) teams (the "pods"), each responsible for a specific strategy. The platform provides capital, technology, risk management, and operational support.
The core of its fee structure is the "Pass-Through Model." Unlike traditional fund-level fee charging, this model passes through the vast majority of the platform's operating costs directly to the investors. These costs are all-encompassing, going far beyond simple trading commissions, and may include:
PM Team Compensation and Bonuses: This is the largest cost component. Each pod team's bonus is deducted directly from the profits it generates, with the payout ratio potentially being as high as 15%-25%.
Data and Research Fees: Including Bloomberg terminals, Reuters data, alternative data procurement, etc.
Technology Infrastructure Costs: Servers, software licenses, network connectivity, etc.
Legal and Compliance Fees.
Office Space and Administrative Staff Costs.
The Platform's Financing Costs.
2.3 Mathematical Deconstruction of the Fee Structure and "Hidden" Leverage
Let's deconstruct the astonishing nature of this model with a concrete example. A certain fund achieved a gross return of 15.2% before all fees. However, after layers of pass-through costs, the net return to the investor was merely 2.8%. This implies a Total Expense Ratio of a staggering 12.4%!
If we convert this actual fee rate back to the classic "X and Y" model, i.e., "X% management fee + Y% performance fee," the situation is even more shocking. Assuming no management fee, this is equivalent to a performance fee of 81.6% (12.4 / 15.2)! Even if we assume a portion of this is a fixed "quasi-management fee," the actual performance fee percentage far exceeds the traditional 20%. Some analyses have even estimated that the equivalent fee rates for certain platforms are as high as "seven-and-twenty" (7/20) or even "fifteen-and-twenty" (15/20).
Another problem with this model stems from an inherent flaw of the "Fund of Funds" model: the lack of performance netting. In a multi-strategy platform, if Team A makes $100 million and Team B loses $50 million, the fund's overall net profit is $50 million. However, the platform still needs to pay Team A its share of the profits (e.g., $20 million), and this money is ultimately borne by the investors' total assets. This leads to a situation where, even if the fund as a whole performs modestly or even loses money, the total fees can still be very high due to the outstanding performance of some teams. This further inflates the total costs and erodes investor returns.
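The netting problem is simple arithmetic, but a short sketch makes the asymmetry explicit (the function name, the 20% payout ratio, and the dollar figures are illustrative assumptions, reusing the Team A / Team B numbers from the paragraph above):

```python
def incentive_without_netting(team_pnls, payout_ratio=0.20):
    """Pass-through incentive compensation with no netting: each
    profitable pod is paid on its own P&L; losing pods pay nothing back."""
    return sum(payout_ratio * pnl for pnl in team_pnls if pnl > 0)

# Team A makes $100m, Team B loses $50m: net profit is $50m, but
# Team A is still paid 20% of its own $100m.
fees = incentive_without_netting([100e6, -50e6])
net_to_investors = (100e6 - 50e6) - fees
effective_rate = fees / (100e6 - 50e6)   # incentive as a share of net profit
```

The nominal 20% payout becomes a 40% effective rate on the fund's net profit, which is exactly the inflation mechanism described above.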
2.4 The Retreat in Transparency and the Dilemma for Institutional Investors
This complex fee structure marks a significant step backward for the industry in terms of transparency. The traditional "two-and-twenty" model, despite its criticisms, had a clear and straightforward calculation method. Investors could explicitly know the amounts of management and performance fees. In the pass-through model, however, a vast amount of operating costs are bundled and passed on, making it very difficult for investors to accurately estimate or audit the reasonableness of each expense item.
This raises a paradox: why would large institutional investors, the main force driving down fees, accept this seemingly more "expensive" and opaque model?
Path Dependency on Net Returns: Many multi-strategy platforms have indeed delivered excellent and stable net returns over the past few years. Institutional investors may be more focused on the final, risk-adjusted net return they receive, and less sensitive to the process and cost structure of achieving it.
Scarcity of Talent and Strategies: Top investment talent and unique strategies are scarce resources. Multi-strategy platforms, with their high-powered incentive mechanisms, have successfully attracted the industry's top PM teams. To gain access to these scarce sources of alpha, institutional investors may be forced to accept more demanding fee terms.
Information Asymmetry and Complexity: The complexity of pass-through fees itself creates a barrier to understanding. During due diligence, institutional investors may find it difficult to fully penetrate and assess the reasonableness of all cost items, thus underestimating the total fee drag.
In stark contrast is the retail investment space, especially under the European UCITS (Undertakings for Collective Investment in Transferable Securities) framework, where regulations on fee disclosure and transparency are very strict. Metrics like the Total Expense Ratio (TER) or the Ongoing Charges Figure (OCF) allow retail investors to clearly compare the costs of different products. However, even this highly regulated area is beginning to see a "gray area" where "soft" costs like research expenses are excluded from official fee ratios, which is undoubtedly a warning sign.
In conclusion, the evolution of hedge fund fee structures is a complex story of interests, information, and power. On the surface, the industry seems to be responding to investor calls for lower fees. In reality, through structural innovation, a segment of top-tier managers may be capturing a larger share of the profits than ever before, in a more concealed and complex manner. This poses an unprecedented challenge to the ecosystem of the entire asset management industry, especially to the governance and oversight capabilities of institutional investors.
Chapter III: The Construction of Systematic Investment Portfolios: Breadth, Depth, and Instrument Selection
Building a robust and efficient systematic investment portfolio is a complex engineering task involving multi-dimensional trade-offs. Investors must not only allocate among different strategy types but also decide between strategic breadth (the number of markets covered) and depth (the diversity of models). Furthermore, the choice of trading instruments—be it traditional futures contracts or emerging exchange-traded funds (ETFs)—profoundly impacts the strategy's implementation costs, risk exposures, and ultimate performance. This chapter will delve into these core building blocks, providing an analytical framework that integrates both theory and practice.
3.1 The Portfolio Expansion Path: Adding Markets vs. Adding Models
When a systematic investor receives additional capital, a classic question arises: should the money be used to trade more markets, or to run more trading systems (models) on the existing markets? The answer to this question is not static; its core lies in understanding the mathematical essence of diversification—correlation.
Modern Portfolio Theory (MPT) teaches us that the risk of a portfolio (typically measured by variance or standard deviation) depends not only on the risk of individual assets but, more importantly, on the correlation between them. The total portfolio risk, σ_p², can be expressed as:
σ_p² = Σ_{i=1}^{n} w_i² σ_i² + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} w_i w_j σ_i σ_j ρ_{ij}
where w_i is the weight of asset i, σ_i is its standard deviation, and ρ_{ij} is the correlation coefficient between assets i and j. The benefit of diversification comes from the second part of the formula: when the correlation ρ_{ij} between assets is low (or even negative), the total portfolio risk will be significantly lower than the weighted average of the individual asset risks.
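The formula can be evaluated numerically to show how strongly the correlation term drives the result. A minimal helper (the function name is ours), comparing two equal-weight, equal-volatility portfolios that differ only in correlation:

```python
import numpy as np

def portfolio_vol(weights, vols, corr):
    """Portfolio standard deviation sqrt(w' C w), where the covariance
    matrix is built as C[i, j] = vols[i] * vols[j] * corr[i, j]."""
    w = np.asarray(weights, dtype=float)
    cov = np.outer(vols, vols) * np.asarray(corr, dtype=float)
    return float(np.sqrt(w @ cov @ w))

# Two assets at 20% volatility each, equal weights: only the
# correlation term separates the two portfolios.
vol_high_corr = portfolio_vol([0.5, 0.5], [0.2, 0.2], [[1.0, 0.9], [0.9, 1.0]])
vol_low_corr = portfolio_vol([0.5, 0.5], [0.2, 0.2], [[1.0, 0.2], [0.2, 1.0]])
```

Dropping the pairwise correlation from 0.9 to 0.2 cuts portfolio volatility from about 19.5% to about 15.5%, which is the diversification benefit the discussion below trades off against.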
Based on this framework, we can analyze the pros and cons of "adding markets" versus "adding models":
Expanding Market Universe: When the initial portfolio has a narrow market coverage (e.g., only 10 markets), adding new, uncorrelated markets is usually the most direct and effective way to enhance diversification benefits. The correlation between different asset classes (e.g., equities, bonds, commodities, currencies), and even between different instruments within the same asset class, is often low. For instance, the correlation of a newly added agricultural futures contract with an existing basket of stock index futures might be only 0.2-0.4. According to the MPT formula, introducing such a low-correlation asset can significantly reduce the overall portfolio volatility, thereby increasing the expected return for a given level of risk, i.e., improving the Sharpe ratio.
Expanding Model Diversity: Once the portfolio already covers a sufficiently broad market universe (e.g., over 100 markets), the marginal diversification benefit of adding new markets diminishes. At this point, adding different types of trading models may become the better option. The key here is that the new models must have a low correlation with the existing ones.
Low-Correlation Models: If the existing model is a medium- to long-term trend-following system, adding a short-term mean-reversion system or a strategy based on fundamental value (like a Carry strategy) could result in a strategy return series with a relatively low correlation to trend-following (e.g., 0.5-0.7). This "strategy-level" diversification can capture profit sources from different market environments, thus smoothing the overall portfolio return curve.
High-Correlation Models: Conversely, if one simply adds another trend-following model with slightly different parameters under the existing framework (e.g., adding a 60/120-day moving average crossover system to a 50/100-day system), the return correlation between these two models could be very high (e.g., 0.8-0.9). In this case, the diversification benefit from adding the model is very limited.
Additionally, the two approaches differ in capital efficiency. Adding new markets typically requires additional margin and capital allocation. On the other hand, if new models are implemented within the existing market and capital framework through signal combination and risk allocation, their marginal capital usage could be very low, or even zero. This gives model addition an edge in terms of capital efficiency.
Conclusion: The portfolio expansion path should follow a dynamic, marginal-benefit-driven principle. Initially, priority should be given to expanding the breadth of the market universe to quickly capture the diversification benefits across asset classes. Once market coverage reaches a certain level, the focus should shift to increasing the depth and diversity of strategies, introducing alternative risk premia (like Carry, Value, etc.) that are lowly correlated with the core strategy.
3.2 Choice of Trading Instruments: ETF vs. Futures
When executing systematic strategies, the choice of instrument is crucial. Traditional Managed Futures strategies primarily use futures contracts, but with the development of financial markets, ETF products tracking various assets have become increasingly abundant, offering an alternative for systematic traders.
Principle: "Test what you trade, and trade what you test." This means your backtesting system should simulate, as accurately as possible, the real trading instruments you plan to use, along with all their associated costs and frictions.
3.2.1 Comparison of Cost and Fee Structures
Futures: The explicit costs for a trader are mainly trading commissions and rolling costs. Rolling cost refers to the potential price difference loss when closing a position in a near-month contract and simultaneously opening a position in a far-month contract before expiration. Implicit costs include the bid-ask spread and slippage.
ETFs: Besides the bid-ask spread and commissions, traders must also pay the ETF's management fee, typically expressed as the Total Annual Expense Ratio (TER). For futures-based ETFs, the rolling is done internally, and the roll cost is already reflected in the ETF's Net Asset Value (NAV). However, the manager may face higher impact costs when conducting large-scale rolls. Furthermore, as discussed in Chapter II, one must be wary of potential "hidden" pass-through costs beyond the TER.
Comparison: For investors with larger capital and mature trading systems, trading futures directly is usually more cost-effective as it avoids the layer of ETF management fees. While the economies of scale of an ETF could theoretically lower trading costs, its need for profit and operational overhead often offsets this advantage.
3.2.2 Accurate Measurement of Total Return
When conducting backtest comparisons, it is imperative to ensure a "level playing field," i.e., to compare their "True Total Return Series."
Futures Total Return: The unique feature of futures trading is its margin system. A trader only needs to deposit a small fraction of the contract's notional value as margin. The vast majority of the capital can be held in risk-free assets (like short-term Treasury bills) to earn interest. Therefore, the total return of a futures strategy should be:
R_{Futures Total} = R_{Futures PnL} + R_{Risk-Free}
where R_{Futures PnL} is the profit and loss from the futures position itself, and R_{Risk-Free} is the risk-free interest earned on the idle cash in the account. Many standard backtesting software packages or data feeds provide futures price series that do not include this interest income, which must be added manually for a fair comparison.
ETF Total Return: For an ETF, its price usually already incorporates the interest earned on its cash holdings. If the ETF pays dividends, its total return should be the price appreciation plus the return from reinvested dividends. Therefore, when comparing, one must either add the risk-free rate to the futures return or subtract it from the ETF return to ensure comparability.
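A minimal sketch of the adjustment on the futures side, assuming the risk-free interest is simply accrued pro rata per trading day (a simplification; actual cash management and margin interest conventions vary):

```python
import numpy as np

def futures_total_return(pnl_returns, risk_free_rate, periods_per_year=252):
    """Add pro-rata risk-free interest on the unencumbered cash to a
    futures P&L return series, making it comparable with an ETF
    total-return series."""
    return np.asarray(pnl_returns, dtype=float) + risk_free_rate / periods_per_year

daily_pnl = [0.001, -0.002, 0.0015]
total = futures_total_return(daily_pnl, risk_free_rate=0.05)
```

The symmetric alternative, subtracting the same accrual from the ETF series, yields the identical comparison.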
3.2.3 Liquidity, Market Access, and Contract Size
Liquidity: Mainstream futures contracts (like S&P 500 index futures, US Treasury futures) typically have extremely high liquidity and very tight bid-ask spreads. In contrast, many ETFs, especially those tracking niche markets or complex strategies, may suffer from insufficient liquidity and wider bid-ask spreads.
Market Access and Contract Size: ETFs have advantages in certain aspects. They provide a convenient channel for accessing some overseas markets or specific commodity markets that are difficult to enter directly. Moreover, futures contracts are often large in size (e.g., one crude oil futures contract represents 1,000 barrels of oil), making fine-tuned position management and risk diversification difficult for retail investors with smaller capital. The trading unit of an ETF (one share) is much smaller in value, and it is even possible to trade fractional shares. This significantly lowers the entry barrier and facilitates more granular asset allocation.
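The granularity difference can be quantified as a rounding error against a target notional. A sketch with illustrative prices (a roughly $70k crude contract at an assumed $70/bbl, and an assumed $75 ETF share; both numbers are for demonstration only):

```python
def position_granularity(target_notional, unit_notional):
    """Size a position in whole units and report the relative error
    versus the target notional."""
    units = round(target_notional / unit_notional)
    achieved = units * unit_notional
    return units, abs(achieved - target_notional) / target_notional

# Target $100k of oil exposure: one 1,000-barrel futures contract at an
# assumed $70/bbl is ~$70k of notional; an assumed $75 ETF share is far finer.
fut_units, fut_err = position_granularity(100_000, 70_000)
etf_units, etf_err = position_granularity(100_000, 75)
```

Under these assumed prices the futures position misses the target by 30%, while the ETF position misses it by a few basis points, which is the granularity advantage described above.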
Conclusion: For well-capitalized professional investors seeking cost efficiency and high liquidity, futures contracts remain the preferred instrument for executing systematic strategies. For investors with limited capital seeking convenient market access and granular position management, ETFs offer a valuable alternative. Regardless of the instrument chosen, building a backtesting system that can accurately simulate all its cost and return components is a prerequisite for successful strategy development.
Chapter IV: Generation and Evaluation of Predictive Signals: From Econometric Models to Machine Learning
The core of systematic investing lies in transforming market information into executable trading signals. This process, the construction of a predictive model, is one of the most creative and challenging aspects of quantitative investing. This chapter will explore how to generate predictive signals from raw "features," compare traditional econometric methods (like linear regression) with simpler scaling techniques, and further discuss the profound shift in the research focus of large quantitative institutions in the current era of "factor saturation."
4.1 The Signal Generation Framework: From Feature to Forecast
No matter how complex a strategy is, its ultimate goal is to answer a simple question: for a given asset, should we be long or short, and what should the position size be? The starting point of this decision-making process is the "feature," which is any piece of information believed to have predictive power for future price movements, such as the return over the past N days, a fundamental metric, a market sentiment index, etc. The journey from a feature to a final trading decision (usually quantified as a "forecast" or "position" value) requires a process of transformation and normalization.
4.1.1 The Forecast Scaling Framework
A simple yet robust method is to scale the raw feature (which we'll call the "signal") in some way to turn it into a "forecast" with desirable statistical properties. This forecast value is typically designed to be between -1 and +1 (or some standardized range), where its sign represents direction and its absolute value represents the strength of conviction.
A common technique is to use a normalization method similar to the Z-Score. Given a raw signal S_t, we can calculate the standardized forecast F_t as follows:
F_t = (S_t − μ_S) / σ_S
where μ_S is the historical mean of the signal S_t, and σ_S is its historical standard deviation. This process transforms the raw signal into a series with a mean of 0 and a standard deviation of 1. The benefits of doing this are:
Comparability: Signals from different assets and of different types (e.g., a momentum signal in percentage terms vs. a value signal as the inverse of a P/E ratio) can be compared and combined within the same framework after normalization.
Risk Control: A standardized forecast can be more intuitively linked to a risk budget, facilitating subsequent position sizing.
To constrain the forecast value within a specific range (e.g., -1 to +1), one can further use a squashing function, such as the tanh function or simple capping.
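The full pipeline, z-scoring a signal against its own history and then squashing with tanh, can be sketched in a few lines (the function name and the use of a population standard deviation are our choices for illustration):

```python
import math

def zscore_forecast(signal, history):
    """Standardize a raw signal against its own history (population
    mean and standard deviation), then squash into (-1, 1) with tanh
    so that extreme readings saturate rather than explode."""
    n = len(history)
    mean = sum(history) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in history) / n)
    return math.tanh((signal - mean) / std)

history = [1.0, 2.0, 3.0, 4.0, 5.0]
neutral = zscore_forecast(3.0, history)   # at the historical mean: no conviction
strong = zscore_forecast(5.0, history)    # well above it: strong but bounded
```

A reading at the historical mean maps to a forecast of exactly zero, while readings far from the mean approach but never reach ±1.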
4.1.2 The Linear Regression Framework
A more formal approach is to use econometric models, the most basic of which is linear regression. In this framework, we attempt to build an explicit model to predict future returns.
Objective Function: First, we need to be clear about what we are trying to predict. An excellent systematic strategy's ultimate goal is not to predict raw price returns, but to predict Risk-Adjusted Returns. This is often defined as the return over a future period divided by the volatility over the same period. This is because our position size will ultimately be adjusted for risk, so a high-return, high-risk opportunity may be less attractive than a moderate-return, low-risk one.
Dependent Variable: Y_t = R_{t+1} / σ_t, where R_{t+1} is the return over the next period, and σ_t is the current forecast of future volatility.
Predictive Model: We can establish the following linear regression model:
$$Y_t = \alpha + \beta \cdot X_t + \epsilon_t$$
where Xt is the raw feature we constructed (the "signal" from before). By fitting this model to historical data, we can obtain the parameters α (intercept) and β (slope).
Forecast Value: Once the model is estimated, for each new feature value $X_t$ we can generate a predicted risk-adjusted return: $\hat{Y}_t = \hat{\alpha} + \hat{\beta} \cdot X_t$. This $\hat{Y}_t$ can then be used directly as our "forecast" to guide trading.
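A minimal sketch of this regression framework, using closed-form simple OLS on synthetic inputs. In practice one would fit on an expanding or rolling window to avoid look-ahead bias; the function and variable names here are illustrative.

```python
import numpy as np

def fit_signal_regression(signal, returns, vol):
    """OLS of next-period risk-adjusted return on today's signal.

    signal[t] predicts returns[t+1] / vol[t]. Minimal sketch: a full-sample
    fit for illustration; a live system would fit point-in-time.
    """
    X = np.asarray(signal[:-1], dtype=float)
    y = np.asarray(returns[1:], dtype=float) / np.asarray(vol[:-1], dtype=float)
    # simple OLS in closed form: beta = cov(X, y) / var(X)
    beta = np.cov(X, y, ddof=1)[0, 1] / np.var(X, ddof=1)
    alpha = y.mean() - beta * X.mean()
    return alpha, beta

def forecast(alpha, beta, x_t):
    """Predicted risk-adjusted return for a new feature value."""
    return alpha + beta * x_t
```

Given a history of signals, returns, and volatility forecasts, `fit_signal_regression` recovers the intercept and slope, and `forecast` maps each new signal reading to a risk-adjusted return prediction.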
4.1.3 Comparison and Connection Between the Two Frameworks
On the surface, Z-Score normalization and linear regression are different methods, but they are highly similar in essence.
Slope β and Scaling: The role of the regression coefficient β is essentially to scale the raw feature Xt. A statistically significant and large-in-magnitude β implies that the feature Xt has strong predictive power for future returns and should thus be given more weight. This is philosophically similar to scaling the signal by its standard deviation σS in the Z-Score method—both adjust the signal's strength based on its historical performance, although the calculation is different.
Intercept α and Systematic Bias: The intercept α represents the average risk-adjusted return of the asset when the feature Xt is zero. In a regression framework, the forecast Y^t automatically adjusts for this systematic bias. For example, if an asset tends to rise (α>0) even when there is no clear trend (Xt=0), the regression model will capture this. In a simple Z-Score framework, one typically subtracts the mean of the signal, which to some extent also serves to remove systematic long/short biases.
To Remove or Not to Remove Systematic Bias: An interesting question is whether we should always remove this systematic bias. The answer is no. For example, the stock market has historically had a positive risk premium; the expected return from holding stocks long-term is positive even when a momentum signal is neutral. If a strategy aims to capture both time-series momentum (timing) and the long-term market risk premium (beta), then this positive bias should not be completely removed. Linear regression offers an option to retain α, whereas a simple normalization method would require a more deliberate design to decide whether to preserve the asset's long-term average return.
Conclusion: For constructing predictive signals, whether using simple normalization scaling or formal linear regression models, the core idea is consistent: transform a feature into a standardized forecast value that can be used to guide position sizing, based on its historical predictive ability. Linear regression provides a more rigorous and interpretable framework, but it may also introduce more model risk and overfitting issues. For most systematic strategies, a well-designed, non-parametric scaling framework based on statistical properties often proves to be more robust.
4.2 The Research Frontier in an Era of "Factor Saturation"
After decades of intense research by both academia and the industry on traditional factors (such as trend, value, carry), it is a common view that finding new alpha in liquid public markets simply by applying more complex mathematical transformations to price and volume data has become exceedingly difficult. This is an era of "factor saturation" or "alpha decay." Consequently, large, mature quantitative investment firms, despite employing numerous PhDs and researchers, have strategically shifted their research focus.
Three Main Directions of the Research Frontier:
Expanding to New Markets (Alt Markets): Alpha may not have disappeared, but merely migrated to less efficient, harder-to-access markets. Therefore, a significant research direction is to apply existing systematic strategies (like trend-following) to a broader and more alternative range of markets. This includes:
Stock index and interest rate futures in emerging market countries.
Non-mainstream commodity futures, such as electricity, carbon emissions, and niche agricultural products.
Over-the-counter (OTC) derivatives markets.
The challenges in these markets are inconsistent data quality, high transaction costs, variable liquidity, and even political and regulatory risks. Overcoming these hurdles to successfully extend strategies to these "virgin territories" is in itself a powerful competitive advantage and a source of alpha.
Improving Trade Execution: For large funds managing billions or even tens of billions of dollars, transaction cost is one of the most critical factors affecting final performance. When trade sizes are massive, every order has a market impact, leading to execution prices that are worse than expected. Therefore, research into execution algorithms has become a core competency. This includes:
Optimal Order Placement Strategies: Developing algorithms (like VWAP, TWAP, Implementation Shortfall) to decide how to break a large order into many smaller ones over a period to minimize market impact and transaction costs.
Microstructure Modeling: Deeply studying the dynamics of the order book to predict short-term liquidity changes, thereby choosing the best time and venue for trading.
Liquidity Sourcing: Using complex algorithms to find hidden liquidity in multiple exchanges and dark pools.
For a large fund, saving a few dozen basis points per year through improved execution algorithms can contribute more to total returns than discovering a brand-new alpha factor.
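The order-slicing idea behind schedule-based algorithms like TWAP can be illustrated with a toy example. This is purely schematic: real execution algorithms randomize slice sizes and timing and react to real-time liquidity, none of which is shown here.

```python
def twap_slices(total_qty, n_slices):
    """Split a parent order into n roughly equal child orders (naive TWAP).

    Illustrative only: production algorithms randomize slices and adapt
    to order-book conditions to avoid being detected and gamed.
    """
    base, remainder = divmod(total_qty, n_slices)
    # spread the indivisible remainder one unit at a time over early slices
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]
```

For example, a 10-lot parent order over 3 intervals becomes child orders of 4, 3, and 3 lots, one per interval.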
Leveraging Alternative Data (Alt Data): If traditional quantitative research was about applying "new" methods to "old" data (price, volume, financial statements), a significant paradigm shift today is applying "old" methods (like trend-following, value investing) to "new" data. The sources of alternative data are extremely diverse, for example:
Satellite Imagery: Predicting retail sales by analyzing the number of cars in parking lots, or forecasting crude oil supply by tracking the routes of oil tankers.
Credit Card Transaction Data: Tracking consumer spending patterns in real-time.
Social Media Sentiment: Analyzing public opinion online to gauge market sentiment or brand reputation.
Supply Chain Data: Predicting a company's production and sales by tracking logistics information.
The challenge of alternative data is that it is often unstructured, making data cleaning and processing extremely difficult. It also requires specialized domain knowledge to be interpreted correctly. Successfully converting these unique information sources into tradable signals is one of the most exciting frontiers in quantitative research today.
Conclusion: In today's highly competitive quantitative investment landscape, simple "signal mining" is no longer the sole path to success. True, sustainable competitive advantages increasingly come from breadth of execution (the ability to enter new markets), depth of execution (efficient trading algorithms), and uniqueness of information (leveraging alternative data). Research that merely focuses on fitting more complex machine learning models (like deep neural networks) to traditional price data, while academically interesting, is likely to fall into the trap of overfitting in practice and fail to generate true, sustainable alpha.
Chapter V: Beyond the Sharpe Ratio: The Role and Trade-off of Skewness in Investment Decisions
In the evaluation of investment strategies, the Sharpe Ratio has long been the primary metric. It measures the excess return per unit of risk (as measured by volatility). However, the Sharpe Ratio is based on an implicit assumption that investors are only concerned with the mean and variance of returns and are indifferent to other features of the return distribution, such as skewness and kurtosis. In reality, this is not the case for investors. Particularly for strategies like trend-following, which exhibit significant positive skew, correctly assessing the value of skewness and trading it off against the Sharpe ratio is a critical issue.
5.1 The Definition and Financial Intuition of Skewness
Skewness is a statistical measure of the asymmetry of a probability distribution.
Positive Skew: The tail of the distribution extends to the right, implying the possibility of generating extreme positive returns ("big wins"), while the likelihood of extreme negative returns ("big losses") is relatively small. It is characterized by "a few large wins and many small losses." Trend-following strategies are classic examples of positive skew strategies; they may incur small losses most of the time due to market chop, but they can achieve huge, disproportionate gains when they catch a major trend.
Negative Skew: The tail of the distribution extends to the left, implying the possibility of "black swan" events. It is characterized by "many small wins and a few large losses." Option selling strategies are typical negative skew strategies; they generate steady, small gains most of the time by collecting option premiums, but can face catastrophic losses if the market experiences a violent move.
Investor preferences are typically asymmetric: people are averse to negative skew (fear of huge losses) and have a preference for positive skew (the lottery effect). Therefore, in theory, strategies that offer positive skew should command some sort of "premium," meaning investors are willing to accept a relatively lower Sharpe ratio for it.
5.2 The Impact of Dynamic Position Sizing on Skewness
The return distribution of a strategy is not static; it is profoundly affected by its position sizing rules. Taking trend-following as an example, we can compare three different position sizing methods:
Fixed Contracts: Holding a fixed number of contracts regardless of price movements.
Fixed Notional: Maintaining a constant notional value of the position. For example, if the target exposure is $100,000, one would need to sell some contracts as the price rises and buy some as it falls.
Volatility Targeting / Fixed Risk: Maintaining a constant risk contribution of the position (often approximated by notional value multiplied by volatility). When market volatility increases, the position size is systematically reduced; when volatility decreases, it is increased.
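The volatility-targeting rule in the third method can be sketched as a single sizing formula, using the approximation from the text that risk is proportional to notional value times volatility. All parameter names are illustrative.

```python
def position_size(capital, target_vol, price, contract_multiplier, asset_vol):
    """Number of contracts under a volatility-targeting regime.

    Sketch under the approximation risk ~ notional x volatility: size the
    position so that notional x asset_vol ~ capital x target_vol.
    """
    risk_budget = capital * target_vol                    # e.g. 10% annualized
    contract_risk = price * contract_multiplier * asset_vol
    return risk_budget / contract_risk
```

The key property follows directly from the formula: if the asset's volatility doubles while price is unchanged, the target position halves, which is exactly the systematic de-leveraging described above.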
These three methods have distinctly different impacts on the strategy's Sharpe ratio and skewness:
Sharpe Ratio: A large body of empirical research shows that the volatility targeting method typically produces the highest Sharpe ratio. This is because it effectively controls drawdowns and smooths the return curve by proactively de-leveraging when the market is most unstable and risky.
Skewness: In contrast to the Sharpe ratio ranking, the fixed contracts method typically produces the highest positive skew. The intuitive logic is this: when a trend starts and persists (e.g., a surge in cocoa prices), both price and volatility tend to rise simultaneously.
Under a fixed contracts regime, the number of contracts you hold remains constant. Therefore, your notional exposure automatically increases as the price goes up. This gives you a "positive gamma" exposure in the trend, amplifying profits and creating extreme positive returns.
Under a volatility targeting regime, when price and volatility surge together, the system forces you to sell some of your profitable position to control risk. This is equivalent to "cutting the tail" of the trend. While it protects the portfolio from sharp reversals, it also sacrifices the opportunity to capture maximum profit, thereby reducing positive skew.
The fixed notional regime lies somewhere in between.
This creates a core trade-off: Should we pursue the higher Sharpe ratio offered by volatility targeting, or the higher positive skew offered by fixed contracts?
5.3 The Ultimate Goal: Maximizing Long-Term Compound Annual Growth Rate (CAGR)
To answer this question, we must return to the ultimate purpose of investing: to maximize the final wealth at the end of the investment horizon, at an acceptable level of risk. The best metric for this goal is the Compound Annual Growth Rate (CAGR), also known as the geometric mean return.
An approximate formula connecting the Sharpe ratio, volatility, and CAGR is (this formula is a simplification for understanding, not a rigorous mathematical derivation):
$$\text{CAGR} \approx \mu - \frac{\sigma^2}{2}$$
where $\mu$ is the arithmetic mean annualized return and $\sigma$ is the annualized volatility. If we adjust the strategy's leverage to a target volatility $\sigma_T$, then $\mu = \text{Sharpe} \cdot \sigma_T$. Substituting this in, we get:
$$\text{CAGR} \approx \text{Sharpe} \cdot \sigma_T - \frac{\sigma_T^2}{2}$$
This formula reveals that, for a given target volatility, CAGR is positively correlated with the Sharpe ratio. However, this formula itself does not directly include a term for skewness. The impact of skewness is indirect; it affects the path of the return series, which in turn affects the final compounding effect.
To more accurately assess the trade-off between Sharpe ratio and skewness, we can use Monte Carlo simulations. We can generate simulated return series with different combinations of Sharpe ratio and skewness, and then calculate which combination produces the highest median final wealth or the highest CAGR over the long term.
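A minimal Monte Carlo sketch of this trade-off: we compare the median terminal wealth of a higher-Sharpe, zero-skew return stream against a lower-Sharpe, positively skewed one (built from a shifted exponential). The specific Sharpe ratios, volatility, and horizon are illustrative choices, not results from the source.

```python
import numpy as np

ANN = 252
TARGET_VOL = 0.15
VOL_D = TARGET_VOL / np.sqrt(ANN)           # daily volatility

def median_terminal_wealth(draw_returns, n_paths=2000, n_days=10 * ANN, seed=0):
    """Median compounded wealth across simulated paths (initial wealth = 1)."""
    rng = np.random.default_rng(seed)
    finals = [np.prod(1.0 + draw_returns(rng, n_days)) for _ in range(n_paths)]
    return float(np.median(finals))

def returns_high_sharpe_no_skew(rng, n):
    """Sharpe 0.7, zero skew: plain normal daily returns."""
    return rng.normal(0.7 * TARGET_VOL / ANN, VOL_D, n)

def returns_lower_sharpe_pos_skew(rng, n):
    """Sharpe 0.6, skew ~ +2: shifted exponential with the same volatility."""
    raw = rng.exponential(VOL_D, n)
    return raw - raw.mean() + 0.6 * TARGET_VOL / ANN
```

Under these assumptions the higher-Sharpe stream ends with greater median wealth despite having zero skew, consistent with the conclusion below: at equal volatility, the compounding term is governed by the Sharpe ratio, and skewness contributes only a small higher-order correction.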
Simulation Results and Conclusion
Through in-depth analysis of such simulations, we arrive at a somewhat counter-intuitive but very important conclusion: In the pursuit of maximizing long-term wealth (CAGR), the importance of the Sharpe ratio far outweighs that of skewness.
In other words, the "price" one should be willing to pay in terms of a lower Sharpe ratio to obtain higher positive skew is actually very low. A strategy with a slightly higher Sharpe ratio but lower skew will almost always outperform a strategy with a slightly lower Sharpe ratio but very high skew in terms of long-term compounding.
Why is this so?
The Curse of Compounding (Volatility Drag): High volatility is the enemy of compounding. Even if a strategy has the potential to generate huge positive returns, if its day-to-day volatility is also high, that volatility itself will severely drag down the long-term geometric mean return. The core advantage of a volatility targeting strategy is that it directly and systematically suppresses this volatility drag.
The "Cost" of Paying for Skewness: The high skewness generated by a fixed contract model comes at the cost of taking on ever-increasing risk exposure as a trend develops. This escalating risk, while it may lead to home-run profits, also dramatically increases the probability of a sudden reversal causing a massive drawdown. Over the long run, the risk cost of this "gambling" behavior often outweighs its potential rewards.
Implications for Investors:
The Sharpe Ratio is Still King: When evaluating and designing systematic strategies, maximizing risk-adjusted return (i.e., the Sharpe ratio) should remain the primary objective. A high Sharpe ratio strategy, even if its return distribution is less "sexy" (i.e., less positively skewed), is the most reliable engine for creating long-term wealth through its robust compounding power.
Beware the Over-Adoration of "Crisis Alpha": Trend-following strategies are highly regarded for their ability to deliver positive returns during market crises (like in 2008 and 2020), which is a manifestation of their positive skew. This characteristic undoubtedly has huge portfolio hedging value. However, investors should not choose a version of the strategy with a significantly lower long-term Sharpe ratio just to excessively chase this "lottery-ticket"-like hedge. A better approach is to select a high-Sharpe-ratio, volatility-targeted trend-following strategy and adjust its overall risk allocation level in the portfolio according to hedging needs.
Be Extremely Cautious of Negative Skew Strategies: Our analysis also implies, conversely, that investors must be extremely careful with strategies that exhibit a high Sharpe ratio but have significant negative skew. A seemingly smooth high Sharpe ratio may be masking a huge risk of a future "black swan" event. For such strategies, relying solely on the Sharpe ratio for evaluation is far from sufficient; rigorous stress testing and scenario analysis of their tail risk are essential.
In summary, while skewness is an important dimension for understanding the return characteristics of an investment strategy, its value should not be overstated when constructing a portfolio aimed at maximizing long-term wealth. A well-designed systematic strategy that focuses on maximizing the Sharpe ratio will, through its powerful compounding engine, ultimately create the most sustainable and robust returns for investors.
Chapter VI: Market Dynamics Across Time Scales: A Panoramic View of Momentum and Mean Reversion
The behavior of financial markets is not monolithic; it exhibits distinctly different characteristics at different time scales. For a long time, academics and practitioners have observed that at certain frequencies, prices tend to continue their past movements, exhibiting "momentum" or trendiness. At other frequencies, prices tend to revert to their long-term mean, exhibiting "mean reversion." Understanding the shifting dominance of these two forces across different time scales is crucial for designing effective, multi-frequency systematic trading strategies. A recent comprehensive study covering data from the minute to the decadal level has painted an unprecedented panoramic picture of these market dynamics.
6.1 The Theoretical Foundations of Momentum and Mean Reversion
These two seemingly contradictory market behaviors have different explanations in economics and behavioral finance:
Momentum:
Behavioral Finance Explanations: The emergence of momentum is often attributed to investors' cognitive biases. For example, underreaction to new information, where investors are too slow and conservative in responding, causes prices to take time to fully reflect new information, thus forming a trend. In addition, the herding effect and positive feedback loops also play a fueling role, where rising prices attract more buyers, pushing prices even higher.
Institutional Factors: The trading behavior of large institutional investors (e.g., slow position building or liquidation to meet compliance requirements or fund cash flows) can also create price drifts that last for weeks or months.
Mean Reversion:
Efficient Market Hypothesis: In the long run, asset prices should fluctuate around their intrinsic value, which is determined by fundamentals. Any excessive deviation due to short-term sentiment or liquidity shocks will eventually be corrected by rational arbitrageurs, causing prices to revert to their value. This is the core logic of long-term mean reversion and is in line with the philosophy of value investing.
Market Microstructure: At very short time scales (like seconds or minutes), mean reversion is driven by market supply and demand dynamics and the behavior of market makers. Market makers earn the spread by providing bid and ask quotes, and their actions naturally suppress short-term unidirectional price movements, causing prices to tend to oscillate between the bid and ask prices. This phenomenon is known as the "bid-ask bounce."
6.2 Empirical Evidence Across Time Scales
The comprehensive study systematically examined market behavior patterns across different holding periods by analyzing a massive dataset spanning over three hundred years. Its core findings can be summarized in a clear time-scale switching map:
1. Ultra-High-Frequency Domain (less than 1 hour): Mean Reversion Dominates
At the minute-level time scale, the market exhibits strong mean-reversion characteristics. This means that if an asset has risen in the past few minutes, its probability of falling in the next few minutes is slightly higher.
Peak: The effect of mean reversion is strongest in a time window of approximately 5 minutes.
Driving Forces: The mean reversion in this frequency band mainly reflects the influence of market microstructure, including market makers' inventory management, order book imbalances, and the short-term absorption and rebound from large order impacts.
Strategic Implications: This is the main battlefield for high-frequency trading (HFT) firms. Their strategies, such as statistical arbitrage and market making, are essentially capitalizing on this microsecond- to minute-level mean reversion phenomenon. For ordinary investors, direct participation in this frequency band is almost impossible due to transaction costs (commissions, spreads, slippage) and technological barriers (low-latency trading systems and data lines).
2. Medium-High to Medium-Long-Term Domain (1 hour to approx. 2 years): Momentum Dominates
When the time scale is extended to over 1 hour, market behavior undergoes a fundamental shift, and momentum begins to be the dominant force.
Pervasiveness: This momentum effect persists over a very wide range of frequencies, from hours, days, weeks, months, up to about one to one and a half years. This is the "sweet spot" for traditional trend-following and time-series momentum strategies.
Peak: The strength of the momentum effect peaks at holding periods of a few months to about a year. This is highly consistent with classic academic research (e.g., Jegadeesh and Titman, 1993) that found a 3-12 month momentum phenomenon.
Strategic Implications: This frequency band is the core profit source for the vast majority of Commodity Trading Advisors (CTAs) and systematic macro strategies. By using technical indicators like moving averages and breakout systems to capture trends lasting from weeks to months, investors can systematically harvest the momentum risk premium. It is noteworthy that even in longer-term intraday trading (e.g., several hours), momentum strategies have more theoretical and empirical support than mean reversion strategies. However, the faster the trading frequency, the more severely transaction costs will erode the strategy's profitability, so a detailed cost-benefit analysis is essential.
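As a concrete instance of the "moving averages" mentioned above, the following is a minimal moving-average crossover signal. The 50/200-day windows are common illustrative defaults, not recommendations from the study.

```python
import numpy as np

def ma_crossover_signal(prices, fast=50, slow=200):
    """+1 when the fast moving average is above the slow one, else -1.

    Illustrative sketch: window lengths are conventional defaults; real
    CTA systems blend several speeds and scale by conviction.
    """
    prices = np.asarray(prices, dtype=float)
    sig = np.full(len(prices), np.nan)
    for t in range(slow, len(prices)):
        fast_ma = prices[t - fast:t].mean()
        slow_ma = prices[t - slow:t].mean()
        sig[t] = 1.0 if fast_ma > slow_ma else -1.0
    return sig
```

On a steadily rising price series the signal sits at +1 (long); on a falling one it sits at -1 (short), which is the basic mechanism by which such systems harvest medium-term momentum.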
3. Long and Ultra-Long-Term Domain (greater than 2 years): Mean Reversion Dominates Again
When the holding period exceeds about 2 years, the momentum effect begins to decay and even reverse, and the market re-enters a paradigm dominated by mean reversion.
Strength: The longer the holding period (e.g., 3-5 years or more), the stronger the mean reversion effect. This means that assets that have performed best over the past few years are more likely to underperform in the next few years, and vice versa.
Driving Forces: Long-term mean reversion reflects the fundamental laws of economic cycles, capital cycles, and valuation mean reversion. When an industry or asset class experiences a long period of prosperity, high profits attract a large amount of new capital, increasing competition and ultimately depressing future returns. Conversely, in an industry that has experienced a long-term depression, capital exits and competition decreases, laying the groundwork for high future returns.
Strategic Implications: This is the theoretical basis for value investing and contrarian strategies. By buying assets that have been long abandoned by the market and are cheaply valued, and selling assets that are over-hyped and expensively valued, investors can achieve returns over the very long term. For systematic investors, a signal based on multi-year negative momentum (i.e., a negative return over the past 3-5 years) can be constructed to systematically implement such a contrarian strategy.
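The multi-year contrarian signal described above can be sketched directly. The 4-year lookback is one illustrative choice within the 3-5 year band discussed; a binary sign signal is the simplest possible form.

```python
import numpy as np

def contrarian_signal(prices, lookback_years=4, ann=252):
    """Fade multi-year momentum: short past winners, long past losers.

    Sketch: the sign of minus the trailing multi-year return; horizon and
    the binary output are illustrative simplifications.
    """
    prices = np.asarray(prices, dtype=float)
    lb = lookback_years * ann
    sig = np.full(len(prices), np.nan)
    for t in range(lb, len(prices)):
        past_return = prices[t] / prices[t - lb] - 1.0
        sig[t] = -np.sign(past_return)   # contrarian: fade the long move
    return sig
```

An asset that has risen over the past four years gets a -1 (short) signal and one that has fallen gets a +1 (long) signal, systematizing the contrarian stance of buying long-abandoned assets and selling over-hyped ones.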
6.3 Strategy Integration and Final Thoughts
This panoramic view of market dynamics across time scales provides a blueprint for systematic investors to construct multi-layered, all-weather strategy portfolios:
Core Allocation: Given that momentum has shown robust effectiveness over the broad frequency band from 1 hour to 2 years, a momentum strategy with a core of medium- to long-term trend-following should reasonably be the cornerstone of most systematic portfolios.
Satellite Allocation:
A long-term mean-reversion (value) strategy can be introduced as a satellite allocation. Because its returns are negatively correlated with momentum strategies over the long term, it can greatly smooth the portfolio's long-term fluctuations, especially during periods of "trend reversal" or "style rotation" when momentum strategies might experience prolonged difficulties.
For investors with the requisite technological and cost-control capabilities, a short-term mean-reversion strategy can be added to the portfolio. Although its standalone Sharpe ratio may not be high (especially after deducting high transaction costs), it can still provide valuable diversification benefits due to its extremely low return correlation with medium- to long-term momentum strategies.
The final conclusion is that the market is not a simple random walk, but a complex multi-scale system. Any single strategy that claims to be a "one-size-fits-all" solution is bound to fail in certain market environments or at certain time scales. A truly robust systematic investment framework must acknowledge and embrace this complexity. By allocating to complementary strategies (momentum and mean reversion) at different frequencies, one can construct a truly all-weather portfolio that can adapt to different market paradigms. This research provides the most comprehensive and solid empirical support to date for this philosophy.
Chapter VII: Macro-Paradigm Shift: Geopolitics, Institutional Trust, and the Re-pricing of Major Asset Classes
Traditional asset pricing models, such as the Capital Asset Pricing Model (CAPM) and its multi-factor extensions, are mostly built in a relatively stable and predictable macroeconomic and political environment. However, we are now in an era where the geopolitical landscape is being dramatically reshaped, global supply chains are facing challenges, and societal trust in core institutions is continually eroding. These profound macro-paradigm shifts are fundamentally challenging the logic of global asset pricing through multiple channels, including inflation, risk premia, and growth expectations. This chapter aims to construct a forward-looking analytical framework to explore the potential risks and opportunities for various asset classes under this new paradigm.
7.1 The Transmission Mechanisms of Macro Shocks
Major political events or policy shifts do not directly affect asset prices; rather, they are transmitted to financial markets through a series of interconnected economic variables. We can identify several key transmission channels:
1. The Inflation Channel
Tariffs and Trade Barriers: Raising tariffs directly increases the cost of imported goods. This not only pushes up the Consumer Price Index (CPI) but also propagates through the industrial chain, raising input costs for domestic producers and thus triggering broader cost-push inflation. Furthermore, retaliatory tariffs from trading partners will further disrupt global trade, reduce efficiency, and exacerbate global inflationary pressures.
Labor Market Shocks: Policies that restrict immigration or involve large-scale deportation of undocumented workers will directly reduce the labor supply in specific sectors, especially in agriculture, construction, and hospitality. A reduction in labor supply, with demand held constant, will inevitably lead to upward pressure on wages, further fueling inflation through a wage-price spiral.
Supply Chain Disruptions: Whether from trade wars, geopolitical conflicts, or pandemics, any event that disrupts the finely-tuned global supply chain network will lead to production bottlenecks, rising transportation costs, and shortages of key components. These supply-side shocks will lower the economy's potential output and trigger sharp inflation in the short term.
2. The Risk Premium Channel
Policy Uncertainty: When government policies become frequent, unpredictable, and lack a clear rule-based foundation (e.g., arbitrarily changing tariff rates, interfering with corporate operations), businesses and investors face enormous uncertainty. To compensate for this risk, investors will demand higher expected returns, leading to a rise in the Equity Risk Premium (ERP). A higher ERP means a higher discount rate for stocks, which puts direct downward pressure on their valuations (such as P/E ratios).
Erosion of Institutional Trust: Trust in core institutions such as the rule of law, central bank independence, and the sanctity of government contracts is the bedrock of a modern market economy. Any action that undermines these foundations will increase investor concerns about the security of property rights and the stability of future cash flows. This can lead to a surge in the Country Risk Premium, negatively affecting the valuation of all assets in that country (including stocks, bonds, and currency).
Geopolitical Risk: Tensions in international relations increase the risk of conflict, thereby pushing up the global risk premium. Traditional safe-haven assets like gold and the US dollar typically benefit in such an environment, while risk assets (especially those highly correlated with global trade and emerging markets) will be hit.
3. The Growth Expectation Channel
Reduced Investment: High levels of uncertainty will inhibit long-term capital expenditure by businesses. When companies cannot predict future trade policies, regulatory environments, and macroeconomic stability, they will postpone investment plans, thereby dragging down long-term economic growth potential.
De-globalization: Trade barriers and supply chain decoupling will reduce the operational efficiency of the global economy and hinder the free flow of technology and capital, which will in the long run harm global productivity and economic growth.
7.2 The Outlook for Major Asset Classes in the New Paradigm
Based on the transmission mechanisms above, we can project the outlook for major asset classes in the new macro paradigm:
Bonds: Negative Outlook. Multiple channels mentioned above (tariffs, labor shortages, supply chain issues) all point clearly to higher and more volatile inflation. To combat inflation, central banks may be forced to maintain interest rates at higher levels, or even if they do not hike rates, rising inflation expectations will push up long-term bond yields. Therefore, bond prices (especially long-duration government bonds) face significant downward pressure. In addition, the erosion of institutional trust may trigger concerns about sovereign creditworthiness, further pushing up bond yields.
Equities: Complex and Skewed-Negative Outlook. The stock market faces a multi-pronged assault. On one hand, rising inflation and interest rates will erode corporate profits and increase the discount rate for valuations. On the other hand, a rising risk premium will directly compress P/E ratios. While some policies (like deregulation, tax cuts) might be beneficial to specific sectors' profits in the short term, the negative impacts from uncertainty, supply chain chaos, and slowing global growth are likely to dominate in the long run. The internal structure of the market may see significant divergence; companies that can pass on costs to consumers, are primarily focused on the domestic market, and are not affected by international supply chains may perform relatively better.
Gold: Positive Outlook. Gold is the traditional hedge against geopolitical risk and inflation expectations. In an environment of heightened uncertainty, declining institutional trust, and high inflationary pressures, gold's appeal as the "ultimate safe-haven asset" and store of value will be significantly enhanced.
Systematic Strategies (especially Trend-Following CTAs): Highly Attractive Outlook. The unique value of a trend-following strategy lies in its "paradigm neutrality." It does not rely on any specific macroeconomic assumptions or value judgments, but simply follows price trends. In an era of dramatic macro-paradigm shifts where traditional asset class correlations may break down, this strategy has the following core advantages:
Adaptability: Whether the market enters inflation or deflation, whether gold rises or the stock market falls, as long as a persistent trend is formed, a trend-following strategy can profit from it.
Crisis Alpha: Historically, trend-following strategies have often performed well during market crises because they can be short falling risk assets while being long rising safe-haven assets (like government bonds, gold), providing valuable portfolio hedging.
Coping with Uncertainty: Large directional swings triggered by uncertainty are precisely the environment in which trend-following strategies capture profits most easily. The strategy transforms macro "uncertainty" into strategic "opportunity."
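The "paradigm neutrality" claimed above can be illustrated with a minimal sketch: a simple moving-average trend rule applied to two synthetic price paths, one with a persistent uptrend and one with a persistent downtrend. The rule, lookback length, and drift/volatility parameters here are illustrative assumptions, not the actual model of any CTA; the point is only that the same signal earns a positive return in both regimes because it goes long in the first and short in the second.

```python
import numpy as np

def trend_signal(prices, lookback=50):
    """+1 when price is above its trailing moving average, -1 when below."""
    sig = np.zeros(len(prices))
    for t in range(lookback, len(prices)):
        ma = prices[t - lookback:t].mean()
        sig[t] = 1.0 if prices[t] > ma else -1.0
    return sig

def strategy_log_return(prices, lookback=50):
    """Total log return of the trend rule, with no lookahead:
    the signal formed at t is applied to the return from t to t+1."""
    rets = np.diff(np.log(prices))
    sig = trend_signal(prices, lookback)[:-1]
    return float((sig * rets).sum())

# Two synthetic regimes: a strong uptrend and a strong downtrend
# (daily log drift +/-0.3%, daily volatility 1%).
rng = np.random.default_rng(0)
up = 100 * np.exp(np.cumsum(0.003 + 0.01 * rng.standard_normal(1000)))
down = 100 * np.exp(np.cumsum(-0.003 + 0.01 * rng.standard_normal(1000)))

print(strategy_log_return(up))    # positive: long the uptrend
print(strategy_log_return(down))  # positive: short the downtrend
```

The same code, with no change of sign conventions or parameters, profits in both scenarios, which is the mechanical content of "paradigm neutrality": the rule conditions only on price, not on any macro view.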
7.3 Divergence in Market Perception and Long-Term Paradigm Thinking
There currently appears to be a cognitive divergence in the market: the bond and gold markets have already begun to price in higher inflation and risk, showing clear concern, while the stock market, especially certain indices, seems to remain immersed in optimism about economic growth and technological innovation, with volatility at low levels. This divergence resembles the eve of the 2007-2008 financial crisis, when the credit default swap (CDS) market had long been sounding alarms while the stock market remained at high levels for a considerable period. It may reflect differences in the structure of market participants and their risk appetites: bond investors are generally more conservative and focused on macro risks, while equity investors (especially retail) may be more optimistic and focused on micro-narratives.
From a longer historical perspective, we may be in a period of social and institutional restructuring similar to that described in the theory of "The Fourth Turning." In this cycle, old social contracts and institutional frameworks are broken, and a new order is born out of chaos and conflict. This means that the risks of assets considered "safe" over the past few decades (like sovereign bonds) are being re-evaluated, and assets once considered "risky" (like specific physical assets or corporate equities with strong pricing power) may become the new "safe havens."
Conclusion: We can no longer extrapolate future asset returns based on the stable macroeconomic environment of the past few decades. Investors must recognize that the macro-paradigm shift is rewriting the rulebook of asset pricing. In this new era of uncertainty, investment strategies that are flexible, do not depend on stable macro assumptions, and can profit from market volatility, such as systematic trend-following, will have unprecedented strategic value in a portfolio. The core of portfolio construction has shifted from simple asset class diversification to a deep understanding and active management of different macroeconomic scenarios and risk paradigms.