Abstract

We propose in this article a new procedure, based on random projections, for testing widely used linear asset pricing models (Sharpe, 1964; Lintner, 1965; Fama and French, 1993). The new testing procedure is particularly suitable when the number of assets N is much larger than the number of observations T, and it outperforms existing methods by allowing the covariance matrix of the idiosyncratic term to be non-sparse. Under some mild conditions, we show theoretically that the test statistic is asymptotically normal as long as min{N,T} goes to infinity. The finite sample performance is investigated by extensive Monte Carlo experiments. The practical utility of the new testing procedure is further demonstrated with an application to the U.S. stock market. Employing the new procedure, we find that the Fama–French (FF) three-factor model (Fama and French, 1993) is better than the capital asset pricing model (Sharpe, 1964) at explaining the mean–variance efficiency of the U.S. stock market.

1 Introduction

According to the notable capital asset pricing model (CAPM) of Sharpe (1964) and Lintner (1965), under the assumption that all investors can borrow or lend at the risk-free rate and hold mean–variance preferences, investors will hold a combination of the risk-free asset and the market portfolio as their optimal choice. As a result, the risk premium (the expected return minus the risk-free rate) of an asset or portfolio is linearly related to the risk premium of the market portfolio through the slope beta without an intercept alpha; that is, the intercept alpha should be zero for all assets or portfolios. In reality, whether a single asset's beta is sufficient to explain the asset's risk is still debated in the finance literature. To assess this issue, a large number of papers in empirical finance have been devoted to testing the validity of the Sharpe–Lintner asset pricing model. The existing research has concentrated on testing whether the intercept is zero for all assets when regressing their excess returns (returns minus the risk-free rate) on some common factors. Among these papers, Gibbons, Ross, and Shanken (1989) first proposed an exact multivariate F-test (hereafter, the GRS test) to examine the implications of the CAPM under the assumption that asset returns are independent and normally distributed and that the number of assets N is fixed or much smaller than the number of time periods T. Ever since the pioneering work of Gibbons, Ross, and Shanken (1989), much effort has been devoted to modifying the testing procedure by relaxing its conditions in two directions. In the first direction, researchers try to dispense with the normality assumption on the returns or to be robust to the model specification (e.g., Affleck-Graves and McDonald, 1989; Mackinlay and Richardson, 1991; Zou, 1993; Beaulieu et al., 2007; Gospodinov et al., 2013; Gungor and Luger, 2009, 2013; Gagliardini, Scaillet, and Ossola, 2016), because it has long been recognized that financial returns may depart severely from the normal distribution and that the specification of the common factors may be misleading; for more details, we refer to Blattberg and Gonedes (1974), Hsu (1982), and Gungor and Luger (2009). In the second direction, researchers have tried to extend the GRS test to the high-dimensional setting, that is, where the number of assets N is comparable to or much larger than the number of time periods T.
As noted by Pesaran and Yamagata (2015) and Gagliardini, Scaillet, and Ossola (2016), all of the existing methods are applicable only when N is fixed or much smaller than T. In contrast, when N > T, none of the methods listed above remains applicable. One possible solution when N is sufficiently large is to group the assets into portfolios and thereby reduce the effective number of assets (Jensen, 1968). However, it can be shown that such a portfolio-based testing procedure may lose information and degrade the power of the test. Moreover, the use of a large T is likely to increase the possibility of structural changes in the slope beta (β) and adversely affect the performance of the GRS test (Pesaran and Yamagata, 2015). Thus, it is of great importance to create a new testing procedure that can effectively handle the situation of relatively small T or even N > T. To this end, Pesaran and Yamagata (2015) proposed two new tests (hereafter, the PY test) based on the threshold covariance estimator of Fan, Liao, and Mincheva (2011) under the assumption that the covariance matrix of the idiosyncratic term is highly sparse, that is, many entries of the covariance matrix are zero or nearly so. According to the theoretical results of Fan, Liao, and Mincheva (2011), when the population covariance matrix is indeed sparse, thresholding the sample covariance matrix yields an estimator (and an inverse) that is consistent under some mild conditions. Nevertheless, when the sparsity assumption is violated, the resulting estimator could be misleading, and whether it is still applicable is questionable and needs further investigation. Unfortunately, the true covariance matrix of the idiosyncratic term is very often unknown in reality, and it is extremely hard to estimate in the high-dimensional setting. Thus, developing a test that accommodates both a high-dimensional setting and a non-sparse covariance structure of the idiosyncratic term is a challenging task, and, as far as we know, this problem has remained open. To solve these issues, we propose in this article a new testing procedure based on the random projection method, motivated by Lopes, Jacob, and Wainwright (2015). The new testing procedure is applicable even when the number of assets N is much larger than the number of time periods T, and it still maintains very reasonable performance when the error terms are nonindependent and nonnormally distributed. First, we randomly project the N-dimensional excess returns into a low-dimensional space of dimension k≤min{N,T}; then, we can apply the traditional GRS test statistic in the low-dimensional space ℝk. By repeating the random projection many times, we obtain a series of test statistics, which we average over the ensemble of projection matrices to form the final test statistic. More specifically, to test the mean–variance efficiency of the excess return matrix Y∈ℝT×N, we need to test the intercept α=(α1,⋯,αN)⊤=0 in a linear asset pricing model. Note that testing α=0 is equivalent to testing Pk⊤α=0 for any randomly generated projection matrix Pk∈ℝN×k. Therefore, by this random projection method, we reduce the dimension of the excess returns from N to k with k≤min{N,T}, and the traditional GRS test can then be applied directly to the projected low-dimensional data.
In fact, this kind of projection is also equivalent to testing the efficiency of a small number k of portfolios formed with a weight matrix Pk. As noted by Pesaran and Yamagata (2015), testing the efficiency of portfolios may lose information. Nevertheless, as we will show, by randomly generating the portfolios, the power of the testing procedure is not damaged; it outperforms the high-dimensional PY tests in all of our simulation settings. Compared with the extant methods, the random projection approach proposed in this article has the following advantages. First, by randomly forming a relatively small number of portfolios, the GRS test can be used in the low-dimensional setting. Thus, the random projection method links the testing problem in high and low dimensions, and in the low-dimensional setting, by setting k = N, it includes the GRS test as a special case. Second, compared with the PY test, the proposed method is applicable to a non-sparse covariance structure of the error term, and the resulting procedure is valid as long as min{N,T}→∞, whereas the procedure of Pesaran and Yamagata (2015) requires the assumption N/T^3→0 to make the test statistic asymptotically normal. Last, under some mild conditions, the power of our test can theoretically exceed that of the method proposed by Pesaran and Yamagata (2015) without the sparsity assumption on the covariance matrix of the idiosyncratic error terms; that is, our method is statistically more powerful in certain cases. To demonstrate these advantages, we have conducted extensive Monte Carlo simulation studies, in which we also present the results of the PY test for comparison. As our simulation results show, the random projection test outperforms the PY test in terms of power across all simulation settings while preserving reasonable size. Motivated by these simulation studies, we use the proposed test to assess the dynamic movement of the efficiency of the U.S. stock market; we find that the U.S. stock market is mostly efficient over our study period and that the FF three-factor model (Fama and French, 1993) is better than the CAPM (Sharpe, 1964) at explaining the mean–variance efficiency of the U.S. stock market.

The rest of this article is organized as follows. Section 2 introduces the testing method, the corresponding asymptotic distribution, and the power comparison with the PY test. Simulation studies are presented in Section 3 to illustrate the finite sample performance of the proposed test. An empirical example from the U.S. stock market is provided in Section 4 to demonstrate the usefulness of the test. Finally, we conclude the article with a brief discussion in Section 5. All technical details are relegated to the Appendix.

2 Theoretical Framework

2.1 Model and Notations

Assume that there are a total of N assets and each asset has T observations. Let Yit be the excess return of asset i at time t. In addition, let Xt∈ℝp be the observed common factors, which may represent the excess return on the market portfolio or the three FF factors (Fama and French, 1993); we assume p is fixed throughout the entire work. If the N assets are mean–variance efficient, then we should have E(Yit)=βi⊤E(Xt) for any i=1,⋯,N, where βi=cov(Yit,Xt)/var(Xt)∈ℝp stands for the systematic risk on each common factor.
For these N assets, we can write the model in the following multivariate linear regression form (Gibbons, Ross, and Shanken, 1989):

Yit = αi + βi⊤Xt + ɛit.   (2.1)

For notational convenience, we write Yi=(Yi1,⋯,YiT)⊤∈ℝT, Yt=(Y1t,⋯,YNt)⊤∈ℝN, and Y=(Y1,⋯,YN)∈ℝT×N. In addition, let α=(α1,⋯,αN)⊤∈ℝN collect the intercepts of all assets, and let X=(X1,⋯,XT)⊤∈ℝT×p be the common factor matrix. Then, Et=(ɛ1t,⋯,ɛNt)⊤∈ℝN is the random error at time t, which follows a multivariate normal distribution with mean 0 and covariance matrix Σ. Throughout the article, we assume that the number of assets N is much larger than the number of time periods T, while T tends to infinity in the asymptotic analysis. To test the mean–variance efficiency of the N assets, our focus is on testing H0: α=0 versus H1: α≠0. As mentioned in the last section, the traditional GRS test cannot be applied when N > T. To resolve this issue, we consider the random projection method to reduce the asset dimension. The following proposition shows that testing α=0 is equivalent to testing P˜k⊤α=0 for any random projection matrix P˜k∈ℝN×k, where each column of P˜k is independently generated from a standard normal distribution and k≤min{N,T} is an arbitrary constant.

Proposition 1: For any positive integer k≤min{N,T}, if P˜k⊤α=0 for any random projection matrix P˜k, then with probability approaching 1, we have α=0.

Proposition 1 can be proved easily as follows. By the randomness of the projection matrix P˜k, one can verify that E(P˜k⊤α)=0 and var(P˜k⊤α)=diag(α⊤α). Because P˜k⊤α=0 for any random projection matrix P˜k, we immediately have var(P˜k⊤α)=diag(α⊤α)=0, which leads to α=0. According to Proposition 1, in order to test α=0, we can first project the high-dimensional response Y to a low-dimensional space via Y˜=YPk∈ℝT×k; that is, we randomly construct k portfolios using the weight matrix Pk, and then focus on testing the intercepts associated with the transformed portfolios. Benefiting from the random projection, we convert the initial high-dimensional testing problem, α=0 of dimension N, to lower-dimensional testing problems, Pk⊤α=0, of dimension k. Because the dimension k is much smaller than T, the traditional GRS test can be applied to Y˜ and X. In detail, we regress Y˜ on X, and the intercept is estimated by α˜=(1⊤Q1)^{-1}Y˜⊤Q1=(1⊤Q1)^{-1}Pk⊤Y⊤Q1, where 1=(1,⋯,1)⊤∈ℝT is a vector of ones, Q=IT−X(X⊤X)^{-1}X⊤ is the projection matrix associated with X, and IT is the identity matrix of dimension T. Further, note that cov(Pk⊤Et|Pk)=Pk⊤ΣPk. Thus, the corresponding GRS test statistic can be written as

T1 = (1⊤Q1) α˜⊤(Pk⊤Σ^Pk)^{-1}α˜,   (2.2)

where Σ^ is the sample covariance matrix estimating Σ. Because Pk is random, for computational purposes and also to make the different Pks comparable, we need to restrict Pk to a small subspace and specify its distribution. To this purpose, following the discussion in Lopes, Jacob, and Wainwright (2015), we draw Pk from the Haar distribution. Specifically, for an arbitrary positive integer k∈{1,⋯,min{N,T}}, we define ΛN,k to be the Haar distribution on the set of matrices ΛN,k={Pk∈ℝN×k: Pk⊤Pk=Ik}. For any fixed k, Pk is randomly generated from ΛN,k. Motivated by Proposition 1, we independently draw from ΛN,k many times to obtain many realizations of Pk, and average the resulting test statistics T1 to obtain the final test statistic. In summary, we define the following random-projection-based test statistic

Tave = (1⊤Q1) EPk∈ΛN,k{α˜⊤(Pk⊤Σ^Pk)^{-1}α˜}.   (2.3)
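To make the construction concrete, the following Python sketch (our own minimal illustration, not the authors' implementation; the function names, the number of draws, and the scaling of Σ^ by T are our choices) generates Haar-distributed projection matrices as the orthonormal factor of a Gaussian matrix and approximates Tave in Equation (2.3) by averaging the projected statistic T1 over a modest number of draws of Pk.

```python
import numpy as np

def haar_projection(N, k, rng):
    """Draw Pk from the Haar distribution on {Pk in R^{N x k}: Pk' Pk = I_k}
    by orthonormalizing a Gaussian matrix; column signs cancel in the statistic."""
    G = rng.standard_normal((N, k))
    Q, _ = np.linalg.qr(G)
    return Q                                        # N x k, orthonormal columns

def rp_average_statistic(Y, X, k, n_proj=200, seed=0):
    """Monte Carlo approximation of T_ave in Equation (2.3).

    Y : T x N matrix of excess returns, X : T x p matrix of observed factors."""
    rng = np.random.default_rng(seed)
    T, N = Y.shape
    Q = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)    # Q = I_T - X(X'X)^{-1}X'
    one = np.ones(T)
    denom = one @ Q @ one                                # 1'Q1
    alpha_hat = Y.T @ Q @ one / denom                    # (1'Q1)^{-1} Y'Q1, an N-vector
    Xt = np.column_stack([one, X])                       # intercept-augmented regressors
    resid = Y - Xt @ np.linalg.lstsq(Xt, Y, rcond=None)[0]
    Sigma_hat = resid.T @ resid / T                      # sample covariance of the errors
    stats = []
    for _ in range(n_proj):
        Pk = haar_projection(N, k, rng)
        a_tilde = Pk.T @ alpha_hat                       # projected intercept estimate
        M = Pk.T @ Sigma_hat @ Pk                        # k x k, invertible w.h.p.
        stats.append(denom * a_tilde @ np.linalg.solve(M, a_tilde))   # T1 in Eq. (2.2)
    return np.mean(stats)
```

Dividing the residual cross-products by T rather than T−p−1 is an assumption of this sketch; the scaling does not affect the asymptotics.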
2.2 Asymptotic Properties

To derive the asymptotic distribution of Tave, we first state the following conditions on the projection dimension k and the distribution of the error term Et.

(C1) There exists some positive constant κ<1 such that k/T=κ+O(T^{-1/2}).

(C2) The Et are independent and normally distributed with mean 0 and covariance matrix Σ. In addition, there exist two finite positive constants λmax and λmin such that 0<λmin<λmin(Σ)≤λmax(Σ)<λmax<∞, where λmin(A) and λmax(A) represent the smallest and largest eigenvalues of an arbitrary matrix A.

(C3) The common factors Xt are distributed independently of the errors. Moreover, T^{-1}X˜⊤X˜, with X˜=(1,X), is positive definite for all T, and 1⊤Q1/T>τmin for some positive constant τmin.

According to condition (C1), the projection dimension k should be neither too small nor too large. If k is too small, as the dimension N diverges we may lose information and thus reduce the power of the test. If k is too large, or comparable with T, the estimation error involved in the test statistic Tave when projecting the initial N dimensions to k dimensions becomes large, which also decreases the power. As a result, the choice of the projection dimension k is crucial for implementing the proposed testing method. To make the proposed testing procedure practically useful, we discuss how to choose the value of k in detail in Section 2.3. Condition (C2) also implies that the random errors are serially uncorrelated, which is a standard condition for this testing problem and was also assumed by Pesaran and Yamagata (2015). Condition (C3) is also borrowed from Pesaran and Yamagata (2015); it is standard in the literature on tests of linear pricing models. Under conditions (C1)–(C3), the asymptotic distribution of Tave is presented in the following theorem.

Theorem 1: Assume conditions (C1)–(C3) are satisfied. Then, for any 1≤k≤min{N,T}, as long as min{N,T}→∞, we have (Tave−μN)/σN→d N(0,1), where μN=EWN{tr(A)} and σN^2=2EWN{tr(A^2)} with A=EPk{Pk(Pk⊤WNPk)^{-1}Pk⊤}. Here, T·WN is a random Wishart matrix with parameter IN.

The proof is given in Appendix B. This theorem allows us to test the mean–variance efficiency when the number of assets N is possibly much larger than the number of time periods T. More specifically, for a given significance level γ, we reject the null hypothesis α=0 whenever Tave>μN+ZγσN, where Zγ stands for the upper γth quantile of the standard normal distribution. It is worth mentioning that Pk is randomly drawn from the Haar distribution on the Stiefel manifold ΛN,k={Pk∈ℝN×k: Pk⊤Pk=Ik}. As a result, Pk⊤Σ^Pk is positive definite with probability tending to 1. Consequently, T1 is a well-defined test statistic with respect to the projection matrix Pk. Nevertheless, as discussed in Pesaran and Yamagata (2015), testing based on a single projection matrix Pk may lead to a loss of information and reduce the power of the test. For this reason, we form the testing procedure by averaging over all projection matrices of a given dimension k. As Pk is random, taking all of the Pks together makes it possible to collect information from different directions. As further discussed in Section 2.3, by doing this, the power of the test can be preserved. To make the random projection test practically useful, we need to consistently estimate the unknown parameters μN and σN. Intuitively, the unknown values of μN and σN can be estimated by a simple simulation-based method.
More specifically, we could randomly generate samples of Pk from the distribution ΛN,k and of T·WN from a Wishart distribution with parameter IN, and then evaluate the average value of Pk(Pk⊤WNPk)^{-1}Pk⊤ across all WN and Pk draws to approximate both μN and σN. However, this procedure calculates μN and σN separately by simulation, and the resulting asymptotic normality relies on the accuracy of both estimates. Further, note that both μN and σN are pivotal; that is, their values do not depend on any unknown parameters. For this reason, we propose a simulation-based method that directly targets the test statistic Tave. The procedure is as follows (a sketch in code is given below):

(i) At the sth replication, for s=1,⋯,Na, generate T·WN from a Wishart distribution with parameter IN, and generate Nb samples of Pk from the distribution ΛN,k. Next, evaluate the average

Tas = (1⊤Q1)^{-1} Nb^{-1} ∑Pk 1⊤Q E(s) Pk(Pk⊤WNPk)^{-1}Pk⊤ E(s)⊤ Q1,

where E(s) is a random sample with mean 0 and covariance matrix WN.

(ii) Repeat the above step for s=1,⋯,Na to obtain a sequence of statistics Ta1,⋯,TaNa.

(iii) Estimate the p-value of the test statistic by ∑s I(Tas>Tave)/Na.

In this article, we set Na and Nb equal to 1000, and the results are satisfactory.
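The following Python sketch (our own illustration, not the authors' code; it reuses the hypothetical haar_projection helper from the earlier sketch) implements steps (i)–(iii) under our reading of the formula for Tas, in which the factor 1⊤Q1 enters as a divisor so that Tas mirrors Tave under the null. The Monte Carlo sizes Na and Nb are arguments, so the article's choice of 1000 can be reduced for speed.

```python
import numpy as np

def rp_null_pvalue(T_ave, T, N, k, Q, one, Na=1000, Nb=1000, seed=1):
    """Simulation-based p-value for T_ave via steps (i)-(iii).

    Q is the T x T annihilator of the factor matrix and `one` the T-vector of
    ones.  Na = Nb = 1000 matches the article; smaller values give a rougher,
    faster sketch."""
    rng = np.random.default_rng(seed)
    q1 = Q @ one
    denom = one @ q1                                   # 1'Q1
    exceed = 0
    for _ in range(Na):
        Z = rng.standard_normal((T, N))
        WN = Z.T @ Z / T                               # T * WN ~ Wishart(T, I_N)
        E = rng.standard_normal((T, T)) @ Z / np.sqrt(T)   # rows of E have covariance WN
        u = E.T @ q1                                   # analogue of Y'Q1 under H0
        vals = []
        for _ in range(Nb):
            Pk = haar_projection(N, k, rng)            # Haar draw, as in the earlier sketch
            w = Pk.T @ u
            vals.append(w @ np.linalg.solve(Pk.T @ WN @ Pk, w))
        exceed += (np.mean(vals) / denom > T_ave)      # compare T_as with T_ave
    return exceed / Na
```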
2.3 Power of the Random Projection Test

We next study the power of the random projection test (hereafter, the RP test). Consider the local alternative

α⊤Σ^{-1}α = o(1).   (2.4)

We define the power function of the test statistic Tave to be β(α)=P(Tave>μN+ZγσN | α≠0). Further define Δ̄k=EPk(Δk) with Δk=α⊤Pk(Pk⊤ΣPk)^{-1}Pk⊤α. We obtain the following asymptotic power function.

Theorem 2: Under the local alternative (2.4), as min{N,T}→∞ together with conditions (C1)–(C3), the power function satisfies

βRP(α) = Φ(−zγ + {T(T−p)/(T−k−1)}·Δ̄k/√{2tr(A^2)}) + o(1),

where Φ(·) is the cumulative distribution function of a standard normal distribution.

According to the above theorem, if Δ̄k=0, one can easily verify that α=0 with probability tending to 1, which is indeed the null hypothesis. In this setting, βRP(α)=Φ(−zγ)+o(1)=γ+o(1); that is, the test statistic Tave controls the size well asymptotically. In contrast, when α≠0, Δ̄k measures the discrepancy between α and 0. Moreover, one can verify that tr(A^2)=O(T) by employing the results of Lopes, Jacob, and Wainwright (2015). Thus, the test statistic Tave is consistent as long as T^{1/2}Δ̄k→∞. This can hold for very small values of α; thus, our random projection test can identify a very tiny signal in α. This finding is further confirmed by our simulation studies in Section 3.1. As the power function βRP(α) depends on the unknowns α, Σ, and k, to better assess its properties, and following the discussion of Lopes, Jacob, and Wainwright (2015), we consider the special case where α follows a multivariate distribution with mean 0 and covariance Σ. In that case, similar to the arguments in Lopes, Jacob, and Wainwright (2015), βRP(α) can be further expressed as

βRP(α) = Φ(−zγ + {0.5κ(1−κ)}^{1/2}·T^{3/2}||α||^2/tr(Σ)) + op(1).

Subsequently, the projection dimension k=κT can be selected by maximizing the above power function, which yields κ=1/2 and k=T/2. As a result, we focus on k=T/2 in the remainder of this article, and βRP(α) can be further written as

βRP(α) = Φ(−zγ + 8^{-1/2}·T^{3/2}||α||^2/tr(Σ)) + op(1).   (2.5)

It is worth noting that expression (2.5) is valid only under the special case that α follows a multivariate distribution with mean 0 and covariance Σ. As a result, it is natural to ask whether the selection of k is robust to this assumption. To answer this question, we define

f(k) = {k/(T−k−1)}·{1/√tr(A^2)},   (2.6)

which is directly borrowed from the result of Theorem 2. We choose grid points k=[κT] with κ=0.1,0.2,⋯,0.9, calculate f(k) by the same simulation-based method proposed below Theorem 1, and locate the maximum over these grid points. The values of f(k) for different k under various settings are presented in Table 1. The maximum value of f(k) is attained at k=[T/2] in all settings, which suggests that the selection of k is indeed robust. This finding is also confirmed by our later simulation studies and real data analysis.

Table 1. The empirical results of f(k) with different T, N, and κ

T    N     κ=0.1   0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
50   100   0.0419  0.0553  0.0623  0.0655  0.0667  0.0631  0.0552  0.0440  0.0196
50   200   0.0415  0.0552  0.0624  0.0660  0.0665  0.0627  0.0558  0.0424  0.0194
50   300   0.0416  0.0553  0.0626  0.0661  0.0661  0.0631  0.0556  0.0420  0.0199
100  100   0.0298  0.0396  0.0450  0.0478  0.0482  0.0468  0.0428  0.0357  0.0230
100  200   0.0297  0.0395  0.0451  0.0479  0.0484  0.0469  0.0429  0.0354  0.0223
100  300   0.0297  0.0395  0.0450  0.0479  0.0485  0.0469  0.0425  0.0350  0.0233

2.4 Comparison of the Power with That of Existing Methods

According to Pesaran and Yamagata (2015), the power of the PY test Jα(D) can be expressed as

βPY(α) = Φ(−zγ + T·α⊤D^{-1}α/√{2tr(R^2)}),

where Σ=D^{1/2}RD^{1/2}, D is the diagonal matrix of Σ, and R is the corresponding correlation matrix. For simplicity, we assume D = IN. Thus, the powers of the two tests are given by

βPY(α) = Φ(−zγ + T||α||^2/√{2tr(Σ^2)}) + op(1),
βRP(α) = Φ(−zγ + T^{3/2}||α||^2/{2√2·tr(Σ)}) + op(1).

Therefore, the power of the proposed random projection test exceeds that of the PY test as long as the ratio ϱ = [T^{3/2}||α||^2/{2√2·tr(Σ)}] / [T||α||^2/√{2tr(Σ^2)}] is larger than 1, which is equivalent to T ≥ 4tr^2(Σ)/tr(Σ^2). To better understand this inequality, we consider the following two scenarios.

Scenario I: There exist two finite positive values, λmax and λmin, such that λmin<λmin(Σ)≤λmax(Σ)<λmax, where λmax(M) and λmin(M) represent the largest and smallest eigenvalues of an arbitrary symmetric matrix M.
This case emerges if the observed systematic risk factors Xt explain the asset returns well, so that the idiosyncratic errors are only weakly dependent. In this setting, tr^2(Σ)/tr(Σ^2) is of order N. Consequently, the power of the proposed random projection test should exceed that of the PY test as long as N/T→0.

Scenario II: The idiosyncratic error follows a latent factor structure Et=BZt+Et*. This case would arise if there were some unobservable systematic risks that remain to be specified. In this setting, one can verify that tr^2(Σ)/tr(Σ^2) is of order O(1) if the common factors are all strong, in the sense that p^{-1}B⊤B converges to a finite positive definite matrix; see, for example, Fan, Liao, and Mincheva (2011). For purposes of illustration, assume that each element of the factor loadings B∈ℝN×d, the common factors Zt∈ℝd, and the random errors Et* are all independently simulated from a standard normal distribution, with d > 0 the number of latent factors. As a result, we have tr(Σ)=dN{1+op(1)} and tr(Σ^2)=dN^2{1+op(1)}. Consequently, tr(Σ^2)/tr^2(Σ)→1/d. Thus, if d is finite and positive, tr^2(Σ)/tr(Σ^2) is bounded by some positive constant, and in this setting the power of the proposed random projection test exceeds that of the PY test theoretically. In contrast, if the common factors Zt are weak, in the sense that B⊤B=O(N^α) for some constant α<1 as considered by Bailey, Kapetanios, and Pesaran (2016), then tr^2(Σ)/tr(Σ^2)=O(N) if α<1/2 and tr^2(Σ)/tr(Σ^2)=O(N^{2−2α}) if α>1/2. Consequently, the power of the proposed random projection test should exceed that of the PY test as long as N/T→0, which is similar to the result of Scenario I.

According to the above two scenarios, the proposed random projection test should always outperform the PY test in terms of power as long as T is relatively large. This finding is not surprising, because for larger T and smaller N the random projection test reduces to the GRS test. Moreover, according to the explanation of Pesaran and Yamagata (2015), the observed systematic risk factors alone may not be enough to explain the full risk of the returns at all times. As a result, the second scenario considered above may be commonly encountered in practice, which indicates that the proposed random projection test should typically outperform the PY test in terms of power. These findings are also confirmed by our real data analysis.

It is worth mentioning that the idea of random projections is not new; it has been studied by Lopes, Jacob, and Wainwright (2015) for a high-dimensional two-sample test. Nevertheless, we are the first to use this method for testing the linear pricing model. Moreover, we theoretically compare the power function with that of the method of Pesaran and Yamagata (2015) and find, in some cases, that the power of the proposed random projection test exceeds that of the PY test. These findings are also confirmed by our extensive simulation studies and real data analysis. Before turning to the simulations, we give a small numerical illustration of the trace ratio tr^2(Σ)/tr(Σ^2) under the two scenarios.
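The following toy computation (our own illustration, not from the article) evaluates tr^2(Σ)/tr(Σ^2) for a weakly dependent covariance (Scenario I, with an AR(1)-type correlation matrix used here as a stand-in for bounded eigenvalues) and for a one-factor covariance Σ = BB⊤ + IN (Scenario II). The first ratio grows roughly linearly in N, while the second stays bounded, which is exactly what drives the comparison T ≥ 4tr^2(Σ)/tr(Σ^2).

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_ratio(Sigma):
    """tr(Sigma)^2 / tr(Sigma^2), the quantity entering T >= 4 tr^2(Sigma)/tr(Sigma^2)."""
    return np.trace(Sigma) ** 2 / np.trace(Sigma @ Sigma)

for N in (100, 200, 500):
    # Scenario I: bounded eigenvalues (AR(1)-type correlation with rho = 0.5)
    rho = 0.5
    Sigma_weak = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    # Scenario II: one strong latent factor, Sigma = B B' + I_N
    B = rng.standard_normal((N, 1))
    Sigma_factor = B @ B.T + np.eye(N)
    print(N, round(trace_ratio(Sigma_weak), 1), round(trace_ratio(Sigma_factor), 1))
# The first ratio grows with N; the second stays close to a constant.
```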
3 Simulation Evidence

To evaluate the finite sample performance of the proposed test statistic, we consider two simulation examples in this section. The first example considers the situation in which the factors Xt are independently distributed, whereas the second example is intended to mimic the commonly used FF three-factor model; as a result, the factors Xt exhibit strong serial correlation and heterogeneous variance in the second example. For purposes of comparison, we also present the results for the testing procedures proposed by Pesaran and Yamagata (2015).

Example 3.1: The first example is very simple; it is intended to verify the statistical theory of the proposed test. Specifically, the response Yit is generated according to model (2.1). Here, βi for i∈{1,⋯,N} is independently generated from a uniform distribution on [0,1]. In addition, each element of Xt is independently generated from a standard normal distribution. To verify the robustness of the proposed testing method, ɛit is generated from either a normal distribution, a standardized exponential distribution, or a mixture distribution. Moreover, to generate a non-sparse covariance structure of the error term, we generate Et from a factor structure in line with Scenario II of Section 2. Specifically, Et follows the latent factor structure Et=BZt+Et*, where each element of the factor loadings B∈ℝN×1, the common factor Zt∈ℝ, and the random errors Et* are all independently simulated from a standard normal distribution. In this example, we consider two different numbers of time periods (T = 50 and 100), three different numbers of assets (N = 100, 200, and 500), and three different predictor dimensions (p = 0, 1, and 3). The dimension of the projection matrix is set to k=[T/2] according to the discussion in Section 2.3 (see Footnote 1). For a fixed parameter setting (i.e., fixed N, T, and k), a total of 1000 realizations are conducted at the pre-specified significance level 0.05. Table 2 presents the sizes of the random projection test and of the PY tests (Jα(1) and Jα(2), as defined in Pesaran and Yamagata (2015)). In particular, our random projection test is quite robust to departures from the normal error assumption.

Table 2. Sizes of our proposed test and the PY tests for Example 3.1 for different error distributions and various predictor dimensions

                          Normal                   Exponential              Mixture
T    N                    p=0    p=1    p=3        p=0    p=1    p=3        p=0    p=1    p=3
Random projection test
50   100                  0.055  0.062  0.043      0.058  0.067  0.038      0.046  0.047  0.048
50   200                  0.046  0.060  0.031      0.042  0.049  0.061      0.041  0.055  0.066
50   500                  0.042  0.057  0.044      0.039  0.043  0.069      0.033  0.051  0.070
100  100                  0.061  0.055  0.060      0.044  0.065  0.054      0.051  0.056  0.065
100  200                  0.053  0.046  0.068      0.036  0.029  0.035      0.049  0.065  0.069
100  500                  0.070  0.061  0.044      0.068  0.062  0.029      0.049  0.061  0.073
Jα(1) test
50   100                  0.022  0.035  0.042      0.027  0.031  0.033      0.029  0.034  0.026
50   200                  0.034  0.028  0.030      0.041  0.040  0.041      0.033  0.022  0.027
50   300                  0.026  0.032  0.031      0.040  0.035  0.020      0.032  0.027  0.036
100  100                  0.026  0.020  0.032      0.044  0.031  0.029      0.021  0.037  0.043
100  200                  0.043  0.025  0.048      0.031  0.034  0.037      0.022  0.036  0.033
100  300                  0.055  0.042  0.039      0.034  0.028  0.021      0.025  0.039  0.036
Jα(2) test
50   100                  0.016  0.015  0.034      0.029  0.017  0.032      0.025  0.020  0.027
50   200                  0.036  0.032  0.028      0.034  0.028  0.017      0.033  0.036  0.028
50   500                  0.025  0.036  0.029      0.012  0.025  0.035      0.031  0.020  0.027
100  100                  0.023  0.035  0.026      0.025  0.033  0.047      0.027  0.034  0.010
100  200                  0.020  0.025  0.042      0.012  0.038  0.027      0.035  0.015  0.026
100  500                  0.024  0.017  0.021      0.025  0.040  0.022      0.020  0.040  0.033

We next study the power of the proposed testing procedure. To this end, we specify two different types of alternative hypotheses. The first is the non-sparse (dense) alternative, where α=τ(α1,⋯,αN)⊤∈ℝN. Here, each αj is simulated from a standard normal distribution, and τ is selected so that the signal strength of α, defined by α⊤Σ^{-1}α, lies between 0 and 1. The second is the sparse alternative, in which most of the αjs are 0. Similarly, we set α=τ(α1,⋯,αN)⊤, where the αjs are simulated from a standard normal distribution for j ≤ 5 and set to αj=0 for j > 5. The scaling τ is selected in the same way as for the non-sparse alternative. A sketch of this data-generating process is given below.
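As a minimal sketch (our own, with hypothetical function and argument names) of the Example 3.1 data-generating process with normal errors and a one-factor, non-sparse error covariance: the scaling tau is passed directly here, whereas the article calibrates it so that α⊤Σ^{-1}α equals a target signal strength, and the exponential and mixture error variants would replace the standard-normal draws for Et*.

```python
import numpy as np

def simulate_example_3_1(T, N, p, tau=0.0, sparse=False, seed=0):
    """Generate (Y, X) under model (2.1) with a one-factor (non-sparse) error
    covariance, in the spirit of Example 3.1.  tau = 0 gives the null."""
    rng = np.random.default_rng(seed)
    beta = rng.uniform(0.0, 1.0, size=(N, p))      # loadings on the observed factors
    X = rng.standard_normal((T, p))                # observed factors
    alpha = rng.standard_normal(N)
    if sparse:
        alpha[5:] = 0.0                            # sparse alternative: first five entries only
    alpha *= tau                                   # the article calibrates tau via alpha' Sigma^{-1} alpha
    B = rng.standard_normal((N, 1))                # latent-factor loadings
    Z = rng.standard_normal((T, 1))                # latent factor
    E = Z @ B.T + rng.standard_normal((T, N))      # errors: E_t = B Z_t + E_t*
    Y = alpha + X @ beta.T + E                     # T x N matrix of excess returns
    return Y, X

# Example usage: one size-experiment draw with T = 50, N = 200, p = 3 under the null.
Y, X = simulate_example_3_1(T=50, N=200, p=3, tau=0.0)
```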
The power results for both the sparse and the non-sparse alternatives with signal strengths of 0.25, 0.50, and 0.75 are listed in Table 3.

Table 3. Powers of our proposed test and the PY tests for Example 3.1 under different signal strengths for both sparse and non-sparse alternatives

                    Sparse alternative           Dense alternative
T     N             RP     Jα(1)  Jα(2)          RP     Jα(1)  Jα(2)
Signal strength SS = 0.25
50    100           0.126  0.064  0.065          0.341  0.179  0.180
50    200           0.107  0.042  0.055          0.301  0.142  0.144
50    300           0.097  0.031  0.042          0.259  0.120  0.125
100   100           0.249  0.133  0.141          0.786  0.581  0.602
100   200           0.166  0.106  0.104          0.742  0.558  0.587
100   300           0.132  0.075  0.082          0.699  0.532  0.546
Signal strength SS = 0.5
50    100           0.314  0.142  0.155          1.000  0.896  0.907
50    200           0.198  0.081  0.078          1.000  0.857  0.869
50    300           0.162  0.054  0.051          1.000  0.785  0.806
100   100           0.654  0.411  0.447          1.000  1.000  1.000
100   200           0.538  0.258  0.261          1.000  1.000  1.000
100   300           0.442  0.179  0.166          1.000  1.000  1.000
Signal strength SS = 0.75
50    100           0.642  0.304  0.321          1.000  1.000  1.000
50    200           0.384  0.140  0.147          1.000  1.000  1.000
50    300           0.247  0.086  0.099          1.000  1.000  1.000
100   100           0.933  0.755  0.768          1.000  1.000  1.000
100   200           0.827  0.608  0.649          1.000  1.000  1.000
100   300           0.721  0.478  0.501          1.000  1.000  1.000
According to the results, our random-projection-based method dominates the PY tests in terms of power across all simulation settings and all error distributions. This finding is most prominent when the signal is weak or when N is large. It is worth mentioning that, because a simulation-based sampling procedure is needed for our test, it is much more computationally expensive than the PY tests when the number of projections is large. For example, when T = 50 and N = 200, programming in R on an Intel(R) Xeon(R) CPU (2.40 GHz), our test on average needs 100.6 (×1000) seconds to complete the computation, whereas the PY tests take less than 4.6 (×1000) seconds.

Example 3.2: The second example is modified from Pesaran and Yamagata (2015). It is designed to mimic the FF three-factor model and to match the real data analysis implemented in the next section. Specifically, we consider the following data-generating process (DGP) according to Equation (2.1):

Yit = αi + ∑_{k=1}^{p} βik Xkt + ɛit.   (3.1)

We set p = 3, and the three factors (X1t, X2t, and X3t) represent the FF three factors: the market factor, small minus big (SMB), and high minus low (HML). (A detailed explanation of these three factors is relegated to the real data analysis.)
We next generate the factors as follows. Each factor follows an AR(1) process whose innovations have a GARCH(1,1) conditional variance, with all coefficients estimated from the real data discussed in Section 4:

X1t − 0.25 = 0.08(X1,t−1 − 0.25) + h1t^{1/2}ζ1t,   (market factor)
X2t − 0.13 = 0.07(X2,t−1 − 0.13) + h2t^{1/2}ζ2t,   (SMB factor)
X3t − 0.06 = 0.03(X3,t−1 − 0.06) + h3t^{1/2}ζ3t,   (HML factor)

where ζkt is simulated from a standard normal distribution and the variance term hkt (k = 1, 2, 3) is generated as

h1t = 0.52 + 0.69h1,t−1 + 0.11ζ1,t−1^2,   (market factor)
h2t = 0.18 + 0.74h2,t−1 + 0.14ζ2,t−1^2,   (SMB factor)
h3t = 0.14 + 0.72h3,t−1 + 0.18ζ3,t−1^2.   (HML factor)

Similar to Pesaran and Yamagata (2015), the above processes are simulated over the periods t=−49,⋯,0,1,⋯,T with initial values Xk,−50=0 and hk,−50=1 for k = 1, 2, 3, and we use the simulated data for observations t=1,⋯,T in our final simulation studies. A sketch of this factor generator is given below.
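The following Python sketch (our own illustration; the function name and the handling of the start-up values are assumptions, and the burn-in of 50 periods approximates the article's t = −49,…,0 initialization) simulates the three AR(1) factors with GARCH(1,1) innovations using the coefficients quoted above.

```python
import numpy as np

def simulate_ff_factors(T, burn=50, seed=0):
    """AR(1) factors with GARCH(1,1) innovations, in the spirit of Example 3.2."""
    rng = np.random.default_rng(seed)
    mu  = np.array([0.25, 0.13, 0.06])              # means: market, SMB, HML
    phi = np.array([0.08, 0.07, 0.03])              # AR(1) coefficients
    w   = np.array([0.52, 0.18, 0.14])              # GARCH intercepts
    b   = np.array([0.69, 0.74, 0.72])              # GARCH persistence
    a   = np.array([0.11, 0.14, 0.18])              # ARCH coefficients
    X = np.zeros((T + burn, 3))                     # X_{k,-50} = 0
    h = np.ones(3)                                  # h_{k,-50} = 1
    zeta_prev = np.zeros(3)
    for t in range(1, T + burn):
        h = w + b * h + a * zeta_prev ** 2          # GARCH(1,1) variance recursion
        zeta = rng.standard_normal(3)
        X[t] = mu + phi * (X[t - 1] - mu) + np.sqrt(h) * zeta
        zeta_prev = zeta
    return X[burn:]                                 # keep observations t = 1,...,T

X = simulate_ff_factors(T=100)
```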
We next generate the factor loadings on these three factors. To mimic the FF three-factor model and the analysis in the real data example, the factor loadings are generated as follows. We first fit model (3.1) using the FF three factors and the weekly returns of 480 stocks in the U.S. stock market for the period 2011–2014 (the details of the analysis are presented in Section 4). We then obtain the estimated factor loadings for these 480 stocks, denoted by B^real∈ℝ3×480, and the resulting error covariance estimator Σ^real∈ℝ480×480. For any fixed dimension N, we randomly draw a subset of N stocks from this raw pool of 480 stocks and denote the index set by setN. We then let the factor loadings in model (3.1) be B=B^real,setN, where B^real,setN stands for the submatrix of B^real associated with the subset setN. To capture the cross-sectional error dependence, we lastly simulate the error term as follows. The error term Et is simulated from a multivariate distribution with mean 0 and covariance matrix Σ=Σ^real_{setN,setN}, where Σ^real_{setN,setN} denotes the N×N submatrix of Σ^real associated with the randomly selected subset setN. To check the robustness of the proposed method, we consider three different error distributions as in Example 3.1. In addition, we consider two different sample sizes (T = 50 and 100) and three different numbers of assets (N = 100, 200, and 300). The dimension of the projection matrix is set to k=[T/2] according to the discussion in Section 2.3. For a fixed parameter setting (i.e., N, T, and k), a total of 1000 realizations are conducted at the pre-specified significance level 0.05. All of the results of the random projection test, together with those of the PY tests, are presented in Table 4. According to the results, all of these methods control the size very well for the three different error distributions. We next study the power properties of the test statistics. The definitions of the alternative hypotheses are the same as in Example 3.1. The results are listed in Table 5. To save space, we only present the results for the normal distribution; the results for the exponential and mixture distributions are quantitatively similar. The pattern of the results is quite similar to that of Example 3.1: in most cases, the proposed random projection test is uniformly more powerful than the PY tests in these particular examples.

Table 4. Sizes of our proposed test and the PY tests for Example 3.2

                    Normal                    Exponential               Mixture
T     N             RP     Jα(1)  Jα(2)       RP     Jα(1)  Jα(2)       RP     Jα(1)  Jα(2)
50    100           0.056  0.025  0.027       0.053  0.032  0.034       0.042  0.029  0.042
50    200           0.044  0.031  0.025       0.042  0.041  0.026       0.045  0.030  0.032
50    300           0.051  0.018  0.023       0.043  0.020  0.024       0.041  0.021  0.019
100   100           0.061  0.015  0.033       0.047  0.026  0.030       0.050  0.019  0.044
100   200           0.044  0.034  0.048       0.039  0.030  0.015       0.047  0.026  0.029
100   300           0.046  0.026  0.041       0.042  0.032  0.029       0.055  0.037  0.018
Table 5. Powers of our proposed test and the PY tests for Example 3.2 under different signal strengths for both sparse and non-sparse alternatives

                    Sparse alternative           Dense alternative
T     N             RP     Jα(1)  Jα(2)          RP     Jα(1)  Jα(2)
Signal strength SS = 0.5
50    100           0.129  0.028  0.048          0.174  0.037  0.049
50    200           0.103  0.044  0.026          0.128  0.038  0.046
50    300           0.072  0.026  0.028          0.086  0.031  0.040
100   100           0.185  0.087  0.083          0.301  0.228  0.207
100   200           0.130  0.052  0.059          0.255  0.153  0.142
100   300           0.105  0.041  0.039          0.242  0.129  0.121
Signal strength SS = 1
50    100           0.141  0.047  0.049          0.435  0.209  0.209
50    200           0.142  0.039  0.032          0.397  0.185  0.187
50    300           0.096  0.025  0.023          0.346  0.146  0.162
100   100           0.305  0.181  0.175          0.902  0.712  0.737
100   200           0.228  0.098  0.090          0.866  0.604  0.648
100   300           0.147  0.062  0.076          0.833  0.554  0.573
Signal strength SS = 1.5
50    100           0.207  0.104  0.105          0.904  0.608  0.602
50    200           0.178  0.078  0.067          0.822  0.565  0.598
50    300           0.145  0.046  0.033          0.760  0.490  0.521
100   100           0.435  0.240  0.234          1.000  1.000  1.000
100   200           0.365  0.157  0.146          1.000  1.000  1.000
100   300           0.215  0.108  0.114          1.000  1.000  1.000
4 Real Data Analysis

4.1 Data Description

To further demonstrate the usefulness of the proposed method, we next analyze the market efficiency of the U.S. stock market. We collect data for securities in the Standard & Poor's 500 (S&P 500) index for the period November 25, 2011 to December 31, 2014. After eliminating the assets with missing observations, there remain T=160 time periods for each of the N=480 firms. Following the definitions of Fama and French (1993), SMB (the average return on three small portfolios minus the average return on three big portfolios) and HML (the average return on three value portfolios minus the average return on three growth portfolios) are obtained from Ken French's data library web page. The one-month U.S. Treasury bill rate is chosen as the risk-free rate, and the value-weighted returns on all NYSE, AMEX, and NASDAQ stocks obtained from CRSP are used as a proxy for market returns. To access the dataset used in our empirical analysis, we refer the reader to the supplementary data.

4.2 Testing Results

We next use the proposed random projection test to assess the market efficiency of the U.S. stock market on the basis of the observations from the period 2011 to 2014. According to our simulation studies and the theoretical results in Theorem 1, the proposed test is valid even when the number of assets N is much larger than the sample size T. We therefore adopt the following rolling-window procedure to assess the dynamic movement of the market efficiency of the U.S. stock market. Specifically, for a given estimation-window length L, for each τ∈{1,⋯,160−L}, we estimate the CAPM and FF three-factor regressions using the data from period τ to τ+L−1:

rit − rft = α^i + β^i(rmt − rft) + ɛ^it,
rit − rft = α^i + β^1,i(rmt − rft) + β^2,i SMBt + β^3,i HMLt + ɛ^it,

for τ≤t≤τ+L−1, and then compute the random projection test statistic based on the estimated intercepts α^. By doing so, we obtain a sequence of p-values for both the CAPM and the FF three-factor model (a sketch of this rolling procedure is given below).
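The following Python sketch (our own outline of the rolling procedure, chaining the hypothetical rp_average_statistic and rp_null_pvalue helpers from the earlier sketches; the argument names and the reduced Monte Carlo sizes are assumptions) illustrates how the rolling p-values could be computed from a T×N matrix of excess returns and a T×p factor matrix.

```python
import numpy as np

def rolling_rp_pvalues(R_excess, F, L=100, k=None, seed=0):
    """Rolling-window RP test p-values.

    R_excess : T x N matrix of excess stock returns (r_it - r_ft).
    F        : T x p matrix of factors (p = 1 for the CAPM, p = 3 for FF)."""
    T, N = R_excess.shape
    pvals = []
    for tau in range(T - L + 1):
        Yw = R_excess[tau:tau + L]                  # window of length L
        Xw = F[tau:tau + L]
        kw = k if k is not None else L // 2         # k = [T/2] within the window
        Qw = np.eye(L) - Xw @ np.linalg.solve(Xw.T @ Xw, Xw.T)
        one = np.ones(L)
        T_ave = rp_average_statistic(Yw, Xw, kw, seed=seed)
        # Reduced Na, Nb keep the sketch fast; the article uses 1000 for both.
        pvals.append(rp_null_pvalue(T_ave, L, N, kw, Qw, one, Na=200, Nb=100, seed=seed))
    return np.array(pvals)
```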
Because the PY test relies on a sparse structure of the error term, we employ the CD test of Pesaran (2015) to assess the cross-sectional dependence of the error terms; the resulting p-value is 0, which indicates that the error terms are indeed strongly cross-sectionally dependent. Figure 1 plots the evolution of the p-values for both the CAPM and the FF three-factor model from 2011 to 2014 based on L = 100, that is, a rolling testing window of nearly two years (as a robustness check, we also considered the window length L = 50, and the results show a pattern similar to that in Figure 1). According to the results, the U.S. stock market is efficient during most of the study period: under the FF three-factor model we cannot reject the null hypothesis of market efficiency in most cases (the p-values are below 5% in only 3% of the periods). In addition, the FF three-factor model (Fama and French, 1993) is better than the CAPM (Sharpe, 1964) at explaining the mean–variance efficiency of the U.S. stock market (see Footnote 2).

Figure 1. The movement of the market efficiency of the U.S. stock market based on the CAPM and the FF three-factor model.

5 Concluding Discussion

In this study, we propose a testing procedure based on random projections to test mean–variance efficiency. The new test is valid even when the number of assets is much larger than the number of time periods. Compared with Pesaran and Yamagata's (2015) test, which requires the covariance matrix of the error terms to be sparse, the new test works under a non-sparse covariance structure. We show theoretically that the proposed test statistic is asymptotically normal. We also carefully discuss the power function of the new test and find that, in some general cases, it provides higher power than Pesaran and Yamagata's (2015) test. The finite sample performance of the proposed test is confirmed by two simulation studies, and an empirical example from the U.S. stock market further demonstrates its usefulness. Future research could include improving the power of tests under the conditional factor model setup (Li and Yang, 2011; Ang and Kristensen, 2012). Unlike the setting studied in this article, the conditional factor model allows the systematic risk βi to change over time and is therefore more flexible. Such results would help shed new light on the mean–variance efficiency of capital markets, especially complicated emerging markets. In addition, it is of interest to extend our approach to accommodate unbalanced panel data. Lastly, the normality assumption is needed to facilitate the theoretical proofs; it is also of interest to extend our results to accommodate non-normal data. We believe these efforts would broaden the usefulness of our random projection test.
Because the PY test relies on a sparse structure for the error terms, we also employ the CD test of Pesaran (2015) to assess the cross-sectional dependence of the error terms; the resulting p-value is 0, which indicates that the error terms are indeed strongly cross-sectionally dependent. Figure 1 plots the evolution of the p-values for both the CAPM and the FF three-factor model from 2011 to 2014 based on L = 100, so that the rolling testing window is nearly two years (as a robustness check, we also considered the window length L = 50, and the results display a pattern similar to that in Figure 1). According to the results, the U.S. stock market is efficient during the study period, and we cannot reject the null hypothesis of market efficiency in most cases under the FF three-factor model (only 3% of the periods have p-values below 5%). In addition, the FF three-factor model (Fama and French, 1993) is better than the CAPM (Sharpe, 1964) in explaining the mean–variance efficiency of the U.S. stock market.2

Figure 1. The movement of the market efficiency of the U.S. stock market based on the CAPM and the FF three-factor model.

5 Concluding Discussion

In this study, we proposed a testing procedure, based on random projections, for testing mean–variance efficiency. The new test remains valid even when the number of assets is much larger than the number of time periods. Compared with the test of Pesaran and Yamagata (2015), which requires the covariance matrix of the error terms to be sparse, the new test works under a non-sparse covariance structure. We show theoretically that the proposed test statistic is asymptotically normal. We also carefully discussed the power function of the new test and found that, in fairly general settings, it can deliver higher power than the test of Pesaran and Yamagata (2015). The finite sample performance of the proposed test is confirmed by two simulation studies. An empirical example from the U.S. stock market further demonstrates its usefulness.

Future research could include improving the power of tests under the conditional factor model setup (Li and Yang, 2011; Ang and Kristensen, 2012). Unlike the model studied in this article, the conditional factor model allows the systematic risk $\beta_i$ to change over time and is therefore more flexible. Such results would help shed new light on the mean–variance efficiency of capital markets, especially complicated emerging markets. In addition, it is of interest to extend our approach to accommodate unbalanced panel data. Lastly, the normality assumption is needed to facilitate the theoretical proofs; it is also of interest to extend our results to accommodate non-normal data. We believe these efforts would broaden the usefulness of our random projection test.

Appendix A: A Useful Lemma

To facilitate the theoretical proof of the proposed random projection test, we first present the following lemma; its proof can be found directly in Lopes, Jacob, and Wainwright (2015), and we thus omit it.

Lemma 1: Let $P_k \in \mathbb{R}^{N \times k}$ be a random matrix that is full rank with probability 1, and let $\Sigma^{1/2} P_k = QR$ be a thin QR factorization (Golub and Van Loan, 1996). Then we have
$$A \sim T\, E_Q\{Q(Q^\top W_N Q)^{-1} Q^\top\}, \tag{A.1}$$
where $A = E_{P_k}\{\Sigma^{1/2} P_k (P_k^\top \hat{\Sigma} P_k)^{-1} P_k^\top \Sigma^{1/2}\}$, "$\sim$" means that the two sides have the same distribution, and $W_N \sim \mathcal{W}_N(T, I_N)$ is a Wishart matrix that is independent of $Q$.

Appendix B: Proof of Theorem 1

Define $u_t = \Sigma^{-1/2} E_t$, $c = (c_1, \cdots, c_T)^\top = (\mathbf{1}^\top Q \mathbf{1})^{-1/2} Q \mathbf{1}$, and $\eta_T = \sum_{t=1}^T c_t u_t$. Obviously, $\eta_T \sim N(0, I_N)$. Moreover, under the null hypothesis of $\alpha = 0$, we can rewrite the test statistic $T_{\mathrm{ave}}$ in quadratic form as
$$T_{\mathrm{ave}} = \eta_T^\top A \eta_T, \tag{A.2}$$
where $A = E_{P_k}\{\Sigma^{1/2} P_k (P_k^\top \hat{\Sigma} P_k)^{-1} P_k^\top \Sigma^{1/2}\}$. Note that $A$ is a random matrix related to the sample covariance matrix $\hat{\Sigma}$. Moreover, when $Y$ indeed follows a normal distribution, the sample mean and the sample covariance matrix are independent. As a result, $A$ and $\eta_T$ are independent under the normality assumption (C2). Consequently, our overall strategy is to work conditionally on $A$ and use the representation
$$P\Big(\frac{\eta_T^\top A \eta_T - \mu_N}{\sigma_N} \le x\Big) = E_A\, P_{\eta_T}\Big(\frac{\eta_T^\top A \eta_T - \mu_N}{\sigma_N} \le x \,\Big|\, A\Big)$$
for any $x \in \mathbb{R}$. According to Appendix B.4 in Lopes, Jacob, and Wainwright (2015), we have $\|A\|_{\mathrm{op}} = o_{p_A}(\{\mathrm{tr}(A^2)\}^{1/2})$, where $\|A\|_{\mathrm{op}}$ denotes the maximum eigenvalue of $A$ and $p_A$ stands for the probability measure related to the distribution of $A$. This implies the Lyapunov condition and hence the Lindeberg condition. Thus, by the central limit theorem, we obtain
$$\sup_{x \in \mathbb{R}} \Big| P_{\eta_T}\Big(\frac{\eta_T^\top A \eta_T - \mathrm{tr}(A)}{\sqrt{2\,\mathrm{tr}(A^2)}} \le x \,\Big|\, A\Big) - \Phi(x) \Big| = o_{p_A}(1).$$
Consequently, by the dominated convergence theorem, to prove the asymptotic normality of $T_{\mathrm{ave}}$, it suffices to show the following two results:
$$\mathrm{tr}(A) - \mu_N = o_{p_A}(T^{1/2}) \quad \text{and} \quad \sqrt{2\,\mathrm{tr}(A^2)} - \sigma_N = o_{p_A}(T^{1/2}).$$
These results can be obtained directly from Appendices (B.2) and (B.3) of Lopes, Jacob, and Wainwright (2015).
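The standardization used in the proof rests on the elementary facts that, for $\eta \sim N(0, I_N)$ and a fixed symmetric matrix $A$, $E(\eta^\top A \eta) = \mathrm{tr}(A)$ and $\mathrm{var}(\eta^\top A \eta) = 2\,\mathrm{tr}(A^2)$. The following small simulation, with an arbitrarily generated matrix playing the role of $A$ (which the proof conditions on), illustrates these moments and the approximate normality of the standardized quadratic form; it is a sketch for intuition only, not part of the formal argument.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 200, 5000   # dimension and number of Monte Carlo draws (illustrative)

# An arbitrary positive semi-definite matrix standing in for A, held fixed
# throughout, exactly as the proof works conditionally on A.
M = rng.standard_normal((N, N))
A = M @ M.T / N

mu = np.trace(A)                       # E(eta' A eta)
sigma = np.sqrt(2 * np.trace(A @ A))   # standard deviation of eta' A eta

eta = rng.standard_normal((B, N))      # B independent draws of eta ~ N(0, I_N)
q = np.einsum("bi,ij,bj->b", eta, A, eta)
z = (q - mu) / sigma

# The sample mean and variance of z should be close to 0 and 1, and a
# histogram of z is close to the standard normal density.
print(z.mean(), z.var())
```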
Appendix C: Proof of Theorem 2

Under the alternative hypothesis, we can rewrite $T_{\mathrm{ave}}$ as
$$T_{\mathrm{ave}} = \eta_T^\top A \eta_T + 2\delta_T \eta_T^\top A \Sigma^{-1/2}\alpha + \delta_T^2\, \alpha^\top \Sigma^{-1/2} A \Sigma^{-1/2}\alpha \doteq T_{\mathrm{ave},1} + T_{\mathrm{ave},2} + T_{\mathrm{ave},3},$$
where $\delta_T = \sum_{t=1}^T c_t$. According to Appendix (B.3) of Lopes, Jacob, and Wainwright (2015), we have $\mathrm{tr}(A^2) = O_p(T)$. Thus, by Theorem 1, in order to obtain the power function, we only need to show that $T_{\mathrm{ave},2} = o_p(T^{1/2})$ and $T_{\mathrm{ave},3} = \frac{T(T-p)}{T-k-1}\bar{\Delta}_k + o_p(T^{1/2})$.

We first consider $T_{\mathrm{ave},2}$. Note that $E(T_{\mathrm{ave},2}) = E_A\{E_{\eta_T}(2\delta_T \eta_T^\top A \Sigma^{-1/2}\alpha \mid A)\} = 0$. As a result, we obtain
$$\mathrm{var}(T_{\mathrm{ave},2}) = E_A\{\mathrm{var}_{\eta_T}(2\delta_T \eta_T^\top A \Sigma^{-1/2}\alpha \mid A)\} = 4 E(\delta_T^2)\, E_A(\alpha^\top \Sigma^{-1/2} A^2 \Sigma^{-1/2}\alpha).$$
Obviously, $E(\delta_T^2) = T - p = O(T)$. Thus, it suffices to show that $E_A(\alpha^\top \Sigma^{-1/2} A^2 \Sigma^{-1/2}\alpha) = o(1)$. By Lemma 1, we have
$$\begin{aligned}
E_A(\alpha^\top \Sigma^{-1/2} A^2 \Sigma^{-1/2}\alpha)
&= E_{W_N}\big[\big\|E_Q\{Q(Q^\top T^{-1} W_N Q)^{-1} Q^\top\}\Sigma^{-1/2}\alpha\big\|_2^2\big] \\
&\le E_Q E_{W_N}\big(\big\|Q(Q^\top T^{-1} W_N Q)^{-1} Q^\top \Sigma^{-1/2}\alpha\big\|_2^2\big) \\
&= E_Q\big[\alpha^\top \Sigma^{-1/2} Q\, E_{W_N}\{(Q^\top T^{-1} W_N Q)^{-1}\}\, Q^\top \Sigma^{-1/2}\alpha\big] \\
&= E_Q\Big\{\alpha^\top \Sigma^{-1/2} Q \Big(\tfrac{T}{T-k-1} I_k\Big) Q^\top \Sigma^{-1/2}\alpha\Big\} \\
&= \tfrac{T}{T-k-1}\, E_Q(\alpha^\top \Sigma^{-1/2} Q Q^\top \Sigma^{-1/2}\alpha) \\
&\le \tfrac{T}{T-k-1}\, \alpha^\top \Sigma^{-1}\alpha = o(1),
\end{aligned}$$
where the first inequality follows from Jensen's inequality and the last inequality follows from the fact that the maximum singular value of $Q$ is no larger than 1.

We next consider $T_{\mathrm{ave},3}$. From Lemma 1, we have
$$\begin{aligned}
E(T_{\mathrm{ave},3}) &= E(\delta_T^2\, \alpha^\top \Sigma^{-1/2} A \Sigma^{-1/2}\alpha) = E(\delta_T^2)\, \alpha^\top \Sigma^{-1/2} E(A) \Sigma^{-1/2}\alpha \\
&= (T-p)\, \alpha^\top \Sigma^{-1/2} E_Q\big[Q\, E_{W_N}\{T (Q^\top W_N Q)^{-1}\}\, Q^\top\big] \Sigma^{-1/2}\alpha \\
&= (T-p)\, \alpha^\top \Sigma^{-1/2} E_Q\Big(Q\, \tfrac{T}{T-k-1} I_k\, Q^\top\Big) \Sigma^{-1/2}\alpha \\
&= \tfrac{T(T-p)}{T-k-1}\, \alpha^\top \Sigma^{-1/2} E_Q(Q Q^\top) \Sigma^{-1/2}\alpha = \tfrac{T(T-p)}{T-k-1}\, \bar{\Delta}_k.
\end{aligned}$$
As a result, it suffices to show that $T^{-1}\mathrm{var}(T_{\mathrm{ave},3}) = o(1)$. Define $v = \Sigma^{-1/2}\alpha$ and $u = Q^\top v/\|Q^\top v\|$. Consequently,
$$\begin{aligned}
T^{-1}\mathrm{var}(T_{\mathrm{ave},3}) &= T^{-1} E_{W_N}\Big(\Big[\delta_T^2\, v^\top E_Q\{T Q(Q^\top W_N Q)^{-1} Q^\top\} v - \tfrac{T(T-p)}{T-k-1}\|Q^\top v\|_2^2\Big]^2\Big) \\
&\le T^{-1} E(\delta_T^4)\, E_{W_N}\Big(\Big[v^\top E_Q\{T Q(Q^\top W_N Q)^{-1} Q^\top\} v - \tfrac{T}{T-k-1}\|Q^\top v\|_2^2\Big]^2\Big) \\
&\quad + T^{-1} E\big[\{\delta_T^2 - (T-p)\}^2\big]\, E\Big\{\Big(\tfrac{T}{T-k-1}\|Q^\top v\|_2^2\Big)^2\Big\} \doteq V_1 + V_2.
\end{aligned}$$
By the fact that $\mathrm{var}(\delta_T^2) = o(T)$ and
$$E_Q(\|Q^\top v\|_2^4) \le \|v\|^4 = (\alpha^\top \Sigma^{-1}\alpha)^2 = o(1), \tag{A.3}$$
we infer that the second term $V_2$ is $o(1)$. Furthermore, by Jensen's inequality,
$$\begin{aligned}
T\, E_{W_N}\Big(\Big[v^\top E_Q\{T Q(Q^\top W_N Q)^{-1} Q^\top\} v - \tfrac{T}{T-k-1}\|Q^\top v\|_2^2\Big]^2\Big)
&\le T^3 E_Q\, E_{W_N}\Big(\Big[v^\top Q(Q^\top W_N Q)^{-1} Q^\top v - \tfrac{1}{T-k-1}\|Q^\top v\|_2^2\Big]^2\Big) \\
&= T^3 E_Q\Big(\|Q^\top v\|_2^4\, E_{W_k}\Big[\Big(u^\top W_k^{-1} u - \tfrac{1}{T-k-1}\Big)^2\Big]\Big),
\end{aligned}$$
where $W_k = Q^\top W_N Q$. According to Lemma 6 in Lopes, Jacob, and Wainwright (2015), we have
$$T^3 E_{W_k}\Big[\Big(u^\top W_k^{-1} u - \tfrac{1}{T-k-1}\Big)^2\Big] = O(1).$$
This, together with the fact that $E(\delta_T^2) = O(T)$ and Equation (A.3), yields that the first term $V_1$ is also $o(1)$, which completes the proof.
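A key ingredient of the calculations above is the inverse-Wishart moment $E\{(Q^\top W_N Q)^{-1}\} = I_k/(T-k-1)$, which follows because $Q^\top W_N Q \sim \mathcal{W}_k(T, I_k)$ when $Q$ has orthonormal columns. The following simulation, with illustrative and arbitrarily chosen values of $T$, $k$, and the number of replications, checks this identity numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k, B = 100, 20, 2000   # illustrative dimensions and number of replications

# If Z has iid N(0, 1) entries and shape (T, k), then Z.T @ Z ~ Wishart_k(T, I_k),
# which is the distribution of Q' W_N Q for Q with orthonormal columns.
inv_mean = np.zeros((k, k))
for _ in range(B):
    Z = rng.standard_normal((T, k))
    inv_mean += np.linalg.inv(Z.T @ Z)
inv_mean /= B

# Theoretical mean of the inverse: I_k / (T - k - 1).
print(np.diag(inv_mean).mean(), 1.0 / (T - k - 1))  # the two numbers should be close
```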
Supplementary Data

Supplementary data are available at Journal of Financial Econometrics online.

Footnotes

1 In fact, we also tried k = [κT] for κ = 0.4 and 0.6 to check robustness; the results for the different choices of k display a very similar pattern. Consequently, we only report the results based on k = [T/2] to save space.

2 It is noteworthy that, to assess market efficiency, we only focus on the p-values over the 61 rolling periods and do not account for multiple testing. In addition, the results for the method of Pesaran and Yamagata (2015) are similar; we cannot reject the null of zero intercepts for all assets under either the CAPM or the FF three-factor model. To save space, the detailed results are not reported here.

References

Affleck-Graves J., McDonald B. 1989. Nonnormalities and Tests of Asset Pricing Theories. Journal of Finance 44: 889–908.
Ang A., Kristensen D. 2012. Testing Conditional Factor Models. Journal of Financial Economics 106: 132–156.
Bailey N., Kapetanios G., Pesaran M. H. 2016. Exponent of Cross-Sectional Dependence: Estimation and Inference. Journal of Applied Econometrics 31: 929–960.
Beaulieu M.-C., Dufour J.-M., Khalaf L. 2007. Multivariate Tests of Mean–Variance Efficiency with Possibly Non-Gaussian Errors: An Exact Simulation-Based Approach. Journal of Business and Economic Statistics 25: 398–410.
Blattberg R., Gonedes N. 1974. A Comparison of the Stable and Student Distributions as Statistical Models for Stock Prices. Journal of Business 47: 244–280.
Fama E. F., French K. R. 1993. Common Risk Factors in the Returns on Stocks and Bonds. Journal of Financial Economics 33: 3–56.
Fan J., Liao Y., Mincheva M. 2011. High Dimensional Covariance Matrix Estimation in Approximate Factor Models. Annals of Statistics 39: 3320–3356.
Gagliardini P., Ossola E., Scaillet O. 2016. Time-Varying Risk Premium in Large Cross-Sectional Equity Datasets. Econometrica 84: 985–1046.
Gibbons M. R., Ross S. A., Shanken J. 1989. A Test of the Efficiency of a Given Portfolio. Econometrica 57: 1121–1152.
Goeman J., van Houwelingen H., Finos L. 2011. Testing Against a High-Dimensional Alternative in the Generalized Linear Model: Asymptotic Type I Error Control. Biometrika 98: 381–390.
Golub G. H., Van Loan C. F. 1996. Matrix Computations, 3rd ed. Baltimore, MD: Johns Hopkins University Press.
Gospodinov N., Kan R., Robotti C. 2013. Misspecification-Robust Inference in Linear Asset-Pricing Models with Irrelevant Risk Factors. Working paper. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2579821.
Gungor S., Luger R. 2009. Exact Distribution-Free Tests of Mean–Variance Efficiency. Journal of Empirical Finance 16: 816–829.
Gungor S., Luger R. 2013. Testing Linear Factor Pricing Models with Large Cross-Sections: A Distribution-Free Approach. Journal of Business and Economic Statistics 31: 66–77.
Hsu D. 1982. A Bayesian Robust Detection of Shift in the Risk Structure of Stock Market Returns. Journal of the American Statistical Association 77: 29–39.
Jensen M. 1968. The Performance of Mutual Funds in the Period 1945–1964. Journal of Finance 23: 389–416.
Lan W., Wang H., Tsai C. L. 2014. Testing Covariates in High Dimensional Regression. Annals of the Institute of Statistical Mathematics 66: 279–301.
Li Y., Yang L. 2011. Testing Conditional Factor Models: A Nonparametric Approach. Journal of Empirical Finance 18: 972–992.
Lintner J. 1965. The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets. Review of Economics and Statistics 47: 13–37.
Lopes M., Jacob L., Wainwright M. J. 2015. A More Powerful Two-Sample Test in High Dimensions Using Random Projection. Available at: https://arxiv.org/pdf/1108.2401v3.pdf.
MacKinlay A. C., Richardson M. P. 1991. Using Generalized Method of Moments to Test Mean–Variance Efficiency. Journal of Finance 46: 511–527.
Markowitz H. M. 1952. Portfolio Selection. Journal of Finance 7: 77–91.
Pesaran M. H. 2015. Testing Weak Cross-Sectional Dependence in Large Panels. Econometric Reviews 34: 1089–1117.
Pesaran M. H., Yamagata T. 2015. Testing CAPM with a Large Number of Assets. Working paper. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2020423.
Sharpe W. F. 1964. Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. Journal of Finance 19: 425–442.
Tibshirani R. 1996. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society: Series B 58: 267–288.
Zou G. 1993. Asset Pricing Test under Alternative Distributions. Journal of Finance 48: 1925–1942.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).
Journal of Financial Econometrics – Oxford University Press
Published: Apr 1, 2018