Axioms, Volume 9 (4) – Oct 28, 2020



- Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
- ISSN: 2075-1680
- DOI: 10.3390/axioms9040126

Axioms Article

Nonlinear Approximations to Critical and Relaxation Processes

Simon Gluzman
Materialica + Research Group, Bathurst St. 3000, Apt. 606, Toronto, ON M6B 3B4, Canada; gluz@sympatico.ca
Received: 5 September 2020; Accepted: 22 October 2020; Published: 28 October 2020

Abstract: We develop nonlinear approximations to critical and relaxation phenomena, complemented by optimization procedures. In the first part, we discuss general methods for calculating critical indices and amplitudes from perturbative expansions. Several important examples of Stokes flow through 2D channels are brought up. Power series for the permeability, derived for small values of the wave amplitude, are employed to calculate various critical exponents in the regime of large amplitudes. Special nonlinear approximations valid for arbitrary values of the wave amplitude are derived from the expansions. In the second part, the technique developed for critical phenomena is applied to relaxation phenomena. The concept of time-translation invariance is discussed, and its spontaneous violation and restoration are considered. Emerging probabilistic patterns correspond to a local breakdown of time-translation invariance; their evolution leads to its complete (or partial) restoration. We estimate the typical time extent, amplitude and direction of such a restorative process. The new technique is based on the explicit introduction of an origin in time as an optimization parameter. After some transformations, we arrive at exponential and generalized exponential-type solutions (Gompertz approximants) with an explicit finite time scale, which is only implicit in the initial polynomial parameterization. The concept of a crash as a fast relaxation phenomenon, consisting of time-translation invariance breaking and restoration, is advanced.
Several COVID-related crashes in the time series for the Shanghai Composite and Dow Jones Industrial indices are discussed as an illustration.

Keywords: critical index; relaxation time; time-translation invariance breaking and restoration; market crash; COVID-19; Gompertz approximants

1. Introduction

Let the function F(x) of a real variable x ∈ [0, ∞) be defined by some rather complicated problem. The variable x > 0 can represent, e.g., a coupling constant or a concentration of particles. Of course, one should strive to find an exact solution to the problem [1,2]. Among such exact solutions one can find the solution to the celebrated Kondo problem and its thermodynamics. In a number of cases important for optical applications, such as Bessel beams and their generalizations [3], one can find intriguing physics already within the linear wave equation. In optics, there is a variety of exact solutions: spatial, temporal, and dark optical solitons and breathers all follow from the celebrated nonlinear Schrödinger equation and its modifications [4]. The so-called spatiotemporal X-waves, another type of closed-form solution, are being studied as well (see, e.g., [5]).

What if such a problem does not allow for an explicit solution for the sought function? Let us assume that some kind of perturbation theory can still be developed, so that it generates a formal power series about the point x = x_0 = 0,

    F(x) = Σ_{n=0}^{∞} c_n x^n,

for the function in question [6]. Perturbation methods can generate series that are (often slowly) convergent for all x smaller than the radius of convergence, or divergent for all x except x = 0.

Axioms 2020, 9, 126; doi:10.3390/axioms9040126; www.mdpi.com/journal/axioms

That is, for a smooth function F(x) [7], we have the asymptotic power series [7,8],

    F(x) ≃ Σ_{n=0}^{∞} c_n x^n.    (1)

Our task is to recast the series (1) into convergent expressions by means of nonlinear analytical constructs, the so-called approximants.
When literally all of the terms of a divergent series are known, one can invoke Euler or Borel summation [8]. Even for a convergent series there remains the problem of how to continue the expansion outside the radius of convergence [9], where the approximants could be useful. However, in realistic problems, only a few terms on the RHS of (1) can be calculated, and applying various approximants is the only available analytical option for the truncated series (5) and (A6). The approximants are conditioned to be asymptotically equivalent to the series (1), truncated at some finite number k. However, the approximants are able to generate an additional infinite number of coefficients, approximating the unknown exact coefficients. Determination of the best approximant is grounded solely on the empirical, numerical convergence [9] of the sequences of approximants.

One can always attempt to extrapolate the perturbative results by means of the Padé approximants P_{M,N}(x) [6,9]. The Padé approximant P_{M,N} can be understood as the ratio of two polynomials P_M(x) and Q_N(x), of order M and N, respectively. The diagonal Padé approximant of order N corresponds to the case M = N. Conventionally, Q_N(0) = 1. The coefficients of the polynomials are derived directly from the asymptotic equivalence with the given power series for the sought function F(x). Sometimes, when there is a need to stress the role of F(x), we write PadeApproximant[F[x], n, m]. The Padé approximant might possess a pole associated with a finite critical point, but it can only produce an integer critical index, while critical indices are usually not integers. The same concerns the large-variable behavior, where the power of x produced from extrapolation with some form of Padé approximants is always an integer. Unfortunately, solutions to many problems exhibit irrational functional behavior. Such behavior cannot be properly described by the standard rational Padé approximants.
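As an aside, the asymptotic-equivalence conditions that fix the coefficients of P_M and Q_N reduce to a small linear system. A minimal sketch (the function name `pade` is ours, not from the paper):

```python
import numpy as np

def pade(c, M, N):
    """Padé approximant P_{M,N}(x) = P_M(x)/Q_N(x) from Taylor coefficients c_0..c_{M+N}."""
    c = np.asarray(c, dtype=float)
    # Linear system for denominator coefficients q_1..q_N (with q_0 = 1):
    #   sum_{j=1}^{N} c_{M+k-j} q_j = -c_{M+k},  k = 1..N
    A = np.array([[c[M + k - j] if M + k - j >= 0 else 0.0
                   for j in range(1, N + 1)] for k in range(1, N + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[M + 1:M + N + 1])))
    # Numerator coefficients by truncated convolution of c with q
    p = np.array([sum(c[i - j] * q[j] for j in range(min(i, N) + 1))
                  for i in range(M + 1)])
    return lambda x: np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Example: the diagonal P_{2,2} of exp(x), built from c_n = 1/n!,
# gives the classical value 19/7 ≈ 2.7143 at x = 1 (exp(1) ≈ 2.7183).
approx = pade([1, 1, 1/2, 1/6, 1/24], 2, 2)
```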
However, it would be highly desirable to modify the familiar technique of Padé approximants in order to take the irrational behavior into account. Such a modification can be performed by separating the sought modification of the Padé approximants into two factors [10]. The first factor is expressed as an iterated root or factor approximant [11,12]; it is specifically designed to take care of the irrational part of the solution. The second factor is simply a diagonal Padé approximant, and it is supposed to take care of the rational part of the solution. We thus arrive at the corrected Padé approximants. They appear to be applicable to a larger class of problems, even when the standard Padé technique is not applicable [11]. Many examples of applications of the Padé approximants, as well as their theoretical modifications, can be found in [13], including some important applications to aerodynamics and boundary-layer problems [14].

The so-called two-point Padé approximant is applied for interpolation when, in addition to the expansion about x = 0 given by (1), additional information is available in the asymptotic power series expansion about x = ∞, F(x) ≃ Σ_{n=0}^{∞} b_n x^{−n} [8]. The two-point Padé approximant has the same form as the standard Padé approximant, but with the coefficients expressed through the c_n and b_n.

The idea of combining information coming from the different limits appears to be fruitful, and can be exploited for different types of approximants and various forms of asymptotic expansions [11,12,15,16]. Various self-similar approximants also allow extrapolating and interpolating between the small-variable and large-variable asymptotic expansions, as discussed recently in [16]. The key to success is to introduce the so-called control functions that allow one "to sew" the two limit cases together in the form most natural for each concrete problem [11,12,15–17]. An example of such an approach is brought up in Appendix B.
Although the expansions for small and large couplings are very bad, the resulting approximants are in good agreement with the numerical data.

There are four main technical approaches to the construction of approximants, all aimed at optimizing their performance. The first approach is conventional, also called accuracy-through-order. It is based on progressive improvement of the quality of the approximants by adding new information through the higher-order coefficients, with the approximants becoming more and more complex. It is exemplified in the construction of Padé and Euler super-exponential approximants [8], and of factor, root and additive approximants [11,12,16]. The latter "cluster" of approximations was derived from the ideas of self-similar approximation theory, a close relative of the field-theoretic renormalization group [17]. The property of self-similarity is discussed in Section 3.1.

The second approach leads to corrected approximants. The idea is to ensure the correct form of the solution already in the starting approximation with some initial parameters. The initial parameters are then corrected by asymptotic matching with the truncated series/polynomial regressions in increasing orders. Thus, instead of increasing the order of approximation, one can correct the parameters of the initial approximation [11,12]. The form of the solution does not become more complex, but the parameters take a more and more complex form with increasing order.

In the third approach, predominantly adopted in Section 3, we keep the form and order of the approximants the same in all orders, but let the series/regressions evolve into higher orders. Independent of the order of the regression, we construct the same approximant, based solely on the first-order terms, only with the parameters changing with increasing order of the regression. In the framework of such effective first-order theories, we employ exponential approximants and their extensions.
In the fourth approach, the critical index is treated as a vital part of the optimization procedure. The critical index plays the role of a control parameter, to be determined from the optimization procedure described in Section 2.2, following Gluzman and Yukalov [18]. Different optimization techniques based on the introduction of control parameters were proposed in [19,20].

The problems arising in approximation theory can vary. Note that, for a recovery problem, when measurements of the sought function are given for some finite set of points, Prony's method is available, with the sought function represented as sums of polynomial or exponential functions combined with periodic functions [21]. For the approximation of a continuous function on the interval x ∈ [0, 1], one can use Bernstein polynomials [22]. However, these two methods do not allow for the inclusion of asymptotic information. Prony's and Bernstein's methods are numerical and work only for interpolation problems. The latter method was further adapted to the region x ∈ [0, ∞) and applied in [23]. The technique of Cioslowski [23] allows for the incorporation of asymptotic information. The technique of self-similar roots [24] allows us to solve the same problems as in [23], but without resorting to fitting [23,24]. Our methods are analytical, user-friendly and applicable to the most difficult extrapolation problems [11,12,16], involving explicit calculation of various critical indices and amplitudes, with novel applications to finding relaxation times. However, our methods remain applicable also to various interpolation problems [11,12,16,24] (see also Appendix B). It is likely impossible to find the same approximant to be the best for each and every realistic problem.
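For concreteness, the Bernstein construction mentioned above evaluates the target function on a uniform grid of [0, 1] and blends the samples with binomial weights. A minimal sketch (the function name `bernstein` is ours):

```python
from math import comb

def bernstein(f, n):
    """Degree-n Bernstein polynomial B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)."""
    return lambda x: sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                         for k in range(n + 1))

# Example: for f(t) = t^2 the exact result is known,
# B_n(t^2)(x) = x^2 + x(1 - x)/n, so at x = 0.5, n = 20 the value is 0.2625.
approx = bernstein(lambda t: t * t, 20)
```

As the text notes, the construction interpolates only on a bounded interval and takes no asymptotic information as input, which is what motivates the analytical approximants developed here.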
Based on the same asymptotic information, such as series coefficients, thresholds, critical indices, and correction-to-scaling indices, one can construct not only Padé but quite a few different approximants, such as corrected Padé, additive, DLog-additive, etc. [12,16]. It is feasible that for each problem one can find a different optimal approximant. We think that the idea behind the method of corrected approximants [11,12,16] is the most progressive, since it allows one to combine the strengths of a few methods and to proceed, in the space of approximations, with a piece-wise construction of the approximation sequences, as pointed out recently by Gluzman [16]. In the following sections, we present a more extended description of the concept of approximants, applied now both to critical and relaxation phenomena, extending the earlier work of Chapter 1 of the book [12].

2. Critical Index and Relaxation Time

The function F(x) of a real variable x exhibits critical behavior, with a critical index α, at a finite critical point x_c, when

    F(x) ≃ A (x_c − x)^α, as x → x_c − 0.    (2)

The definition covers the case of a negative index, when the function tends to infinity, and the case of a positive index, when the sought function tends to zero. Sometimes, the values of the critical index and critical point are known from some sources, and the problem consists in finding the critical amplitude A, as extensively exemplified in [11]. The case when critical behavior occurs at infinity,

    F(x) ≃ A x^α, as x → ∞,    (3)

can be analyzed similarly. It can be understood as the particular case with the critical point positioned at infinity. Critical phenomena are ubiquitous [18], ranging from field theory to hydrodynamics. It is vital to explain the related critical indices theoretically. Regrettably, for realistic physical systems, one can as a rule learn only the behavior at small variable,

    F(x) ≃ F_k(x), as x → 0,    (4)

which follows from some perturbation theory.
The function F(x) is approximated by the expansion

    F_k(x) = 1 + Σ_{n=1}^{k} c_n x^n.    (5)

Most often one finds that such expansions give numerically divergent results, valid only for very small or very large x (see Appendix B). Constructively, the expansion is treated as a polynomial of order k. Sometimes one even has a theoretically convergent series, resulting in rather good, numerically convergent truncated polynomial approximations (A6). However, there is still the problem of extrapolating outside the region of numerical convergence, where the critical behavior sets in. Three examples of this type are given in Appendix A, based on the results of Chapter 7 of the book [12]. The discussion below traces the basic ideas from Chapter 1 of the book [12].

One can always express the critical index directly by using its definition, and find it as the limit of explicitly expressed approximants. For instance, the critical index can be estimated from the standard representation as the logarithmic derivative

    B(x) = (d/dx) log F(x) ≃ α/(x − x_c), as x → x_c,    (6)

thus defining the critical index as the residue at the corresponding single pole. The pole corresponds to the critical point x_c, and the critical index is the residue α = lim_{x→x_c} (x − x_c) B(x). To the DLog-transformed series B(x) one is bound to apply the Padé approximants [6]. Moreover, the whole table of Padé approximants can be constructed [9]. That is, the DLog Padé method does not lead to a unique algorithm for finding critical indices: different values are produced by different Padé approximants, and it is then not clear which of these estimates to prefer. The standard approach consists in applying the diagonal Padé approximants [6].

When a function behaves at asymptotically large variable as in (3), the critical exponent can be defined similarly, by means of the DLog transformation. It is represented by the limit

    α = lim_{x→∞} x B(x).    (7)

Assume that the small-variable expansion for the function B(x) is given. In order for the critical index to be finite, it is necessary to take only the approximants behaving as x^{−1} as x → ∞. This leaves no choice but to select the non-diagonal P_{n,n+1}(x) approximants, so that the corresponding approximation for α is finite. One can also apply, in place of Padé, some different approximants [12,16]. Examples of the application of the DLog Padé method are given in Appendix A, based on the results first obtained in Chapter 1 of the book [12].

To simplify and standardize calculations, different and more powerful approximants, called self-similar factor approximants, were introduced in [25]. The singular solutions emerging from factor approximants correspond to critical points and phase transitions [25], including the case of a singularity located at ∞. When the series is long, one would expect the accuracy to improve with an increasing number of terms. Sometimes an optimum is achieved for some finite number of terms, reflecting the asymptotic nature of the underlying series. It is very difficult to improve the quality of the results produced by the factor approximants when the series are short. Some suggestions for such improvement were advanced by Gluzman [12]. In some simple but rather important cases of ODEs, the factor approximants allow one to restore exact solutions, such as the bell soliton, the kink soliton, the logistic equation solution and an instanton-type solution [26]. However, as pointed out in the Introduction, such cases are quite special, and only an approximate solution can be found in many important cases [26,27]. More information about various methods of calculating the critical index, amplitude and critical point can be found in [11,12,16].

2.1. Relaxation Time

Consider the case of relaxation behavior, when a function at asymptotically large variable decays as

    F(t) ≃ A exp(t/τ) (t → ∞),    (8)

with negative τ. Formally, the relaxation time is τ.
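The DLog recipe above can be checked on a model function with a known index. The sketch below (all variable names are ours) takes the series of F(x) = (1 − x)^{−3/2}, builds the series of B(x) = F′(x)/F(x) order by order, and extracts x_c and α from a low-order Padé approximant of B:

```python
# DLog Padé sketch on F(x) = (1 - x)^(-3/2): expect x_c = 1, alpha = -3/2.
k = 6
c = [1.0]
for n in range(1, k + 1):            # Taylor coefficients of (1 - x)^(-1.5)
    c.append(c[-1] * (1.5 + n - 1) / n)

# Series of B(x) = F'(x)/F(x): solve F' = B * F order by order (c[0] = 1)
g = []
for n in range(k):
    fp = (n + 1) * c[n + 1]          # coefficient of x^n in F'(x)
    g.append(fp - sum(g[j] * c[n - j] for j in range(n)))

# [1/1] Padé approximant of B: (p0 + p1 x) / (1 + q1 x)
q1 = -g[2] / g[1]
p0, p1 = g[0], g[1] + q1 * g[0]
x_c = -1.0 / q1                      # pole of the approximant
alpha = (p0 + p1 * x_c) / q1         # residue of B at x_c
print(x_c, alpha)                    # → 1.0 -1.5
```

Here B(x) = 1.5/(1 − x) is exactly rational, so even the lowest-order Padé approximant recovers the critical point and index exactly; for realistic series, different entries of the Padé table give different estimates, as discussed in the text.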
It can be found as the limit

    1/τ = lim_{t→∞} (d/dt) ln F(t).    (9)

As in the case of critical behavior considered above, the small-variable expansion for the function is given by the sum F_k(t). The effective relaxation time can be expressed in terms of the small-variable expansion as follows,

    1/τ_k(t) = (d/dt) ln F_k(t).    (10)

It can be expanded in powers of t, leading to

    1/τ_k(t) = Σ_{n=0}^{k} b_n t^n.    (11)

The coefficients b_n are easily expressed through the c_n of the original series (1). Let us apply to the obtained expansion the self-similar or Padé approximants. That is, we have to derive an approximant τ_k(t) whose limit

    τ_k(t) → const (t → ∞)

gives the relaxation time

    τ_k = lim_{t→∞} τ_k(t).    (12)

In such an approach, the amplitude A does not enter the consideration. In practice, one can indeed construct approximants with the required behavior. The complete approximant for the sought function F(t), denoted below as E(t, r), can be constructed as well. Even some ad hoc forms satisfying general symmetry requirements can be suggested, as in Section 3.

As an illustration, let us find τ_k(t) in explicit form under some simple assumptions concerning its asymptotic behaviors. Assume simply that there are two distinct exponential behaviors for short and long times, with two different τ_1, τ_2, and that the transition from short- to long-time behavior occurs over the duration of some third characteristic time related to b_3 below. The characteristic times can be found from the short-time expansion. The simple approximation to the effective relaxation time, expressed in second order of (12), can be written down in the spirit of Yukalov and Gluzman [28] as follows:

    1/τ_2(t) = b_2 + (b_1 − b_2) exp(b_3 t),    (13)

so that for negative b_3 we have 1/τ_2(0) = b_1, 1/τ_2(∞) = b_2.

In the theory of reliability, the failure (hazard) rate or force of mortality [29] is analogous to the inverse effective relaxation time, and the model of the type of formula (13) is known as the Gompertz–Makeham law of mortality. The complete approximant corresponding to (13) is reconstructed after elementary integration,

    F(t) = A exp( (b_1 − b_2) exp(b_3 t)/b_3 + b_2 t ),    (14)

with all unknown constituents of (13) expressed explicitly, from the asymptotic equivalence with the power series,

    A = c_0 exp( (c_1² − 2c_0c_2)³ / (4 (3c_0²c_3 − 3c_0c_1c_2 + c_1³)²) ),
    b_1 = c_1/c_0,
    b_2 = (6c_0²c_1c_3 − 4c_0²c_2² − 2c_0c_1²c_2 + c_1⁴) / (2c_0 (3c_0²c_3 − 3c_0c_1c_2 + c_1³)),    (15)
    b_3 = 2 (3c_0²c_3 − 3c_0c_1c_2 + c_1³) / (c_0 (2c_0c_2 − c_1²)).

Most interestingly, as b_2 = 0 the linear decay (growth) term in the formula for F(t) disappears, and we arrive, in different notation, at the Gompertz function (54),

    G(t) = A exp( (b_1/b_3) exp(b_3 t) ),    (16)

employed in the calculations of Gluzman [30]. In this case, the effective relaxation time decays (grows) exponentially with time. In Section 3, we apply this method of finding the effective relaxation time to time series.

2.2. Critical Index as Control Parameter: Optimization Technique

The function's critical behavior follows from extrapolating the asymptotic expansion (1) to finite or large values of the variable. Such an extrapolation can be accomplished by means of the direct technique just discussed. However, its successful application requires knowledge of a large number of terms of the expansion. It is, however, also possible to obtain rather good estimates for the critical indices from a small number of terms of the asymptotic expansion [12,18]. To this end, we can employ the self-similar root approximants given by (17). The external power m_k is to be determined here from additional conditions. More detailed explanations and more examples can be found in the book [12].
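A quick consistency check of the parameters (15), under our reading of the typeset formulas: the function F(t) = exp(1.5 + 0.5 t − 1.5 e^{−t}) is of the form (14) with A = e^{1.5}, b_1 = 2, b_2 = 0.5, b_3 = −1, and its first four Taylor coefficients are c = (1, 2, 1.25, 1/12). The sketch below (function name ours) recovers the parameters from the coefficients:

```python
from math import exp

def gompertz_makeham_params(c0, c1, c2, c3):
    """Parameters of F(t) = A exp((b1 - b2) exp(b3 t)/b3 + b2 t), per Equation (15)."""
    D = 3*c0**2*c3 - 3*c0*c1*c2 + c1**3
    A  = c0 * exp((c1**2 - 2*c0*c2)**3 / (4 * D**2))
    b1 = c1 / c0
    b2 = (6*c0**2*c1*c3 - 4*c0**2*c2**2 - 2*c0*c1**2*c2 + c1**4) / (2*c0*D)
    b3 = 2*D / (c0 * (2*c0*c2 - c1**2))
    return A, b1, b2, b3

# Recovers A = e^1.5, b1 = 2, b2 = 0.5, b3 = -1 from c = (1, 2, 1.25, 1/12)
A, b1, b2, b3 = gompertz_makeham_params(1.0, 2.0, 1.25, 1/12)
```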
The self-similar root approximant has the following general form [15],

    R_k(x, m_k) = ( … (((1 + P_1 x)² + P_2 x²)^{3/2} + P_3 x³)^{4/3} + … + P_k x^k )^{m_k}.    (17)

In principle, all the parameters may be found from asymptotic equivalence with a given power series. The large-variable power α in Equation (3) can be compared with the large-variable behavior of the root approximant (17),

    R_k(x, m_k) ≃ A_k x^{k m_k},    (18)

where

    A_k = ( … ((P_1² + P_2)^{3/2} + P_3)^{4/3} + … + P_k )^{m_k}.    (19)

This comparison yields the relation k m_k = α, defining the external power m_k = α/k when α is known. This way of defining the external power is used when the root approximants are applied for interpolation. The root approximants (17) are applied in Appendix B, in the context of the interpolation problem, for the construction of accurate formulas valid for all values of x.

Consider an exceptionally difficult situation for an extrapolation problem: the large-variable behavior of the function is not known, and α is not given. In addition, the critical behavior can happen at a finite value x_c of the variable x. The method for calculating the critical index α by employing the self-similar root approximants was developed by Gluzman and Yukalov [18]. In such an approach, we construct several root approximants R_k(x, m_k), and the external power m_k plays the role of a control function. The sequence of approximants is considered as a trajectory of a dynamical system, with the approximation order k playing the role of discrete time. A discrete-time dynamical system, or approximation cascade, consists of the sequence of approximants. The cascade velocity is defined by the Euler discretization formula [31–33]

    V_k(x, m_k) = R_{k+1}(x, m_k) − R_k(x, m_k) + (m_{k+1} − m_k) ∂R_{k+1}(x, m_k)/∂m_k.    (20)

The effective limit of the sequence of approximants corresponds to the fixed point of the cascade. Based on just a few approximants, the cascade velocity has to decrease; in this sense, the sequence appears to be convergent.
The control functions m_k = m_k(x) have to minimize the absolute value of the cascade velocity,

    |V_k(x, m_k(x))| = min_{m_k} |V_k(x, m_k)|.    (21)

A finite critical point x_c, in the kth approximation, is to be obtained from Equation (17) by imposing the condition on the critical behavior expressed by (2),

    [R_k(x_c, m_k)]^{1/m_k} = 0 (0 < x_c < ∞).    (22)

Its finite solution is denoted as x_c = x_c^{(k)}(m_k). The critical index in the kth approximation is given by the limit α_k = lim_{x→x_c} m_k(x). In the case of the critical behavior at infinity, when x_c → ∞, the critical index is

    α_k = k lim_{x→∞} m_k(x).    (23)

Thus, to find the critical indices, the control functions m_k(x) have to be found. The minimization of the cascade velocity (21) is complicated: Equation (21) contains two control functions, m_k and m_{k+1}. Nevertheless, the problem can be resolved, and this can be done in two ways.

The first, constructive approach notices that m_{k+1} should be close to m_k. Then we arrive at the minimal difference condition

    min_{m_k} |R_{k+1}(x, m_k) − R_k(x, m_k)| (k = 1, 2, …).    (24)

One should typically find a solution m_k = m_k(x) of the simpler equation

    R_{k+1}(x, m_k) − R_k(x, m_k) = 0.    (25)

The control functions m_k, characterizing the critical behavior of F(x), become the numbers m_k(x_c); we simply write m_k = m_k(x_c).

In the vicinity of a finite critical point, the function R_k behaves as

    R_k(x, m_k) ≃ (1 − x/x_c^{(k)}(m_k))^{m_k}, as x → x_c − 0.    (26)

The condition (25) is then expressed as follows,

    x_c^{(k)}(m_k) − x_c^{(k+1)}(m_k) = 0 (0 < x_c < ∞).    (27)

For the critical behavior at infinity, it is expedient to introduce the control function

    s_k = k m_k.    (28)

The large-variable behavior reads

    R_k(x, s_k) ≃ A_k(s_k) x^{s_k}, as x → ∞.    (29)

As a result, the minimal difference condition is reduced to the equation

    A_{k+1}(s_k) − A_k(s_k) = 0, as x → ∞.    (30)
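The minimal difference condition can be tried on a toy problem with a known answer. For F(x) = (1 + x)^{−1/2}, with series coefficients c_1 = −1/2 and c_2 = 3/8, matching the first- and second-order root approximants and solving A_2(s) = A_1(s) by bisection recovers the exact large-variable index −1/2. A sketch under these assumptions (names ours):

```python
# Minimal difference condition for the large-variable critical index,
# on the test function F(x) = (1 + x)^(-1/2), exact index -1/2.
c1, c2 = -0.5, 0.375

def amplitude_diff(s):
    """A_2(s) - A_1(s) for root approximants sharing the control s = k m_k."""
    P1 = c1 / s                           # from matching c1 in both orders
    A1 = P1 ** s
    P2 = 2 * c2 / s - (s - 1) * P1 ** 2   # from matching c2 in second order
    A2 = (P1 ** 2 + P2) ** (s / 2)
    return A2 - A1

# Solve A_2(s) = A_1(s) by bisection on s in [-0.9, -0.1]
lo, hi = -0.9, -0.1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if amplitude_diff(lo) * amplitude_diff(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))                    # → about -0.5, the exact index
```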
The alternative equation for the control functions also follows from the minimal velocity condition (21), and is called the minimal derivative condition,

    min_{m_k} |∂R_k(x, m_k)/∂m_k| (k = 1, 2, …).    (31)

In practice, we have to solve the equation

    ∂R_k(x, m_k)/∂m_k = 0.    (32)

To apply this condition, we first have to extract from the function its non-divergent parts. If the critical point is finite, one can study the residue of the function ∂ log R_k/∂m_k, expressed as

    lim_{x→x_c} (x_c − x) (∂/∂m_k) log R_k(x, m_k) = m_k ∂x_c^{(k)}/∂m_k.

Thus, from Equation (32), we arrive at the condition

    ∂x_c^{(k)}/∂m_k = 0 (0 < x_c < ∞).    (33)

When the critical behavior occurs at infinity, we can consider the limiting form of the amplitude and reduce Equation (32) to the form

    ∂A_k(s_k)/∂s_k = 0, as x → ∞.    (34)

The final estimate for the critical index is given by a simple average of the minimal difference and minimal derivative results. The technique reviewed in Section 2.2, following Chapter 1 of the book [12], turned out to be useful in calculating the critical properties of the classical analog of graphene-type composites with a varying concentration of vacancies [34]. In the next subsection, we give some examples, first presented in Chapter 1 of the book [12]. More information and details can also be found in Chapter 7 of the book [12].

2.3. Examples: Permeability in Two-Dimensional Channels

In the cases considered below, we deal with a unique theoretical opportunity to attack the problem of the critical exponent, and criticality in general, directly from the solution of the hydrodynamic Stokes problem. Let us consider as an example the case of the two-dimensional channel bounded by the surfaces z = ±b(1 + ε cos x), as explained in Appendix A. Here, ε is termed the waviness. The permeability behaves critically [12]; that is, it tends to zero as

    K(ε) ≃ A (ε_c − ε)^ϰ, as ε → ε_c − 0,    (35)

with ε_c = 1, ϰ = 5/2.
The permeability as a function of the waviness can be derived in the form of an expansion in powers of ε [35]. In the particular case of b = 0.5, the permeability is found explicitly as

    K(ε) ≃ 1 − 3.14963 ε² + 4.08109 ε⁴, as ε → 0.    (36)

By setting ε_c = 1 and changing the variable to y = ε²/(1 − ε²), one can move the critical point to infinity. The critical index is calculated as explained above and in [18]. From the minimal-difference condition we find ϰ = 2.184, with an error of 12.6%. From the minimal-derivative condition, we obtain ϰ = 2.559, with an error of 2.37%. The final answer is given by the average of the two solutions, ϰ = 2.372 ± 0.19.

In another particular case considered in Chapter 1 of the book [12], for b = 0.25, the permeability expands as follows,

    K(ε) ≃ 1 − 3.03748 ε² + 3.54570 ε⁴, as ε → 0.    (37)

Setting ε_c = 1 and using the same technique as above, the approximations for the critical index are found to be ϰ_1 = 2.342 and ϰ_2 = 2.743. Finally, ϰ = 2.543 ± 0.2.

Let us also consider some examples of the numerical convergence of root approximants in high orders, first presented in Chapter 1 of the book [12]. The technique is applied to calculating the critical index ϰ. It seems instructive to consider the same two cases of permeability K(ε), but with higher-order terms, up to 16th order inclusively. The numerical forms of the corresponding expansions can be found in Appendix A (see expansions (A8) and (A14)). Concretely, we construct the iterated root approximants

    R_k(y) = ( … (((1 + P_1 y)² + P_2 y²)^{3/2} + P_3 y³)^{4/3} + … + P_k y^k )^{α_k/k}.    (38)

The parameters P_j have to be found from the asymptotic equivalence with the expansions. The permeability has the required critical asymptotic form

    R_k(y) ≃ A_k y^{α_k}, as y → ∞.    (39)

The amplitudes A_k = A_k(α_k) are found explicitly as

    A_k = ( … ((P_1² + P_2)^{3/2} + P_3)^{4/3} + … + P_k )^{α_k/k}.    (40)

To define the critical index α_k, we analyze the differences

    D_{kn}(α_k) = A_k(α_k) − A_n(α_k).    (41)

From the conditions D_{kn} = 0, we find the related sequences of approximate values α_k for the critical indices. Although it is possible to investigate different sequences of the conditions D_{kn} = 0, the most natural form is presented by the sequences of D_{k,k+1} = 0 and D_{k,8} = 0, with k = 1, 2, 3, 4, 5, 6, 7.

The results for b = 1/2 are shown in Table 1. We observe good numerical convergence of the approximations α_k ≡ ϰ_k to the value ϰ = 5/2. Similar results, presented in Table 2 (for b = 1/4), again demonstrate rather good numerical convergence of the approximate critical indices to the value ϰ = 5/2. Comparison of the results for different parameters b allows us to think that the critical index does not depend on the parameter b. In both examples considered above, the convergence sets in rather quickly. The DLog Padé method appears to bring convergent sequences and consistent expressions for the permeability as well. Further details can be found in Appendix A. The results obtained from the two different methods agree well with each other. A similar comparison was made by Gluzman and co-authors [34] for the effective conductivity of graphene-type composites.

Table 1. Walls can touch (b = 1/2). The problems are described in Appendices A and A.1. Critical indices ϰ_k for the permeability, obtained from the optimization conditions (41). There is rather good numerical convergence to the number 5/2.

    ϰ_k     D_{k,k+1}(ϰ_k) = 0    D_{k,8}(ϰ_k) = 0
    ϰ_1     2.18445               2.39678
    ϰ_2     2.68311               2.52028
    ϰ_3     2.48138               2.49208
    ϰ_4     2.49096               2.49692
    ϰ_5     2.5012                2.49982
    ϰ_6     2.49935               2.499
    ϰ_7     2.49861               2.49861

Table 2. Walls can touch (b = 1/4). The problems are described in Appendices A and A.2. Critical indices ϰ_k are found from the optimization conditions (41). There is good numerical convergence of the sequences to the value 5/2.
    ϰ_k     D_{k,k+1}(ϰ_k) = 0    D_{k,8}(ϰ_k) = 0
    ϰ_1     2.34165               2.452
    ϰ_2     2.52463               2.50542
    ϰ_3     2.4976                2.49933
    ϰ_4     2.49941               2.50004
    ϰ_5     2.50028               2.50033
    ϰ_6     2.50032               2.50036
    ϰ_7     2.50041               2.50041

Consider a different case of permeability K(ε) (see Appendices A and A.3). The results were first obtained in Chapter 1 of the book [12]. For the parallel sinusoidal two-dimensional channel, when the walls cannot touch, the permeability remains finite. It is expected to decay as a power law as the waviness ε becomes large, K(ε) ≃ ε^n as ε → ∞, with negative index n. In the expansion of K(ε) in the small parameter ε, we retain the same number of terms as in the previous two examples. The numerical values of the corresponding coefficients can be found in Appendix A (see expression (A16)). The results of the calculations are presented in Table 3 (for b = 1/2). They show rather good numerical convergence, especially in the last column, to the value 4. The sequence based on the DLog Padé method is convergent as well (see Appendices A and A.3).

Table 3. Walls cannot touch. Case of b = 1/2. Critical indices for the permeability for the problems in Appendices A and A.3, obtained from the optimization conditions D_{kn}(n_k) = 0. The sequences demonstrate reasonably good numerical convergence to the value n = 4.

    n_k     D_{k,k+1}(n_k) = 0    D_{k,8}(n_k) = 0
    n_1     6                     4.36
    n_2     4.04                  4.1
    n_3     n.a.                  4.13
    n_4     4.09                  4.05
    n_5     3.97                  4.03
    n_6     n.a.                  4.08
    n_7     3.94                  3.94

More information on the problems of critical permeability can be found in Appendix A. The three problems considered above are also studied by applying the DLog Padé method of Section 2 to calculate the critical index for the permeability. The computations complement and confirm the results for the critical index obtained above from the optimization technique. The optimization technique works better for short truncated series, converging more quickly, while the DLog Padé method is easier to apply for very long series.
In addition, the D Log Padé method, as well as the Padé method, when its application is appropriate, allows us to compute the critical amplitudes.

3. Relaxation Phenomena in Time Series

For the phenomenon to occur, the basic underlying symmetry must be broken. While studying the phenomenon, it is important to distinguish between an explicit symmetry breaking, when governing equations are not invariant under the desired symmetry, and spontaneous symmetry breaking, without the presence of any asymmetric cause [36]. When successful, the approach based on broken global symmetries leads to understanding of the key phenomena of magnetism, superconductivity and superfluidity. On the other hand, when some global inherent symmetry can be recognized in physical quantities, we arrive at the gloriously successful theory of critical phenomena and vital extensions of perturbation results in quantum field theories, jointly called the renormalization group (RG) [17,37]. In a nutshell, we suggest below how to apply symmetry considerations and RG-inspired methods to the sharp moves which occur in time series, with the most notable examples given by stock market crashes.

Assume that numerical data on the time series variable (e.g., price) s are given for some time segment. Typically, one considers N + 1 values s(t_0), s(t_1), …, s(t_N), given at N + 1 equidistant successive moments in time t_j, with j = 0, 1, 2, …, N [38]. In the study of time series, one is interested in the value of s extrapolated to the future. In financial mathematics, one is particularly interested in the predicted value of the log return [38,39],

R(t_N + dt) = ln [ s(t_N + dt) / s(t_N) ].  (42)

One can see from the definition that we are really interested in the quantity S = ln(s), to be called the return. Let us place the origin at the very beginning of the time interval, setting also t_0 = 0. Naturally, one is interested in the value of S(t_N + dt), allowing us to find R(t_N + dt) at a later time.
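In code, Formula (42) is a one-liner; the prices below are hypothetical, chosen only to show that for small moves the log return is close to the simple return:

```python
import math

def log_return(s_now, s_next):
    """Log return R = ln(s(t_N + dt) / s(t_N)), Formula (42)."""
    return math.log(s_next / s_now)

# A 1% move: the log return 0.00995... is close to the simple return 0.01
R = log_return(100.0, 101.0)
```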
Since the approach developed in [30,38] is invariant with regard to the choice of the time unit, we consider the temporal points of the dataset as integers, while considering the actual time variable as continuous.

Modern physics, when applied to financial theory, is concerned with ergodicity violations [40–43]. Ergodicity violations may be understood as a manifestation of non-stationarity, or a violation of the time-invariance of a random process. Metastable phases in condensed matter also defy ergodicity over long observation timescales. In special quantum systems of ultracold atoms, spontaneous breaking of time-translation symmetry causes the formation of temporal crystalline structures [44].

The concept of a spontaneously broken time-translation invariance can be useful for time series in application to market dynamics, as first suggested in [38]. According to Andersen, Gluzman and Sornette [38], the window of forecasting of time series describing market evolution emerges due to a spontaneous breaking/restoration of the continuous time-translation invariance, dictated by relative probabilities of the evolution patterns [45]. In turn, the probabilities are derived from stability considerations. The notion of probability introduced in [45] is not the conventional statistical ensemble probability for a collection of people, but is closer to the time probability, concerned with a single person living through time (see Gell-Mann and Peters [42] and Taleb [43]).

Probabilistic trading patterns correspond to a local breakdown of time-translation invariance. Their evolution leads to the complete (or partial) restoration of the time-translation symmetry. We need to estimate the typical time, amplitude and direction for such a restorative process. Thus, we are not confined to a binary outcome as in [38] but attempt to estimate also the magnitude of the event.
According to Hayek [46], markets are mechanisms for collecting vast amounts of information held by individuals and synthesizing it into a useful data point [46,47], e.g., the price of a stock market index dependent on time. Conversely, consolidation of knowledge is done via prices and arbitrageurs (Taleb on Hayek). A catastrophic downward acceleration regime in the time series is known as a crash [48].

Time series representing market price dynamics in the vicinity of a crisis (crash, melt-up) could be treated as a self-similar evolution, because of the prevalence of the collective coherent behavior of many trading, interacting agents [45,49], including humans and machine algorithms. The dominant collective slow mode corresponding to such behavior develops according to some law, formalized as a time-invariant, self-similar evolution. Away from crisis, there is a superposition of a collective coherent mode (generalized trend) and a stochastic incoherent behavior of the agents [39,45].

We do not attempt here to write down a generic evolution equation behind the time series pertaining to market dynamics. Instead, we consider, locally in time, some trial functions—approximants—in the form inspired by the solutions to some well-known evolution equations. The approximants are designed to respect or violate self-similarity. While in physics the relation between a phenomenon and symmetry violation is well understood, in econophysics such a connection is far from clear. However, to realize the promise of econophysics [50], on a consistent basis and on par with physics achievements, one has to identify and study the phenomenon from the relevant symmetry viewpoint. Our primary goal here is not forecasting/timing the crash, but studying the crash as a particular phenomenon created by spontaneous time-translation symmetry breaking/restoration.
Since market dynamics is believed to be formed by the crowd (herd) behavior of many interacting agents, there are ongoing attempts to create empirical, binary-type prediction markets functioning on such a principle, or mini Wall Streets [47]. Prediction markets often work pretty well; however, there are many cases where they give wrong predictions or do not make any predictions at all. Such special set-ups are already very useful in reaching the understanding that market crowds are correct only if they express a sufficient diversity of opinion. Otherwise, the market crowd can have a collective breakdown, i.e., is fallible, as expected by Soros [48]. In our understanding, such breakdowns amount to a breaking of time-translation invariance. Restoration of the time-translation invariance—in theory—may be attributed to a small proportion of the traders having either superior information or market intellect [47].

Data from a survey conducted with high-income and institutional investors show that they “generally exaggerated assessments of the risk of a stock market crash, and that these assessments are influenced by the news stories, especially front page stories, that they read” [51]. The division into two (at least) groups can be seen in the very parallel existence of futures and spot markets for the same asset, such as the S&P 500 index, with the futures market working 24 h. It is believed that a lot of the daily crashes, or melt-up days, start overnight. It is not that arbitrage is ineffective; the spot market is simply closed overnight, while the futures market operates in a discovery mode.

3.1. Self-Similarity and Time Translation Invariance

According to Isaac Newton and Murray Gell-Mann, the laws of nature are somehow self-similar. The laws of Newtonian mechanics are invariant with respect to the Galilean group, expressing Galileo’s principle of relativity [52]. The group includes time-translation invariance; in other words, the laws of classical mechanics are self-similar.
What should be the underlying symmetry for price dynamics? Mind that in normal times the average price trajectory is exponential, because of compounding interest, and we enjoy an almost constant return (or price growth rate) [53]. Indeed, let s_0 be an underlying security (index) price at t = t_0. Let F_t be the fair value of the future requiring a risk-associated expected return b [43]. Then (see, e.g., [43]), the expected forward price is F_t = s_0 exp(b(t − t_0)). For example, a share of a stock would be correctly priced with the expected return calculated as the return of a risk-free money market fund minus the payout of the asset, being a continuous dividend for a stock [43]. Thus, rather simple and natural exponential estimates are constantly made for stocks and the like. The formula for the forward price is self-similar, or time-translation invariant, as explained below.

However, as noted in [48,53], prices often significantly deviate from such a simple description. Bubbles can be formed, as well as other presumed patterns of technical analysis. Asset prices strongly deviate from the fundamental value over significant intervals of time. The fundamental value is not truly observable, making the definition of such intervals somewhat elusive. There are some very real mechanisms at work, acting to increase and even accelerate the deviation from the fundamental value. The causes of deviation could be “option hedging, portfolio insurance strategies, leveraging and margin requirements, imitation and herding behavior”, as is the authoritative opinion expressed in [48,53].

Recall also that meaningful technical analysis starts from recasting the time series data using some polynomial representation to serve as the expansion [38]. The regression is constructed in standard fashion by minimizing the mean-square deviation, with the effective result that the high-frequency component of the price gets averaged out.
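A minimal sketch of such a regression step (the synthetic data and parameters are made up for illustration): a least-squares quadratic fit to noisy, nearly exponential closes averages out the high-frequency component:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(8, dtype=float)                                 # integer time points, as in the text
s = 100.0 * np.exp(0.01 * t) + rng.normal(0.0, 0.05, t.size)  # noisy hypothetical closes

# quadratic regression s_{0,2}(t) = a2 + b2 t + c2 t^2
c2, b2, a2 = np.polyfit(t, s, 2)
trend = a2 + b2 * t + c2 * t**2                               # smoothed price component
```

Note that np.polyfit returns the coefficients from the highest degree down, hence the (c2, b2, a2) unpacking order.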
Then, one can consider self-similarity in averages [49]. Indeed, the standard polynomial regressions are invariant under time translation, retaining their form after an arbitrary selection of the origin of time, with a simple redefinition of all parameters. The position of the origin in time can be explicitly introduced into the regression formula and included into the coefficients, but the actual results of calculations with any arbitrarily chosen origin will remain the same. Such a property can be expressed as some symmetry.

We put forward the idea that it is the onset of broken time-translation invariance that signifies the birth of a bubble, or of some other temporal pattern preceding a crash. The end of the pattern corresponds to the restoration of time-translation invariance, partially or fully. Our task is to express this idea in quantitative terms by making an explicit transformation from the regression-based technical analysis to the valuation formula in the exponential form, taking into account strong deviations from the standard valuation formulae.

Assume that a time series dynamics is predominantly governed by its own internal laws. This is the same as to write down a self-similar evolution for the market price s [54], meaning that, for an arbitrary shift τ,

s(t + τ, a) = s(t, s(τ, a)),  (43)

with the initial condition s(0, a) = a [55,56]. The value of the self-similar function s at the moment t + τ with a given initial condition is the same as at the moment t, with the initial condition shifted to the value of s at the moment τ. When t stands for true time, the property of self-similarity means time-translation invariance. Formally understood, Equation (43) gives a background for the field-theoretical RG, with the addition of some perturbation expansion for the sought quantity, which should be resummed in accordance with self-similarity expressed in the form of an ODE [55–57].
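For the exponential evolution s(t, a) = a e^{bt} (all numbers below are made up), the self-similarity relation (43) can be verified directly:

```python
import math

def s_exp(t, a, b=0.05):
    """Exponential evolution s(t, a) = a * exp(b t), with s(0, a) = a."""
    return a * math.exp(b * t)

# Equation (43): s(t + tau, a) = s(t, s(tau, a)) for an arbitrary shift tau
a, t, tau = 2.0, 3.0, 1.5
lhs = s_exp(t + tau, a)
rhs = s_exp(t, s_exp(tau, a))   # initial condition shifted to s(tau, a)
```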
The time-translation invariance expressed by (43) means that the law for price evolution exists and remains unchanged with time, with a proper transformation of the initial conditions [52]. The role of the perturbation expansion, when price dynamics is concerned, is played by meaningful technical analysis, recasting the data in the form of some polynomial representation [38]. There is no formal difference in treating polynomials and expansions, as already mentioned in Section 2.

Consider first the simplest case of technical analysis. The linear function can be formally considered as a function of time and the initial condition a, namely s_1(t, a) = a + bt, with s_1(0, a) = a. The linear function (regression) is self-similar, or time-translation invariant, as can be checked directly by substitution into (43). Through some standard procedure, let us obtain the linear regression on the data around the origin t = 0, so that

s_{0,1}(t) = a_1 + b_1 t.

Note that the position of the origin is arbitrary, and it can be moved to an arbitrary position given by a real number r, so that

s_{r,1}(t) = A_1(r) + B_1(r)(t − r),

with new and different coefficients. It turns out that the coefficients are related as follows,

A_1(r) = a_1 + b_1 r,  B_1(r) = b_1,

so that s_{r,1}(t) ≡ s_{0,1}(t). By shifting the origin, we create an r-dependent form of the linear regression s_{r,1}, which can be used constructively. Thus, instead of a single regression we have its r-replicas, equivalent to the original form of the regression, and all replicas respect the time-translation symmetry. In such a sense, one can speak about replica symmetry. Of course, we would like to avoid such redundancy in the data parameterization and to find the origin(s) by imposing some optimal conditions (see Section 3.2).
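The replica identity s_{r,1}(t) ≡ s_{0,1}(t) is easy to check numerically (the coefficients are arbitrary):

```python
a1, b1 = 3.0, 0.7   # arbitrary linear regression coefficients

def s01(t):
    """Linear regression around the origin t = 0."""
    return a1 + b1 * t

def sr1(t, r):
    """The r-replica, with A1(r) = a1 + b1 r and B1(r) = b1."""
    return (a1 + b1 * r) + b1 * (t - r)

# the two forms coincide for any origin r and any time t
diff = max(abs(s01(t) - sr1(t, r)) for t in (0.0, 1.3, 10.0) for r in (-4.0, 5.2))
```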
The position of the origin in time can be explicitly introduced into the regression formula and included into the coefficients, but the actual results of calculations with any arbitrarily chosen origin will remain the same. Such a property can be expressed as some symmetry. However, intuitively, one would expect that the result of extrapolation with chosen predictors should depend on the point of origin r. Indeed, various patterns, such as “head and shoulders”, “cup-with-handle”, “hockey stick”, etc., considered by technical analysts do depend on where the point of origin is placed. In physics, the point of origin (Big Bang) plays a fundamental role. We should find a way to break the replica symmetry.

As discussed above, it is exponential shapes that are natural in pricing. The exponential function E(t, a) = a exp(bt), with initial condition a and arbitrary b, satisfies functional self-similarity, as do the linear functions. It can be replicated as

E_r(t) = a(r) exp(b(t − r)),  a(r) = a exp(br).  (44)

Having b dependent on r is going to violate the time-translation and replica symmetry. Instead of a global time-translation invariance, we have a set of r local “laws” near each point of origin. However, having r in Formula (44) fixed, by imposing some additional condition, or just being integrated out, should restore the global time-translation invariance completely, as long as the exponential function is considered. Moreover, the stability of the exponential function is measured by the exponential function with the same symmetry (see Formula (46)). Not only is the exponential function time-translation invariant, but the expected return b has the same property. For exponential functions, the expected (predicted) value of the return per unit time exactly equals b. Another simple rational function, known as the hyperbolic discounting function [58], H(t, a) = 1/(a^{−1} + bt), where a is the initial condition and b is arbitrary, is time-translation invariant.

Note that the shifted exponential function E(t, a) = c + (a − c) exp(bt), with initial condition a and arbitrary b and c, is invariant under time translation as well. Another interesting symmetry is shape invariance [59], meaning

F_{t+τ} = m F_t,

and an exponential function is shape invariant with m = exp(bτ), leaving the expected return unchanged. Keep in mind that our task is to calculate b from the time series. In principle, one can think about breaking/restoration of shape invariance as a guide for the construction of a concrete scheme for calculations.

For a critical phenomenon, the underlying symmetry of the formula for the observable is scaling,

f_{λt} = Λ f_t,

where Λ = f_λ. The class of power laws, f_t = t^α, with critical index α, is scaling invariant. The central task is to calculate α. The statistical renormalization group formulated by Wilson [37] explains well the critical index in equilibrium statistical systems. When information on the critical index is encoded in some perturbation expansion, one can use resummation ideas to extract the index, even for short expansions and for non-equilibrium systems [11,12,18]. Some of the methods are discussed in the preceding section (see also [12,16]).

Working with power-law functions will not leave the return unchanged. However, one can envisage a scheme with broken scaling invariance, as an alternative to the former schemes. The log-periodic solutions extend the simple scaling [60] and are extensively employed in the form of a sophisticated seven-parametric fit to long historical datasets [53], as well as in its extensions [61]. The fit is tuned for prediction of the crossover point to a crash, understood as a catastrophic downward acceleration regime [48]. However, one cannot exclude the possibility of solutions with different time symmetries (scaling and time-invariance, for instance) competing to win over, or coexisting, all measured in terms of their stability characteristics.
Our primary concern is the crash per se, not the regime preceding it. We start analyzing crashes with the polynomial approximation that respects time-translation symmetry, have the symmetry broken, and then restored (completely or partially) by means of some optimization. Such a sequence ends with a non-trivial outcome: b becomes a renormalized b(r), with r found using the optimization procedure(s) defined below. We discuss in Section 2.1 a general technique for correcting b directly, which accounts for higher-order terms in the regression, making it time-dependent.

In [38], the framework for technical analysis of time series was developed, based on second-degree regression and asymptotically equivalent exponential approximants, with some rudimentary, implicit breaking of the symmetry. We intend to go to higher-degree regressions and develop a consistent technique for explicit symmetry breaking with its subsequent restoration. According to textbooks, the fourth order should be considered as “high”. Taleb (see footnote on p. 53 in [43]) also considered models with five parameters as more than sufficient.

3.2. Optimization, Approximants, Multipliers

Higher-order regressions allow for replica symmetry. For instance, the quadratic regression s_{0,2}(t) = a_2 + b_2 t + c_2 t² can be replicated as follows:

s_{r,2}(t) = A_2(r) + B_2(r)(t − r) + C_2(r)(t − r)²,

with

A_2(r) = a_2 + b_2 r + c_2 r²,  B_2(r) = b_2 + 2c_2 r,  C_2(r) = c_2.

With such transformed parameters, we find that s_{r,2}(t) ≡ s_{0,2}(t). In fact, one can still formulate self-similarity analogous to (43), but in vector form, with an increased number of parameters/initial conditions in place of a [57]. However, if only the linear part of the quadratic regression, or trend, is taken into account, we return to the conventional functional self-similarity and time-translation invariance, discussed above extensively.
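The same numerical check works for the quadratic replica (again with arbitrary coefficients):

```python
a2, b2, c2 = 1.0, -0.4, 0.03   # arbitrary quadratic regression coefficients

def s02(t):
    """Quadratic regression around the origin: a2 + b2 t + c2 t^2."""
    return a2 + b2 * t + c2 * t * t

def sr2(t, r):
    """Replica with A2(r) = a2 + b2 r + c2 r^2, B2(r) = b2 + 2 c2 r, C2(r) = c2."""
    A2 = a2 + b2 * r + c2 * r * r
    B2 = b2 + 2.0 * c2 * r
    return A2 + B2 * (t - r) + c2 * (t - r) ** 2

diff = max(abs(s02(t) - sr2(t, r)) for t in (0.0, 2.5, 9.0) for r in (-3.0, 7.1))
```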
Such an effective linear/trend approach to higher-order regressions allows applying the same idea at all orders and observing how the exponential structures change with increasing regression order. Note that, in the course of trading, a common pattern is trend following, which appears to be a collective, self-reinforcing motion that, intuitively, lends itself to a self-similar description. Indeed, some participants are waiting for a market confirmation of the trend before acting on it, which in turn acts as a confirmation for others. Having a universal model explaining this dynamics (if not predicting it) would be quite useful.

To take into account the dependence on origin, the replica symmetry has to be broken. Breaking of the symmetry means the dependence of actual extrapolations with non-polynomial predictors on the origin. As the primary predictors, we suggest the simplest exponential approximants, considered as functions of the origin r and time,

E_1(t, r) = A(r) exp[ (B(r)/A(r)) (t − r) ],  (45)

independent of the order of the polynomial regression. The approximants (45) are constructed by requiring an asymptotic equivalence with the linear part of the chosen polynomial regression. If the extrapolations E_1(t_N + dt, r) are made by each of the approximants, they appear to be different for various r, meaning a breaking of the replica symmetry and of the time-translation symmetry. Passage from polynomials to exponential functions leads to the emergence of a continuous spectrum of relaxation (growth) times.

To compare the quality of the approximants, one can look at their stability. Stability of the approximants is characterized by the so-called multipliers, defined as the variation derivative of the function with respect to some initial approximation function [45]. Following Yukalov and Gluzman [62], one can take the linear regression as the zero approximation and find the multiplier

M_1(t, r) = exp[ (B(r)/A(r)) (t − r) ].  (46)

The simple structure of the multipliers (46) allows avoiding the appearance of spurious zeroes, which often complicate the analysis with more complex approximants/multipliers. Because of the multiplicity of solutions, embodied in their dependence on origin, it is both natural and expedient to introduce a probability for each solution. As explained in [45], one can introduce

Probability ∝ |M_1(t, r)|^{−1},

with proper normalization, as shown below in Formula (48). Probability appears to be of a purely dynamic origin and is expressed only from the time series itself. When the approximants and multipliers of the first order are applied to the starting terms of the quadratic, third- or fourth-order regression, we are confined to effective first-order models, with the velocity parameter from [38] dependent also on the higher-order coefficients and origin.

To make an extrapolation with the approximants (45), one still has to know the origin. In other words, the time-translation symmetry has to be restored completely or partially, so that a specific predictor with a specifically selected origin, or as close as possible to a time-translation invariant form, is devised. Fixing a unique origin also selects a unique relaxation (growth) time, during which the price is supposed to find a time-translation invariant state. Exponential functions are chosen above because they are invariant under time translation. Any shift in origins is absorbed by the pre-exponential amplitude and does not influence the return R. A view similar in spirit, that broken symmetries have to be restored in a correct theory, was expressed by Duguet and Sadoudi [63].

In the approach predominantly adopted in this section, we keep the form and order of the approximants the same in all orders, but let the series/regressions evolve into higher orders. Independent of the order of regression, we construct the same approximant, based only on the first-order terms, only with parameters changing with increasing order of regression.
In the framework of the effective first-order theories, we employ exponential approximants. Consider the value of the origin as an optimization parameter [30]. To find it and restore the time-translation symmetry, we have to impose an additional condition directly on the exponential predictors with the known last closing price,

E_1(t_N, r) = s_N.  (47)

One has to solve the latter equation to find the particular origin(s) r = r_k. In this case, we consider a discrete spectrum of origins, consisting of several isolated values. To avoid double-counting, when the last closing price enters both the regression and the optimization, one can determine the regression parameters in the segment limited from above by t_{N−1}, s_{N−1}. Alternatively, one can consider the two ways to define the regression parameters and choose the one which leads to more stable solutions. Unless otherwise stated, we consider that such a comparison was performed and the most stable way was selected. The extrapolation for the price is simply s(t_N + dt) = E_1(t_N + dt, r_k). The condition imposed by Equation (47) is natural, because then a first-order approximation to Formula (42),

R ≈ [s(t_N + dt) − s(t_N)] / s(t_N),

is recovered (see, e.g., [39]), as one would expect intuitively.

The procedure embodied in (47) leads to a radical reduction of the set of r-predictors to just a few. The set of predictors, with a multiplier corresponding to each, defines a probabilistic, poor man's order book. Instead of the true numbers of buy and sell orders, unknown to us, we calculate a priori probabilities for the price going up or down, and the corresponding levels. The target price is estimated through the weighted averaging developed in [45,62], in its concrete form (48) given below. For the sake of uniqueness, one can simply choose the most stable result among such conditioned predictors. One can also consider extrapolation with a weighted average of all such selected solutions.
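A sketch of the discrete-spectrum step (all numbers below are made up, not fitted to real market data): the exponential approximant built on the linear part of the quadratic replica is forced through the last closing price, as in Equation (47), and the admissible origins r are found by a sign-change scan followed by bisection:

```python
import numpy as np

# Made-up quadratic regression coefficients and last closing price (t_N, s_N)
a2, b2, c2 = 100.0, 0.5, -0.01
t_N, s_N = 10.0, 104.5

def A(r):
    return a2 + b2 * r + c2 * r * r          # A_2(r) of the quadratic replica

def B(r):
    return b2 + 2.0 * c2 * r                 # B_2(r), the local trend

def E1(t, r):
    """Exponential approximant (45) built on the linear part of the replica."""
    return A(r) * np.exp(B(r) / A(r) * (t - r))

def origins(lo=-50.0, hi=9.0, n=2000):
    """Solve E1(t_N, r) = s_N: scan for sign changes, then refine by bisection."""
    rs = np.linspace(lo, hi, n)
    f = E1(t_N, rs) - s_N
    roots = []
    for i in np.nonzero(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]:
        left, right = rs[i], rs[i + 1]
        for _ in range(60):
            mid = 0.5 * (left + right)
            if (E1(t_N, mid) - s_N) * (E1(t_N, left) - s_N) <= 0:
                right = mid
            else:
                left = mid
        roots.append(0.5 * (left + right))
    return roots
```

Each root r_k found this way is an origin of the discrete spectrum; its multiplier (46) then decides how stable the corresponding predictor is.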
With 1 ≤ M ≤ 6 solutions, their weighted average E*(t_N + dt) for the time t_N + dt is given as follows,

E*(t_N + dt) = [ Σ_{k=1}^{M} E_1(t_N + dt, r_k) M_1(t_N + dt, r_k)^{−1} ] / [ Σ_{k=1}^{M} M_1(t_N + dt, r_k)^{−1} ].  (48)

Within the discrete spectrum, we can find solutions with varying degrees of adherence to the original data. They can follow the data rather closely or be loosely defined by the parameters of the regression. The former could be called “normal” solutions, and tend to be less stable, with multipliers ≈ 1, while the latter are “anomalous” solutions, since they cut through the data and typically are the most stable, with small multipliers. Anomalous solutions are crashes (meltdowns) and melt-ups. The typical situation with the solutions in the discrete spectrum is presented in Figure 1. The novel feature introduced through (48) is that the averaging is performed over all approximants of the same order compatible with the constraints expressed by (47).

[Plot: G(t, r_k) versus t.] Figure 1. All Gompertz approximants corresponding to the discrete spectrum, i.e., solutions to (56), are shown. The most stable downward and less stable upward solutions are shown with solid lines. Three additional solutions are shown as well. The solution shown with the dashed line is closest to the data. The “no-change”, practically flat solution is shown with a dot-dashed line. Another solution, corresponding to moderate growth, is shown with a dotted line. The level s_N = 2746.61 is shown with a black line. Several historical data points are shown as well.

One can also integrate out the dependence on the origin r, considered as a continuous variable, by applying the averaging technique of weighted fixed points suggested in [45]. The dependence on origin enters the integration limit through the parameter T. Integration can be performed numerically for the simplest exponential predictors according to the formula

I(t, T) = [ ∫_{t_N−T}^{t_N+T} E_1(X, t) M_1(X, t)^{−1} dX ] / [ ∫_{t_N−T}^{t_N+T} M_1(X, t)^{−1} dX ].  (49)

To optimize the integral, we have to impose an additional condition on the weighted average/integral. It is natural to force it to pass precisely through the last historical point,

I(t_N, T) = s(t_N),  (50)

and solve the latter equation to find the integration limit T = T*. The sought extrapolation value for the price s is simply I(t_N + dt, T*). We prefer to take into account the broadest possible region of integration. Under such conditions, if and when the solution to (50) exists, it is unique. The value of s_N may enter the consideration twice: in the regression parameters and in the optimization condition (50). To avoid counting the last known value s_N twice, one can use a slightly different definition,

I(t, T) = [ ∫_{t_{N−1}−T}^{t_{N−1}+T} E_1(X, t) M_1(X, t)^{−1} dX ] / [ ∫_{t_{N−1}−T}^{t_{N−1}+T} M_1(X, t)^{−1} dX ].  (51)

As an additional condition to find the origin, one can also consider the minimal difference requirement on the lowest-order predictors, as first suggested in [49]. Such an approach is analogous to the technique discussed in Section 2.2. However, instead of a critical index, we calculate the relaxation time. To this end, one has to construct the second-order super-exponential approximant

E_2(t, r) = A(r) exp[ (B(r)/A(r)) (t − r) exp( (C(r)τ(r)/B(r)) (t − r) ) ],
τ(r) = 1 − B(r)²/(2A(r)C(r)),  (52)

and minimize its difference with the simplest exponential approximant at the time of interest t_N + dt. Namely, one has to find all roots of the equation

exp[ (C(r)τ(r)/B(r)) (t_N + dt − r) ] = 1,  (53)

with respect to the real variable r. The corresponding multiplier,

M_2(t, r) = (1/B(r)) ∂E_2(t, r)/∂t,

can be found as well. The discrete spectrum optimization seems to be the most natural and transparent. Our goal is to find the approximants and probabilistic distributions at the last available historical point of the time series. Crashes are attributed to the stable solutions with large negative r, meaning that the origin of time has to be moved to the deep past to explain the crash in the near future.
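The weighted average (48) itself is a few lines of code (the predictor values and multipliers below are hypothetical): solutions with smaller |M_1|, i.e., the more stable ones, dominate the forecast:

```python
import numpy as np

# hypothetical predictor values E_1(t_N + dt, r_k) and multipliers M_1(t_N + dt, r_k)
E_vals = np.array([101.0, 98.5, 100.2])
M_vals = np.array([1.1, 0.2, 0.9])

weights = 1.0 / np.abs(M_vals)          # probability ~ |M_1|^{-1}
E_star = float(np.sum(E_vals * weights) / np.sum(weights))
```

Here the most stable (smallest-multiplier) solution, the downward one at 98.5, pulls the average below the plain mean of the three predictors.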
Preliminary results of Gluzman [30] suggest that, in the overwhelming majority of cases, a crash is preceded by similar, asymmetric probability pattern(s) of the type shown in the figures below. As noted in [51], Kahneman and Tversky explained that people tend to judge current events by their similarity to memories of representative events. There are also additional solutions, with multipliers of the order of unity, coming from the region of moderate r, and it is often possible to find some rather stable upward solution for large positive r. One can think that, for such stable time series as those describing population dynamics, only the region of moderate r gives relevant solutions, while for time series describing price dynamics all types of solutions may exist simultaneously.

Within our approach to constructing approximants, one can also try to exploit the second-order terms in the regression. Instead of exponential approximants, one should try some other, higher-order approximants, but with the time-translation invariance property. Such approximants are presented below. They are considered ad hoc, because they can be written in closed form only in special, low-order situations. It is not feasible to extend them systematically into arbitrary high orders. Hence our interest in special forms with the desired symmetry. Sometimes it is not even possible to find stable solutions with a single approximant, but it is still possible with corrected approximants.

Recall that the exponential function can be obtained as the solution to a simple linear first-order ODE. In the search for second-order approximants with time-translation invariance, we turned to some explicit formulas emerging in the course of solving a first-order ODE with an added nonlinear term with arbitrary positive power, which generalizes the ODE for simple exponential growth. It is known as the Bertalanffy–Richards (BR) growth model [64,65].
Among its solutions in the case of a second-order nonlinear term, there is the celebrated logistic function [64],

L(t) = q_1 / [ q_1 q_2 + (1 − q_1 q_2) exp(−q_0 t) ],

where q_1 is the initial condition. The logistic function is widely used to describe population growth phenomena and is also known to be the solution to the logistic equation of growth. The logistic function written in the form L(t, q_1), dependent on the initial condition L(0, q_1) = q_1, with arbitrary q_0, q_2, is time-translation invariant. One can also introduce the second-order logistic approximant, which generalizes the logistic function [30]. In addition to describing situations with saturation at infinity, the logistic approximant includes also the case of a so-called finite-time singularity, which makes it redundant, since such solutions were axiomatically excluded from the price dynamics [38].

Another solution to the BR model, in the case when the power of the nonlinear term only slightly differs from unity, is known as the Gompertz function [64],

G(t) = g_0 exp(g_1 exp(g_2 t)),  (54)

used to describe growth (relaxation, decay) phenomena. However, as we demonstrate in Section 2.1, it is possible to obtain G(t) directly from the resummation technique leading to Formula (16), without resorting to BR. The relaxation (growth) time behaves exponentially with time. The Gompertz function is log-time-translation invariant. One can consider the second-order Gompertz approximant, which simply generalizes the Gompertz function. Namely, one can find the Gompertz approximant in the following form,

G(t, r) = g_0(r) exp(g_1(r) exp(g_2(r)(t − r))),
g_0(r) = A(r) e^{−g_1(r)},  g_1(r) = B(r)/(A(r) g_2(r)),  g_2(r) = (2A(r)C(r) − B(r)²)/(A(r)B(r)),  (55)

with the multiplier

M_G(t, r) = g_0(r) g_1(r) g_2(r) exp( g_1(r) e^{g_2(r)(t−r)} + g_2(r)(t − r) ) / B(r).

The Gompertz approximant, of course, is not limited to situations with saturation at infinity, as it can also describe very fast decay (growth) at infinity.
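The construction (55) can be checked numerically: with made-up local regression parameters A, B, C at a fixed origin r, the Gompertz approximant reproduces A + B(t − r) + C(t − r)² through second order around t = r:

```python
import math

A, B, C, r = 100.0, 0.8, -0.05, 2.0   # made-up regression parameters at origin r

g2 = (2.0 * A * C - B * B) / (A * B)
g1 = B / (A * g2)
g0 = A * math.exp(-g1)

def gompertz(t):
    """Gompertz approximant (55)."""
    return g0 * math.exp(g1 * math.exp(g2 * (t - r)))

def quad(t):
    """The quadratic it is asymptotically equivalent to."""
    return A + B * (t - r) + C * (t - r) ** 2
```

At t = r the approximant returns exactly A, and nearby it deviates from the quadratic only at third order in (t − r).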
With r to be found from some optimization procedure, the return R generated by the Gompertz approximant is time-translation invariant and has a compact form,
$$R(\delta t) = g_1(r) \exp\big(g_2(r)(t_N - r)\big)\big(\exp(g_2(r)\,\delta t) - 1\big).$$
For small $\delta t$, it becomes particularly transparent:
$$R(\delta t) \simeq g_1(r)\, g_2(r) \exp\big(g_2(r)(t_N - r)\big)\,\delta t = \frac{\delta t}{\tau(T_N, r)},$$
with the pre-factor giving the return per unit time. The inverse return per unit time has the physical meaning of the effective time for growth (relaxation),
$$\tau(t, r) = b(t, r)^{-1} = \big(g_1(r)\, g_2(r)\big)^{-1} \exp\big(g_2(r)(r - t)\big),$$
considered at the moment $t = T_N$. Here, we employ the effective relaxation (growth) time (see Section 2.1),
$$\tau(t) = \left(\frac{d}{dt} \ln G(t)\right)^{-1},$$
and replicate it. We find that the return for the Gompertz approximant is solely determined by the relaxation time,
$$S(t, r) = \frac{1}{g_2(r)\,\tau(t, r)},$$
allowing us to express the log return in a compact form,
$$R(\delta t) = S(t_N + \delta t, r) - S(t_N, r).$$
Thus, the return for the Gompertz approximant appears as a purely dynamic quantity, not involving any consensus about equilibrium, fundamental value, etc. If the relaxation time found from the data is very large, as it should be close to equilibrium conditions [66], we have no potential for returns; i.e., near-equilibrium yields dull, everyday mundane events that are repetitive and lend themselves to statistical generalizations [48]. If the relaxation time is anticipated to be very short, we have potentially huge returns. The far-from-equilibrium conditions give rise to unique, historic events [48], or to some very fast relaxation events/crashes. The latter condition makes real markets fragile [67]. The Gompertz approximant can go at infinity faster or slower than an exponential, and in some important examples such differences, amounting to a few percent, can be detected. The function $g_0(r)$ could be called a gauge function for the price, expressing the arbitrariness in the choice of the price unit, as it does not enter the return.
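The identities above are elementary consequences of the Gompertz form and can be verified numerically; the sketch below uses hypothetical parameter values (not fitted to any data) and checks that the log return equals both the closed form and the difference of the relaxation-time function S:

```python
import math

# Hypothetical illustrative parameters: g0, g1, g2 of the Gompertz approximant,
# origin r, observation time tN and a holding period dt.
g0, g1, g2, r, tN, dt = 3000.0, 0.05, -0.3, -10.0, 15.0, 1.0

G = lambda t: g0 * math.exp(g1 * math.exp(g2 * (t - r)))
S = lambda t: g1 * math.exp(g2 * (t - r))             # S(t, r) = 1/(g2*tau(t, r))
tau = lambda t: math.exp(g2 * (r - t)) / (g1 * g2)    # effective relaxation time

R_log = math.log(G(tN + dt)) - math.log(G(tN))        # log return over dt
R_closed = g1 * math.exp(g2 * (tN - r)) * (math.exp(g2 * dt) - 1.0)
print(R_log, R_closed, S(tN + dt) - S(tN))
```

All three printed numbers coincide, confirming that the return is governed purely by the relaxation time, with $g_0$ dropping out entirely.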
The time-translation invariance of the return and the gauge invariance of the price are considered very desirable in price model formulation [38]; both properties are pertinent to the exponential and Gompertz approximations for the price temporal dynamics. We are interested in market prices on a daily level, and consider only significant market price drops/crashes with a magnitude of more than 5.5%. Such a magnitude is selected to be comparable to the typical yearly return of the Dow Jones Industrial Average index. Typically, a 2% daily move is considered big, but not at the times of various turmoils. It is widely accepted in practical finance that asset prices move in response to unexpected fundamental information. The information can be identified, as well as its tone, positive versus negative. It is found that news arrival is concentrated in days with large return movements, positive or negative [68]. Spontaneously emerging narratives, a simple story or easily expressed explanation of events, might be considered as largely exogenous shocks to the aggregate economy [51]. Simply put, one should analyze what people are talking about in the search for the source of economic fluctuations. Moreover, as in true epidemics governed by evolutionary biology, mutations in narratives spring up randomly and, if contagious, generate unpredictable changes in the economy [51]. As noted by Harmon et al. [69], panic in the market can be due to external shocks or self-generated nervousness. It is argued [70] that cause and effect can be cleanly disentangled only in the case of exogenous shocks, as one only needs to select some interesting set of shocks to which the price is likely to respond. The effects of positive and negative oil price shocks on stock prices need not be symmetric. In macroeconomics, it is even accepted that only positive changes in the price of oil have important effects.
Periods dominated by oil price shocks are reasonably easy to identify, and they can indeed be considered exogenous as well as, often, strong, although difficult to model. Oil price shocks are the leading alternative to monetary shocks and may very well have similar effects [70]. Our goal here is not to forecast or time the crash, but to study the crash as a particular phenomenon created by spontaneous time-translation symmetry breaking/restoration. In essence, we ask the following questions:
1. What probabilistic pattern would an observer see the day before a crash?
2. What would be the market reaction (expressed through the index), if we are aware that a Swan of some color has already arrived?
In our opinion, in the presence of a Swan, understood as a shock of unspecified strength, the problem simplifies because of a reduced set of outcomes, dominated by the most extreme, very stable downward solution. Consider that, in the natural sciences, most efforts are dedicated to creating a correct experimental setup. Studying the reaction to a shock is the only currently viable substitute for clean experimental conditions.

3.3. Examples

Consider as an example the 7.72% drop in the value of the Shanghai Composite index related to the first COVID-19 crash, which occurred on 3 February 2020. With N = 15, as recommended in [38], the following data points are available:
$s_0$ = 3085.2, $s_1$ = 3083.79, $s_2$ = 3083.41, $s_3$ = 3104.8, $s_4$ = 3066.89, $s_5$ = 3094.88, $s_6$ = 3092.29, $s_7$ = 3115.57, $s_8$ = 3106.82, $s_9$ = 3090.04, $s_{10}$ = 3074.08, $s_{11}$ = 3075.5, $s_{12}$ = 3095.79, $s_{13}$ = 3052.14, $s_{14}$ = 3060.75, $s_{15}$ = 2976.53.
The value of $s_{16}$ = 2746.61 is to be “predicted”. From the whole set of daily data, we employ only several values of the closing price. Such a coarse-grained description of the time series may be justified if one is interested in a phenomenon, such as a crash, that does not depend on the fine details.
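A quick consistency check on the numbers quoted above: the drop from the last available closing price $s_{15}$ to the "predicted" value $s_{16}$ indeed amounts to 7.72%:

```python
# Closing prices of the Shanghai Composite preceding 3 February 2020,
# as listed above (s_0 ... s_15); s_16 = 2746.61 is the crash value.
s = [3085.2, 3083.79, 3083.41, 3104.8, 3066.89, 3094.88,
     3092.29, 3115.57, 3106.82, 3090.04, 3074.08, 3075.5,
     3095.79, 3052.14, 3060.75, 2976.53]
crash = 2746.61

drop_pct = 100.0 * (s[-1] - crash) / s[-1]
print(round(drop_pct, 2))  # ≈ 7.72 (% drop on the crash day)
```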
In the examples presented below, we keep the number of data points per quartic regression parameter in the range 3–4. Lower-order calculations can be found in [30]. Here, we show only the quartic regression,
$$s_{0,4}(t) = a_4 + b_4 t + c_4 t^2 + d_4 t^3 + f_4 t^4,$$
and based on it optimize the approximants and multipliers. It can be replicated as follows:
$$s_{r,4}(t) = A_4(r) + B_4(r)(t - r) + C_4(r)(t - r)^2 + D_4(r)(t - r)^3 + F_4(r)(t - r)^4,$$
with
$$A_4(r) = a_4 + b_4 r + c_4 r^2 + d_4 r^3 + f_4 r^4, \quad B_4(r) = b_4 + 2c_4 r + 3d_4 r^2 + 4 f_4 r^3,$$
$$C_4(r) = c_4 + 3d_4 r + 6 f_4 r^2, \quad D_4(r) = d_4 + 4 f_4 r, \quad F_4(r) = f_4.$$
With such transformed parameters, we have $s_{r,4}(t) \equiv s_{0,4}(t)$.
Within the data shown in Figure 2, one can discern competing trends. First, let us show the data compared to the regression. There are two obvious trends, “up” and “down”, as can be seen in Figure 2.

Figure 2. COVID-19, Shanghai Composite, 3 February 2020. Fourth-order regression is shown against data points.

Our analysis indeed finds highly probable solutions of both types, with the downward trend developing into fast exponential decay. Let us analyze the typical approximant and multiplier dependencies on the origin, for fixed time $t = T_N$. The inverse multiplier is shown as a function of the origin r in Figure 3, as well as the first-order approximant.

Figure 3. Shanghai Composite, 3 February 2020. Calculations with fourth-order regression. The inverse multiplier $|M_1^*(15, r)^{-1}|$ is shown as a function of the origin r at $t = T_N$, N = 15. The first-order approximant $E_1^*(15, r)$ is shown in a separate figure. The crash level s is shown as well, with a dot-dashed line.

There are two uneven humps in the probabilistic inverse multiplier, suggesting that large negative and large positive r dominate, with more weight put on the negative region.
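The replication identity $s_{r,4}(t) \equiv s_{0,4}(t)$ is a simple Taylor re-expansion about the origin r and can be checked directly; the sketch below uses hypothetical quartic coefficients (not fitted to the market data) purely for illustration:

```python
# Hypothetical quartic coefficients (a4, b4, c4, d4, f4) for illustration;
# in the text they come from least-squares regression on the price data.
a4, b4, c4, d4, f4 = 3100.0, -2.0, 0.8, -0.05, 0.001

def s04(t):
    """Original quartic regression s_{0,4}(t)."""
    return a4 + b4 * t + c4 * t**2 + d4 * t**3 + f4 * t**4

def sr4(t, r):
    """Replicated regression s_{r,4}(t) with origin shifted to r."""
    A = a4 + b4 * r + c4 * r**2 + d4 * r**3 + f4 * r**4
    B = b4 + 2 * c4 * r + 3 * d4 * r**2 + 4 * f4 * r**3
    C = c4 + 3 * d4 * r + 6 * f4 * r**2
    D = d4 + 4 * f4 * r
    F = f4
    u = t - r
    return A + B * u + C * u**2 + D * u**3 + F * u**4

print(s04(15.0), sr4(15.0, -50.0))  # identical up to rounding, for any r
```

The transformed parameters are just the Taylor coefficients of $s_{0,4}$ at t = r, so the polynomial itself is unchanged; only its parameterization, and hence the approximants built on it, depends on r.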
Such dependence on r manifests the violation of time-translation invariance, which should be lifted by finding an appropriate origin. More details on the example can be found in [30]. Below, we discuss only the fourth-order calculations. The result of extrapolation by the method expressed by Equation (47) is given as
$$E_1^*(16) = 2804.32, \quad M_1^*(16) = 0.0113494,$$
with a relative percentage error of 2.1%. There is also a less stable “upward” solution,
$$E_1^*(16) = 3211.95, \quad M_1^*(16) = 0.0363796,$$
in agreement with the intuitive picture based on naive data analysis. There are also two additional solutions in between, with multipliers close to 1. They do not affect averages much, but in real time the metastable solutions, similar to the metastable phases in condensed matter, may show up under special conditions. Metastable solutions, when realized, violate the principle of maximal stability over the observation timescale, complicating or even negating a unique forecast based on weighted averages or the most stable solution. Calculation of the discrete spectrum can be extended to different approximants. For instance, one can also construct the second-order Gompertz approximant introduced above, and solve the following equation on origins:
$$G(t_N, r) = s(t_N). \qquad (56)$$
The most stable Gompertz approximant gives the most accurate estimate,
$$G(16) = 2746.05, \quad M_G(16) = 0.001539,$$
with a very small error of 0.02%. There are altogether five solutions to (56) in the discrete spectrum, as shown in Figure 1. Thus, the Gompertz approximant of second order with log-time-translation invariance gives better results than the symmetric exponential approximant $E_1^*$. Although Taleb’s Black Swan did seem to materialize, the short-time stock market response was not different from that in somewhat comparable instances of crashes brought up in [30], making it look like a Grey Swan. Indeed, it is plausible that the holiday season in China played a role here.
It also helped our cause, effectively pinpointing the day of the crash. One can think that all solutions, except the most extreme downward solution, were simply not considered. Consider several of the most spectacular examples of crashes from the tumultuous spring and summer of 2020, caused by a combination of economic causes, such as the oil anti-shock, and enormous COVID-19-related disruptions: a rare constellation of Two Swans of Gray coming together! There was a month-long delay until the DJ crashed. All three conspicuous crashes from March 2020 can be considered as exponentially accelerated decay.

Black Monday I. The drop in the DJ Industrial of 7.79%, to the value of s = 23,851 on 9 March 2020 (Black Monday I), was caused by the shock from the coronavirus, as demonstrated in Figure 4. The data and the components defining the spectrum of scenarios are presented. Again, there are two asymmetric humps in the probabilistic space, and the region of large negative r dominates. The extrapolation by the most stable solution results in
$$E_1^*(19) = 24257.9, \quad M_1^*(19) = 0.00629791,$$
with a relative error of 1.7%. There is also an “upward” solution, less stable by an order of magnitude, as well as four additional solutions in between, with multipliers of the order of unity. Using the same methodology, we obtain the Gompertz approximant, and find that it gives a rather good extrapolation,
$$G(19) = 23669.1, \quad M_G(19) = 0.000805813,$$
with a very small multiplier, and an accuracy of 0.76%. There is also an upward solution, less stable by an order of magnitude. Averaging the two solutions improves the estimate to an error of only 0.52%.

Black Thursday. The drop of 9.99%, to the level of s = 21,200.6 on 12 March 2020 (Black Thursday), is also believed to have been caused by the coronavirus shock. In this case, we use the standard dataset with N = 15 and the third-order regression to see the typical pattern shown in Figure 5.
There is again a marked asymmetry in the graphs for the components in the probabilistic space, as the region of large negative r prevails. The extrapolation by the most stable solution gives
$$E_1^*(16) = 22{,}237.1, \quad M_1^*(16) = 0.0371606,$$
bringing the numerical error to 4.89%. There is also a much less stable “upward” solution. Using the same methodology for finding the discrete spectrum, we obtain the Gompertz approximant, and find that it gives a rather good result,
$$G(16) = 21{,}800.2, \quad M_G(16) = 0.00997846,$$
with a very small multiplier and an accuracy of 2.83%. There is also an additional solution, even slightly more stable, leading to a super-fast decay almost to zero. Such a scenario, obviously, is absent in calculations with pure exponential approximants.

Figure 4. Black Monday I. Pattern in the DJ Industrial index preceding 9 March 2020. The non-monotonous decay pattern reminds one of a hockey stick. Fourth-order regression is shown against data points. The inverse multiplier $|M_1^*(18, r)^{-1}|$ is shown as a function of the origin r at $t = T_N$, N = 18. The first-order approximant $E_1^*(18, r)$ is shown in separate figures. Level s = 25,864.8 is shown with a dot-dashed line.

Figure 5. Black Thursday. Pattern in the DJ Industrial index preceding 12 March 2020. Monotonous decay pattern. Third-order regression is shown against data points. The inverse multiplier $|M_1^*(15, r)^{-1}|$ is shown as a function of the origin r at $t = T_N$, N = 15. The first-order approximant $E_1^*(15, r)$ is shown in separate figures. Level s = 23,553.2 is shown with a dot-dashed line.

Black Monday II.
Consider also the massive crash of 12.93%, to the value of s = 20,188.5 on 16 March 2020 (Black Monday II), caused also by the oil anti-shock. Because the USA is the largest producer of oil, the big drop in oil prices (an anti-shock) caused an effect typically attributed to an oil shock. In this case, we again use the dataset of standard length with N = 15, to see the typical pattern shown in Figure 6. It demonstrates the data, approximant and multiplier.

Figure 6. Black Monday II. Pattern in the DJ Industrial index preceding 16 March 2020. Non-monotonous decay pattern. Fourth-order regression is shown against data points. The inverse multiplier $|M_1^*(15, r)^{-1}|$ is shown as a function of the origin r at $t = T_N$, N = 15. The first-order approximant $E_1^*(15, r)$ is shown in separate figures. Level s = 23,185.6 is shown with a dot-dashed line.

There are two typical asymmetric humps in the probabilistic space, and the region of large negative r dominates. The extrapolation by the most stable solution gives the following values,
$$E_1^*(16) = 20{,}810.7, \quad M_1^*(16) = 0.00777882,$$
bringing the numerical error to 3.08%. There is also a much less stable “upward” solution,
$$E_1^*(16) = 27{,}387, \quad M_1^*(16) = 0.058839,$$
as well as two additional solutions in between, with multipliers of the order of unity. Using the same optimization methodology, we obtain the Gompertz approximant, and find the extrapolation
$$G(16) = 19{,}987.4, \quad M_G(16) = 0.00100679,$$
with an accuracy of 0.996%.

Fear of a second wave of coronavirus. The bubble configuration corresponds to the price (index) going up monotonously, with a rapid change of direction at some point, during a time scale of the order of the time-series resolution. The growth finally becomes unsustainable. The crash of 11 June 2020 had started overnight. The index dropped to s = 25,128.2, corresponding to a mini-crash of 6.9%.
For the dataset of length N = 16, we observe an almost perfect bubble, as shown in Figure 7. It demonstrates the data, approximant and multiplier as functions of the origin.

Figure 7. Temporal bubble in the Dow Jones Industrial index, preceding the mini-crash of 11 June 2020. Fourth-order regression is shown against data points. The first-order approximant $E_1^*(16, r)$ and multiplier are shown in separate figures. Level s = 26,990 is shown with a dot-dashed line.

There is also a marked asymmetry in the probabilistic space, and the region of large negative r dominates. In the current case, the pattern appeared before the very day of the crash and evolved into the mini-crash due to the overnight shock. Extrapolation by the most stable solution results in
$$E_1^*(17) = 25{,}641, \quad M_1^*(17) = 0.0124981,$$
bringing an error of 2.04%. There is also a less stable “upward” solution,
$$E_1^*(17) = 28{,}814.7, \quad M_1^*(17) = 0.0435021,$$
as well as two additional solutions in between, with multipliers of the order of unity. Similar calculations with the Gompertz approximant give a better estimate for the crash,
$$G(17) = 25{,}189.9, \quad M_G(17) = 0.00169455,$$
with an error of just 0.25%. One can think that fear of a second coronavirus wave leads to self-generated nervousness and panic [69], having the net result of a shock. Bubbles are quite rare patterns in the DJ index and are more typical of the Shanghai Composite [30].

3.4. Comments

Many more examples of various notable crashes can be found in [30]. They were selected to exemplify market reaction to various shocks, including 9/11, the Fukushima disaster, the US entrance into the Great War, the death of Chinese leader Deng Xiaoping, Friday the 13th, the flash crash, etc., and to demonstrate the similarity of early panics with the coronavirus recession.
Despite their different “geometry”, the different temporal patterns preceding crashes exhibit probabilistic distributions analogous in their main features, with a significant difference only in the region of moderate r, but with an analogous structure for large negative and positive origins. Crashes are attributed to the stable solutions with large negative r, meaning that the origin of time has to be moved to the deep past to explain the crash in the near future. Preliminary results of Gluzman [30] suggest that, in the overwhelming majority of cases, a crash is preceded by similar, asymmetric probability pattern(s) of the type shown in the figures of this section. Exponential and Gompertz approximants are found to work rather well, despite (or possibly due to) their simplicity. Unlike all other approximants, they give very clear graphic snapshots of the probabilistic space. Besides, their application is grounded in the exponential form of any futures contract, with a transparent interpretation of the renormalized trend parameter $b(t, r)$ as the expected return per unit time, equivalent to the inverse relaxation (growth) time. Our theory explains, or at least gives a hint, why making predictions about the future is so notoriously difficult. Instead of a unique, ironclad solution to the problem, we advocate finding all solutions and interpreting them as bounds, as plainly illustrated in Figure 1. Bounds are given different strengths, a priori determined by multipliers. Reality is not completely confined to reaching the most stable bound, but various metastable bounds can be realized as well, blurring the picture and complicating the emergent time dynamics. After applying some arguments concerned with broken/restored time-translation invariance, we come to the exponential solution with an explicit finite time scale, which was only implicit in the initial parameterization with polynomial regressions.
In condensed matter physics and field theory, there is a key Meissner–Higgs mechanism for generating mass or, equivalently, for creating some typical space scale from the original fields through a broken-symmetry technique (see, e.g., [71]). Relatively recently, the concept was confirmed, culminating in the discovery of the Higgs boson. Our approach to market price evolution is by all means inspired by the Meissner–Higgs effect. However, instead of the mass of mind-boggling elementary particles, we have a mundane, but highly sought after, return per unit time.

Funding: This research received no external funding.

Conflicts of Interest: The author declares no conflict of interest.

Appendix A. Critical Index Calculations with Padé and DLog Padé Techniques

For low Reynolds numbers R, the flow of a viscous fluid through a channel is described by the well-known Darcy’s law. The Darcy law describes a linear relation between the average pressure gradient $\nabla p$ and the average velocity $u$ along the pressure gradient [72]. It is given as follows,
$$|\nabla p| = \frac{\eta}{K}\, u, \qquad (A1)$$
where $K$ stands for the permeability and $\eta$ is the dynamic viscosity of the fluid. The permeability simply characterizes the amount of viscous fluid flowing through a porous medium per unit time and unit area when a unit macroscopic pressure gradient is applied to the system [12]. The Poiseuille flow is a classic example which obeys Darcy’s law. It unfolds in the channel bounded by two parallel planes separated by a distance 2b, generated by an average pressure gradient $\nabla p$. The flow profile is known to be parabolic when the Reynolds number is small. When the channel is “wavy”, i.e., not straight, and when the Reynolds number is not negligible, additional terms appear in this relation. The Darcy law holds in the interesting cases of the Stokes flow through a channel with two-dimensional and three-dimensional wavy walls.
The enclosing wavy walls are described by analytical expressions including the amplitude of waviness. The amplitude is proportional to the mean clearance of the channel and is multiplied by the small dimensionless parameter $\varepsilon$. We briefly discuss below the main steps of the derivation leading to the expansions for the permeability, as obtained by Mityushev, Malevich and Adler. In Ref. [35], a general asymptotic analysis was applied to a Stokes flow in a curvilinear three-dimensional channel. It is bounded by walls of rather general shape, described as follows:
$$z = S_2(x_1, x_2) \equiv b\big(1 + \varepsilon T(x_1, x_2)\big), \qquad (A2)$$
$$z = S_1(x_1, x_2) \equiv -b\big(1 + \varepsilon B(x_1, x_2)\big). \qquad (A3)$$
The formally small dimensionless parameter $\varepsilon \geq 0$ is considered. It is introduced in such a way as to allow the general shape to be recast as a geometric perturbation around the straight channel. The expansion then is accomplished around the straight channel, considered as the zeroth approximation. Such an approach builds on the original work by Pozrikidis [73]. In [12,35], arbitrary profiles $S_i(x_1, x_2)$ were explored. It was assumed only that they satisfy some natural conditions, such as
$$|T(x_1, x_2)| \leq 1 \quad \text{and} \quad |B(x_1, x_2)| \leq 1. \qquad (A4)$$
Infinite differentiability is assumed for the functions $T(x_1, x_2)$ and $B(x_1, x_2)$. Such an assumption was made in order to calculate velocities and permeability, and to solve an emerging cascade of boundary value problems for the Stokes equations in a straight channel [35]. The influence of the curvilinear edges on the flow is of significant theoretical interest, as it illustrates the mechanism of viscous flow under different geometrical conditions. To make our paper self-consistent, we provide below some general information about the mathematical formulation of the problem and some permeability definitions. Let $u = u(x_1, x_2, x_3)$ be the velocity vector, and $p = p(x_1, x_2, x_3)$ the pressure.
The flow of a viscous fluid through the channel is considered under the condition that the Reynolds number is small and the Stokes flow approximation is valid. The fluid is governed by the Stokes equations. The solution $u$ of the Stokes equations is sought within the class of functions periodic with period 2L both in the variable $x_1$ and in the variable $x_2$. Let also $u$ be the $x_1$-component of the velocity vector. Let an overall external pressure gradient $\nabla p$ be applied along the $x_1$-direction. It corresponds to a constant jump $2L\nabla p$ along the $x_1$-axis of the periodic cell. Then, the permeability of the channel in the $x_1$-direction, $K_x(\varepsilon)$, is defined as the result of integration,
$$K_x(\varepsilon) = \frac{\eta}{|\tau|\, |\nabla p|} \int_{-L}^{L}\int_{-L}^{L} dx_1\, dx_2 \int_{S_1(x_1, x_2)}^{S_2(x_1, x_2)} u(x_1, x_2, x_3)\, dx_3. \qquad (A5)$$
Here, $|\tau|$ stands for the volume of the unit cell Q of the channel. The sought $K_x(\varepsilon)$ in (A5) is expressed explicitly as a function of $\varepsilon$. More precisely, we are interested in the ratio $K = K(\varepsilon)$ of the dimensional permeability for the curvilinear channel and the permeability of the Poiseuille flow. Most important for our methodology, the formulae of Mityushev, Malevich and Adler [35] determine the coefficients of a Taylor series expansion for the permeability,
$$K(\varepsilon) = \sum_{n=0}^{\infty} c_n \varepsilon^n,$$
with the normalization with respect to the dimensional permeability of the Poiseuille flow. In practical computations, $K(\varepsilon)$ is approximated by means of truncation, leading to the Taylor polynomial of order k,
$$K_k(\varepsilon) = \sum_{n=0}^{k} c_n \varepsilon^n. \qquad (A6)$$
The domain of application of this formula appears to be restricted: the corresponding Taylor series is divergent for larger $\varepsilon$.

Appendix A.1. Symmetric Sinusoidal Two-Dimensional Channel: Walls Can Touch

Mityushev, Malevich and Adler [35] considered the following bounded two-dimensional channel,
$$z = -b(1 + \varepsilon \cos x), \qquad z = b(1 + \varepsilon \cos x). \qquad (A7)$$
The expansion for the permeability was found up to $O(\varepsilon^{32})$, for b = 0.5. This example is popular among researchers, as documented in [35].
The following truncated polynomial for the permeability as a function of the “waviness” parameter $\varepsilon$ was presented:
$$K(\varepsilon) = 1 - 3.14963\varepsilon^2 + 4.08109\varepsilon^4 - 3.48479\varepsilon^6 + 2.93797\varepsilon^8 - 2.56771\varepsilon^{10} + 2.21983\varepsilon^{12} - 1.93018\varepsilon^{14} + 1.67294\varepsilon^{16} - 1.45302\varepsilon^{18} + 1.26017\varepsilon^{20} - 1.09411\varepsilon^{22} + 0.949113\varepsilon^{24} - 0.823912\varepsilon^{26} + 0.714804\varepsilon^{28} - 0.620463\varepsilon^{30} + O(\varepsilon^{32}). \qquad (A8)$$
On the other hand, for larger $\varepsilon$, a lubrication approximation was discussed by Adler [72]. It is motivated by the solution in the case of two cylinders of different radii that are almost in contact with one another along a line. As $\varepsilon \to \varepsilon_c = 1$, we arrive at the following power law,
$$K \simeq \frac{8\sqrt{2}\, b^2}{9\pi} (1 - \varepsilon)^{5/2}. \qquad (A9)$$
It has the general critical form, with the critical index for permeability $\varkappa = 5/2$. The critical amplitude can be extracted as well, so that $A = \frac{8\sqrt{2}\, b^2}{9\pi}$. In the case under consideration, we calculate A = 0.100035. The reasons for the failure of the lubrication approximation are explained in [35,72], as well as in [12]. In a nutshell, the main assumption of the lubrication approximation is that the velocity has a parabolic profile. Even for plane channels [35], the lubrication approximation gives correct results only for channels in which the mean surface is sufficiently close to a plane and for small values of $\varepsilon$. In what follows, we completely avoid the lubrication approximation by following the approach of Gluzman [12] (Chapter 7). The technique of approximants allows approaching the critical region, when the walls nearly touch, based only on the expansion (A8). As an input, we have the polynomial approximation (A8) of the function $K(\varepsilon)$. We intend to calculate the critical index and amplitude(s) of the asymptotically equivalent approximants in the vicinity of the threshold $\varepsilon = \varepsilon_c = 1$. When such an extrapolation problem is solved, one can proceed with an interpolation problem. In the latter case, assuming that the critical behavior is known in advance, one can derive a compact formula valid for all $\varepsilon$ (see Chapter 7, [12]).
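A quick numerical check of the critical amplitude of the lubrication formula (A9): for b = 0.5, the combination $8\sqrt{2}\, b^2/(9\pi)$ indeed evaluates to the quoted value 0.100035:

```python
import math

def lubrication_amplitude(b):
    """Critical amplitude A = 8*sqrt(2)*b**2/(9*pi) of the lubrication
    estimate (A9), K ≈ A*(1 - eps)**(5/2) near the touching point eps -> 1."""
    return 8.0 * math.sqrt(2.0) * b**2 / (9.0 * math.pi)

print(round(lubrication_amplitude(0.5), 6))  # ≈ 0.100035
```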
Let us calculate the index and amplitude for the critical behavior written in the general form
$$K(\varepsilon) \simeq A(\varepsilon_c - \varepsilon)^{\varkappa}, \quad \text{as } \varepsilon \to \varepsilon_c - 0, \qquad (A10)$$
with unknown index and amplitude. Let us first apply the transformation
$$z = \frac{\varepsilon}{1 - \varepsilon}, \qquad \varepsilon = \frac{z}{z + 1},$$
to the series (A8). The transformation makes the technical application of the different approximants more convenient. To the transformed series $M^*(z)$, let us apply the DLog transformation and obtain the transformed series $M(z)$. In terms of $M(z)$, one can readily obtain a sequence of Padé approximations for the critical index $\varkappa$. Namely, we obtain the sequence of values
$$\varkappa_n = \lim_{z \to \infty} \big(z\, \mathrm{PadeApproximant}[M[z], n, n + 1]\big), \qquad (A11)$$
as described in Section 2. The approximations for the critical index generated by the sequence of Padé approximants converge nicely to the value 5/2, as shown below:
$$\varkappa_1 = 2.57972, \;\; \varkappa_2 = 2.30995, \;\; \varkappa_3 = 2.47451, \;\; \varkappa_4 = 2.49689, \;\; \varkappa_5 = 2.4959, \;\; \varkappa_6 = 2.49791, \;\; \varkappa_7 = 2.49923, \;\; \varkappa_8 = 2.50113, \;\; \varkappa_9 = 2.50028, \;\; \varkappa_{10} = 2.49783, \;\; \varkappa_{11} = 2.49778, \;\; \varkappa_{12} = 2.49829, \;\; \varkappa_{13} = 2.49836.$$
This result agrees well with estimates by the optimization technique of Section 2.3. If $B_n(z) = \mathrm{PadeApproximant}[M[z], n, n + 1]$, then one can also find the approximation for the permeability,
$$K_n(\varepsilon) = \exp\left(\int_0^{\varepsilon/(1-\varepsilon)} B_n(z)\, dz\right), \qquad (A12)$$
and compute the corresponding amplitude,
$$A_n = \lim_{\varepsilon \to \varepsilon_c} (\varepsilon_c - \varepsilon)^{-\varkappa} K_n(\varepsilon). \qquad (A13)$$
The typical value of the amplitude is found to be A = 3.7758. It appears to be larger by an order of magnitude than the value deduced from the lubrication approximation. Now, let us fix the critical index to the value of 5/2 obtained from the extrapolation procedure. One can then calculate A using the standard Padé technique, finding the value of 3.77188. The latter result turns out to be very close to the value just found from the extrapolation. It was illustrated by Gluzman [12] (Chapter 7) how the lubrication approximation breaks down even in a close vicinity of $\varepsilon_c$.
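The DLog-Padé pipeline of (A11) can be sketched on a toy function with a known critical index. The code below uses the stand-in $f(\varepsilon) = (1-\varepsilon)^{5/2} e^{\varepsilon}$ (an assumption for illustration, not the permeability series); in the variable $z = \varepsilon/(1-\varepsilon)$ its logarithmic derivative is rational, so the limit of $z\,P_{n,n+1}(z)$ recovers $\varkappa = 5/2$ exactly already at low order:

```python
import numpy as np

def pade(c, n, m):
    """[n/m] Pade approximant of sum(c[k]*x**k); returns numerator p, denominator q
    coefficient arrays (q[0] = 1), obtained from the standard linear system."""
    A = np.zeros((m, m))
    for i in range(m):                      # match orders n+1 .. n+m
        for j in range(m):
            k = n + 1 + i - (j + 1)
            A[i, j] = c[k] if k >= 0 else 0.0
    rhs = -np.array([c[n + 1 + i] for i in range(m)], dtype=float)
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    p = np.array([sum(c[k - j] * q[j] for j in range(min(k, m) + 1))
                  for k in range(n + 1)])
    return p, q

# Toy model with known index 5/2: f(eps) = (1 - eps)**2.5 * exp(eps).
# In z = eps/(1 - eps): d/dz log f = -2.5/(1+z) + 1/(1+z)**2, with Taylor
# coefficients c_k = (-1)**k * (k - 1.5).
c = [(-1) ** k * (k - 1.5) for k in range(8)]
p, q = pade(c, 1, 2)
kappa = -p[-1] / q[-1]    # -lim_{z->inf} z * P_{1,2}(z)
print(kappa)
```

Here the [1/2] Padé approximant is already exact because the toy logarithmic derivative is rational; for the genuine series (A8) one instead observes the gradual convergence of $\varkappa_n$ listed above.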
The truncated polynomial is applicable only for small and moderately large $\varepsilon$, breaking down for larger $\varepsilon$ in the vicinity of the critical point. But the final formula derived by means of a factor approximant is qualitatively correct for all $\varepsilon$. Obviously, the standard Padé approximants are not able to capture the non-trivial power law in the vicinity of the critical point $\varepsilon_c$.

Appendix A.2. Symmetric Sinusoidal Two-Dimensional Channel: Example 2

Let us again consider the channel bounded by the surfaces (A7), but with a different parameter, b = 0.25. The truncated polynomial $K(\varepsilon)$ was obtained by Mityushev, Malevich and Adler [35] as well,
$$K(\varepsilon) = 1 - 3.03748\varepsilon^2 + 3.54570\varepsilon^4 - 2.33505\varepsilon^6 + 1.35447\varepsilon^8 - 0.83303\varepsilon^{10} + 0.49762\varepsilon^{12} - 0.30350\varepsilon^{14} + 0.18185\varepsilon^{16} - 0.11083\varepsilon^{18} + 0.06636\varepsilon^{20} - 0.04051\varepsilon^{22} + 0.02419\varepsilon^{24} - 0.00880\varepsilon^{26} - 0.00544\varepsilon^{28} + O(\varepsilon^{30}). \qquad (A14)$$
Again, as in the previous example, we follow Chapter 7 of the book [12], where the case was researched in great detail. Using Formula (A11), we found excellent convergence in the sequence of estimates for the index:
$$\varkappa_1 = 2.64456, \;\; \varkappa_2 = 2.41346, \;\; \varkappa_3 = 2.49488, \;\; \varkappa_4 = 2.49992, \;\; \varkappa_5 = 2.49991, \;\; \varkappa_6 = 2.50026, \;\; \varkappa_7 = 2.50068, \;\; \varkappa_8 = 2.50087, \;\; \varkappa_9 = 2.50086, \;\; \varkappa_{10} = 2.50063, \;\; \varkappa_{11} = 2.50063, \;\; \varkappa_{12} = 2.50086, \;\; \varkappa_{13} = 2.50087, \;\; \varkappa_{14} = 2.50068, \;\; \varkappa_{15} = 2.50026,$$
leading to the same value of the index as above, $\varkappa = 5/2$. This result agrees with estimates by the optimization technique of Section 2.3. Clearly, the standard Padé technique fails. The value of the amplitude is estimated as well, as A = 3.77362. Both the amplitude and the index appear to be independent of the parameter b, suggesting a universal regime in the vicinity of $\varepsilon_c$. Interpolating with the known critical index, one can calculate the amplitude A using the standard Padé technique, finding again the very close value of A ≈ 3.77316. As in the previous example, the lubrication approximation breaks down even in a close vicinity of $\varepsilon_c$.
The truncated polynomial is applicable only for small and moderately large $\varepsilon$, breaking down for larger $\varepsilon$ in the vicinity of the critical point. However, the final formula derived by means of a factor approximant is qualitatively correct for all $\varepsilon$ (for more details, see Chapter 7, [12]). The critical index, amplitude and overall behavior of the permeability in the vicinity of $\varepsilon_c$ practically do not depend on the parameter b [12].

Appendix A.3. Parallel Sinusoidal Two-Dimensional Channel: Walls Cannot Touch

Let us proceed to a case principally different from the two cases just studied. Consider the channel bounded by the surfaces
$$z = b(1 + \varepsilon \cos x), \qquad z = -b(1 - \varepsilon \cos x), \qquad (A15)$$
with b = 0.5 [35]. It is not possible for the walls to touch, and the permeability remains finite, but it is expected to decay as a power law as $\varepsilon$ becomes large. Instead of a critical transition from a permeable to a non-permeable phase, we have a non-critical transition, or crossover, as defined in [15]. The crossover is from high to low permeability and unravels with increasing parameter $\varepsilon$. The crossover can still be characterized by a power law, as one can study the corresponding critical index at large $\varepsilon$. Eddies are not expected in such channels even for very large $\varepsilon$ [35]. However, for large b, eddies are not excluded [35]. The truncated series expansion for the permeability was calculated up to $O(\varepsilon^{32})$,
$$K(\varepsilon) = 1 - 2.53686 \times 10^{-1}\varepsilon^2 + 4.28907 \times 10^{-2}\varepsilon^4 - 5.46188 \times 10^{-3}\varepsilon^6 + 4.54695 \times 10^{-4}\varepsilon^8 + 9.0656 \times 10^{-6}\varepsilon^{10} - 1.41572 \times 10^{-5}\varepsilon^{12} + 3.76584 \times 10^{-6}\varepsilon^{14} - 6.72021 \times 10^{-7}\varepsilon^{16} + 7.58331 \times 10^{-8}\varepsilon^{18} + 2.34495 \times 10^{-9}\varepsilon^{20} - 4.59993 \times 10^{-9}\varepsilon^{22} + 1.88446 \times 10^{-9}\varepsilon^{24} - 8.6005 \times 10^{-11}\varepsilon^{26} + 3.34156 \times 10^{-9}\varepsilon^{28} + 1.63748 \times 10^{-9}\varepsilon^{30}. \qquad (A16)$$
In this case, it is well understood that the velocity is analytic in $\varepsilon$ in the disk $|\varepsilon| < \varepsilon_0$. Therefore, one can deduce that (A16) is valid for $\varepsilon < \varepsilon_0$, where $\varepsilon_0$ is of order $\frac{1}{bc}$, with c being the maximal wave number of $T(x_1, x_2)$ and $B(x_1, x_2)$.
However, to extend K(e) for e > e_0, it was suggested to apply the Padé approximation to the polynomial (A16), which agrees with it up to 30th order. The Padé approximant of the order (10, 20), denoted here as K_{10,20}(e), was first developed by Malevich, Mityushev and Adler [35]. Its explicit expression can also be found in Chapter 7 of the book [12]. This approximant gives K_{10,20}(e) ~ e^(−10), as e → ∞. One can think then that the permeability decays as K(e) ≃ B e^(−n), as e → ∞, with the critical index n different from the estimate given by K_{10,20}(e). Calculation of the critical index n was accomplished in Chapter 7 of the book [12]. Assuming that the small-variable expansion for the function is given by the truncated sum (A16), we can find the corresponding small-variable expression for the effective critical exponent, which equals e (d/de) log K(e). By applying the method of Padé approximants to the obtained series, as in the two previous examples, the sought approximate expression for the critical exponent,

n_k = lim_{e→∞} e P_{k,k+1}(e),   (A17)

can be computed dependent on the approximation order k. Application of the method to the truncated power series (A16) is straightforward and strongly suggests the value n = 4, as can be seen in Figure A1. This result agrees with estimates by the optimization technique of Section 2.3. Clearly, the Padé estimate mentioned above fails. The amplitude B, corresponding to k = 14, is equal to 44.5872.

Figure A1. The index n at infinity, shown dependent on the approximation number k. The values found by computing (A17) are shown with black circles. They are compared with the most plausible value of −4 (shown with gray circles).

Assume now that n = 4 and construct the sequence of Padé approximants P_{n,n+4} for the original truncated polynomial (A16). There is a convergence in the approximation sequence for the amplitude B.
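Both the index limit (A17) and the amplitude reduce to ratios of leading Padé coefficients, which can be verified on toys with known answers. The sketch below assumes scipy and uses hypothetical test functions, not the permeability series itself.

```python
# Two leading-coefficient tricks used above, checked on toy functions.
# (1) Index at infinity, Eq. (A17): for a Pade approximant P_{k,k+1}(e)
#     of the log-derivative series, e * P(e) tends, as e -> infinity,
#     to the ratio of the top numerator and denominator coefficients.
# (2) Amplitude: with the index fixed at 4, B = lim e^4 K(e) is again a
#     ratio of top Pade coefficients.
from scipy.interpolate import pade

# (1) f(e) = 1/((1+e)^3 (1+2e)) decays as e^-4; its log-derivative is
#     -3/(1+e) - 2/(1+2e), with Taylor coefficients:
an = [-3.0 * (-1.0) ** k - 2.0 * (-2.0) ** k for k in range(4)]
p, q = pade(an, 2)                    # P_{1,2} of the log-derivative
n_limit = p.coeffs[0] / q.coeffs[0]   # lim e*P(e); critical index n = -n_limit
print(n_limit)                        # -> -4.0

# (2) K(e) = 9/(1+e^2)^2 ~ 9 e^-4 at large e, so the amplitude is B = 9.
an2 = [9.0, 0.0, -18.0, 0.0, 27.0]    # Taylor coefficients of this K
p2, q2 = pade(an2, 4)                 # P_{0,4}
B = p2.coeffs[0] / q2.coeffs[0]       # leading-coefficient ratio = lim e^4 K
print(B)                              # -> 9.0
```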
One can safely assume that it converges to the value of 43.2. The sequence is shown in Figure A2.

Figure A2. The amplitude B dependence on the approximation number k is shown with black circles. One can see the convergence to the value of 43.2, shown with squares.

Appendix B. Example of Interpolation with Root Approximants: One-Dimensional Bose Gas

Lieb and Liniger [74] considered a one-dimensional Bose gas with contact interactions. The ground-state energy of the gas can be written as a weak-coupling expansion with respect to the coupling parameter g [75,76],

E(g) ≃ g − (4/(3π)) g^(3/2) + (1.29/(2π^2)) g^2 − 0.017201 g^(5/2),   (A18)

as g → 0. In the strong-coupling limit, as g → ∞, we have the following expression [75,76]:

E(g) ≃ (π^2/3) (1 − 4/g + 12/g^2).   (A19)

In what follows, E_{3+3}(g) assimilates the three coefficients from the weak- and strong-coupling expansions, while E_{4+3}(g) is based on all four terms from the weak-coupling side. The accuracy of the root approximants (17) turns out to be good. They are nested radicals in the inverse coupling 1/g, with inner exponents 3/2, 5/4, 7/6, 9/8 (supplemented by 11/10 for E_{4+3}): the coefficients of the successive powers 1/g, ..., 1/g^5 entering E_{3+3}(g) (A20) are 8.12698, 37.3454, 164.914, 388.171 and 385.383, while those of 1/g, ..., 1/g^6 entering E_{4+3}(g) (A21) are 8.8658, 45.6531, 254.699, 811.495, 1548.85 and 1267.86. The approximants are constructed from “right to left”, i.e., we self-similarly connect a known asymptotic expansion at the right boundary of the interval with a known asymptotic form at the left boundary. In Table A1, they are compared to the extensive numerical data E_DO obtained by Dunjko and Olshanii [77]. The Padé estimates, E_P, are also presented. The Padé approximant P_{3,5}(√g) reads as follows:

P_{3,5}(√g) = g (0.285957 g^(3/2) − 0.177533 g + 0.355474 √g + 1) / (0.455734 g^(3/2) + 0.0869206 g^(5/2) − 0.0539636 g^2 + 0.0881093 g + 0.779887 √g + 1).   (A22)

Table A1.
Ground-state energy of the Lieb-Liniger model, for the varying dimensionless parameter g, in different approximations: root approximants E_{3+3}(g) and E_{4+3}(g), numerical data E_DO, and the Padé approximant E_P.

g            E_{3+3}      E_{4+3}      E_DO         E_P
0.00509427   0.00494169   0.00494163   0.00494165   0.00494136
0.0250691    0.0234269    0.0234247    0.0234254    0.0234125
0.100428     0.0875959    0.0875605    0.0875748    0.0872792
0.49294      0.361757     0.361368     0.361639     0.35512
1.00361      0.640965     0.640137     0.640920     0.622859
1.98395      1.04466      1.04325      1.04474      1.01247
5.122        1.78912      1.78751      1.78888      1.76111
6.02566      1.92249      1.92102      1.92206      1.89836
10.0214      2.31276      2.31188      2.31229      2.30062
20.0175      2.7248       2.72454      2.72458      2.72169
51.4117      3.04855      3.04853      3.04852      3.04825
277.602      3.24297      3.24297      3.24297      3.24927

It should become completely clear from observing Figure A3 that the problem of interpolation is neither simple nor superficial. The asymptotic expressions for small and large couplings have little in common with each other. Although the expansions (A18) and (A19) appear to work only for very small and very large coupling constants, the deduced approximants work rather well. More examples of interpolation with various self-similar approximants can be found in [16].

Figure A3. The interpolation with the root approximant (A20) is shown with a solid line, while the Padé approximant is shown with a dotted line. The weak-coupling (dashed) and strong-coupling (dot-dashed) expansions are shown as well.

References
1. Baxter, R.J. Exactly Solved Models in Statistical Mechanics; Academic Press: Cambridge, MA, USA, 1989.
2. Izyumov, Y.A.; Skryabin, Y.N. Statistical Mechanics of Magnetically Ordered Systems; Springer: Berlin, Germany, 1988.
3. Mendoza-Hernández, J.; Arroyo-Carrasco, M.; Iturbe-Castillo, M.; Chávez-Cerda, S. Laguerre-Gauss beams versus Bessel beams showdown: Peer comparison. Opt. Lett. 2015, 40, 3739–3742.
4. Taylor, J.R.
Optical Solitons: Theory and Experiment; Cambridge University Press: Cambridge, UK, 1992.
5. Valiulis, G.; Dubietis, A.; Piskarskas, A. Optical parametric amplification of chirped X pulses. Phys. Rev. A 2008, 77, 043824, doi:10.1103/PhysRevA.77.043824.
6. Baker, G.A. Padé approximant. Scholarpedia 2012, 7, 9756.
7. Hunter, J.K. Asymptotic Analysis and Singular Perturbation Theory; UC Davis: Davis, CA, USA, 2004.
8. Bender, C.M.; Orszag, S.A. Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory; Springer: New York, NY, USA, 1999.
9. Baker, G.A.; Graves-Morris, P. Padé Approximants; Cambridge University Press: Cambridge, UK, 1996.
10. Gluzman, S.; Yukalov, V.I. Self-similarly corrected Padé approximants for indeterminate problem. Eur. Phys. J. Plus 2016, 131, 340–361.
11. Gluzman, S.; Mityushev, V.; Nawalaniec, W. Computational Analysis of Structured Media; Academic Press (Elsevier): Cambridge, MA, USA, 2017.
12. Drygaś, P.; Gluzman, S.; Mityushev, V.; Nawalaniec, W. Applied Analysis of Composite Media; Woodhead Publishing (Elsevier): Sawston, UK, 2020.
13. Andrianov, I.; Awrejcewicz, J.; Danishevs’kyy, V.; Ivankov, S. Asymptotic Methods in the Theory of Plates with Mixed Boundary Conditions; John Wiley & Sons: Hoboken, NJ, USA, 2014.
14. Andrianov, I.; Shatrov, A. Padé Approximation to Solve the Problems of Aerodynamics and Heat Transfer in the Boundary Layer; IntechOpen: London, UK, 2020, doi:10.5772/intechopen.93084.
15. Gluzman, S.; Yukalov, V.I. Unified approach to crossover phenomena. Phys. Rev. E 1998, 58, 4197–4209.
16. Gluzman, S. Padé and Post-Padé Approximations for Critical Phenomena. Symmetry 2020, 12, 1600, doi:10.3390/sym12101600.
17. Yukalov, V.I. Interplay between Approximation Theory and Renormalization. Phys. Part. Nuclei 2019, 50, 141–209.
18. Gluzman, S.; Yukalov, V.I. Critical indices from self-similar root approximants. Eur. Phys. J. Plus 2017, 132, 535.
19.
Gluzman, S.; Yukalov, V.I. Self-Similar Power Transforms in Extrapolation Problems. J. Math. Chem. 2006, 39, 47–56.
20. Yukalov, V.I.; Gluzman, S. Optimization of Self-Similar Factor Approximants. Mol. Phys. 2009, 107, 2237–2244.
21. Sauer, T. Prony’s method: An old trick for new problems. Snapshots Modern Math. Oberwolfach 2018, 4, 1–11.
22. Bernstein, S. Démonstration du théorème de Weierstrass fondée sur le calcul des probabilités. Comm. Kharkov Math. Soc. 1912, 13, 1–2.
23. Cioslowski, J. Robust interpolation between weak- and strong-correlation regimes of quantum systems. J. Chem. Phys. 2012, 136, 044109.
24. Gluzman, S.; Yukalov, V.I. Effective summation and interpolation of series by self-similar root approximants. Mathematics 2015, 3, 510–526, doi:10.3390/math3020510.
25. Gluzman, S.; Yukalov, V.I.; Sornette, D. Self-similar factor approximants. Phys. Rev. E 2003, 67, 026109.
26. Yukalova, E.P.; Yukalov, V.I.; Gluzman, S. Solution of differential equations by self-similar factor approximants. Ann. Phys. 2008, 323, 3074–3090.
27. Gluzman, S.; Yukalov, V.I. Self-similarly corrected Padé approximants for nonlinear equations. Int. J. Mod. Phys. B 2019, 33, 1950353.
28. Yukalov, V.I.; Gluzman, S. Self-similar exponential approximants. Phys. Rev. E 1998, 58, 1359–1382.
29. Gavrilov, L.A.; Gavrilova, N.S. The reliability theory of aging and longevity. J. Theor. Biol. 2001, 213, 427–453.
30. Gluzman, S. Market crashes and time-translation invariance. Quant. Tech. Anal. 2020, doi:10.13140/RG.2.2.22623.07842/1.
31. Yukalov, V.I. Statistical mechanics of strongly nonideal systems. Phys. Rev. A 1990, 42, 3324–3334.
32. Yukalov, V.I. Method of self-similar approximations. J. Math. Phys. 1991, 32, 1235–1239.
33. Yukalov, V.I. Stability conditions for method of self-similar approximations. J. Math. Phys. 1992, 33, 3994–4001.
34. Drygaś, P.; Filishtinski, L.A.; Gluzman, S.; Mityushev, V. Conductivity and elasticity of graphene-type composites.
In 2D and Quasi-2D Composite and Nano Composite Materials, Properties and Photonic Applications; McPhedran, R., Gluzman, S., Mityushev, V., Rylko, N., Eds.; Elsevier: Amsterdam, The Netherlands, 2020; Chapter 8, pp. 193–231.
35. Malevich, A.E.; Mityushev, V.V.; Adler, P.M. Stokes flow through a channel with wavy walls. Acta Mech. 2006, 182, 151–182.
36. Brading, K.; Castellani, E.; Teh, N. Symmetry and symmetry breaking. In The Stanford Encyclopedia of Philosophy, Winter 2017 Edition; Zalta, E.N., Ed.; SEP: Stanford, CA, USA, 2017.
37. Ma, S. Theory of Critical Phenomena; Benjamin: London, UK, 1976.
38. Andersen, J.V.; Gluzman, S.; Sornette, D. General framework for technical analysis of market prices. Eur. Phys. J. B 2000, 14, 579–601.
39. Fliess, M.; Join, C. A mathematical proof of the existence of trends in financial time series. arXiv 2009, arXiv:0901.1945v1.
40. Peters, O. Optimal leverage from non-ergodicity. Quant. Fin. 2011, 11, 593–602.
41. Peters, O.; Klein, M. Ergodicity breaking in geometric Brownian motion. Phys. Rev. Lett. 2013, 110, 100603.
42. Peters, O.; Gell-Mann, M. Evaluating gambles using dynamics. Chaos 2016, 26, 023103.
43. Taleb, N.N. Statistical Consequences of Fat Tails (Technical Incerto Collection). 2020. Available online: https://www.academia.edu/download/59794771/Technical_Incerto_Vol_1.pdf (accessed on 26 October 2020).
44. Sacha, K. Modeling spontaneous breaking of time-translation symmetry. Phys. Rev. A 2015, 91, 033617.
45. Yukalov, V.I.; Gluzman, S. Weighted fixed points in self-similar analysis of time series. Int. J. Mod. Phys. B 1999, 13, 1463–1476.
46. Hayek, F.A. The use of knowledge in society. Am. Econ. Rev. 1945, 35, 519–530.
47. Mann, A. Market forecasts. Nature 2017, 538, 308–310.
48. Soros, G. Fallibility, reflexivity, and the human uncertainty principle. J. Econ. Methodol. 2013, 20, 309–329, doi:10.1080/1350178X.2013.859415.
49. Gluzman, S.; Yukalov, V.I.
Renormalization group analysis of October market crashes. Mod. Phys. Lett. B 1998, 12, 75–84.
50. Buchanan, M. What has econophysics ever done for us? Nat. Phys. 2013, 9, 317.
51. Shiller, R.J. Narrative economics. Am. Econ. Rev. 2017, 107, 967–1004.
52. Arnold, V.I. Mathematical Methods of Classical Mechanics; Springer-Verlag: Berlin, Germany, 1989.
53. Zhang, Q.; Zhang, Q.; Sornette, D. Early warning signals of financial crises with multi-scale quantile regressions of log-periodic power law singularities. PLoS ONE 2016, 11, e0165819.
54. Gluzman, S.; Yukalov, V.I. Booms and crashes of self-similar markets. Mod. Phys. Lett. B 1998, 12, 575–587.
55. Bogoliubov, N.N.; Shirkov, D.V. Quantum Fields; Benjamin-Cummings Pub. Co.: San Francisco, CA, USA, 1982.
56. Shirkov, D.V. The renormalization group, the invariance principle, and functional self-similarity. Sov. Phys. Dokl. 1982, 27, 197–199.
57. Kröger, H. Fractal geometry in quantum mechanics, field theory and spin systems. Phys. Rep. 2000, 323, 81–181.
58. Adamou, A.; Berman, Y.; Mavroyiannis, D.; Peters, O. Microfoundations of Discounting. arXiv 2019, arXiv:1910.02137v2.
59. Bougie, J.; Gangopadhyaya, A.; Mallow, J.; Rasinariu, C. Supersymmetric quantum mechanics and solvable models. Symmetry 2012, 4, 452–473, doi:10.3390/sym4030452.
60. Gluzman, S.; Sornette, D. Log-periodic route to fractal functions. Phys. Rev. E 2002, 65, 036142.
61. Lynch, C.; Mestel, B. Logistic model for stock market bubbles and anti-bubbles. Int. J. Theor. Appl. Financ. 2017, 20, 1750038.
62. Yukalov, V.I.; Gluzman, S. Extrapolation of power series by self-similar factor and root approximants. Int. J. Mod. Phys. B 2004, 18, 3027–3046.
63. Duguet, T.; Sadoudi, J. Breaking and restoring symmetries within the nuclear energy density functional method. J. Phys. G Nucl. Part. Phys. 2010, 37, 064009.
64. Lei, Y.C.; Zhang, S.Y. Features and partial derivatives of Bertalanffy-Richards growth model in forestry. Nonlinear Anal. Model.
Control 2004, 9, 65–73.
65. Richards, F.J. A flexible growth function for empirical use. J. Exp. Bot. 1959, 10, 290–301.
66. Gluzman, S.; Karpeev, D. Perturbative expansions and critical phenomena in random structured media. In Modern Problems in Applied Analysis; Drygaś, P., Rogosin, S., Eds.; Birkhäuser: Basel, Switzerland, 2017; pp. 117–134.
67. Sandhu, R.; Georgiou, T.; Tannenbaum, A. Market Fragility, Systemic Risk, and Ricci Curvature. arXiv 2015, arXiv:1505.05182v1.
68. Boudoukh, J.; Feldman, R.; Kogan, S.; Richardson, M. Which News Moves Stock Prices? A Textual Analysis. NBER Working Paper No. 18725, January 2012. Available online: https://www.nber.org/papers/w18725 (accessed on 26 October 2020).
69. Harmon, D.; Lagi, M.; de Aguiar, M.A.M.; Chinellato, D.D.; Braha, D.; Epstein, I.R.; Bar-Yam, Y. Anticipating economic market crises using measures of collective panic. PLoS ONE 2015, 10, e0131871, doi:10.1371/journal.pone.0131871.
70. Bernanke, B.S.; Gertler, M.; Watson, M. Systematic monetary policy and the effects of oil price shocks. Brook. Pap. Econ. Act. 1997, 1, 91–157.
71. Kleinert, H. Vortex origin of tricritical point in Ginzburg–Landau theory. Europhys. Lett. 2006, 74, 889–895.
72. Adler, P.M. Porous Media: Geometry and Transport, 2nd ed.; Butterworth-Heinemann: New York, NY, USA, 1992.
73. Pozrikidis, C. Creeping flow in two-dimensional channel. J. Fluid Mech. 1987, 180, 495–514.
74. Lieb, E.H.; Liniger, W. Exact analysis of an interacting Bose gas: The general solution and the ground state. Phys. Rev. 1963, 130, 1605–1616.
75. Yukalov, V.I.; Girardeau, M.D. Fermi-Bose mapping for one-dimensional Bose gases. Laser Phys. Lett. 2005, 2, 375–382.
76. Yukalov, V.I.; Yukalova, E.P.; Gluzman, S. Extrapolation and interpolation of asymptotic series by self-similar approximants. J. Math. Chem. 2010, 47, 959–983.
77. Dunjko, V.; Olshanii, M.
Available online: http://physics.usc.edu/olshanii/DIST/ (accessed on 26 October 2020). Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional afﬁliations. © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
