Abstract

A compounded method, exploiting the searching capabilities of an operations research algorithm and the power of bootstrap techniques, is presented. The resulting algorithm has been successfully tested to predict the turning point of the epidemic curve followed by the COVID-19 virus in Italy. Future lines of research, which include the generalization of the method to a broad set of distributions, are finally outlined.

1. Introduction

In general, predicting the time of a peak conditional on a set of time-dependent data is a nontrivial task. Often carried out in a multitasking fashion and requiring the availability of time and resources, the correct estimation of future turning points can be important in many instances, but it becomes crucial in the case of epidemic events. These are the typical circumstances in which the forecasting exercise is conducted online and on a time series exhibiting a small sample size. Under these conditions, however, the problem might become particularly complicated, since the statistical methods usually employed for these purposes, for example, hidden Markov models (e.g., Hamilton [1] and Koskinen and Öller [2]) or nonparametric models (e.g., Delgado and Hidalgo [3]), not only are very demanding in terms of building and tuning procedures but also typically require the availability of a "long" stretch of data. In addition, the time series related to epidemics usually show highly nonlinear dynamics, which, if not preprocessed, make them unsuitable for standard linear models. On the other hand, attempting to fit nonlinear models, e.g., self-exciting threshold autoregressive models (for example, [4]) or artificial neural networks [1, 5], is not a viable option, due to the abovementioned sample size issues. In any case, when an ill-tuned model is fitted to a time series, reliable outcomes should not be reasonably expected. Therefore, an approach able to perform under the above-outlined conditions is proposed. In essence, the problem is solved by building a unified framework in which two powerful techniques, belonging to two different branches of computational statistics, are sequentially employed to lower the amount of uncertainty embedded in the observed data and to find a (possibly global) optimum through which the "best" statistical distribution for the dataset at hand is identified.

2. The Unified Framework

As stated above, the approach proposed in this paper is rooted in a unified framework in which two powerful paradigms are exploited. The first one, which belongs to the so-called computer intensive statistical methods, is the bootstrap, which will be detailed in Section 4. By using this technique, a high number of "bona fide" replications of the original data are generated. In essence, each of the bootstrap series obtained "mimics" the recorded observations, so that the number of available series, which in real life is typically equal to one, becomes (very) high. Repeating a mathematical operation (e.g., the computation of an estimator) B times makes it possible (i) to assess the degree of uncertainty associated with the obtained estimates and (ii) to obtain less biased estimators. The latter goal is achievable by the design of the bootstrap method, as its replications allow the use of central tendency functions, such as the mean or the median. The second tool employed is an optimization method for the selection of the "best" parametrization of a class of statistical distributions commonly used in the literature. In practice, this step is performed in the so-called bootstrap world, meaning that it is sequentially repeated for each bootstrap sample. By doing so, the degree of uncertainty associated with the selected distribution is lower than the one obtainable by processing just one set of data (the real observations).

3. Data and Contagion Indicator

This study employs the data related to COVID-19, collected and regularly updated by the Italian National Institute of Health (an agency of the Italian Ministry of Health) and by the Italian Civil Protection Department. The whole dataset is freely and publicly available in a comprehensive database, accessible on the Internet at the web address https://github.com/pcm-dpc/COVID-19/tree/master/dati-regioni. The dataset includes 38 daily data points collected at the national level during the period from February 19 to March 27. The indicator used, which will be referred to as the variable of interest, is obtained by subtracting, for each day, both the number of deaths and the number of recovered patients from the total number of people who tested positive for the coronavirus.
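As an illustration, the indicator can be computed with a few lines of Python; this is a sketch only, in which the file name and the Italian column names (data, totale_casi, deceduti, dimessi_guariti) are assumed from the layout of the pcm-dpc repository and should be verified against the actual CSV headers:

import pandas as pd

# Assumed file and column names from the pcm-dpc repository (to be verified).
URL = ("https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/"
       "dati-regioni/dpc-covid19-ita-regioni.csv")

df = pd.read_csv(URL, parse_dates=["data"])

# Aggregate the regional records to the national level, day by day.
national = df.groupby(df["data"].dt.date)[["totale_casi", "deceduti",
                                           "dimessi_guariti"]].sum()

# Variable of interest: positives minus deaths and recoveries.
national["active"] = (national["totale_casi"]
                      - national["deceduti"]
                      - national["dimessi_guariti"])
print(national["active"].tail())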

4. The Resampling Method

The choice of the most appropriate resampling method is far from being an easy task, especially when the independent and identically distributed (i.i.d.) assumption, underlying Efron's original bootstrap method, is violated. Under dependence structures embedded in the data, simple sampling with replacement has been proved (for example, Carlstein [6]) to yield suboptimal results. As a matter of fact, i.i.d.-based bootstrap schemes are not designed to capture, and therefore replicate, dependence structures. This is especially true under the actual conditions (small sample sizes and strong nonlinearity). In such cases, selecting the "right" resampling scheme becomes a particularly challenging task, as many resampling schemes are not designed to capture the dynamics typically found in epidemiology. As an example, the well-known resampling method called sieve bootstrap, introduced by Bühlmann [7], cannot be employed due to the quadratic shape almost always found in this type of time series.

In more detail, while in the classic bootstrap an ensemble Ω represents the reference population from which the observed time series is drawn, in the maximum entropy bootstrap (MEB) a large number of ensembles (subsets), say {ω1, ..., ωN}, become the elements belonging to Ω, each of them containing a large number of replicates {x1, ..., xJ}. Perhaps the most important characteristic of the MEB algorithm is that its design guarantees the inference process to satisfy the ergodic theorem. Formally, denoting by the symbol |·| the cardinality function (counting function) of a given ensemble of time series {xt ∈ ωi; i = 1, ..., N}, the MEB procedure generates a set of N disjoint subsets, ω1 ∩ ω2 ∩ ··· ∩ ωN = ∅, such that the ensemble mean EN[µ̃(xt)] converges to µ(xt), with µ(·) being the sample mean. Furthermore, the basic shape and the probabilistic structure (dependency) of the original series are guaranteed to be retained ∀ xt,j ∈ ωi.

The MEB resampling scheme has non-negligible advantages over many of the available bootstrap methods: it does not require complicated tune-up procedures (unavoidable, for example, in the case of resampling methods of the block bootstrap type), and it is effective under nonstationarity. The MEB method relies on entropy theory and the related concept of (un)informativeness of a system. In particular, the maximum entropy density f(x) is chosen so that the expectation of the Shannon information H = E[−log f(x)] is maximized.
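In its standard textbook form, which equation (4) is assumed here to take, the criterion selects the density f maximizing

H(f) = E[−log f(x)] = −∫ f(x) log f(x) dx,

subject to the mass- and mean-preserving constraints discussed below.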

Under mass- and mean-preserving constraints, this resampling scheme generates an ensemble of time series from a density function satisfying (4). Technically, the MEB algorithm can be broken down into the 8 steps detailed as follows (a simplified code sketch is given after this list):

(1) A sorting matrix of dimension T × 2, say S1, accommodates in its first column the time series of interest xt and, in the other one, an index set, i.e., Iind = {2, 3, ..., T}.
(2) S1 is sorted according to the numbers placed in the first column. As a result, the order statistics x(t) and the vector Iord of sorted Iind are generated and placed, respectively, in the first and second columns.
(3) "Intermediate points," obtained by averaging over successive order statistics, are computed, i.e., ct = (x(t) + x(t+1))/2 for t = 1, ..., T − 1, and intervals It constructed on ct and rt are derived, using ad hoc weights obtained by solving the following set of equations:
    (i) …
    (ii) …
    (iii) …
(4) From a uniform distribution in [0, 1], T pseudorandom numbers pj are generated, and the interval Rt = (t/T, (t + 1)/T] for t = 0, 1, ..., T − 1 in which each pj falls is determined.
(5) A matching between Rt and It is created according to a set of matching equations, so that a set of T values {xj,t} is obtained as the jth resample. Here, θ is the mean of the standard exponential distribution.
(6) A new T × 2 sorting matrix S2 is defined; the T members of the set {xj,t} for the jth resample obtained in Step 5 are reordered in increasing order of magnitude and placed in column 1. The sorted Iord values (Step 2) are placed in column 2 of S2.
(7) Matrix S2 is sorted according to the second column so that the order {1, 2, ..., T} is restored. The jointly sorted elements of column 1 are denoted by {xS,j,t}, where S recalls the sorting step.
(8) Steps 1–7 are repeated a large number of times.
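The following Python sketch illustrates the mechanics of Steps 1–8. It is a simplification: the weighted maximum entropy density on each interval is replaced by a plain uniform draw, and the exponential tails of the extreme intervals are omitted, so it is not an exact reproduction of the MEB algorithm.

import numpy as np

def meb_like_resample(x, n_boot=500, seed=0):
    # Simplified maximum-entropy-bootstrap-style resampler (illustrative only).
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    T = x.size

    # Steps 1-2: order statistics and the index needed to restore
    # the original time ordering later on.
    order = np.argsort(x)       # positions of the sorted values (Iord)
    x_sorted = x[order]         # order statistics x(t)

    # Step 3: intermediate points between successive order statistics;
    # the extreme intervals are truncated at the extreme order statistics.
    mids = (x_sorted[:-1] + x_sorted[1:]) / 2.0
    edges = np.concatenate(([x_sorted[0]], mids, [x_sorted[-1]]))

    replicates = np.empty((n_boot, T))
    for j in range(n_boot):
        # Step 4: T pseudorandom numbers on [0, 1].
        p = rng.uniform(size=T)
        # Step 5: match each p to an interval Rt = (t/T, (t+1)/T] and draw a
        # value inside the corresponding interval It (uniformly, as a
        # simplification of the weighted maximum entropy density).
        k = np.minimum((p * T).astype(int), T - 1)
        draws = edges[k] + rng.uniform(size=T) * (edges[k + 1] - edges[k])
        # Steps 6-7: sort the draws and restore the original time ordering,
        # so that each replicate keeps the shape of the observed series.
        replicate = np.empty(T)
        replicate[order] = np.sort(draws)
        replicates[j] = replicate

    # Step 8: the loop above repeats the procedure a large number of times.
    return replicates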

In order to give a clearer picture of the MEBOOT algorithm, its flowchart is reported in Figure 1. As can be noticed, four different functions characterize this resampling method. The sorting function plays a key role, as it operates at two different points of the algorithm, i.e., to order the values belonging to the original time series (S1) and to restore the given time sequence for each of the bootstrapped series (S2). Besides the pseudorandom number generator, whose employment is straightforward, the two remaining functions, i.e., the averaging and the matching functions, are used, respectively, to compute the mean of the maximum entropy density and to create a matched sequence of the intervals Rt.

5. Bootstrap-Driven Forecast Optimization

This section aims to define an alternative method to forecast our variable of interest, by means of an optimization approach designed to fit a set of distribution functions to each bootstrap replication.

The variable of interest is assumed to approximately follow a logistic function, scaled by a normalizing parameter h (representing the asymptotic number of cases), as shown in Figure 2, so that its derivative is a Gaussian function rescaled accordingly, i.e.,

f(t) = h/(σ√(2π)) · exp(−(t − µ)²/(2σ²)),

where t represents the number of days since the pandemic started in Italy, h represents the magnitude scale, µ represents the day of the peak of daily cases (location), and σ represents the standard deviation (shape).

Now, given

(i) The parameter vector θ = (h, µ, σ), where θ ∈ Θ
(ii) The total active cases xt since the infection spread
(iii) A generic bootstrap distribution xt,i ∈ ω, where i ∈ {1, ..., N} is the ith bootstrap within the N replicates
(iv) The theoretical value x̂t(θ) implied by the fitted curve

the objective function is defined as follows.
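Consistently with the energy function used by the annealing procedure of Section 6, the objective can be written as a mean squared error; the notation x̂t(θ) for the value implied by the fitted curve of Section 5 is used here only for illustration:

min over θ ∈ Θ of MSEi(θ) = (1/T) Σt=1,...,T (xt,i − x̂t(θ))², for i = 1, ..., N,

with the minimization repeated independently on each of the N bootstrap replicates.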

This is an unconstrained nonlinear optimization problem, which cannot be addressed using standard exact optimization methods (e.g., of the simplex, branch and bound, or branch and cut type), as these are designed for linear programming (LP) [8] and mixed-integer linear programming (MILP) [9] within the field of discrete combinatorial problems [10].

On the other hand, a local search metaheuristic such as simulated annealing, designed to approximate the global optimum, can be used to solve unconstrained nonlinear problems over a large search space.

6. Simulated Annealing Optimization

Simulated annealing (SA), following Van Laarhoven and Aarts [11], is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic used to approximate global optimization in a large search space for an optimization problem.

The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Both are attributes of the material that depend on its thermodynamic free energy. Heating and cooling the material affects both the temperature and the thermodynamic free energy. Simulated annealing can be used to approximate the global minimum for a function with many variables. This approach was used by Kirkpatrick et al. [12] to solve the traveling salesman problem. They also proposed its current name, simulated annealing. The notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions is a fundamental property of metaheuristics because it allows for a more extensive search for the global optimal solution.

In general, simulated annealing algorithms work as explained next. The temperature progressively decreases from an initial positive value towards zero. At each time step, the algorithm randomly selects a neighbor state s_new of the current state s, measures its energy (in this case, the MSEi(xt,i) computed on the bootstrap distribution), and decides between moving the system to the state s_new or staying in state s, according to a temperature-dependent probability of selecting better or worse solutions. During the search, such a probability remains equal to 1 (or any positive number smaller than 1) for better solutions, while for worse solutions it decreases towards zero as the temperature drops.

6.1. Simulated Annealing on Bootstrap Pseudocode

The following pseudocode presents the simulated annealing heuristic applied to bootstrap replicates. For each bootstrap, it starts from a state s0 and keeps searching for solutions until the temperature decays to a low value. In the process, the call Neighbour(s, φ) generates a randomly chosen neighbor of a given state s; the call Random U(0, 1) picks and returns a value in the range [0, 1], uniformly at random. The annealing schedule is defined by the temperature decay based on the fixed cooling rate ρ.

(i) Let the current temperature be set at T = t0
(ii) Set the cooling rate ρ
(iii) For each bootstrap series i in {1, ..., N}:
    (i) Let the current solution be s = s0
    (ii) Loop while temperature T > 1:
        (i) Pick a random neighbor, s_new ← Neighbour(s, φ), where φ is the radius around s
        (ii) If Prob(E(s), E(s_new) | T, kB) ≥ Random U(0, 1), then s ← s_new
        (iii) T ← T · (1 − ρ)
    (iii) Output the final state s for the ith bootstrap

Here, the function Prob(E(s), E(s_new) | T, kB) is the acceptance probability at each iteration, given the temperature T and the Boltzmann constant kB (Aarts and Korst [13]), i.e.,

The employed criterion for the acceptance of bad solutions has been preferred to the Boltzmann criterion Prob(E(s), E(s_new) | T) (as suggested by the referee), since it exhibited a much higher flexibility.
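A minimal Python sketch of the loop above is reported below, under the following assumptions: the function and variable names are illustrative, the model is the rescaled Gaussian of Section 5, and the acceptance rule is the classical Boltzmann/Metropolis form, used here only as a placeholder since the criterion actually employed in the paper is not reproduced.

import numpy as np

def gaussian_model(t, theta):
    # Rescaled Gaussian of Section 5: theta = (h, mu, sigma).
    h, mu, sigma = theta
    return h / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

def energy(theta, x):
    # E(s): mean squared error between a bootstrap replicate and the model.
    t = np.arange(1, x.size + 1)
    return np.mean((x - gaussian_model(t, theta)) ** 2)

def neighbour(s, radius, lower, upper, rng):
    # Random neighbour of s within the given radius, clipped to the bounds.
    return np.clip(s + rng.uniform(-radius, radius), lower, upper)

def anneal_one_bootstrap(x, s0, lower, upper,
                         t0=10_000.0, rho=0.0006, k_b=100.0,
                         radius_frac=0.3, seed=0):
    # Simulated annealing on a single bootstrap replicate (placeholder
    # Boltzmann-type acceptance rule; see the text for the criterion used).
    rng = np.random.default_rng(seed)
    radius = radius_frac * (np.asarray(upper) - np.asarray(lower))
    s = np.asarray(s0, dtype=float)
    e = energy(s, x)
    T = t0
    while T > 1.0:
        s_new = neighbour(s, radius, lower, upper, rng)
        e_new = energy(s_new, x)
        # Better states are always accepted; worse states are accepted with
        # a temperature-dependent probability.
        if e_new <= e or np.exp(-(e_new - e) / (k_b * T)) >= rng.uniform():
            s, e = s_new, e_new
        T *= 1.0 - rho  # cooling schedule
    return s, e

# Usage sketch: repeat for every bootstrap replicate and average the optima.
# thetas = [anneal_one_bootstrap(x_i, s0, lower, upper, seed=i)[0]
#           for i, x_i in enumerate(replicates)]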

7. Empirical Evidence

In order to improve the local search speed, the parameter space Θ can be bounded to Θ0 ⊂ Θ, so that useless tails are removed. Moreover, no information is lost under the parameter space reduced to Θ0.

The SA parameters have been iteratively tuned on a trial-and-error basis, so as to maximize the overall efficiency of the procedure. To this end, we set the initial temperature T0 = 10000, the cooling rate ρ = 0.0006, the Boltzmann constant kB = 100, and the radius φ = 0.3 · (θmax − θmin). The optimization procedure has been applied to 500 bootstrap replications, derived from the positive cases in Italy between February 19 and March 27. The related results have been used to create the frequency distribution over the pairs of magnitude scales and peaks, as shown in Table 1 and Figure 3.

The average values, computed for each parameter, are as follows: Avg (h) = 122178, Avg (µ) = 36.7, and Avg (σ) = 10.8.

Approximated confidence bounds are derived from the normal distribution (with a 0.01 significance level) for each of the parameters, i.e.,
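One plausible form of such a bound, given here purely as an illustration of the normal approximation, is

θ̂ ± z0.995 · se(θ̂), with z0.995 ≈ 2.576,

where θ̂ is the bootstrap average of the parameter (Avg(h), Avg(µ), or Avg(σ)) and se(θ̂) is its standard error estimated across the 500 replications.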

By evaluating the confidence intervals for µ, a peak day of daily cases between March 25 and 26 is inferred, while the magnitude parameter h indicates an asymptote of the total positive cases curve between 120000 and 125000. Since the new cases curve shows an asymptotic behavior, by cutting the tail beyond the 0.1 cutoff for new infections, the pandemic time window is hypothetically over after May 16.

This behavior is clearly described in Figures 4 and 5, which have been built considering the Gaussian (Figure 4) and the cumulated Gaussian (Figure 5) curves around the 99% confidence lower and upper bounds for each parameter.

8. Further Developments

The SA optimization for fitting bootstrap replicates derived from real data is applicable to any kind of distribution known in the literature, as well as to empirical distributions. This research highlights a great potential if the aforementioned procedure is enhanced by introducing an automatic routine for the "optimal" choice of either known (ξr) or empirical (ξe) distributions, where (ξr, ξe) belong to a predefined distribution space Ξ. In more detail, the algorithm could include a light preprocessing SA optimization (with a higher cooling rate ρ to cut down the number of SA iterations) designed to reduce the distribution space Ξ, as well as the parameter space Θξ for each distribution ξ ∈ Ξ, and thus boost the optimization search.

Data Availability

The data used to support the findings of this study are publicly available, free of charge, at the web address https://github.com/pcm-dpc/COVID-19/tree/master/dati-regioni.

Disclosure

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Italian National Institute of Statistics or any other entities.

Conflicts of Interest

The authors declare that they have no conflicts of interest.