We analyze how weak observational noise affects the permutation entropy of deterministic chaotic signals. We investigate how the entropy increase scales with both the noise amplitude and the window length used to encode the time series. Building on a multifractal analysis, we discuss a method to reconstruct the noiseless permutation entropy. All these results are published in:
L. Ricci and A. Politi, Permutation Entropy of Weakly Noise-Affected Signals, Entropy 24 (2022), 00054, doi:10.3390/e24010054
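As a minimal illustration of the quantity under study, the following sketch computes the plug-in permutation entropy (Bandt–Pompe ordinal patterns) of a chaotic signal and of the same signal with weak additive Gaussian noise. The logistic map, the noise amplitude, and the embedding order are illustrative choices, not the settings used in the paper.

```python
import math
import random

def permutation_entropy(series, order):
    """Plug-in permutation entropy (in nats) of a 1-D series,
    computed from ordinal patterns of the given embedding order."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern: indices of the window sorted by value.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

# Deterministic chaotic signal: logistic map at r = 4 (arbitrary choice).
x, clean = 0.4, []
for _ in range(10000):
    x = 4.0 * x * (1.0 - x)
    clean.append(x)

# Weak observational noise added on top of the deterministic signal.
random.seed(0)
noisy = [v + random.gauss(0.0, 1e-2) for v in clean]

h_clean = permutation_entropy(clean, 3)
h_noisy = permutation_entropy(noisy, 3)
# Noise can flip near-ties and populate ordinal patterns that are
# forbidden in the noiseless dynamics, typically raising the entropy.
```

Both values are bounded by log(3!) for order-3 patterns; the entropy increase of the noisy signal relative to the clean one is the effect whose scaling the paper analyzes.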
The estimation of information-theoretical quantities, most notably Shannon entropy, is routinely used as a tool to analyze data stemming from dynamical systems. The so-called plug-in estimator plays a crucial role in this framework. In the case of an underlying multinomial distribution, the bias of the plug-in estimator can be evaluated, but its variance is far more elusive. By studying the statistical properties of an estimator of this variance, we determined an upper limit to the uncertainty of entropy assessments, under the hypothesis of memoryless underlying stochastic processes. These results are published in:
L. Ricci, A. Perinelli and M. Castelluzzo, Estimating the variance of Shannon entropy, Phys. Rev. E 104 (2021), 024220, doi:10.1103/PhysRevE.104.024220
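To make the objects concrete, the sketch below computes the plug-in entropy of a multinomial sample together with a first-order (delta-method) variance estimate. The delta-method formula is a standard textbook approximation used here for illustration; it is not the variance estimator analyzed in the paper.

```python
import math
import random

def plugin_entropy(counts):
    """Plug-in (maximum-likelihood) Shannon entropy in nats."""
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def naive_entropy_variance(counts):
    """Delta-method variance of the plug-in entropy for an i.i.d.
    multinomial sample: (sum_i p_i (ln p_i)^2 - H^2) / N.
    An illustrative approximation, not the paper's estimator."""
    n = sum(counts)
    h = plugin_entropy(counts)
    second = sum(c / n * math.log(c / n) ** 2 for c in counts if c > 0)
    return (second - h * h) / n

# Draw N i.i.d. symbols from a 4-letter alphabet (arbitrary example).
random.seed(1)
probs = [0.4, 0.3, 0.2, 0.1]
N = 5000
sample = random.choices(range(4), weights=probs, k=N)
counts = [sample.count(k) for k in range(4)]

h_hat = plugin_entropy(counts)
var_hat = naive_entropy_variance(counts)
h_true = -sum(p * math.log(p) for p in probs)
```

For this sample size the plug-in estimate lands close to the true entropy, with a standard deviation on the order of the square root of `var_hat`.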
The central limit theorem for Markov chains is widely used, especially in its univariate form. As far as the multivariate case is concerned, a few proofs exist, which depend on different assumptions and require advanced mathematical and statistical tools. We presented a novel proof that, starting only from the standard regularity condition, relies on time-independent quantum-mechanical perturbation theory. This new proof can enhance the usability of this crucial theorem, especially in nonlinear dynamics and the physics of complex systems. The new proof was published in:
L. Ricci, A quantum-mechanical derivation of the multivariate central limit theorem for Markov chains, Chaos, Solitons and Fractals 142 (2021), 110450, doi:10.1016/j.chaos.2020.110450
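The statement of the theorem (not its proof) can be illustrated numerically. The sketch below simulates a regular two-state Markov chain, an arbitrary example, and checks that the scaled fluctuation of the occupation fraction of one state approaches a standard Gaussian; the multivariate statement applies in the same way to vector-valued observables. The closed-form asymptotic variance used here is the standard result for two-state chains.

```python
import math
import random

random.seed(2)

# A regular (irreducible, aperiodic) two-state chain; arbitrary choice.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi1 = 1.0 / 3.0                  # stationary probability of state 1
lam = P[0][0] + P[1][1] - 1.0    # second eigenvalue of P (here 0.7)
# Asymptotic variance of the sample mean of the state-1 indicator.
sigma2 = pi1 * (1.0 - pi1) * (1.0 + lam) / (1.0 - lam)

def chain_mean(n):
    """Fraction of time a length-n trajectory spends in state 1."""
    s, hits = 0, 0
    for _ in range(n):
        s = 0 if random.random() < P[s][0] else 1
        hits += s
    return hits / n

n, reps = 2000, 500
z = [math.sqrt(n) * (chain_mean(n) - pi1) / math.sqrt(sigma2)
     for _ in range(reps)]
z_mean = sum(z) / reps
z_var = sum(v * v for v in z) / reps - z_mean ** 2
# The CLT predicts z_mean -> 0 and z_var -> 1 as n grows.
```

Over the 500 replicas, the empirical mean and variance of `z` should sit near 0 and 1, as the theorem predicts.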
The inference of Shannon entropy from sample histograms is known to be affected by systematic and random errors that depend on the finite size of the available data set. While this dependence was mostly studied in the multinomial case, we investigated the asymptotic behavior of the distribution of the sample Shannon entropy, also referred to as the plug-in estimator, in the case of an underlying finite Markov process characterized by a regular stochastic matrix. By virtue of the formal similarity with Shannon entropy, the results are directly applicable to the evaluation of permutation entropy. This study was published in:
L. Ricci, Asymptotic distribution of sample Shannon entropy in the case of an underlying finite, regular Markov chain, Phys. Rev. E 103 (2021), 022215, doi:10.1103/PhysRevE.103.022215
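The practical consequence of Markov correlations can be seen in a small simulation: the sketch below compares the spread of the plug-in entropy computed from samples of a two-state Markov chain against i.i.d. draws from the same marginal distribution. The chain, sample size, and replica count are illustrative choices; for a positively correlated chain the sampling distribution of the estimator is visibly wider than in the multinomial case.

```python
import math
import random

random.seed(3)

P = [[0.9, 0.1], [0.2, 0.8]]   # regular two-state chain (example)
pi = [2.0 / 3.0, 1.0 / 3.0]    # its stationary distribution

def plugin_entropy(symbols):
    """Plug-in Shannon entropy (nats) of a symbol sequence."""
    n = len(symbols)
    h = 0.0
    for k in set(symbols):
        p = symbols.count(k) / n
        h -= p * math.log(p)
    return h

def markov_sample(n):
    """Length-n trajectory of the chain, started in state 0."""
    s, out = 0, []
    for _ in range(n):
        s = 0 if random.random() < P[s][0] else 1
        out.append(s)
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n, reps = 2000, 300
h_markov = [plugin_entropy(markov_sample(n)) for _ in range(reps)]
h_iid = [plugin_entropy(random.choices([0, 1], weights=pi, k=n))
         for _ in range(reps)]
# Correlations in the Markov samples inflate the variance of the
# plug-in entropy relative to i.i.d. draws from the same marginal.
```

Both sets of estimates fluctuate around the entropy of the stationary distribution, but the Markov replicas do so with a markedly larger variance, which is the effect quantified by the asymptotic distribution derived in the paper.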