Details
Tsallis entropy is a generalization of the Boltzmann–Gibbs–Shannon entropy.
For a discrete set of probabilities {p_i}, the Tsallis entropy is defined as S_q = k/(q - 1) (1 - Σ_i p_i^q).
Tsallis entropy S_q can equivalently be written as S_q = -k Σ_i p_i^q log_q p_i, or S_q = k Σ_i p_i log_q(1/p_i), or S_q = -k Σ_i p_i log_{2-q} p_i, where the q-logarithm log_q z is defined as log_q z = (z^(1-q) - 1)/(1 - q), with log_1 z = log z being the limiting value as q → 1.
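As a quick numerical check of the discrete definition and its equivalent q-logarithm forms, here is a minimal Python sketch (the function names are illustrative, not the Wolfram Language implementation):

```python
import math

def log_q(z, q):
    """q-logarithm: log_q(z) = (z^(1-q) - 1)/(1-q), reducing to log(z) as q -> 1."""
    if q == 1:
        return math.log(z)
    return (z ** (1 - q) - 1) / (1 - q)

def tsallis_entropy(prob, q, k=1.0):
    """Discrete Tsallis entropy S_q = k/(q-1) * (1 - sum_i p_i^q)."""
    return k / (q - 1) * (1 - sum(p ** q for p in prob))

prob = [0.5, 0.3, 0.2]
q = 2.0
s = tsallis_entropy(prob, q)

# The three equivalent q-logarithm forms give the same value:
s_alt1 = -sum(p ** q * log_q(p, q) for p in prob)
s_alt2 = sum(p * log_q(1 / p, q) for p in prob)
s_alt3 = -sum(p * log_q(p, 2 - q) for p in prob)
```

For q = 2 this reduces to k (1 - Σ_i p_i^2), the so-called collision or Gini-style form.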
For a continuous probability distribution p(x), the Tsallis entropy is defined as S_q = k/(q - 1) (1 - ∫ p(x)^q dx).
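The continuous formula can be illustrated with a simple midpoint-rule quadrature in Python (an assumed helper, not the resource function's internals) for a density supported on [a, b]. For the uniform density on [0, 2] the closed form is k·log_q 2, which equals 1/2 at q = 2:

```python
def tsallis_entropy_continuous(pdf, a, b, q, k=1.0, n=100_000):
    """S_q = k/(q-1) * (1 - integral of p(x)^q over [a, b]), midpoint rule."""
    h = (b - a) / n
    integral = sum(pdf(a + (i + 0.5) * h) ** q for i in range(n)) * h
    return k / (q - 1) * (1 - integral)

# Uniform density on [0, 2]: p(x) = 1/2, so S_2 = 1 - 2*(1/2)^2 = 1/2
s_uniform = tsallis_entropy_continuous(lambda x: 0.5, 0.0, 2.0, q=2.0)
```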
ResourceFunction["TsallisEntropy"][prob, q, k] computes the Tsallis entropy for a given discrete set of probabilities prob = {p_i} with the condition Σ_{i=1}^W p_i = 1, where W = Length[prob] is the number of possible configurations. If prob is not normalized, it is normalized first.
In the limit as q → 1, ResourceFunction["TsallisEntropy"][…] recovers the standard Boltzmann–Gibbs–Shannon entropy (the Shannon differential entropy in the continuous case).
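The q → 1 limit can be checked numerically; a short Python sketch (with a hypothetical helper name) compares S_q at q slightly above 1 with the Shannon entropy -Σ_i p_i log p_i:

```python
import math

def tsallis_entropy(prob, q, k=1.0):
    """Discrete Tsallis entropy S_q = k/(q-1) * (1 - sum_i p_i^q)."""
    return k / (q - 1) * (1 - sum(p ** q for p in prob))

prob = [0.5, 0.3, 0.2]
shannon = -sum(p * math.log(p) for p in prob)  # natural-log Shannon entropy
near_limit = tsallis_entropy(prob, 1 + 1e-6)   # q just above 1
```

The two values agree to within roughly the distance of q from 1, consistent with the limiting definition log_1 z = log z.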
In ResourceFunction["TsallisEntropy"][prob, q, k], prob can be a list of non-negative real numbers, and q and k can be real numbers.
In ResourceFunction["TsallisEntropy"][dist,q,k], dist can be any built-in continuous or discrete distribution in the Wolfram Language, or a mixture of them.
If not given, the default value of k is 1.
In ResourceFunction["TsallisEntropy"]["formula"], "formula" can be any of the following:
"DiscreteFormula" | formula for Tsallis entropy for a discrete set of probabilities |
"ContinuousFormula" | formula for Tsallis entropy for a continuous probability distribution |