EQ-SHANNON-ENTROPY · Information Theory
Shannon Entropy (binary form)
H + p*log(p) + (1 - p)*log(1 - p) = 0
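The residual form above (equal to zero when H is the Bernoulli entropy in nats) can be checked numerically. A minimal sketch; the function name `binary_entropy` is illustrative, not part of the entry:

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy of a Bernoulli(p) random variable, in nats."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie in the open interval (0, 1)")
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

# Residual form: H + p*log(p) + (1 - p)*log(1 - p) should vanish.
p = 0.3
H = binary_entropy(p)
residual = H + p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
```

The maximum H = ln 2 nats (1 bit) occurs at p = 0.5, consistent with the concavity axiom below.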
Variables
variable
H
entropy of a Bernoulli(p) random variable, in nats (convert to bits externally if needed)
- Object
- information_channel
- Property
- InformationEntropy
- Context
- frequentist
- Constraint
- bernoulli
variable
p
probability of outcome 1 (Bernoulli success probability)
- Object
- random_variable
- Property
- Probability
- Context
- frequentist
- Constraint
- bernoulli_success
Axioms
algebraic, concave, deterministic, dimensionless, nonlinear
Assumptions
- p ∈ (0, 1): boundary cases p=0 and p=1 are limits, not values of the expression
- Natural logarithm; convert to bits by dividing by ln(2) if the application expects binary units
- Bernoulli case only; the general formula is H = -Σ p_i log p_i
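The unit convention and the general-formula assumption can be sketched together; `entropy_nats` and `nats_to_bits` are illustrative names, with zero-probability terms skipped by the usual 0·log 0 = 0 convention:

```python
import math

def entropy_nats(probs):
    """General Shannon entropy H = -sum(p_i * ln p_i), in nats.

    Terms with p_i == 0 are skipped (0 * log 0 is taken as 0)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def nats_to_bits(h: float) -> float:
    """Convert an entropy value from nats to bits by dividing by ln 2."""
    return h / math.log(2.0)

# Bernoulli(p) is the two-outcome special case of the general formula.
p = 0.25
h_nats = entropy_nats([p, 1.0 - p])
h_bits = nats_to_bits(h_nats)
```

The boundary cases p = 0 and p = 1 yield zero entropy under this convention, matching the limits noted above.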
Derivation
- Shannon, Bell System Technical Journal 27 (1948), Appendix 2 Theorem 2
- Derived axiomatically: H is the unique (up to a multiplicative constant) functional that is (i) continuous in p, (ii) monotonically increasing in n for the uniform distribution on n outcomes, (iii) additive under conditional decomposition (the grouping axiom)
- STRUCTURAL IDENTITY: identical functional to Boltzmann/Gibbs thermodynamic entropy S = -k_B Σ p_i ln p_i — the only difference is k_B (J/K) vs dimensionless. The sieve should catch this.
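The structural identity noted above can be illustrated numerically: dividing the Gibbs expression by k_B recovers the dimensionless Shannon form. A sketch using the exact SI value of the Boltzmann constant; function names are illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI 2019 value)

def gibbs_entropy(probs):
    """Thermodynamic entropy S = -k_B * sum(p_i * ln p_i), in J/K."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0.0)

def shannon_entropy(probs):
    """Dimensionless Shannon entropy H = -sum(p_i * ln p_i), in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# The two functionals differ only by the dimensional factor k_B.
probs = [0.2, 0.3, 0.5]
ratio = gibbs_entropy(probs) / K_B
```

The identity S = k_B · H holds term by term, so any structural matcher comparing the two expressions should reduce one to the other after stripping the constant.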
References
- Shannon, BSTJ 27 (1948), 379-423, 623-656
- Cover & Thomas, Elements of Information Theory, 2nd ed., §2.1
- Jaynes, Information Theory and Statistical Mechanics, Phys. Rev. 106 (1957), 620