“Coding Theorems for a Discrete Source With a Fidelity Criterion”, 1959:
Consider a discrete source producing a sequence of message letters from a finite alphabet. A single-letter distortion measure is given by a non-negative matrix (d_ij). The entry d_ij measures the “cost” or “distortion” if letter i is reproduced at the receiver as letter j. The average distortion of a communications system (source-coder-noisy channel-decoder) is taken to be d = ∑_{i,j} P_ij d_ij, where P_ij is the probability of i being reproduced as j.
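The average-distortion formula above can be sketched numerically; the joint probabilities and the Hamming-style distortion matrix below are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical example of d = sum_{i,j} P_ij * d_ij for a binary alphabet.
# P[i, j] is the joint probability that source letter i is reproduced as j;
# D[i, j] is the single-letter distortion matrix (here, Hamming distortion).
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])   # joint probabilities, entries sum to 1
D = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # cost 1 for a substitution, 0 if reproduced correctly

avg_distortion = float(np.sum(P * D))  # weighted average over all (i, j) pairs
```

Here the off-diagonal mass (0.1 + 0.1) is exactly the error probability, so the average distortion equals 0.2.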
It is shown that there is a function R(d) that measures the “equivalent rate” of the source for a given level of distortion. For coding purposes where a level d of distortion can be tolerated, the source acts like one with information rate R(d). Methods are given for calculating R(d), and various properties of the function are discussed.
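One standard way to compute points on R(d) numerically is the Blahut-Arimoto iteration (a later algorithm, not the paper's own method); a minimal sketch, assuming a source distribution p and distortion matrix d_ij, with beta > 0 parameterizing the slope of the curve:

```python
import numpy as np

def rate_distortion_point(p, d, beta, iters=200):
    """Return one (R, D) point on the rate-distortion curve via
    Blahut-Arimoto. p: source distribution over letters i;
    d: distortion matrix d_ij; beta: positive slope parameter."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])   # output marginal, start uniform
    for _ in range(iters):
        # Optimal test channel has q(j|i) proportional to q(j) * exp(-beta * d_ij)
        w = q * np.exp(-beta * d)
        cond = w / w.sum(axis=1, keepdims=True)
        q = p @ cond                             # re-estimate output marginal
    D = float(np.sum(p[:, None] * cond * d))     # average distortion achieved
    ratio = np.where(cond > 0, cond / q, 1.0)    # avoid log(0) on zero-mass cells
    R = float(np.sum(p[:, None] * cond * np.log2(ratio)))  # mutual information, bits
    return R, D
```

For a uniform binary source with Hamming distortion, the computed points fall on the known curve R(D) = 1 − H(D), which gives a quick sanity check on the iteration.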
Finally, generalizations to ergodic sources, to continuous sources, and to distortion measures involving blocks of letters are developed.