“Toward A Universal Law Of Generalization For Psychological Science”, Roger N. Shepard, 1987-09-11⁠:

[Is there a connection to the power-laws in ML? cf. dark knowledge] A psychological space is established for any set of stimuli by determining metric distances between the stimuli such that the probability that a response learned to any stimulus will generalize to any other is an invariant monotonic function of the distance between them.

To a good approximation, this probability of generalization (1) decays exponentially with this distance, and (2) does so in accordance with one of 2 metrics, depending on the relation between the dimensions along which the stimuli vary.
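The two regularities above can be sketched in a few lines of code. This is an illustrative reconstruction, not code from the paper: the function names, the sensitivity constant `k`, and the specific coordinates are my assumptions; only the exponential decay and the choice between the Euclidean metric (for integral dimensions) and the city-block metric (for separable dimensions) come from the text.

```python
import math

def minkowski_distance(x, y, r):
    """Minkowski (power) distance between two points in psychological space.
    r=2 gives the Euclidean metric (integral dimensions, e.g. hue/brightness);
    r=1 gives the city-block metric (separable dimensions, e.g. size/angle)."""
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1 / r)

def generalization(x, y, k=1.0, r=2):
    """Shepard's exponential law (illustrative form): the probability that a
    response learned to stimulus x generalizes to stimulus y decays
    exponentially with their distance. k is a free sensitivity parameter."""
    return math.exp(-k * minkowski_distance(x, y, r))
```

For example, `generalization(p, p)` is 1 for any point `p` (a stimulus always generalizes to itself), and the probability falls off monotonically as the two points move apart, with the rate of fall-off set by `k` and the shape of equal-generalization contours set by the metric parameter `r`.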

These empirical regularities are mathematically derivable from universal principles of natural kinds and probabilistic geometry that may, through evolutionary internalization, tend to govern the behaviors of all sentient organisms.

Figure 1: 12 gradients of generalization. Measures of generalization between stimuli are plotted against distances between corresponding points in the psychological space that renders the relation most nearly monotonic. Sources of the generalization data (g) and the distances (d) are as follows. (A) g, McGuire (33); d, Shepard (7, 18). (B) g, Shepard (7, 17); d, Shepard (7, 18). (C) g, Shepard (17); d, Shepard (8). (D) g, Attneave (25); d, Shepard (8). (E) g, Guttman and Kalish (4); d, Shepard (11). (F) g, Miller and Nicely (34); d, Shepard (35). (G) g, Attneave (25); d, Shepard (8). (H) g, Blough (36); d, Shepard (11). (I) g, Peterson and Barney (37); d, Shepard (35). (J) g and d, Shepard and Cermak (38). (K) g, Ekman (39); d, Shepard (18). (L) g, Rothkopf (40); d, Cunningham and Shepard (41). The generalization data in the bottom row are of a somewhat different type. [See (39) and the section “Limitations and Proposed Extensions”.]