Main

A fundamental question is how the neural coding properties of a circuit relate to its connectivity structure. Recent connectomic efforts have enabled the generation of wiring diagrams in the fly and in vertebrate circuits like the retina1,2, utricular system3,4, olfactory bulb5, visual thalamus6, visual cortex7,8 and somatosensory cortex9. In the fly navigation system, models for the organization of the head direction system were literally realized in the ring-like structure of the central complex10, and the topographically organized sinusoidally varying synaptic connectivity weights directly suggested the trigonometric identities required to encode coordinate transformations11,12. Likewise, in the early visual system of both invertebrates13 and vertebrates14, connectomics has provided a strong anatomical basis for the computations underlying motion processing. By contrast, within and across most vertebrate brain regions, the connectivity is highly recurrent without obvious topographic structure. In such cases, it remains unclear how the neural coding properties of the system emerge from and relate to the underlying connectivity and cellular properties.

We address this question within a multiregion network of the vertebrate hindbrain responsible for maintaining an animal’s gaze. This network contains the oculomotor neural integrator, which mathematically accumulates (that is, integrates in the sense of calculus) transient eye velocity-encoding command signals into persistent eye position-encoding commands. Persistent activity in this system is thought to be realized by low-dimensional attractor dynamics15,16. Low-dimensional attractors have been proposed to underlie a large range of neural computations, from storage of working memory to reduction of noise in sensory and cognitive representations16,17. Although it is generally understood that recurrent interactions in a network play an important role in the generation of attractor dynamics, we lack understanding of which features of the recurrent circuitry lead to the observed low-dimensional patterns of activity. Within the oculomotor integrator, there is broad physiological18,19,20 and coarse anatomical21 evidence implicating recurrent excitatory interactions in generating attractor dynamics. However, even in this relatively simple setting, we do not yet have a wiring diagram for the precise patterns by which neurons are recurrently interconnected, a necessary substrate for any quantitative, predictive model of network dynamics. To address this limitation, we reconstruct a synapse-resolution wiring diagram from a vertebrate brainstem region that includes the oculomotor neural integrator and more broadly subserves a wide range of visual and sensorimotor functions. The wiring diagram is based on a previously acquired three-dimensional (3D) electron microscopy (EM) image of a larval zebrafish brainstem volume that is roughly bounded by a 250 × 120 × 80 μm³ box22. We reconstructed ~3,000 neurons as completely as the borders of the volume allow through human proofreading of an automated segmentation.

Classical anatomical studies have suggested that the brainstem reticular circuitry in which the studied regions lie is ‘undifferentiated’23 or ‘diffuse’24. Furthermore, this region contains a tightly interwoven mixture of neurons involved in eye movements and, separately, body movements controlled by the spinal cord25. By applying graph clustering algorithms to the connectome, we reveal that this seemingly disorganized structure hides a strongly modular hierarchical organization. At the highest level, we find two anatomically defined modules that are specialized for different behaviors: one for eye movements and the other for body movements. The oculomotor module is in turn subdivided into two submodules specialized for movements of each eye. Further, each submodule contains a three-block cyclic substructure. Our linking of structure and behavior by modularity analysis at synaptic resolution is unique for the vertebrate nervous system.

Using this wiring diagram, we generated a computational network model to predict neural coding properties at cellular resolution across three brain regions. When relating the wiring diagram to function, a common approach starts by extracting rules of synaptic connectivity between cell types and then applies these rules to explain function26. This approach has worked well for direction selectivity in the retina1,14,27 but is not guaranteed to generalize to other brain structures that seem less stereotyped and precisely organized than the retina. Here, we take a more direct approach: use the wiring diagram to estimate a synaptic weight matrix characterizing physiological interactions between neurons and literally insert that matrix into a network model incorporating a minimal number of additional constraints from physiology. Surprisingly, the predictions turn out to be statistically consistent at a population level with neural activity recorded by calcium imaging of larval zebrafish during oculomotor behavior.

Results

Neuronal wiring diagram reconstructed from a vertebrate brainstem

We applied serial section EM to reconstruct synaptic connections between neurons in a larval zebrafish brainstem (Fig. 1). The imaged volume includes neurons involved in eye movements16,21,22,28. First, the dataset of Vishwanathan et al.22 was extended by additional imaging of the same serial sections. By improving the alignment of EM images relative to our original work, we created a 3D image stack that was amenable to semiautomated reconstruction29 of 100× more neurons than before. We trained a convolutional neural network to detect neuronal boundaries (Methods)29 and automatically segment the volume. To correct errors in segmentation, we repurposed Eyewire, which was originally developed for proofreading mouse retinal neurons14. Eyewirers proofread ~3,000 objects, which included neurons with cell bodies in the volume as well as ‘orphan’ neurites with no cell body in the volume (Fig. 1a). We will refer to all such objects as ‘neurons’ of the reconstructed network. Convolutional networks were used to automatically detect synaptic clefts and assign presynaptic and postsynaptic partner neurons to each cleft30 (Fig. 1b–d and Methods). The reconstructed dataset contained 2,884 neurons, 44,949 connections between pairs of neurons and 75,163 synapses (additional relevant features are summarized in Supplementary Fig. 1). Most connections (65%) involved just one synapse, but some involved two (19%), three (7.9%), four (3.7%) or more (4.0%) synapses, up to a maximum of 21 (Supplementary Fig. 1a).

Fig. 1: EM reconstructions of brainstem neurons.
figure 1

a, Three-dimensional rendering of reconstructed neurons. The large green cell body in the foreground is the Mauthner neuron (Mcell); Ro, rostral; C, caudal; D, dorsal; V, ventral; L, lateral; M, medial. The inset (top left) shows the location of the unilateral EM volume (black box) relative to the olfactory bulb (OB), tectum (Te), hindbrain (HB) and spinal cord (SC). b–d, Automatic synapse detection and partner assignment. b, Raw EM image; scale bar, 750 nm. c, Postsynaptic densities (PSDs) identified by a convolutional network. d, Postsynaptic densities (red) overlaid onto the original raw image together with an exemplar presynaptic (blue) and postsynaptic (yellow) partnership identified by a second convolutional network. e, Sagittal view of the identified ABDM (green) and ABDI (magenta) neurons overlaid over representative EM planes; R, rhombomere. The asterisk (*) indicates the Mauthner cell soma. f, Coronal planes showing the locations of the ABDM (left) and ABDI (right) neurons at the planes indicated by dotted black lines in the sagittal view. Black boxes highlight nerve bundles from these populations. g, Representative ABDM and ABDI neurons with arrows indicating the axons. h, Reconstructions of large and small RS neurons and Ve2 neurons.

After registration of our EM volume to the Z-Brain reference atlas31 (Supplementary Fig. 2), several important groups of neurons and neurites were identified (Methods). Comparison with transgenic lines and cranial nerves in the atlas identified the abducens (ABD) neurons that control extraocular muscles, secondary vestibular (Ve2) neurons32 involved in sensorimotor transformations such as the vestibulo-ocular reflex (Fig. 1e–h, Supplementary Fig. 3 and Methods), and two populations of reticulospinal (RS) projection neurons involved in escape behaviors and controlling body movements33,34,35. The neurons of one RS population were large and dorsally located, and the neurons of the other RS population were smaller and ventromedially located (Fig. 1h and Supplementary Fig. 4).

Axial and oculomotor modules in the brainstem

Complex systems, both biological and artificial36, can often be divided into modules such that interactions within modules are stronger than interactions between them. Although such modules are structurally defined, they often turn out to be functionally specialized. To look for modularity in our data, we divided the reconstructed neurons into ‘center’ and ‘periphery’ (Methods). Neurons in the ‘center’ (540 neurons, 283 with soma) are recurrently connected to other reconstructed neurons. Neurons in the ‘periphery’ (2,344 neurons), by contrast, have negligible (62 neurons) or zero recurrent connectivity as quantified by eigenvector centrality (Methods) and are involved in feedforward pathways that supply input to the center or transmit output from the center; for example, the periphery includes ABD neurons and most RS projecting neurons. We then applied graph clustering algorithms to the center population (Methods) to identify recurrently connected modules that may not be easily discerned from peripheral associations. To demonstrate that this graph-theoretic procedure identified biologically relevant modules and to suggest functional roles of these modules, we then examined the patterns of connections between the central and peripheral populations. In the interest of clarity, we present the resultant organization in a hierarchical manner.
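
The center/periphery division described above rests on eigenvector centrality, which can be computed by power iteration on the connectivity matrix. The sketch below is purely illustrative: the toy matrix, the symmetrization and the threshold are our assumptions, not the study's actual procedure (which is detailed in Methods).

```python
import numpy as np

def eigenvector_centrality(A, iters=200, tol=1e-10):
    """Power iteration on the symmetrized graph A + A.T, so that both
    incoming and outgoing recurrent connections contribute."""
    S = A + A.T
    v = np.ones(S.shape[0])
    for _ in range(iters):
        v_new = S @ v
        v_new /= np.linalg.norm(v_new)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    return v_new

# Toy connectivity (synapse counts): neurons 0-2 are recurrently coupled;
# neuron 3 only receives input, like a peripheral output neuron.
A = np.array([
    [0, 2, 1, 0],
    [1, 0, 2, 0],
    [2, 1, 0, 0],
    [0, 0, 1, 0],
], dtype=float)

c = eigenvector_centrality(A)
center = np.where(c > 0.5 * c.max())[0]  # illustrative threshold
```

Neurons falling below the threshold would be assigned to the periphery; the study additionally distinguishes neurons with negligible versus zero recurrent connectivity.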

At the highest level, the center contains two supermodules: one termed the ‘axial module’, or ‘modA’, which we hypothesize primarily plays a role in movements of the body axis, and a second termed the ‘oculomotor module’, or ‘modO’, which we propose primarily plays a role in eye movements (Fig. 2a,b). RS neurons in the periphery received much stronger connectivity from modA than from modO (Fig. 2a, RS (large), and Extended Data Fig. 1). Of the ten smaller RS neurons contained in the center, all were in modA and none were in modO (Extended Data Fig. 1a, black arrows). By contrast, the 54 peripheral ABD neurons received much stronger connectivity from modO than from modA (Fig. 2a), and all 34 of the Ve2 neurons (Fig. 2c) were members of modO. Together, these features suggest that modA is functionally related to the control of body movements, and modO plays a role in eye movements.

Fig. 2: Modularity and functional specialization of interneurons.
figure 2

a, Top, matrix of connections in the ‘center’ of the wiring diagram, with neurons clustered into two modules (modA and modO). Bottom, matrix of connections from center to large RS and ABD neurons in the periphery. b, Top, example connected pairs of modA neurons. Bottom, example connected pair of modO neurons (light and dark blue) and the overlap of their axons with the dendrite of an ABD internuclear cell (magenta). The grid in the background is the same in both images to facilitate comparison. c, Locations of reconstructed neuron somas (modA, orange; modO, blue) projected onto the horizontal plane and one-dimensional (1D) densities along the mediolateral (bottom) and rostrocaudal (left) axes. Closed circles are neurons with complete somas inside the reconstructed EM volume. Open circles are locations of the primary neurites exiting the top of the EM volume for cells with somas above the volume. The inset cartoon shows the region of the hindbrain in the figure. d, Postsynapses of neurons in modA and modO along with 1D densities. Every fifth postsynaptic density is plotted for clarity. e, Presynapses of neurons in modA and modO along with 1D densities. Every tenth presynaptic terminal is plotted for clarity. f, Schematic illustrating the definition of a potential (that is, false) synaptic connection identified when a presynaptic terminal (for example, axon 2) is proximal (red) to a postsynaptic density (for example, dendrite 1) but not actually in contact with it. g, Ratio of the number of within-module to the number of between-module synapses versus threshold distance for true and potential synapses. The table lists the actual true synapse densities for the data point with an asterisk (*). h, Ratio of numbers of synapses from neurons in modA and modO to peripheral neurons (ABD and RS).

Although the two populations were quite intermingled, there were notable differences in the spatial distributions of soma positions (Fig. 2c), postsynaptic densities (Fig. 2d) and presynaptic terminals (Fig. 2e). We therefore probed the extent to which these spatial distributions alone could account for the observed modularity.

To quantify modularity, we defined an index of wiring specificity as the ratio of the sum of within-module synapse densities to the sum of between-module synapse densities. Here, synapse density is defined as the number of synapses normalized by the product of presynaptic and postsynaptic neuron numbers. To look at the contribution of organization at the level of the neurites, we also defined a ‘potential synapse’ as a presynapse and a postsynapse within some threshold distance of each other (Fig. 2f), similar to the conventional definition as an axo-dendritic apposition within some threshold distance37. Based on actual synapses, the wiring specificity index was roughly 6 for the center neurons (Fig. 2g). The index dropped to less than 3 for potential synapses defined by a distance threshold of 5 μm and close to 2 at a distance threshold of 10 μm (Fig. 2g). Similar trends were observed using a corresponding index for wiring specificity to the periphery (Fig. 2h). Together, these results imply that the division into modA and modO reflects synaptic specificity at a spatial scale of a few microns or less9.
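
The wiring specificity index defined above is straightforward to compute from a synapse count matrix and module labels. A minimal sketch follows; the 6-neuron count matrix is invented for illustration and is not the study's data.

```python
import numpy as np

def wiring_specificity(N, labels):
    """Ratio of within-module to between-module synapse density.
    N[i, j] = number of synapses from neuron j onto neuron i;
    block density = synapse count / (n_presynaptic * n_postsynaptic)."""
    labels = np.asarray(labels)
    within, between = 0.0, 0.0
    for a in np.unique(labels):
        for b in np.unique(labels):
            pre, post = labels == b, labels == a
            density = N[np.ix_(post, pre)].sum() / (pre.sum() * post.sum())
            if a == b:
                within += density
            else:
                between += density
    return within / between

# Toy example: two 3-neuron modules with strong within-module wiring.
N = np.zeros((6, 6))
N[:3, :3] = 4   # module A internal synapses
N[3:, 3:] = 4   # module B internal synapses
N[:3, 3:] = 1   # weak cross-module synapses
N[3:, :3] = 1
labels = [0, 0, 0, 1, 1, 1]
index = wiring_specificity(N, labels)
```

Replacing the true synapse counts in N with potential synapse counts at a given distance threshold yields the corresponding curves in Fig. 2g.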

We also identified, at the population level, statistical differences in other anatomical features of modA and modO neurons and synapses (Extended Data Fig. 2), such as total arbor length and synapse distance from the soma (Extended Data Fig. 2a–c). For completeness, we note that the total number of synapses onto a neuron (regardless of origin) was 310 ± 261 for modA neurons, 174 ± 98 for modO neurons, 1,576 ± 1,555 for peripheral RS neurons and 411 ± 167 for ABD neurons.

The modules in Fig. 2 were obtained with the Louvain algorithm for graph clustering38, which is designed to maximize modularity. Similar modules are obtained when spectral clustering39 is used (Extended Data Fig. 3). As shown next, the binary division into two modules is the first step in a hierarchical procedure that reveals further substructure.

Two major submodules of the oculomotor network

At the next level of organization, the oculomotor center divided into two major submodules, one associated with control of the ipsilateral eye and one associated with control of the contralateral eye (Fig. 3a). The soma, postsynaptic density and presynaptic terminal locations for neurons in each module are shown in Fig. 3b–d. Neurons in the two modules were five times more likely to connect with neurons in the same module than with neurons in the other module (Fig. 3e). This strong modularity could not be explained solely by coarser-scale proximity of presynaptic terminals (Fig. 3d) and postsynaptic densities (Fig. 3c) as measured by potential synapses, as the wiring specificity ratio dropped to just over 2 when considering potential synapses at a distance of only 2 μm (Fig. 3e). The relationship of each module to the periphery was assessed by focusing on two neuronal groups within the ABD complex: the ABD motor (ABDM) neurons that directly drive the lateral rectus muscle of the ipsilateral eye and the ABD internuclear (ABDI) neurons that indirectly drive the medial rectus muscle of the contralateral eye through a disynaptic pathway. Increased activity in both ABDM and ABDI neurons drives eye movements toward the side of the brain on which the neurons reside (‘ipsiversive’ movements). Neurons in one submodule of modO preferentially connected to ABDM neurons, whereas neurons in the other submodule preferentially connected to ABDI neurons. This is evident from both visual inspection (Fig. 3a) and quantitative analysis, which revealed a wiring specificity of almost 6 (Fig. 3f). Preferences of individual neurons can be extreme: many modO cells connect only to ABDM neurons or only to ABDI neurons (Fig. 3g). We therefore refer to the submodules as motor targeting (modOM) and internuclear targeting (modOI) and suggest that they are largely involved in controlling movements of the ipsilateral and contralateral eyes, respectively.
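
An ocular preference index of the kind reported in Fig. 3g can be computed per neuron from its synapse counts onto the two ABD populations. The contrast form below is one plausible definition consistent with the description (+1 for ABDM-only, −1 for ABDI-only, 0 for equal); the exact formula used in the study is given in Methods.

```python
def ocular_preference(n_abdm, n_abdi):
    """Contrast index of a modO neuron's synapse counts onto ABDM vs ABDI:
    +1 -> connects only to ABDM, -1 -> only to ABDI, 0 -> equal counts."""
    total = n_abdm + n_abdi
    return (n_abdm - n_abdi) / total if total else 0.0

extreme_motor = ocular_preference(10, 0)        # ABDM only
extreme_internuclear = ocular_preference(0, 7)  # ABDI only
balanced = ocular_preference(3, 3)              # equal targeting
```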

Fig. 3: Submodules specialized for the two eyes.
figure 3

a, Top, matrix of connections within modO organized into two submodules termed modOM and modOI. Bottom, projections of modOM and modOI onto ABDM and ABDI neuron populations. b, Locations of reconstructed neuron somata along with 1D densities for neurons within modOM (blue) or modOI (brown). The symbols, inset and orientation are as in Fig. 2c. c,d, Postsynaptic densities (c) and presynaptic terminals (d) in modOI and modOM. Every fifth synaptic site was plotted for clarity. e, Ratio of the number of within-module synapses for modOM and modOI to the number of between-module synapses as a function of potential synapse distance. The table lists the actual number of true synapses for the data point with an asterisk (*). f, Ratio of the number of synaptic contacts between a modO submodule and its preferred peripheral partner versus those between a modO submodule and its nonpreferred peripheral partner. The numbers in the tables represent normalized synapse counts defined as the ratio of the sum of all synapses in a block to the product of the number of elements in the block. g, Ocular preference index for modOM and modOI neurons. A value of 1 or –1 indicates connections to only one ABD population. A value of 0 indicates an equal number of connections to each ABD population. Ve2 neurons were not included.

Population-level differences between modOM and modOI and their interconnections were also evident in the statistical distributions of synapse size and synapse distance from the soma (Extended Data Fig. 2d,e).

Each oculomotor submodule contains a three-block cycle

The results above suggest the existence of two weakly connected submodules within the oculomotor network. To gain insight into whether these submodules contained substructure, we next examined the eigenvalues of the modO connectivity matrix. The set of eigenvalues of a connectivity matrix, known as the eigenvalue spectrum, contains hallmark signatures of the matrix’s underlying structure. To understand the relationship between the eigenspectrum and network structure, we consider various null models.

When all structure is removed from modO by randomly shuffling connections (Methods), a plot of the real and imaginary parts of the eigenvalues consists of a roughly circular disk centered around 0 plus a single leading eigenvalue along the positive real axis (Fig. 4a, left). Such a pattern of eigenvalues is characteristic of a wide class of matrices with random entries40,41, with the radius proportional to the standard deviation and the leading eigenvalue proportional to the mean of the matrix entries. If, instead, we keep submodules modOM and modOI independent, removing all connections between these submodules, and then shuffle within each submodule, the eigenvalues consist of a roughly circular disk plus two leading eigenvalues (Fig. 4a, middle). Here, the slightly larger leading eigenvalue for modOI (purple point) relative to modOM (orange point) indicates that modOI has slightly stronger overall recurrence. If the connections coupling these submodules are maintained (but separately shuffled), a similar eigenspectrum emerges but with the leading eigenvalues pushed apart from one another along the real axis (Fig. 4a, right). Overall, this null model analysis shows that, if the previously identified submodules modOM and modOI contained no further substructure, then we would expect the eigenvalues for the modO connectivity matrix to occupy a roughly circular disk in the complex plane plus have two distinct leading eigenvalues along the positive real axis.
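
The null-model eigenspectrum described above, a disk of bulk eigenvalues whose radius scales with the standard deviation of the entries plus a leading real eigenvalue set by their mean, can be reproduced with a generic random matrix. The sketch below is illustrative; the size, mean and standard deviation are arbitrary, not values estimated from the connectome.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mean, sd = 500, 0.2, 1.0

# i.i.d. entries: the bulk eigenvalues fill a disk of radius ~ sd * sqrt(n)
# centered at 0, plus one leading eigenvalue near mean * n on the positive
# real axis (the connectome shuffle in Fig. 4a behaves analogously).
W = rng.normal(mean, sd, size=(n, n))
eig = np.linalg.eigvals(W)

i_lead = np.argmax(eig.real)
leading = eig[i_lead]
bulk = np.delete(eig, i_lead)
```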

Fig. 4: ModO contains two distinct cycles with different projections to the ABD.
figure 4

a, Eigenvalue spectrum for modO when connections are shuffled (Methods), with modO treated as a single block (left) or as two blocks consisting of modOM and modOI with interconnections removed (middle) or intact (right). Im, imaginary; Re, real. b, Eigenvalue spectrum for the actual connectome. c, Matrix of synaptic connections for modO and its connections to the ABD. Cells are grouped by block identity in a seven-block SBM. The orange and purple boxes highlight cycles in the connections within blocks OM1–OM3 and blocks OI1–OI3, respectively. Gray ticks indicate the location of Ve2 cells. d,e, Eigenvalue spectrum of modO after the cycles are decoupled by eliminating connections between the internuclear cycle and the other four blocks (d) and when the connections within the internuclear cycle are additionally shuffled (e). f, Diagram of connections between blocks in modO. Solid lines indicate the most prominent connections, and dashed lines indicate weaker connections. g, Leading (left, purple) and second leading (right, orange) eigenvectors for modO (top) and modO with the two cycles decoupled as in d.

The eigenspectrum of the actual modO connectivity matrix is notably richer than expected from the null models (Fig. 4b). Similar to the null model of unstructured submodules (Fig. 4a, right), there is a dense central mass of eigenvalues plus two prominent leading eigenvalues. However, unlike the unstructured submodules, the two leading eigenvalues are each part of a triangle of eigenvalues (Fig. 4b, orange and purple). Such triangles are a hallmark of matrices with a cyclic structure (that is, a set of three blocks connected as 1 → 2 → 3 → 1) and suggest the presence of two weakly coupled cycles of blocks in the modO connectivity matrix. Indeed, our recent analysis of three-cell motifs in the cellular connectivity of this system revealed an overrepresentation of three-cell cycles that could support cyclic block structure42. We therefore used a stochastic block model (SBM; Methods) to decompose modO into statistically interconnected blocks. We found that the predicted structure of two coupled three-block cycles emerged most clearly when we divided the network into seven blocks, where the seventh block predominantly consisted of Ve2 neurons (Fig. 4c).
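
The link between cyclic block structure and eigenvalue triangles can be checked numerically: a matrix whose only connections follow a three-block cycle has a spectrum that is invariant under rotation by 120° in the complex plane, so its leading (Perron) eigenvalue appears together with two rotated partners at the corners of a triangle. The sketch below uses arbitrary toy parameters, not the modO statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 100                       # neurons per block
blocks = 3
n = b * blocks
p_cycle, w = 0.3, 1.0         # connection probability and weight

# Connections only from block k to block (k+1) mod 3: 1 -> 2 -> 3 -> 1.
W = np.zeros((n, n))
for k in range(blocks):
    pre = slice(k * b, (k + 1) * b)
    nxt = (k + 1) % blocks
    post = slice(nxt * b, (nxt + 1) * b)
    W[post, pre] = w * (rng.random((b, b)) < p_cycle)

eig = np.linalg.eigvals(W)
# The three largest-magnitude eigenvalues sit at the corners of a triangle:
# lam, lam * exp(2*pi*i/3) and lam * exp(4*pi*i/3).
top3 = eig[np.argsort(-np.abs(eig))[:3]]
angles = np.sort(np.mod(np.angle(top3), 2 * np.pi))
```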

The first three-block cycle (Fig. 4c, orange: OM1 → OM2 → OM3 → OM1) connects most strongly to ABDM neurons, and 83% of its cells come from modOM. We therefore refer to this cycle as the ‘motor cycle OM1–OM3’. The other three-block cycle (Fig. 4c, purple: OI1 → OI2 → OI3 → OI1) connects predominantly to ABDI neurons, and 98% of its cells come from modOI. We therefore refer to this cycle as the ‘internuclear cycle OI1–OI3’. Because the seventh block contains nearly all of the Ve2 neurons (32 of 34), we refer to it as the vestibular-containing block ‘OVe2’; unlike the three-block cycles, this block projects roughly equally to both the ABDM and ABDI populations. This hierarchical division into three-block cycles plus a vestibular-containing block is robust to errors in synapse assignments or definitions for inclusion of neurons in the center population (Extended Data Fig. 4) and leads to a wiring specificity to the periphery of 5.9, similar to that for the division into only two submodules of Fig. 3f.

To better isolate the correspondence between the eigenvalue triangles and the cycles seen in the seven-block organization, we decoupled the motor and internuclear three-block cycles by removing all connections between the internuclear cycle and the rest of modO. This preserved the orange and purple triangles of eigenvalues but, as seen for the coupled versus uncoupled modOM and modOI (Fig. 4a, middle), brought the two leading eigenvalues closer together (Fig. 4d). Additionally shuffling the connections within the internuclear cycle removed the purple triangle while preserving the orange one (Fig. 4e), enabling us to identify the purple triangle with the internuclear cycle and the orange triangle with the motor cycle. A graphical summary of modO recurrent connectivity and connectivity to the periphery is provided in Fig. 4f.

The coupling between cycles, although relatively weak, may have important functional roles. The effect of this coupling can be seen in the eigenvectors corresponding to the two leading eigenvalues (Fig. 4g, top). These eigenvectors represent the patterns across the neurons of the network that have the first and second strongest recurrent feedback connectivity. When the cycles are decoupled, one modO eigenvector has nonzero entries only for neurons in the OI1–OI3 cycle (Fig. 4g, bottom left), whereas the other contains nonzero entries only for neurons in the OM1–OM3 cycle and OVe2 block (Fig. 4g, bottom right). Including the ABD neurons in the eigenvector analysis (which does not change the eigenvalues of Fig. 4b,d) further shows that the leading decoupled eigenvectors associate predominantly with ABDM and ABDI neurons, respectively (Fig. 4g, bottom). By contrast, when the full coupling is included, the leading eigenvectors approximately combine the eigenvectors found in the decoupled case. The leading eigenvector approximates a scaled sum of the two decoupled eigenvectors, and the second leading eigenvector approximates a scaled difference of the decoupled eigenvectors. Biologically, this suggests that activation of the pattern of modOM neurons corresponding to the first eigenvector may generate ‘conjugate eye movements’ (eyes moving together in the same direction, which requires coactivation of the ABDM and ABDI populations), whereas activation of the second eigenvector may be critical for maintaining either disconjugate eye movements (eyes moving in opposite directions) or deviations from conjugate eye movements.
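
The sum/difference behavior of the leading eigenvectors under weak coupling can be illustrated with a toy two-module random network. All parameters below are invented, and equal-strength modules are assumed for simplicity (unlike the slightly asymmetric modOM/modOI): with symmetric weak coupling, the leading eigenvector keeps one sign across both modules (a scaled sum) and the second changes sign between them (a scaled difference).

```python
import numpy as np

rng = np.random.default_rng(2)
b = 50
J_within, J_between, sd = 1.0, 0.15, 0.01

# Two recurrent modules (mean coupling J_within/b) linked by weak symmetric
# coupling (mean J_between/b), plus small Gaussian disorder.
W = sd * rng.standard_normal((2 * b, 2 * b))
W[:b, :b] += J_within / b
W[b:, b:] += J_within / b
W[:b, b:] += J_between / b
W[b:, :b] += J_between / b

vals, vecs = np.linalg.eig(W)
order = np.argsort(-vals.real)
v1 = vecs[:, order[0]].real   # leading eigenvector (~ sum mode)
v2 = vecs[:, order[1]].real   # second eigenvector (~ difference mode)

# Sign of the product of the two module means: +1 if same sign in both
# modules, -1 if the eigenvector flips sign between modules.
s1 = np.sign(v1[:b].mean() * v1[b:].mean())
s2 = np.sign(v2[:b].mean() * v2[b:].mean())
```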

Two of these graph-theoretically derived blocks would not be easily identifiable as part of modO by naively tracing connections back from the periphery. Block OM1 of the motor submodule projects extremely weakly to the motor neurons but sends the strongest projections to, and receives the strongest projections from, the internuclear submodule. Thus, it appears to be devoted almost entirely to internal processing. Similarly, block OI3 of the internuclear cycle has relatively little connection to the ABD. Other blocks, such as block OM3, appear to divide into two halves with different functions: for block OM3, only half projects to the ABD, whereas the other half provides the strongest projection to the OVe2 block.

We separately applied an SBM to modA and examined how axial submodules aligned with those from modO. We found three axial submodules, with strong connections within and relatively weak connections between the submodules. Coupling between the oculomotor and axial submodules was weak overall, with the strongest coupling being mediated by connections to and from axial block A1 (Extended Data Fig. 5).

Predicting neural coding and dynamics from the wiring diagram

We hypothesized that modO contains the majority of the ‘neural integrator’ for horizontal eye movements that transforms angular velocity inputs into angular position outputs. This transformation has been suggested to depend on a relatively high degree of recurrent interactions within the velocity-to-position neural integrator (VPNI). However, it is not known how these interactions shape the single-neuron coding properties of neurons within modO and its peripheral outputs in the ABD.

We developed a network model to assess whether functional properties of modO and ABD neurons could be predicted from connectomic information. Within modO, we distinguished between generic modO neurons and the exceptions (Ve2) that received inputs from primary vestibular afferents (Fig. 5a). We first focused on predicting the relative strength of the eye position signal in these different populations during spontaneous fixation behavior and then examined what aspects of the firing rate dynamics could be predicted.

Fig. 5: Connectome-based model captures functional characteristics of the oculomotor system.
figure 5

a, Schematic illustrating gross organization of synaptic connections (triangles, excitatory; bars, inhibitory) between the three optically imaged populations; Ve2, vestibular; ABD, combined ABDM (green) + ABDI (pink) populations. b, Eye position (top) and calcium fluorescence activity (bottom) of an ABD neuron ipsilateral to the shown eye (green: raw fluorescence; dotted black: neural activity estimate from deconvolved fluorescence); AU, arbitrary units. c, Top, for the neuron in b, deconvolved fluorescence versus eye position (gray) and best-fit relationship (red) used to determine the relative eye position sensitivity k̃ (see Methods). Bottom, for an example VPNI neuron, saccade-triggered average (STA) of deconvolved fluorescence (gray) and best fit to a sum of exponential functions with fixed time constants derived approximately from the principal components of the population firing rates (red; see Methods). d, Relative eye position sensitivities from the connectome-based model (gray) and imaging of real cells (green). e, Same as d except that the model uses potential synapses instead of the actual connectome. f, Cumulative variance explained for the leading principal components of the STA of the firing rates of the model (gray) or deconvolved fluorescence of imaged cells (green) for the period between 2 and 6 s after a saccade. g, For double exponential fits as in c, best-fit amplitudes of the exponentials for each cell in the model (gray) and each imaged cell (green). Note in d that VPNI cells were defined experimentally by having a sufficiently large correlation with eye position; thus, although included for completeness, the lowest-sensitivity simulated VPNI cells would not have been counted if they occurred in a functional imaging dataset.

A minimal set of physiological constraints were imposed on the model. Based on alignment with previous studies (see Methods and Identification of excitatory versus inhibitory neurons), we assumed that all Ve2 cells were inhibitory (Supplementary Fig. 3c and Methods)32 and the remaining cells were excitatory21 (Methods). We omitted interactions with contralateral neurons based on physiological studies indicating that persistent activity in the oculomotor integrator is generated independently by each half of the brain20,43,44. Likewise, we omitted interactions between modO and modA based on their separate roles in controlling eye movement versus body movement behaviors34,35,45 and the body-fixed nature of the oculomotor behavior being investigated.

We determined the model’s weight matrix from our EM reconstruction as follows. Each element Wij of the weight matrix is the physiological strength of the connection received by neuron i from neuron j. First, we approximated Wij as proportional to Nij, the number of synapses received by i from j. This approximation effectively assumes that all individual synapses involved in a connection have the same strength. For each neuron i, we then divided by a normalization factor ΣkNik equal to the total number of synapses received by neuron i, including those from neurons outside modO. This normalizing factor has two motivations. First, normalizing by the total number of incoming synapses provides a form of homeostatic synaptic scaling46. Second, the normalization provides a simple way to compensate for truncation of dendritic arbors by the borders of the EM volume. With this normalization, each Wij is proportional to the fraction of the total inputs that neuron j provides to neuron i. Finally, to set the proportionality constant, we applied an overall scaling of the weight matrix by a factor β chosen to make the network response exhibit persistent activity (Methods). Therefore, the final synaptic weight matrix took the form Wij = ±βNij/ΣkNik, with a positive sign when presynaptic neuron j is excitatory and a negative sign when it is inhibitory.
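
The weight construction described above reduces to a few lines of code. The sketch below uses invented synapse counts and an arbitrary β; in the study, β was fit so that the network exhibits persistent activity (Methods).

```python
import numpy as np

def build_weight_matrix(N, total_inputs, inhibitory, beta):
    """W[i, j] = ±beta * N[i, j] / total_inputs[i].

    N[i, j]         -- synapses received by neuron i from neuron j
    total_inputs[i] -- all synapses onto i, including from outside modO
                       (so truncated arbors are partially compensated)
    inhibitory[j]   -- True for inhibitory presynaptic neurons (e.g. Ve2)
    beta            -- overall gain (fit elsewhere, arbitrary here)
    """
    W = beta * N / np.asarray(total_inputs, dtype=float)[:, None]
    W[:, np.asarray(inhibitory)] *= -1.0
    return W

# Invented counts for three neurons; the third is treated as inhibitory.
N = np.array([[0, 3, 1],
              [2, 0, 2],
              [1, 1, 0]], dtype=float)
total_inputs = [10, 8, 5]          # includes synapses from outside the model
inhibitory = [False, False, True]
W = build_weight_matrix(N, total_inputs, inhibitory, beta=2.0)
```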

We then constructed a linear network model with weight matrix Wij (Methods). Linear model neurons47 were used for simplicity because most VPNI cells have firing rates that vary linearly with position for ipsilateral eye positions48,49.

We first used this model to predict the position sensitivities (that is, changes in firing rates for a given change in eye position) of all modO and ABD neurons. To form these predictions, we simulated the response to saccadic burst input and recorded the firing rates of model neurons during the subsequent fixation (Extended Data Fig. 6). We then tested these model predictions against prior calcium imaging experiments50 (Methods). We assessed experimental eye position sensitivity during spontaneous fixations in the dark (Fig. 5a–c). Because the imaged neurons come from different fish than the reconstructed one, they cannot be placed in one-to-one correspondence with neurons in our EM volume. Therefore, we resorted to a population-level comparison.
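A minimal sketch of this simulation, assuming a toy network in place of the connectome-derived one (the size, burst input, weight scaling and the use of mean fixation rate as a sensitivity proxy are all hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Toy recurrent weight matrix, rescaled so its leading eigenvalue is just
# below 1 -- the regime in which a linear network can hold activity
# persistently after a transient input.
J = rng.random((n, n)) / n
J *= 0.99 / np.max(np.linalg.eigvals(J).real)

tau = 1.0        # intrinsic time constant (s), the scale estimated in the text
dt = 0.01
T = int(6 / dt)  # simulate 6 s after the saccade

r = np.zeros(n)
burst = rng.random(n)  # hypothetical saccadic burst input pattern
rates = np.empty((T, n))
for step in range(T):
    u = burst if step * dt < 0.05 else 0.0   # brief 50-ms burst input
    r = r + (dt / tau) * (-r + J @ r + u)    # Euler step of the linear model
    rates[step] = r

# Proxy for eye position sensitivity: mean firing rate of each model
# neuron during the fixation period 2-6 s after the saccade.
sensitivity = rates[int(2 / dt):].mean(axis=0)
```

The distribution of `sensitivity` across neurons is the kind of population-level quantity compared against imaging in Fig. 5d.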

Each of the four simulated populations (Ve2, generic modO, ABDM and ABDI) displayed a characteristic distribution of eye position sensitivities (Fig. 5d, gray). The Ve2 population exhibited little eye position sensitivity. The generic modO population that contains the VPNI neurons exhibited a long-tailed distribution of eye position sensitivities with much higher mean sensitivity. The ABD population generated the largest eye position sensitivities, and these were bimodally distributed, with higher sensitivities for ABDI than for ABDM neurons. Qualitatively, the distribution of experimental eye position sensitivities for each population matched the model predictions quite well (Fig. 5d, green), although the bimodality seen in the model ABD populations was not clearly evident. We compared these simulations to three classes of neurons (VPNI, Ve2 and ABD) in the experiments, which could not easily distinguish between ABDM and ABDI neurons.

To test whether potential connectivity would have been sufficient for our network modeling, we estimated Nij by the number of potential synapses onto neuron i from neuron j and then computed the weight matrix Wij by the normalization described above (Methods). When our weight matrix based on potential synapses was substituted into our network model, we obtained population distributions for eye position sensitivities that were considerably different from experimental measurements (Fig. 5e). This suggests that the functional properties of the circuit depend on the spatially precise connections revealed by the EM connectome.

We next tested whether the connectome could predict the low dimensionality of the system dynamics and any features of the distribution of single-neuron relaxation dynamics across the population. Regarding the low dimensionality, the (real parts of the) eigenvalues of the weight matrix set the relative timescales of decay in the system; because the matrix Wij has two leading eigenvalues, the model immediately predicts that the postsaccadic activity of the network will exhibit low-dimensional dynamics at long times (Fig. 5f; percent variance explained by the leading principal components of recorded and simulated network activity). To assess the dynamics of exponential decay of the network activity, we performed a double exponential fit to each neuron in the model and experimental recordings (Fig. 5c, bottom). We then calculated the amplitude of each neuron’s participation in the longest and second-longest exponential decays. Consistent with the experiments and previous findings16, the model predicted a negative correlation between the decay amplitudes (Fig. 5g).
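Because the two time constants of the double exponential are held fixed, the per-neuron amplitudes reduce to an ordinary linear least-squares fit. A sketch with hypothetical time constants:

```python
import numpy as np

# Fixed decay time constants (s); hypothetical stand-ins for values derived
# from the principal components of the population activity.
tau_slow, tau_fast = 4.0, 1.0
t = np.arange(2.0, 6.0, 0.1)  # postsaccadic analysis window, 2-6 s

# With the time constants fixed, the two exponentials form a linear design
# matrix, so the best-fit amplitudes follow from least squares.
X = np.column_stack([np.exp(-t / tau_slow), np.exp(-t / tau_fast)])

def fit_amplitudes(trace):
    """Amplitudes of a neuron's participation in the slow and fast decays."""
    a, *_ = np.linalg.lstsq(X, trace, rcond=None)
    return a

# Synthetic trace with known amplitudes, to check the recovery.
true_a = np.array([1.5, -0.5])
a_hat = fit_amplitudes(X @ true_a)
```

Plotting the two fitted amplitudes against each other across neurons gives the scatter compared in Fig. 5g.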

To test the robustness of these results, we changed the model by adjusting the threshold for dividing neurons into center and periphery (Methods and Extended Data Fig. 7). Even if the number of neurons in the center was reduced by 25%, the model distributions of eye position sensitivities and exponential decay dynamics as well as the dimensionality of the network remained similar (Extended Data Fig. 7a,c). We also corrupted our reconstructed wiring diagram by simulating errors in the automated synapse detection and likewise found that the population distributions of eye position sensitivities and exponential decay dynamics and the dimensionality of the network remained very similar (Extended Data Fig. 7b,d).

Remarkably, the match between model predictions and experiments in Fig. 5d,f,g involved fitting only a minimal set of parameters. For the eye position sensitivity distributions of Fig. 5d, we set an overall scale factor such that the mean eye position sensitivity for the generic modO population in the model matched that of the experimental VPNI population; beyond this, as long as the intrinsic time constant was set short enough that the eye position sensitivity was dominated by the leading eigenvector of the weight matrix, the shapes of the distributions matched. Quantitatively matching the approximately two-dimensional persistent activity of the experimentally recorded population (Fig. 5f,g) required additionally setting the intrinsic time constant τ and the value of the overall weight scale parameter β in the model. Interestingly, we found that the best fits were obtained with an intrinsic time constant of ~1 s, a timescale estimated previously from inactivation experiments43.

Finally, we examined which features of the wiring diagram are important for generating the above results. We did this by randomly shuffling the connections within one-, two- and seven-block models of modO. The one-block model failed to capture the eye position sensitivity distributions, the network dynamics dimensionality and the exponential decay dynamics distributions, whereas both the two- and seven-block models reasonably captured the experimental findings (Extended Data Fig. 8a–c, top). The eye position sensitivity distributions of the seven-block model were slightly more robust to shuffling than those of the two-block model (Extended Data Fig. 8a–c, middle). The observation that the two- and seven-block models behaved very similarly likely reflects that our experiments probed only the longest timescales of behavior (the two leading eigenvalues), and the two- and seven-block models have highly similar leading eigenvalues (Extended Data Fig. 8a–c, bottom, and Extended Data Fig. 9). Thus, although the seven-block model is required to recover the cyclic connectivity of the network and the separate vestibular-dominated block OVe2, it is not required to reveal the slowest timescale decay dynamics.
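The shuffles can be sketched as permuting weights independently within each block of a block partition, which preserves block-level connection statistics while destroying finer structure (toy matrix and two-block labels; the real analysis used the one-, two- and seven-block partitions of modO):

```python
import numpy as np

rng = np.random.default_rng(1)

def shuffle_within_blocks(W, labels):
    """Permute the entries of W independently within each block, where a
    block is the set of (i, j) pairs with a given pair of labels
    (labels[i], labels[j]). Block sums are preserved exactly."""
    W_sh = W.copy()
    for a in np.unique(labels):
        for b in np.unique(labels):
            rows = np.where(labels == a)[0]
            cols = np.where(labels == b)[0]
            block = W_sh[np.ix_(rows, cols)]
            W_sh[np.ix_(rows, cols)] = (
                rng.permutation(block.ravel()).reshape(block.shape))
    return W_sh

W = rng.random((6, 6))
labels = np.array([0, 0, 0, 1, 1, 1])  # toy two-block assignment
W_sh = shuffle_within_blocks(W, labels)
```

Comparing the eigenspectra of `W` and `W_sh` then shows which dynamical features survive the loss of within-block structure.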

Discussion

This work shows how the neural coding properties throughout a hindbrain network relate to its synapse-resolution structure. Through connectomic reconstruction and analysis, we reveal that hindbrain regions previously thought to be largely unstructured instead contain a hierarchical modular organization that closely aligns with known motor functions. Directly inserting the connectome into a network model statistically predicts the single-neuron coding properties assessed with two-photon calcium imaging throughout the reconstructed volume, representing a breakthrough in deriving neuronal properties within a topographically heterogeneous network from the connectome. EM-resolution analysis was critical to revealing the above network structure and predictions, as there were large discrepancies when lower-resolution analysis based on potential synapses was used to assess connectivity. The resulting connectome and modular analysis tools are being provided as an open resource to the community.

Low-dimensional brain dynamics have been found to underlie a wide array of motor, navigational and cognitive functions. Twenty-five years ago, ‘line attractor’ and ‘ring attractor’ network models were proposed for low-dimensional neural dynamics in the oculomotor15 and head direction systems51,52, respectively. Connectomic information from the Drosophila head direction system10 is currently being used to inform ring attractor network models. Our work similarly informs line attractor network models, for which the VPNI has served as a model system53,54. The classic view of this system is as a line attractor with unstructured recurrent feedback leading to persistent activity15. This picture corresponds well to the one-block shuffled connectivity of our network analysis (Fig. 4a, left) but does not fit well with the true connectome. Rather, we found the connectome for the VPNI to contain two weakly coupled three-block cycles that lead to a corresponding pair of large real eigenvalues, providing an anatomical basis for the low-dimensional neural activity observed in this network16,55. Future connectomes incorporating the different classes of anatomical inputs targeting the VPNI may additionally explain the previously observed context dependence of these low-dimensional dynamics16.

The reconstruction and analysis of modO provides further insights into the organization and function of the VPNI and its relation to the larger oculomotor system. The presence of two weakly coupled VPNI subnetworks, preferentially projecting either to ABDM neurons (modOM) or ABDI neurons (modOI), likely enables separate control of the two muscles (lateral rectus and medial rectus, respectively) controlling horizontal eye movements. Together, these subnetworks could provide for maintenance of conjugate and disconjugate eye movements56,57,58. The identified cyclic structure implies a pair of complex conjugate eigenvalues in the eigenspectrum (Fig. 4a) that, dynamically, corresponds to overdamped oscillations (Extended Data Fig. 10). This may be required for control of the eye plant, which can be approximated as a highly damped mass-spring system whose impulse-response function exhibits overdamped oscillations59 and may suggest the importance of a weak, but non-negligible, mass term in the plant that is commonly ignored in simplified descriptions55,60. Finally, at a gross anatomical level, previous physiological studies of VPNI cells mainly focused on rhombomere 7/8 (R7/R8; refs. 16,21,22,55). Our map of modO (Fig. 2c) suggests that, at least at this stage of development, the VPNI should also include R4–R6. This extension is consistent with a previous observation that VPNI function was only partially abolished by sizable inactivation of R7/R8 (ref. 55) and with previous reports of eye position signals in R4–R6 neurons that are not from ABD neurons50,57.

In most neural network models of brain function, the synaptic weight matrix has been regarded as the solution to an ‘inverse problem’. Given the observed effects (neural activity and behavior), the modeler attempts to identify the unobserved cause (weight matrix). Connectomics offers the possibility of treating network modeling as more of a ‘forward problem’. The forward approach has been feasible for small nervous systems in which the weight matrix can be directly observed and completely mapped by synaptic physiology61. In sensory systems, a forward approach has been used to demonstrate how connectomic information could be used to model whitening of odor representations in a vertebrate olfactory bulb5 and to constrain a model of orientation and direction selectivity in the Drosophila visual motion detection circuit62, although fine-tuning by backpropagation learning was necessary in the latter case. At a lower, ‘mesoscopic’ level of resolution, interarea projection maps in primates have been used to explain the temporal dynamics of cortical responses63. Here, we have shown how, with a minimal number of assumptions, the forward modeling approach could predict the statistical distribution of single-neuron eye position sensitivities and exponential decay dynamics for several neural populations measured by calcium imaging of animals during ocular fixations (Fig. 5c). Some ‘inverse’ aspects to our modeling remained because we constrained ourselves to modeling eye movements in the absence of body movements. Our forward approach also incorporated a minimal set of key anatomical and physiological constraints such as known signs of connections21,32, physiological observations about the approximate linearity of oculomotor responses18,49 and functional independence of the bilateral halves of the circuit20,56. Nevertheless, given the minimal nature of these physiological constraints, our success in matching physiological observations is perhaps surprising. 
This is especially so given our naive estimate of synaptic weights from the simple measure of synapse counts, our simplified treatment of cell morphology and of synaptic and intrinsic cellular biophysics by linear rate model neurons, and our neglect of possible neuromodulatory effects64. Comparisons between model predictions and physiological data at the level of single cells, rather than populations, might require more sophisticated modeling of cell- and synapse-specific biophysics.

As more connectomes become available in other settings, it will be important to consider which physiological constraints need to be incorporated to make appropriate use of these powerful datasets. Answers to this question depend on many factors, including the breadth of behaviors to be produced in a single model, the range of dynamics of the constituent components and the degree to which the model is to produce quantitative versus qualitative matches to data. Nevertheless, we hope that our work suggests how, when guided by knowledge of the various behaviors a circuit participates in and appropriate physiological constraints gleaned from recordings and perturbations of activity, it may be possible in even more complex circuits to identify physiological modules whose function can be well understood using the connectome-based analysis and modeling approach taken here.

Methods

Image acquisition and alignment

We acquired a dataset of the larval zebrafish hindbrain (mitfab692/+ (ABxWIK), 6 days after fertilization, sex unspecified, Zebrafish International Resource Center) that extends 250 μm rostrocaudally and includes R4 to R7/R8. The volume extends 120 μm laterally from the midline and 80 μm ventrally from the plane of the Mauthner cell axon. The serial-section electron microscopy (ssEM) dataset extended the original dataset of Vishwanathan et al.22 through additional imaging of the same serial sections. The images were acquired using a Zeiss field-emission scanning electron microscope with WaferMapper software65. Only a few tens of neurons had been manually reconstructed in our original publication on the ssEM dataset in Vishwanathan et al.22. The dataset was stitched and aligned using the custom package Alembic (see Code availability). The tiles from each section were first montaged in two dimensions (two steps) and then registered and aligned in 3D as whole sections (four steps). Point correspondences for alignment were generated by block matching via normalized cross-correlations both between tiles and across sections. Errors in each step were found by a combination of programmed flags (such as lower than expected correspondences, small search radius, large distribution of norms or high residuals after mesh relaxation) and visual inspection. They were corrected either by changing the parameters or by manual filtering of points. In most cases, the template and the source were both passed through a bandpass filter. The final parameters that were used are listed in Table 1.

Table 1 Parameters used for image alignment

Stitching of tiles (montaging) within a single section was split into a linear translation step (premontage) and a nonlinear elastic step (elastic montage). In the premontage step, individual tiles were assembled to have 10% overlap between neighboring tiles, as specified during imaging and by fixing a single tile (anchoring) in place. They were then translated row by row and column by column according to the signal correspondence found between the overlaps. In the elastic montage step, the locations of the tiles were initialized from the translated locations found previously, and blockmatches were computed every 100 pixels on a regular triangular mesh (see Table 1 for the parameters used). Once the correspondences were found, outliers were filtered by checking the spatial distribution of the cross-correlogram (σ filter), height of the peak of the correlogram (Pearson r value), dynamic range of the source patch contrast, kurtosis of the source patch, local consensus (average of immediate neighbors) and global consensus (inside the section). After the errors had been corrected by filtering bad matches, the tiles were modeled as individual spring meshes attached via the matches66, and the linear system was solved using conjugate gradient descent. The mean residual errors were in the range of 0.5–1.0 pixels after relaxation.
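The core block-matching operation can be sketched as an exhaustive search for the offset maximizing the normalized cross-correlation (a toy image stands in for the bandpassed EM tiles, and the filtering steps described above are omitted):

```python
import numpy as np

def ncc(a, b):
    """Pearson correlation of two equally sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def block_match(template, image, radius):
    """Search offsets (dy, dx) within +/-radius for the position where
    `template` best matches `image`; offset (0, 0) corresponds to the
    template's upper-left corner sitting at (radius, radius)."""
    h, w = template.shape
    best, best_r = (0, 0), -np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            patch = image[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
            r = ncc(template, patch)
            if r > best_r:
                best_r, best = r, (dy, dx)
    return best, best_r

rng = np.random.default_rng(2)
img = rng.random((40, 40))
tmpl = img[13:29, 11:27]   # patch cut at a known location
# With radius 8, the true offset of this template is (13-8, 11-8) = (5, 3).
offset, r = block_match(tmpl, img, radius=8)
```

In the pipeline, filters on the correlogram (σ filter, Pearson r, contrast, kurtosis, consensus) then reject matches like these when they are unreliable.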

The intersection alignment was split into a translation step (pre-prealignment), a regularized affine step (prealignment), a fast coarse elastic step (rough alignment) and a slow fine elastic step (fine alignment). In the pre-prealignment step, a central patch of the given montaged section was matched to the previous montaged section to obtain the rough translation between two montaged sections. In the prealignment step, the montaged images were offset by that translation, and then a small number of correspondences were found between the two montaged sections. The sections were then aligned using an affine transformation fit by least squares, regularized with a 10% (empirically derived) weighting of the identity transformation to reduce shear from accumulating and propagating across multiple sections. Proceeding sequentially allowed the entire stack to be brought roughly into place. The mean residual errors after prealignment were approximately 3.5 pixels after relaxation. In the subsequent rough and fine alignment steps, the blockmatches were computed and filtered between bandpassed neighboring sections in a regular triangular mesh, similar to the elastic montaging step, followed by a conjugate gradient descent on the linear system. Splitting the elastic step into rough and fine alignments allowed the search radius to be reduced during the fine alignment relative to what would have been necessary with a direct attempt, thereby lowering the likelihood of spurious matches as well as the computing time.
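The identity-regularized affine fit can be sketched as augmented least squares, with extra rows that pull the linear part toward the identity (the 10% weighting is represented here by a hypothetical `lam`; the point sets are toys):

```python
import numpy as np

def fit_affine_regularized(src, dst, lam=0.1):
    """Fit dst ~= src @ A + t by least squares, penalizing deviation of A
    from the identity. src, dst: (n, 2) arrays of corresponding points."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Augmented system: the extra rows lam*I on both sides bias A toward I.
    X = np.vstack([src_c, lam * np.eye(2)])
    Y = np.vstack([dst_c, lam * np.eye(2)])
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    t = dst_mean - src_mean @ A
    return A, t

rng = np.random.default_rng(3)
src = rng.random((20, 2)) * 50              # toy correspondence points
A_true = np.array([[1.02, 0.01], [-0.01, 0.98]])
t_true = np.array([3.0, -2.0])
dst = src @ A_true + t_true                 # noiseless target points
A, t = fit_affine_regularized(src, dst)
```

With clean correspondences the regularization is nearly invisible; its value is in keeping A near the identity when matches are sparse or noisy, which is what prevents shear from compounding across sections.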

Convolutional net training

Dataset

Four expert brain image analysts (D. Visser, Princeton Neuroscience Institute; K.W.; M. Moore, Princeton Neuroscience Institute; and S.K.) manually segmented neuronal cell boundaries from six subvolumes of EM images with the manual annotation tool VAST67, labeling 194.4 million voxels in total. These labeled subvolumes were used as the ground truth for training convolutional networks to detect neuronal boundaries. We used 187.7 million voxels for training and reserved 6.7 million voxels for validation.

Network architecture

To detect neuronal boundaries, we used a multiscale 3D convolutional network architecture similar to the boundary detector in Zung et al.68. This architecture was similar to U-Net69 but with more pathways between scales. We augmented the original architecture of Zung et al.68 with two modifications. First, we added a ‘lateral’ convolution between every pair of horizontally adjacent layers (that is, between feature maps at the same scale). Second, we used batch normalization70 at every layer (except for the output layer). These two architectural modifications were found to improve boundary detection accuracy and to stabilize and speed up training, respectively. For more details, we refer the reader to Zung et al.68.

Training procedures

We implemented the training and inference of our boundary detectors with the Caffe deep learning framework71. We trained the networks on a single Titan X Pascal GPU. We optimized the binary cross-entropy loss with the Adam optimizer72 initialized with α = 0.001, β1 = 0.9, β2 = 0.999 and ε = 0.01. The step size α was halved when the validation loss plateaued after 135,000, 145,000 and 175,000 iterations. We used a single training example (minibatch of size 1) to compute gradients for each training iteration. The gradient for target affinities (the degree to which image pixels are grouped together) in each training example was reweighted dynamically to compensate for the high imbalance between target classes (that is, low and high affinities). Specifically, we weighted each affinity inversely proportional to the class frequency, which was computed independently within each of the three affinity maps (x, y and z) and dynamically in each training example. We augmented training data using (1) random flips and rotations by 90°, (2) brightness and contrast augmentation, (3) random warping by combining five types of linear transformation (continuous rotation, shear, twist, scale and perspective stretch) and (4) misalignment augmentation73 with a maximum displacement of 20 pixels in the x and y dimensions. The training was terminated after 1 million iterations, which took about 2 weeks. We chose the model with the lowest validation loss at 550,000 iterations.
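The dynamic inverse-class-frequency reweighting can be sketched for a single binary affinity map (toy target; the pipeline computes this independently for each of the x, y and z affinity maps in every training example, and assumes both classes are present):

```python
import numpy as np

def affinity_weights(target):
    """Per-voxel loss weights for a binary affinity target (0 or 1),
    inversely proportional to class frequency, so that the low- and
    high-affinity classes contribute equally to the loss."""
    n = target.size
    n_pos = target.sum()
    n_neg = n - n_pos
    w = np.empty(target.shape)
    w[target == 1] = n / (2 * n_pos)
    w[target == 0] = n / (2 * n_neg)
    return w

# Toy, heavily imbalanced target: 2 high-affinity voxels out of 8.
target = np.array([1., 1., 0., 0., 0., 0., 0., 0.])
w = affinity_weights(target)
```

Multiplying the per-voxel cross-entropy by `w` gives the reweighted gradient described above.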

Convolutional net inference

The above trained network was used to produce an affinity map of the whole dataset using the ChunkFlow.jl package74. Briefly, the computational tasks were defined in a JSON formatted string and submitted to a queue in Amazon Web Services Simple Queue Service (AWS SQS). We launched 13 computational workers locally with NVIDIA Titan X GPU. The workers fetched tasks from the AWS SQS queue and performed the computation. The workers first cut out a chunk of the image volume using BigArrays.jl and decomposed it into overlapping patches. The patches were fed into the convolutional network model to perform inference in PyTorch75. The output affinity map patches were blended in a buffer chunk. The output chunk was cropped around the margin to reduce boundary effects. The final affinity map chunk, which is aligned with block size in cloud storage, was uploaded using BigArrays.jl. Both image and affinity map volumes were stored in Neuroglancer precomputed format (https://neurodata.io/help/precomputed/). The inference took about 17 days in total and produced about 26 TB of affinity map.

Chunk-wise segmentation

To perform segmentation of the entire volume, we divided the volume into ‘chunks’. Overlapping affinity map chunks were cut out using BigArrays.jl, and a size-dependent watershed algorithm76 was applied to agglomerate neighboring voxels to make supervoxels. The agglomerated supervoxels are represented as a graph where the supervoxels are nodes and the mean affinity values between contacting supervoxels are the edge weights. A minimum spanning tree was constructed from the graph by recursively merging the highest weight edges. This oversegmented volume containing all supervoxels and the minimum spanning tree was ingested into Eyewire (https://eyewire.org) for crowdsourced proofreading.
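The tree construction can be sketched with a Kruskal pass over a toy supervoxel graph. Because higher mean affinity means "more likely the same neuron", recursively merging the highest-weight edges builds what is formally a maximum spanning tree on affinities (equivalently, a minimum spanning tree on 1 − affinity):

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def max_spanning_tree(n_nodes, edges):
    """edges: list of (affinity, u, v). Returns tree edges in merge order,
    processing the highest-affinity edges first (Kruskal's algorithm)."""
    parent = list(range(n_nodes))
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:          # only merge supervoxels not yet connected
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Toy supervoxel graph: 4 supervoxels, mean affinities on 4 contacts.
edges = [(0.9, 0, 1), (0.2, 1, 2), (0.8, 2, 3), (0.5, 0, 2)]
tree = max_spanning_tree(4, edges)
```

Thresholding the tree edges at an agglomeration level (the slider exposed in Eyewire) then determines which supervoxels end up in the same segment.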

Semiautomated reconstructions on Eyewire

Neurons were chosen for proofreading in Eyewire based on an initial set of ‘seed’ neurons that were identified as carrying eye position signals by co-registering the EM volume to calcium imaging performed on the same animal22. All pre- and postsynaptic partners of the initial seed of 22 neurons were reconstructed. Following this, we reconstructed partners of the neurons that were reconstructed in the initial round in a random manner. Eyewirers were provided the option of agglomerating (merging) supervoxels using a slider to change the threshold of agglomeration. To ensure accurate reconstructions, we did two things: (1) only players who met a certain accuracy threshold (see below), determined by their performance on a previously published retinal dataset14,77, were allowed to reconstruct zebrafish neurons, and (2) the reconstructions were performed by two players in two rounds, in which the second player could modify the first player’s reconstruction (Supplementary Information). Finally, after two rounds of reconstruction, neuronal reconstructions were checked and, if necessary, corrected by expert in-house image analysts who each had more than 5,000 h of experience. The accuracy of the players in the crowd compared to experts (assuming experts are 100%) was >80% in the first round and ~95% after the second round of tracing. The validated reconstructions were subsequently skeletonized for analysis purposes. Player accuracy was calculated as an F1 score, F1 = 2TP/(2TP + FP + FN), where TP represents true positives, FP represents false positives and FN represents false negatives. All scores were calculated as a sum over voxels. TP was assigned when both the player and the expert agreed that the segmentation was correct, FN was assigned when the player missed segments that were added in by the expert, and FP was assigned when the player erroneously added segments that did not belong. Two F1 scores were calculated for each player, one for round 1 and one for round 2.
No player played the same neuron in both rounds. Typically, at an agglomeration threshold of 0.3, the segmentation had an F1 score of 62%.
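The voxel-wise F1 computation is, in sketch (toy binary masks standing in for a player's and an expert's segmentations):

```python
import numpy as np

def f1_score_voxels(player, expert):
    """Voxel-wise F1 = 2TP / (2TP + FP + FN) between a player's binary
    segmentation mask and the expert's."""
    tp = np.logical_and(player == 1, expert == 1).sum()
    fp = np.logical_and(player == 1, expert == 0).sum()
    fn = np.logical_and(player == 0, expert == 1).sum()
    return 2 * tp / (2 * tp + fp + fn)

# Toy masks: the player misses one expert voxel and adds one spurious voxel.
player = np.array([1, 1, 0, 1, 0, 0])
expert = np.array([1, 1, 1, 0, 0, 0])
```

Here TP = 2, FP = 1 and FN = 1, giving F1 = 4/6 ≈ 0.67, comparable to the ~62% quoted for the raw agglomeration at threshold 0.3.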

Only the most experienced players on Eyewire were allowed to participate. A small group of four highly experienced players received an invitation to test this new dataset. The players were given the title of ‘Mystic’, which was a new status created to enable gameplay and became the highest achievable rank in the game. Subsequent Mystic players had to apply for the status to unlock access to the zebrafish dataset within Eyewire. There was a high threshold of achievement required for a player to gain Mystic status. Each player was required to reach the previous highest status within the game as well as complete 200+ cubes a month and maintain 95% accuracy when resolving cell errors. Once a player was approved by the lab, they were granted access to the zebrafish dataset and given the option to have a live tutorial session with an Eyewire Admin. There was also a prerecorded tutorial video and written tutorial materials for players who could not attend the live session or who desired a review of the materials. Newly promoted players were also given a special badge, a new chat color and access to a new chat room exclusive to Mystics. These rewards helped motivate players by showing their elevated status within the game as well as giving them a space to discuss issues specific to the zebrafish dataset.

Cells were parceled out in batches to players. When a cell went live on Eyewire, it could be claimed by any Mystic. Each cell was reconstructed by only one player at a time. Once this player had finished their reconstruction, a second player would check over the cell for errors. After the second check, a final check was done by an Eyewire Admin. To mitigate confusion when claiming cells, a special graphical user interface was built into the game that allowed players to see the status of each cell currently online. A cell could be at one of the following five statuses: ‘Need Player A’, ‘Player A’, ‘Need Player B’, ‘Player B’ and ‘Need Admin’. These statuses indicated whether a cell needed to be checked or was in the process of being checked and whether it needed a first-, second- or Admin-level check. At each stage, the username of the player or Admin who had done a first, second or final check was also visible. It was made mandatory that the first and second checks were performed by two separate players.

Collaboration and feedback were important parts of the checking process. If a player was unsure about an area of the cell they were working on, they could leave a note with a screenshot and detail of the issue or create an alert that would notify an Admin. If a ‘Player B’ or an Admin noticed a mistake made earlier in the pipeline, they could inform the player of the issue via a ‘Review’ document or through an in-game notification (player tagging).

To differentiate the zebrafish cells from the regular dataset, each cell was labeled with a Mystic tag. This tag helped identify the cells as separate from the e2198 retinal Eyewire dataset and also populated them to a menu of active zebrafish cells within Eyewire.

Players were rewarded for their work in the zebrafish dataset with points. For every edit they made to a cell, they received 300 points. Points earned while playing zebrafish were added to a player’s overall points score for all gameplay done on Eyewire and appeared in the Eyewire leaderboard.

Skeletonization

The neuron segmentation IDs were ingested into an AWS SQS queue, and multiple distributed workers were launched in Google Cloud using Kubernetes. Each worker fetched the segmentation chunks associated with a neuron ID. The segmentation voxels were extracted as a point cloud, and the distance to boundary field (DBF) was computed inside each chunk. Finally, a modified version of the skeletonization algorithm TEASAR was applied78. Briefly, we constructed a weighted undirected graph from the point cloud, where neighboring points are connected with an edge and the edge weight is computed from the DBF. We then took the point with the largest DBF as the source and found the furthest point as the target. The shortest path from source to target in the graph was computed as the skeleton nodes. The surrounding points were labeled as visited, and the closest remaining unvisited point was taken as the new source. We repeated this process until all the points were visited. The skeleton node diameter was set as its DBF. The skeleton nodes were postprocessed by removing redundant nodes, removing ‘hairs’ based on diameter, removing branches inside somata, downsampling the nodes, merging single-child segments and smoothing the skeleton path. All skeletonization was performed at MIP (multum in parvo) level 4.
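The core TEASAR step, shortest paths from the largest-DBF source point followed by a walk back from the furthest point, can be sketched on a toy graph (pure-Python Dijkstra; the edge weights stand in for the DBF-derived weights):

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances and predecessors on a weighted graph given
    as {node: [(neighbor, weight), ...]}."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def skeleton_path(adj, source):
    """One TEASAR-style pass: from `source` (the largest-DBF point), find
    the furthest reachable point and return the shortest path to it."""
    dist, prev = dijkstra(adj, source)
    target = max(dist, key=dist.get)
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy point-cloud graph: a chain 0-1-2-3 with a short spur 1-4.
adj = {0: [(1, 1.0)],
       1: [(0, 1.0), (2, 1.0), (4, 0.5)],
       2: [(1, 1.0), (3, 1.0)],
       3: [(2, 1.0)],
       4: [(1, 0.5)]}
path = skeleton_path(adj, 0)
```

The full algorithm then marks points near the path as visited and repeats from the nearest unvisited point, so side branches like the spur get their own paths in later passes.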

Synapse detection

Synapses were automatically segmented in this dataset using neural networks to detect clefts and assign the correct partner as previously described30. Briefly, a subset of the imaged data (219 μm3) was selected for annotation. The annotations were performed using the manual annotation tool VAST67. Trained human annotators labeled the voxels that were part of the postsynaptic density and presynaptic docked vesicle pools. A convolutional neural network30 was trained to match the postsynaptic density, using 107 μm3 as a training set and 36 μm3 as a validation set and leaving the remaining 76 μm3 as an initial test set. All of these sets were compared to the predictions of the model tuned to an Fβ score with β = 1.5 on the validation set to bias toward recall, where recall = TP/(TP + FN). Biasing the predictor toward recall reduces false negatives at the cost of more false positives, which are easier to correct. Apparent human errors were corrected, and training was restarted with a new model. We also later expanded the test set by proofreading similar automated results applied to new sections of the datasets (to increase representation of rare structures in the full image volume). The final model used an RS-U-Net architecture73 implemented using PyTorch75 and was trained using a manual learning rate schedule, decreasing the rate by a factor of 10 when the smoothed validation error converged. The final network reached 86% precision and 83% recall on the test set after 230,000 training iterations.

A convolutional network was also trained to assign synaptic partners to each predicted cleft as previously described30. All 361 synapses in the ground truth were labeled with their synaptic partners, and the partner network used 204 synapses as a training set, 73 as a validation set and the remaining 84 as a test set. The final network was 95% accurate in assigning the correct partners of the test set after 380,000 training iterations.

The final cleft network was applied across the entire image volume and formed discrete predictions of synaptic clefts by running a distributed version of connected components. Each cleft was assigned synaptic partners by applying the partner network to each predicted cleft within nonoverlapping regions of the dataset (1,024 × 1,024 × 1,792 voxels each). In the case where a cleft spanned multiple regions, the assignment within the region that contained most of that cleft was accepted, and the others were discarded. Cleft regions whose centroid coordinates were within 1 μm of each other and that were assigned the same synaptic partners were merged to repair artificially split components.

Finally, spurious synapse assignments (that is, postsynapses on axons and presynapses on dendrites) were cleaned by querying the identity of the ten nearest synapses to every synapse, where each synapse was associated with its closest skeleton node on both the pre- and postsynaptic sides. If the majority of the ten nearest neighbors were of the same identity (presynaptic or postsynaptic), the synapse was deemed correctly assigned. If the majority were of the opposing identity, the synapse was deemed incorrectly assigned and was deleted. This process eliminated 1,975 falsely assigned synapses (~2% of the total).
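In outline, this majority-vote cleanup can be sketched as follows. This is a simplified illustration, not the production pipeline: the function name is ours, and each synapse attachment is treated as a labeled 3D point rather than a skeleton node.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_spurious(coords, labels, k=10):
    """Majority-vote cleanup sketch: an attachment whose k nearest
    neighbours are mostly of the opposite identity (pre vs post) is
    flagged for deletion.

    coords: (n, 3) attachment coordinates; labels: (n,) identity codes.
    """
    tree = cKDTree(coords)
    # query k+1 neighbours because the closest hit is the point itself
    _, idx = tree.query(coords, k=k + 1)
    flags = []
    for i, neigh in enumerate(idx):
        neigh = neigh[1:]                       # drop self
        same = np.sum(labels[neigh] == labels[i])
        flags.append(same < k / 2)              # majority disagrees -> spurious
    return np.array(flags)
```

A synapse is kept when at least half of its ten nearest neighbours share its identity, mirroring the rule described above.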

Registration to reference atlas

Registration of the EM dataset to the Z-Brain reference atlas31 was performed in two stages. We created an intermediate EM stack from the low-resolution (270 nm per pixel) EM images of the entire larval brain tissue. This intermediate stack had the advantage of a similar field of view as the light microscopy reference volume while also being of the same imaging modality as the high-resolution EM stack. The low-resolution EM stack was registered to the reference brain (elavl3:H2b-RFP) by fitting an affine transform that maps the entire EM volume onto the light microscopy volume. To do this, we selected corresponding points such as neuronal clusters and fiber tracts using the tool BigWarp79. These corresponding points were used to determine an affine transform using the MATLAB least squares solver (mldivide). Subsequently, the intermediate EM stack in the same reference frame as the Z-Brain atlas was used as the template to register the high-resolution EM stack. This was performed in a similar manner by selecting corresponding points and fitting an affine transform. This transform was used to map the reconstructed skeletons from high-resolution EM space to the reference atlas space.
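Both affine-fitting steps reduce to a least-squares solve on homogeneous coordinates, the numpy analog of the MATLAB mldivide step described above. Function names here are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.
    src, dst: (n, 3) arrays of corresponding points.
    Returns (A, t) such that dst ~ src @ A.T + t."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    # solve X @ M ~ dst for the 4x3 parameter matrix M (normal equations)
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M[:3].T, M[3]

def apply_affine(A, t, pts):
    """Map points (e.g. reconstructed skeleton nodes) into atlas space."""
    return pts @ A.T + t
```

With ≥4 non-coplanar landmark pairs the 12 affine parameters are fully determined; more pairs are averaged in the least-squares sense.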

Identification of ABD neurons

We identified ABDM neurons (Fig. 1), which drive lateral movements of the ipsilateral eye, by their overlap with the Mnx transgenic line (Supplementary Fig. 3a, left). ABDM axons exited R5 and R6 through the ABD (VIth) nerve (Fig. 1f, black box) as reported previously22.

Medial movements of the contralateral eye are driven by the medial rectus muscle, which is innervated by motor neurons in the contralateral oculomotor nucleus, which in turn are driven by internuclear neurons (ABDI) in the ipsilateral ABD complex (Fig. 1f). ABDI neurons were identified (Supplementary Fig. 3a, right) by their overlap with two nuclei in the Evx2 transgenic line that labels glutamatergic interneurons. The ABDI neurons were just dorsal and caudal to the ABDM neurons, and their axons crossed the midline80.

Identification of Ve2 neurons

We identified a class of Ve2 neurons (Fig. 1h, dark brown) that received synapses from primary vestibular afferents. The latter were orphan axons in R4 identified as the vestibular branch of the vestibulocochlear nerve (VIIIth nerve) by comparison with the Isl2 line, which labels the major cranial nerves (Supplementary Fig. 3b, blue axons). We speculate that Ve2 neurons are descending octavolateral neurons but withhold final annotation until functional confirmation is available32.

Identification of RS neurons

RS neurons were divided into large and small groups (Fig. 1h). Large RS neurons were the M, Mid2, MiM1, Mid3i and CaD neurons. Small RS neurons were RoV3, MiV1 and MiV2 cells. These were identified by their stereotypic locations81 and by comparison with the Z-Brain atlas (Supplementary Fig. 4).

Centrality-based division into the center and periphery

The division of the reconstructed wiring diagram into center and periphery is based on standard measures of ‘centrality’, which have been devised in network science82. We define the simplest measure, known as degree centrality, as the geometric mean of the in-degree and out-degree of a neuron (it is more common to choose one or the other). Another popular measure, known as eigenvector centrality or eigencentrality, is a neuron’s element in the eigenvector of the connection matrix Nij (the number of synapses onto neuron i from neuron j) corresponding to the eigenvalue with maximal real part; this measure extends the simpler concept of degree centrality by weighting a neuron’s connections by the centrality of its partners (a neuron is more central to the network if it receives inputs from other high-centrality neurons). Mathematically, this defines the eigenvector problem λνi = Σj Nij νj, where νi is the (input) centrality of neuron i and λ is the leading eigenvalue. An analogous formula, but instead replacing Nij by its transpose (thus, defining left rather than right eigenvectors), can be used to weight output connections by the (output) centrality of the neuron to which output is projected. The eigenvector elements can be chosen non-negative by the Perron–Frobenius theorem. It is standard to use either the left or right eigenvector, but we use both for our definition by computing the geometric mean of the left and right eigenvector elements. Degree centrality and eigencentrality are correlated but not perfectly (Supplementary Fig. 1c). For a visualization of the network based on eigencentrality, see Supplementary Fig. 1e.

We define the periphery as those cells with vanishing (<10⁻⁸) eigencentrality but nonzero degree centrality. All but 62 of these 2,344 neurons had strictly zero eigencentrality. The remaining 540 recurrently connected neurons are defined as the ‘center’ of the graph. See below for the effects of varying the eigencentrality threshold for the center–periphery division.
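The center–periphery split can be illustrated on a toy connection matrix. This is a minimal sketch: the function names are ours, and zeroing tiny eigenvector components is a numerical shortcut that assumes the Perron eigenvector is well separated.

```python
import numpy as np

def centralities(N):
    """N[i, j] = number of synapses onto neuron i from neuron j.
    Degree centrality: geometric mean of in- and out-degree.
    Eigencentrality: geometric mean of the right (input) and left (output)
    leading-eigenvector elements, non-negative per Perron-Frobenius."""
    indeg = (N > 0).sum(axis=1)
    outdeg = (N > 0).sum(axis=0)
    deg_cent = np.sqrt(indeg * outdeg)

    def leading(M):
        vals, vecs = np.linalg.eig(M)
        v = np.abs(np.real(vecs[:, np.argmax(vals.real)]))
        v[v < v.max() * 1e-12] = 0.0   # suppress numerical noise
        return v / v.sum()

    right = leading(N)      # input centrality
    left = leading(N.T)     # output centrality
    return deg_cent, np.sqrt(right * left)

def split_center_periphery(N, eps=1e-8):
    """Periphery: vanishing eigencentrality but nonzero degree centrality."""
    deg, eig = centralities(N)
    return eig >= eps, (eig < eps) & (deg > 0)
```

A neuron embedded in a recurrent loop keeps nonzero eigencentrality, whereas a purely feedforward neuron does not, even if it both sends and receives synapses.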

Graph clustering

We applied two graph clustering algorithms to divide the center into two supermodules and obtained similar results from each algorithm. The clustering from the Louvain algorithm is presented in the main text and that of the spectral algorithm in Extended Data Fig. 3. We further subdivided each supermodule into subclusters using a stochastic block model (SBM), as described below.

Louvain clustering

Graph clustering was performed using the Louvain clustering algorithm83 for identifying different ‘communities’ or ‘modules’ in an interconnected network by optimizing the ‘modularity’ of the network, where modularity measures the (weighted) density of connections within a module compared to between modules. Formally, the modularity measure maximized is Qgen = Σij Bij δ(ci, cj), where δ(ci, cj) equals 1 if neurons i and j are in the same module and 0 otherwise, and Bij = (1/ω)(Wij − γsisj/ω) + transpose[(1/ω)(Wij − γsisj/ω)]. Here, si = Σc Wic is the sum of weights into neuron i, sj = Σc Wcj is the sum of weights out of neuron j, ω = Σcd Wcd is the total sum of weights in the network, and the resolution parameter γ determines how much the naively expected weight of connections γsisj/ω is subtracted from the connectivity matrix. Potential degeneracy in graph clustering was addressed by computing the consensus of the clustering similar to Sporns and Betzel84. Briefly, an association matrix, counting the number of times a neuron is assigned to a given module, was constructed by running the Louvain algorithm 200 times. Next, a randomized association matrix was constructed by permuting the module assignment for each neuron. Reclustering the thresholded association matrix, where the threshold was the maximum element of the randomized association matrix, provided consensus modules. We used the community_louvain.m function from the Brain Connectivity Toolbox package (https://sites.google.com/site/bctnet/home). In addition to the Louvain graph clustering algorithm, we also clustered the ‘center’ with an alternative graph clustering algorithm, spectral clustering, described below.
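The consensus procedure can be sketched as follows. This sketch uses networkx’s undirected Louvain implementation rather than community_louvain.m with the directed modularity above, and the function name and small run count are illustrative.

```python
import numpy as np
import networkx as nx

def consensus_louvain(G, runs=50, gamma=1.0, seed=0):
    """Consensus clustering sketch: run Louvain many times, build an
    association matrix counting how often two nodes share a module,
    threshold it at the chance level from permuted assignments, and
    recluster the thresholded matrix."""
    nodes = list(G.nodes)
    n = len(nodes)
    rng = np.random.default_rng(seed)
    assoc = np.zeros((n, n))
    null = np.zeros((n, n))
    for _ in range(runs):
        comms = nx.community.louvain_communities(
            G, weight='weight', resolution=gamma,
            seed=int(rng.integers(1 << 30)))
        lab = np.empty(n, dtype=int)
        for k, c in enumerate(comms):
            for v in c:
                lab[nodes.index(v)] = k
        assoc += lab[:, None] == lab[None, :]
        perm = rng.permutation(lab)              # chance-level association
        null += perm[:, None] == perm[None, :]
    np.fill_diagonal(null, 0)
    thr = null.max()                             # threshold at null maximum
    A = np.where(assoc > thr, assoc, 0.0)
    np.fill_diagonal(A, 0)
    H = nx.from_numpy_array(A)                   # recluster association matrix
    comms = nx.community.louvain_communities(H, weight='weight', seed=seed)
    return [{nodes[i] for i in c} for c in comms]
```

Because Louvain is stochastic, individual runs can disagree; the association-matrix consensus keeps only pairings that co-occur more often than chance.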

Spectral clustering

We used a generalized spectral clustering algorithm for weighted directed graphs to bisect the zebrafish ‘center’ subgraph, as proposed by Chung39. Given a graph G(V, E) and its weighted adjacency matrix A ∈ ℝ≥0^(n×n), where Aij indicates the number of synapses from neuron i to neuron j, one can construct a Markov chain on the graph with a transition matrix Pα, such that [Pα]ij := (1 − α)Aij/Σk Aik + α/n. The coefficient α > 0 ensures that the constructed Markov chain is irreducible, and the Perron–Frobenius theorem guarantees that Pα has a unique positive left eigenvector π with eigenvalue 1, where π is also called the stationary distribution. The normalized symmetric Laplacian of the Markov chain is L = I − (1/2)(Π^(1/2) Pα Π^(−1/2) + Π^(−1/2) Pα^⊤ Π^(1/2)), where Π = diag(π).

To approximately search for the optimal cut, we use the Cheeger inequality for a directed graph39, which bridges the spectral gap of L and the Cheeger constant ϕ*. As shown in Gleich85, the eigenvector v corresponding to the second smallest eigenvalue of L, λ2, yields a near-optimal cut. We obtained two clusters by a binary rounding scheme, that is, S+ = {i ∈ V | vi ≥ 0} and S− = {i ∈ V | vi < 0}.

We modified the directed_laplacian_matrix function in the NetworkX package (https://networkx.github.io) to calculate the symmetric Laplacian for sparse connectivity matrices, with a default α = 0.05. The spectral gap is λ2 = 0.137 for the eigenvector centrality subgraph and λ2 = 0.256 for the partitioned oculomotor (modO) module.
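A dense-matrix sketch of this directed spectral bisection is below. It assumes every node has at least one outgoing edge, and the function name is ours; the paper used a sparse modification of NetworkX’s directed_laplacian_matrix.

```python
import numpy as np

def directed_spectral_bisect(A, alpha=0.05):
    """Bisect a weighted directed graph via the symmetric Laplacian of the
    teleporting random walk (Chung's construction described above).
    Assumes every row of A has positive out-degree."""
    n = A.shape[0]
    d = A.sum(axis=1)
    P = (1 - alpha) * A / d[:, None] + alpha / n     # transition matrix P_alpha
    # stationary distribution: leading left eigenvector of P
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmax(vals.real)]))
    pi /= pi.sum()
    Pi_h = np.diag(np.sqrt(pi))
    Pi_ih = np.diag(1.0 / np.sqrt(pi))
    L = np.eye(n) - 0.5 * (Pi_h @ P @ Pi_ih + Pi_ih @ P.T @ Pi_h)
    w, V = np.linalg.eigh(L)          # L is symmetric, so eigh applies
    v = V[:, 1]                       # eigenvector of second-smallest eigenvalue
    return v >= 0, w[1]               # binary rounding and spectral gap
```

The sign pattern of the second eigenvector gives the two clusters S+ and S−, and the returned λ2 is the spectral gap quoted in the text.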

Degree-corrected SBM

Unlike the Louvain and spectral clustering algorithms, which assume fewer intercluster connections than intracluster connections, SBMs assume only that there exist blocks such that the probability of two neurons being connected depends only on their block identities and degrees. SBMs are therefore capable of finding structures, such as cycles of blocks, that Louvain clustering would not find because such structures are built on intercluster connections. Starting with the two modules found by applying Louvain clustering to the ‘center’ subgraph, we applied an efficient statistical inference algorithm86 to obtain SBMs for each module.

The traditional SBM87 is composed of n vertices divided into B blocks with {nr} vertices in each block and with probability prs that an edge exists from block r to block s. Here, we use an equivalent definition in which the average edge counts ers = nr ns prs replace the probability parameters, and we construct a degree-corrected stochastic block model88 that further specifies the in- and out-degree sequences {ki+, ki−} of the graph as additional parameters.

To infer the best block membership {bi} of the vertices in the observed graph G, we maximize the likelihood P(G|{bi}) = 1/Ω({ers}, {nr}, {ki+, ki−}), where Ω({ers}, {nr}, {ki+, ki−}) is the total number of different graph realizations with the same degree distribution {ki+, ki−} and with {ers} edges among and within blocks of sizes {nr}, corresponding to the block membership {bi}. Therefore, maximizing the likelihood is equivalent to minimizing the microcanonical entropy89 S({ers}, {nr}) = ln Ω({ers}, {nr}), which can be calculated as S ≅ −M − Σi ln(ki+!) − Σi ln(ki−!) − Σrs ers ln(ers/(Σs′ ers′ Σr′ er′s)), where M = Σrs ers is the total number of edges. We used the minimize_blockmodel_dl function in the graph-tool package (https://graph-tool.skewed.de) to bisect the central subgraphs by setting Bmin = Bmax = 2 and degcorr = true.
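Evaluating this entropy for a candidate partition can be sketched as follows. This is a sketch of one common convention for the directed degree-corrected entropy, not graph-tool’s implementation (which the paper used via minimize_blockmodel_dl); constant terms may differ.

```python
import numpy as np
from scipy.special import gammaln

def dcsbm_entropy(A, b):
    """Approximate microcanonical entropy of the directed degree-corrected
    SBM for adjacency A (A[i, j] = edges i -> j) and block labels b.
    Lower entropy corresponds to higher likelihood, per the equivalence
    described in the text."""
    b = np.asarray(b)
    kout = A.sum(axis=1)                     # out-degree sequence k+
    kin = A.sum(axis=0)                      # in-degree sequence k-
    B = b.max() + 1
    e = np.zeros((B, B))                     # block-to-block edge counts e_rs
    for r in range(B):
        for s in range(B):
            e[r, s] = A[np.ix_(b == r, b == s)].sum()
    M = e.sum()
    er_out = e.sum(axis=1)
    es_in = e.sum(axis=0)
    with np.errstate(divide='ignore', invalid='ignore'):
        term = e * np.log(e / np.outer(er_out, es_in))
    term = np.nan_to_num(term)               # convention: 0 log 0 = 0
    # ln k! computed via gammaln(k + 1)
    return -M - gammaln(kout + 1).sum() - gammaln(kin + 1).sum() - term.sum()
```

On a graph with planted block structure, the planted partition should yield lower entropy (higher likelihood) than a scrambled one.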

Shuffled networks

To test the degree to which network properties were determined by specific connections as opposed to the statistics of connections between blocks, we shuffled synapses while preserving a given block organization. For each cell, we initialized counters to the values of its blockwise in-degree and out-degree. We then randomly reassigned synapses in the following manner to shuffle connections while preserving these degrees. We picked a random presynaptic cell i and a random postsynaptic cell j under the constraint that the counter for the out-degree of cell i to the block containing cell j was greater than 0 and the counter for the in-degree of cell j from the block containing cell i was greater than 0. Each of these counters was then decremented by 1, and the process was repeated until all synapses were reassigned, so that the blockwise in- and out-degree of each cell was exactly preserved. In this way, once a cell had been selected enough times to reach its original in-degree and out-degree, it could no longer be chosen when assigning the remaining synapses. If completing the reassignment became impossible midshuffle, the procedure was restarted. We considered three different organizations of modO into one, two and seven blocks and calculated each cell’s relative eye position sensitivity and best-fit double exponential decay dynamics as well as the variance explained by the leading eigenvectors and the eigenvalue distribution of the network for 100 random shuffles under each block organization (Extended Data Figs. 8 and 9).
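The counter-based shuffle can be sketched as follows. The function name is ours, and self-connections are not excluded in this simplified version.

```python
import numpy as np

def blockwise_shuffle(edges, block, seed=0):
    """Reassign synapses while exactly preserving every cell's blockwise
    in- and out-degree, restarting if the procedure gets stuck (as in the
    text). edges: list of (pre, post) synapses; block: dict cell -> block."""
    rng = np.random.default_rng(seed)
    while True:
        out_ctr = {}   # (pre cell, post block) -> remaining out-degree
        in_ctr = {}    # (post cell, pre block) -> remaining in-degree
        for i, j in edges:
            out_ctr[i, block[j]] = out_ctr.get((i, block[j]), 0) + 1
            in_ctr[j, block[i]] = in_ctr.get((j, block[i]), 0) + 1
        new, ok = [], True
        for _ in range(len(edges)):
            # all (pre, post) pairs whose two counters are still positive
            cand = [(i, j)
                    for (i, bj), c1 in out_ctr.items() if c1 > 0
                    for (j, bi), c2 in in_ctr.items()
                    if c2 > 0 and block[j] == bj and block[i] == bi]
            if not cand:
                ok = False
                break
            i, j = cand[rng.integers(len(cand))]
            out_ctr[i, block[j]] -= 1
            in_ctr[j, block[i]] -= 1
            new.append((i, j))
        if ok:
            return new
```

After shuffling, each cell sends and receives exactly as many synapses to and from each block as it did originally; only the specific partners change.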

Potential synapse formalism

We define a potential synapse as a presynapse–postsynapse pair within a certain threshold distance (Fig. 2d). This definition is somewhat different from the light microscopic approach, which defines a potential synapse37,90 as an approach of an axon and dendrite within some threshold distance. Also, we use neurons reconstructed from a single animal, whereas the light microscopic approach aggregates neurons from multiple animals or ‘clones’ a single neuron many times91.

Calcium imaging and eye position signals

The complete methods for recording calcium activity used to create the functional maps are reported in Ramirez and Aksay50. Briefly, we used two-photon, raster-scanning microscopy to image calcium activity from single neurons throughout the hindbrain of 7- to 8-day-old transgenic larvae (sex indeterminate) expressing nuclear-localized GCaMP6f (Tg(HuC:GCaMP6f-H2B) strain cy73-431 from the laboratory of M. Ahrens, HHMI Janelia Research Campus). Horizontal eye movements were recorded simultaneously with calcium signals using a substage CMOS camera. We used CaImAn-MATLAB software to extract the neuronal locations from fluorescence movies92.

We analyzed STA activity to determine which neurons were related to eye movements50. For each cell, we interpolated fluorescence activity that occurred within 5 s before or after saccades to a grid of equally spaced, 0.33-s time points and then averaged the interpolated activity across saccades to compute the STA. Separate STAs were taken for saccades toward the left and right. We performed a one-way analysis of variance (ANOVA) on each STA to determine which neurons had significant saccade-triggered changes in average activity (P < 0.01 using the Holm–Bonferroni method to correct for multiple comparisons). To determine which of these neurons had activity related to eye position and eye velocity, we first performed a principal components analysis on the STAs from neurons with significant saccade-triggered changes. We found that the first and second principal components had postsaccadic activity largely related to eye position and eye velocity sensitivity, respectively (see Fig. 3a in Ramirez and Aksay50). We characterized each STA using a scalar index, named ϕ in Ramirez and Aksay50, created from that STA’s projections onto the first two principal components. We found that this index provides a good characterization of the average eye position- and eye velocity-related activity seen across the population (see Fig. 3c in Ramirez and Aksay50 for a map of values and population average STAs). Eye position and eye velocity neurons were defined as neurons with an STA whose value of ϕ was within a specific range (–83 to 37 and 38 to 68, respectively). We removed any neurons with presaccadic activity that was significantly correlated with time until the upcoming saccade. The locations of each neuron were then registered to the Z-Brain atlas31 (see Ramirez and Aksay50 for complete details).
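The STA computation itself can be sketched in a few lines; the function name and the use of simple linear interpolation are illustrative assumptions.

```python
import numpy as np

def saccade_triggered_average(t, f, saccade_times, window=5.0, dt=0.33):
    """Interpolate a fluorescence trace f(t) onto a grid of equally spaced
    time points within +/- window s of each saccade, then average the
    interpolated snippets across saccades."""
    grid = np.arange(-window, window + 1e-9, dt)   # 0.33-s spacing
    snippets = [np.interp(ts + grid, t, f) for ts in saccade_times]
    return grid, np.mean(snippets, axis=0)
```

Separate calls with left- and right-directed saccade times give the two direction-specific STAs described above.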

Categorization of functional cells into cell type was guided by registration to the EM dataset. Candidate functional ABD neurons were identified by proximity (6 μm) to any ABD neuron from the registered EM dataset. Functional Ve2 neurons were identified by proximity (6 μm) to any Ve2 neuron from the registered EM dataset; because the above restrictions filtered out all of the candidate Ve2 neurons, which had very weak sensitivity to spontaneous eye movements, in this case we simply used the constraint that these neurons be deemed active in the CaImAn-MATLAB step. Candidate VPNI cells were determined to be the remaining cells in R4 to R7/8 that satisfied the above restrictions and were more than 30 μm away from the center of the abducens.

Relationship between firing rate and eye position

To assess the functional characteristics of various oculomotor neurons (Fig. 5), we fit a traditional model of eye position sensitivity to neuronal firing rates extracted from our fluorescence measurements. Neuronal firing rates were estimated, up to an unknown scaling factor, by deconvolving the raw calcium fluorescence traces using the deconvolution algorithm with non-negativity constraint described in Pnevmatikakis et al.93. Comparison of the resulting relative firing rates across neurons in different populations was justified as we observed similar sensor expression levels and baseline noise levels in these populations.

We modeled the dependence of firing rate on eye position using the equation r = [k̃(E − Eth)]+, where k̃ is the relative eye position sensitivity, Eth is the threshold eye position at which r becomes positive, and the function [x]+ = x if x > 0 and 0 otherwise49. To facilitate comparison of results across animals, we normalized the units of eye position before fitting the model by subtracting the null position (measured as the average raw eye position) and then dividing by the 95th percentile of the resulting positions. Because our focus was on a cell’s position dependence, we also eliminated the eye velocity-dependent burst of spiking activity at the time of the saccade that ABD and VPNI neurons are known to display by removing samples that occurred within 1.5 s before or 2 s after each saccade. Saccade times were found in an automated fashion by determining when eye velocity crossed a threshold value50. Finally, because the eye position and fluorescence were recorded at different sampling rates, we linearly interpolated the values of neuronal activity at the eye position sample times.

To fit the value of k~, for each cell, we defined the eye movements toward the cell’s responsive direction as positive so that k~ is positive by construction. We then determined the threshold Eth using an iterative estimation technique based on a Taylor series approximation of [x]+ described by Muggeo94. Using the resulting estimate of Eth, we determined k~ as the slope resulting from a linear regression (with offset) to r using E as a regressor. Because we do not know the cell’s responsive direction or preferred eye a priori, we ran the model four times (for each eye, once with movements to the left as positive and once with movements to the right as positive) and used the value of k~ that resulted in the highest R2 value.
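The threshold-linear fit can be sketched as follows. For simplicity, a grid search over candidate thresholds stands in for the iterative Taylor-series scheme of Muggeo94 cited above; the function name is ours.

```python
import numpy as np

def fit_threshold_linear(E, r):
    """Fit r = [k * (E - Eth)]_+ by scanning candidate thresholds.
    For each Eth, k and an offset are fit by linear regression and the
    candidate with the highest R^2 wins (a stand-in for Muggeo's
    iterative estimator)."""
    best = (-np.inf, None, None)
    for Eth in np.quantile(E, np.linspace(0.05, 0.95, 50)):
        x = np.clip(E - Eth, 0, None)            # rectified regressor
        X = np.vstack([x, np.ones_like(x)]).T
        coef, *_ = np.linalg.lstsq(X, r, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((r - pred) ** 2) / np.sum((r - r.mean()) ** 2)
        if r2 > best[0]:
            best = (r2, coef[0], Eth)
    return best   # (R^2, k, Eth)
```

As in the text, the fit can be run with each direction convention and eye, keeping the convention with the highest R².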

As a goodness-of-fit measure, we required all neurons, except Ve2 neurons, to have an R2 value greater than 0.4. Additionally, non-Ve2 neurons were required to have an STA with at least one significant time point (P < 0.01 by ANOVA using Holm–Bonferroni correction) as defined in Ramirez and Aksay50 and to have a ΔF/F response that was loosely related to eye position (R2 greater than 0.2 when we run the above model replacing r with ΔF/F). The relative eye position sensitivity, k~, for fluorescence data was then scaled to the average physiological VPNI response from goldfish49.

All functional neurons associated with the circuit schematized in Fig. 5a were ipsiversive in position sensitivity. All ABD neurons increased activity with ipsiversive steps in eye position. The vast majority of putative VPNI neurons (98%) also had ipsiversive sensitivity; those few cells with contraversive sensitivity were deemed unlikely to be part of the VPNI based on previous work49,56 and were not considered further. Candidate Ve2 neurons were mixed in their sensitivity to the direction of eye movement (52% ipsiversive); both types were included in the distributions of Fig. 5d. However, based on previous work32 demonstrating contrasting activity patterns for cells with ipsilateral versus contralateral projections, it was assumed in the model below that only those with ipsiversive sensitivity had ipsilateral projections, as schematized in Fig. 5a.

Identification of excitatory versus inhibitory neurons

Prior work has identified two major classes of VPNI cells, those with ipsilateral recurrent excitatory projections and those with inhibitory contralateral projections21,49. In R7/R8, the former were found to have somata primarily within a medial glutamatergic parasagittal stripe associated with the Alx transcription factor and the latter with a more lateral GABAergic parasagittal stripe associated with the Dbx1b transcription factor21,95,96. We therefore reasoned that membership in modO, which required having an ipsilateral recurrent connection with another member of modO, selected for glutamatergic VPNI cells and against GABAergic VPNI cells (which were retained in the ‘periphery’). Consistent with this, the VPNI neurons found in the center were located primarily in a medial stripe running from R4 to R7/R8 (Fig. 2c) that overlapped in the Z-Brain atlas with a region of alx expression (ZBB-alx-gal4). ModO neurons at the extreme lateral edge of R5/R6, identified as Ve2 neurons (see above), also had ipsilateral projections, but these have been identified previously as being inhibitory through structure/function analysis32.

Network model based on synaptic wiring diagram

A unilateral model of the oculomotor integrator was built using the reconstructed synapses for ABD, Ve2 and generic modO neurons that contain the majority of the VPNI population. Although the VPNI is a bilateral circuit, previous experiments20,56 have shown that one-half of the VPNI is nevertheless capable of maintaining the ipsilateral range of eye positions after the contralateral half of the VPNI is silenced. This may reflect that most neurons in the contralateral half are below a threshold for transmitting synaptic currents to the opposite side when the eyes are at ipsilateral positions43. Therefore, we built a recurrent network model of one-half of the VPNI circuit based on the modO neurons that we had reconstructed from one side of the zebrafish brainstem. We did not include modA neurons, assuming that input from modA cells fell below a threshold needed to drive modO neurons, similar to the manner in which bilateral interactions have been shown to be negligible for the maintenance of persistent activity20,43. We added the ABD neurons and the feedforward connections from modO to ABD to the model because the ABD neurons are the ‘read-out’ of the oculomotor signals in the VPNI. Projections from the Ve2 population were taken to be inhibitory, and all other connections were taken to be excitatory, as explained above. The remaining neurons in modO were included as part of the VPNI; we did not consider other vestibular populations because only Ve2 neurons in modO received vestibular afferents from the VIIIth nerve. Saccadic burst neurons and recently discovered presaccadic ramp neurons located in R2/R3 (ref. 50) were outside of our reconstructed volume and were therefore not included. 
Candidate burst neurons in dorsal regions of R5 and R6 have also been recently identified97; to examine the possible impact of misclassifying putative burst neurons as VPNI cells, we ran the model with and without these candidates (identified via registration to the functional maps in Ramirez and Aksay50).

Directed connection weights between each pair of neurons were set in proportion to the number of synapses from the presynaptic neuron onto the postsynaptic neuron divided by the total number of synapses onto the postsynaptic neuron, Wij = ±β Nij/Σk Nik. Thus, we assume that each element Wij corresponds to the fraction of total inputs to neuron i that are provided by neuron j. We then constructed a linear rate model for the network, governed by τ dri/dt = −ri + Σj Wij rj, where ri is the firing rate of the ith neuron, Wij are the connection weights, and τ is an intrinsic cellular time constant. τ for ABD and Ve2 neurons was fixed at 100 ms. The scale factor β and the intrinsic time constant τ for the remaining putative VPNI neurons then provided two free parameters that could be used to set the timescale of persistence for the leading two eigenvectors of the network. For a given choice of scale factor and VPNI neuron time constant, we simulated the response of the network to a random pulse of excitatory input and compared the time course of the leading principal components of the simulated network to that of the calcium imaging data. These parameters were tuned so that the simulations roughly matched the decay times of the leading two principal components in the data, resulting in an intrinsic VPNI neuron time constant of 1 s and a scale factor set such that the leading eigenvalue of the network was equal to 0.9. These free parameters of the network model were not finely tuned, and the precise values used are not critical to the results obtained here.
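The weight construction and rate dynamics can be sketched as follows; function names are ours, and forward Euler integration is one simple choice of solver.

```python
import numpy as np

def build_weights(N, inhibitory, beta=1.0):
    """W[i, j] = +/- beta * N[i, j] / sum_k N[i, k]: synapse counts onto
    neuron i normalized by its total input, signed by presynaptic type."""
    denom = N.sum(axis=1, keepdims=True)
    denom[denom == 0] = 1.0            # avoid 0/0 for cells with no inputs
    W = beta * N / denom
    W[:, inhibitory] *= -1             # e.g. Ve2 columns are inhibitory
    return W

def simulate(W, r0, tau, T=10.0, dt=0.001):
    """Integrate tau dr_i/dt = -r_i + sum_j W_ij r_j by forward Euler.
    tau may be a scalar or a per-neuron array (e.g. 0.1 s for ABD/Ve2,
    1 s for VPNI)."""
    r = np.array(r0, dtype=float)
    out = [r.copy()]
    for _ in range(int(T / dt)):
        r = r + dt / tau * (-r + W @ r)
        out.append(r.copy())
    return np.array(out)
```

With the leading eigenvalue tuned to 0.9, activity along the leading eigenvector decays with effective time constant τ/(1 − 0.9), i.e. tenfold slower than the intrinsic τ.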

Relative eye position sensitivities were determined numerically by initializing all neurons to zero firing rate and then simulating the response of the network to a series of three saccades occurring at intervals of 7 s. The input received by the network during a saccade was modeled as a 10-ms pulse along a vector composed of a fixed component that was an equal mixture of the leading three eigenvectors of the weight matrix and a random excitatory vector with an amplitude that was 15% of the amplitude of the fixed component. The random component was assigned at the beginning of the simulation, so the network received the same input for each saccade. The same method used to relate firing rates extracted from calcium imaging data to position sensitivity was used for the simulated firing rates, with the exception that the threshold estimation step was not required for the simulations because all firing rates in the simulations were above threshold. Saccade times in the simulations were taken to be at the onset of each input pulse. Finally, to obtain absolute position sensitivities that could be compared to experiments, we multiplied the relative sensitivities by a single global scale factor that was determined by matching the average VPNI population responses from the model to the average physiological VPNI responses from goldfish49.

For both the simulated network and the calcium imaging data, we fit each neuron’s firing rate during the period between 2 s and 6 s following a saccade to a double exponential function, r(t) = a1 e^(−t/τ1) + a2 e^(−t/τ2), where the time constants τ1 and τ2 were fixed at 10 s and 4 s, respectively, leaving the amplitudes of each exponential as the only free parameters. These time constants were chosen because a simulated network with leading eigenvectors having these time constants had principal components of its neuronal firing rates that consisted of mixtures of exponentials whose summed time courses approximately matched those of the leading two principal components of the data. We found that the correlation between these two amplitudes in simulation better matched the correlation seen in the data when the saccadic input consisted of fixed and random components, as described above. The exact number of leading eigenvectors included in the fixed component is not important to achieve this tuning of the correlation between a1 and a2. We chose to use the leading three eigenvectors because the eigenvalues associated with them were clearly separated from the central disk of eigenvalues.
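Because both time constants are fixed, the double exponential is linear in its amplitudes and reduces to ordinary least squares; a minimal sketch (function name ours):

```python
import numpy as np

def fit_double_exp(t, r, tau1=10.0, tau2=4.0):
    """Fit r(t) = a1 exp(-t/tau1) + a2 exp(-t/tau2) with fixed time
    constants; the amplitudes a1, a2 are the only free parameters and
    follow from ordinary least squares."""
    X = np.column_stack([np.exp(-t / tau1), np.exp(-t / tau2)])
    (a1, a2), *_ = np.linalg.lstsq(X, r, rcond=None)
    return a1, a2
```

Applied to the 2–6 s postsaccadic window, this yields the per-neuron amplitude pairs whose correlation is compared between model and data.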

As a test of the robustness of our results to possible errors in connectome reconstruction, we generated a connectome that accounted for the estimated false-positive and false-negative rates of synapse detection by our connectome reconstruction procedure. We generated 100 models by randomly varying the identified synapses according to the estimated false-positive and false-negative rates and calculated the connection weights as described above. The eye position sensitivities with this synaptic detection jitter were reported as the average of these 100 models (Extended Data Fig. 7b). We also tested how robust our results were to the cutoff criterion for including neurons in the recurrently connected center by progressively increasing the minimum eigenvector centrality criterion for counting a neuron as belonging to the center as opposed to the periphery. We then plotted how the simulation model results changed as the number of center neurons was decreased and simultaneously reported the resulting number of generic (non-Ve2) modO neurons (Extended Data Fig. 7a). For each of these robustness tests, we also performed a singular value decomposition and double exponential fits on the simulated firing rates to compare to the functional imaging results (Extended Data Fig. 7c,d). To characterize the degradation in model performance when the actual connectome was replaced by connectomes generated by spatial proximity (potential connections), we reran the position sensitivity analysis using potential connectomes defined for connections within 5 μm (Fig. 5e).

Statistics and reproducibility

Here, we briefly summarize factors relevant for rigor and reproducibility that have been addressed in more detail elsewhere in the manuscript.

EM data were acquired from one reimaged animal (Fig. 1). No statistical method was used to predetermine sample size. Reconstructions were seeded by a group of 22 neurons that had previously been identified as VPNI cells through functional imaging22. Reconstructions were semiautomated and validated (and if necessary corrected) by in-house image analysis experts. Synapses were automatically segmented with convolutional networks whose partner assignments were 95% accurate when tested on ground truth data; an error correction step checking pre- versus postsynaptic identity eliminated ~2% of the total synapses. During subsequent analysis of connectivity, whenever statistical tests were applied, distributions were assumed to be normal, but this was not formally tested. Randomization was used extensively in determination of modular organization. Investigators were not blinded during these analyses.

Fluorescence microscopy data were previously acquired from 20 animals50. No statistical method was used to predetermine sample size. A one-way ANOVA (with multiple comparison correction; normality was assumed) on STA activity was used to identify activity of significance. Investigators were not blinded during these analyses.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.