Theory of Self-Reproducing Automata

JOHN VON NEUMANN

edited and completed by Arthur W. Burks

University of Illinois Press, Urbana and London, 1966

© 1966 by the Board of Trustees of the University of Illinois. Manufactured in the United States of America. Library of Congress Catalog Card No. 63-7246.
CONTENTS

Preface xv

EDITOR'S INTRODUCTION
Von Neumann's Work on Computers 1
    Von Neumann the Mathematician 2
    Von Neumann and Computing 2
    Logical Design of Computers 6
    Programming and Flow Diagrams 12
    Computer Circuits 15
Von Neumann's Theory of Automata 17
    Introduction 17
    Natural and Artificial Automata 21
    Mathematics of Automata Theory 25

PART ONE: THEORY AND ORGANIZATION OF COMPLICATED AUTOMATA
First Lecture: Computing Machines in General 31
Second Lecture: Rigorous Theories of Control and Information 42
Third Lecture: Statistical Theories of Information 57
Fourth Lecture: The Role of High and of Extremely High Complication 64
Fifth Lecture: Re-evaluation of the Problems of Complicated Automata — Problems of Hierarchy and Evolution 74

PART TWO: THE THEORY OF AUTOMATA: CONSTRUCTION, REPRODUCTION, HOMOGENEITY

CHAPTER 1. GENERAL CONSIDERATIONS
1.1 Introduction 91
    1.1.1.1 The theory of automata 91
    1.1.1.2 The constructive method and its limitations 91
    1.1.2.1 The main questions: (A)-(E) 92
    1.1.2.2 The nature of the answers to be obtained 92
    [1.1.2.3 Von Neumann's models of self-reproduction] 93
1.2 The Role of Logics — Question (A) 99
    1.2.1 The logical operations — neurons 99
    1.2.2 Neural vs. muscular functions 101
1.3 The Basic Problems of Construction — Question (B) 101
    1.3.1.1 The immediate treatment, involving geometry, kinematics, etc. 101
    1.3.1.2 The non-geometrical treatment — structure of the vacuum 102
    1.3.2 Stationarity — quiescent vs. active states 103
    1.3.3.1 Discrete vs. continuous framework 103
    1.3.3.2 Homogeneity: discrete (crystalline) and continuous (Euclidean) 103
    1.3.3.3 Questions of structure: (P)-(R) 104
    1.3.3.4 Nature of results, crystalline vs. Euclidean: statements (X)-(Z) 105
    [1.3.3.5 Homogeneity, quiescence, and self-reproduction] 106
    1.3.4.1 Simplification of the problems of construction by the treatment according to Section 1.3.1.2 108
    1.3.4.2 Quiescence vs. activity; excitability vs. unexcitability; ordinary and special stimuli 109
    1.3.4.3 Critique of the distinctions of Section 1.3.4.2 110
1.4 General Construction Schemes — Question (B) Continued 111
    1.4.1.1 Construction of cell aggregates — the built-in plan 111
    1.4.1.2 The three schemes for building in multiple plans — the parametric form 112
    1.4.2.1 The descriptive statement L for numerical parameters 112
    1.4.2.2 Applications of L 113
    1.4.2.3 Use of L as an unlimited memory for (A) 113
    1.4.2.4 Use of base two for L 114
    [1.4.2.5 The linear array L] 114
1.5 Universal Construction Schemes — Question (C) 116
    1.5.1 Use of L for non-numerical (universal) parametrization 116
    1.5.2 The universal type of plan 116
1.6 Self-Reproduction — Question (D) 118
    1.6.1.1 The apparent difficulty of using L in the case of self-reproduction 118
    1.6.1.2 Circumvention of the difficulty — the types E and E_F 118
    1.6.2.1 First remark: shape of L 119
    1.6.2.2 Second remark: avoidance of collision in a single reproduction 120
    1.6.2.3 Third remark: analysis of the method for overcoming the difficulty of Section 1.6.1.1 — the role of L 121
    1.6.3.1 Copying: use of descriptions vs. originals 122
    [1.6.3.2 The Richard paradox and Turing machines] 123
1.7 Various Problems of External Construction Intermediate Between Questions (D) and (E) 126
    1.7.1 Positioning of primary, secondary, ternary, etc. 126
    1.7.2.1 Constructed automata: initial state and starting stimulus 127
    1.7.2.2 Single-action vs. sequential self-reproduction 128
    1.7.3 Construction, position, conflict 129
    1.7.4.1 E_F and the gene-function 130
    1.7.4.2 E_F and the mutation — types of mutation 130
1.8 Evolution — Question (E) 131

CHAPTER 2. A SYSTEM OF 29 STATES WITH A GENERAL TRANSITION RULE
2.1 Introduction 132
    2.1.1 The model: states and the transition rule 132
    2.1.2 Formalization of the spatial and the temporal relations 132
    2.1.3 Need for a pre-formalistic discussion of the states 134
2.2 Logical Functions — Ordinary Transmission States 134
    2.2.1 Logical-neuronal functions 134
    2.2.2.1 Transmission states — connecting lines 135
    2.2.2.2 Delays, corners, and turns in connecting lines 136
2.3 Neurons — Confluent States 136
    2.3.1 The + neuron 136
    2.3.2 Confluent states: the · neuron 136
    2.3.3 The − neuron 138
    2.3.4 The split 138
2.4 Growth Functions: Unexcitable State and Special Transmission States 139
    2.4.1 Muscular or growth function — ordinary vs. special stimuli 139
    2.4.2 Unexcitable state 139
    2.4.3 Direct and reverse processes — special transmission states 140
2.5 The Reverse Process 140
    2.5.1.1 The reverse process for the ordinary states 140
    2.5.1.2 The reverse process for the special states 141
    2.5.2 Origination of special stimuli 141
2.6 The Direct Process — Sensitized States 142
    2.6.1 The direct process 142
    2.6.2.1 First remark: duality of ordinary and special states 142
    2.6.2.2 The need for the reverse process 143
    2.6.3.1 Second remark: the need for fixed stimulus sequences to control the direct process 143
    2.6.3.2 Additional states required 145
    2.6.4 Sensitized states 145
2.7 Even and Odd Delays 146
    2.7.1 Even delays by path differences 146
    2.7.2.1 Odd delays and single delays 146
    2.7.2.2 Single delays through the confluent states 147
2.8 Summary 148
    2.8.1 Rigorous description of the states and of the transition rule 148
    2.8.2 Verbal summary 150
    [2.8.3 Illustrations of the transition rule] 151

CHAPTER 3. DESIGN OF SOME BASIC ORGANS
3.1 Introduction 157
    3.1.1 Free and rigid timing, periodic repetition, and phase stop 157
    3.1.2 Construction of organs, simple and composite 158
3.2 Pulsers 159
    3.2.1 The pulser: structure, dimensions, and timing 159
    3.2.2 The periodic pulser: structure, dimensions, timing, and the PP(T) form 162
3.3 The Decoding Organ: Structure, Dimensions, and Timing 175
3.4 The Triple-Return Counter 179
3.5 The 1 vs. 10101 Discriminator: Structure, Dimensions, and Timing 187
3.6 The Coded Channel 190
    3.6.1 Structure, dimensions, and timing of the coded channel 190
    3.6.2 Cyclicity in the coded channel 198

CHAPTER 4. DESIGN OF A TAPE AND ITS CONTROL
4.1 Introduction 201
    [4.1.1 Abstract] 201
    4.1.2 The linear array L 202
    4.1.3 The constructing unit CU and the memory control MC 204
    4.1.4 Restatement of the postulates concerning the constructing unit CU and the memory control MC 207
    4.1.5 The modus operandi of the memory control MC on the linear array L 207
    4.1.6 The connecting loop C1 210
    4.1.7 The timing loop C2 213
4.2 Lengthening and Shortening Loops C1 and C2, and Writing in the Linear Array L 214
    4.2.1 Moving the connection on L 214
    4.2.2 Lengthening on L 216
    4.2.3 Shortening on L 220
    4.2.4 Altering x_n in L 224
4.3 The Memory Control MC 226
    [4.3.1 The organization and operation of MC] 226
    4.3.2 Detailed discussion of the functioning of MC 231
    4.3.3 The read-write-erase unit RWE 237
    4.3.4 The basic control organ CO in MC 243
    4.3.5 The read-write-erase control RWEC 246

CHAPTER 5. [AUTOMATA SELF-REPRODUCTION]
[5.1 Completion of the Memory Control MC 251
    5.1.1 The rest of the manuscript 251
    5.1.2 Solution of the interference problem 259
    5.1.3 Logical universality of the cellular structure 265
5.2 The Universal Constructor CU + (MC + L) 271
    5.2.1 The constructing arm 271
    5.2.2 Redesign of the memory control MC 277
    5.2.3 The constructing unit CU 279
5.3 Conclusion 286
    5.3.1 Summary of the present work 286
    5.3.2 Self-reproducing automata] 294

Bibliography 297
Figures 305
Symbol Index 379
Author and Subject Index 381

TABLES
[Table I. External characteristics of periodic pulsers] 173
Table II. Summary of the pulse sequences sent into loops C1 and C2 235
Table III. Summary of excess delay requirements for the periodic pulsers stimulating loops C1 and C2 236
Table IV. The pulsers used to stimulate loops C1 and C2 237
[Table V. How the control organs of RWEC control the pulsers and periodic pulsers of RWE] 248

LIST OF FIGURES
Second Lecture
Fig. 1. Neural network in which dominance is not transitive 305
Fifth Lecture
Fig. 2. A binary tape constructed from rigid elements 306
Chapter 1
Fig. 3. The basic neurons 306
Chapter 2
Fig. 4. The quadratic lattice 307
    (a) Nearest (O) and next nearest (·) neighbors of X
    (b) Unit vectors
    (c) Nearest (O) and next nearest (·) neighbors of X
Fig. 5. Ordinary transmission states 308
Fig. 6. Confluent states 310
    (a) Achievement of a · neuron by using confluent states
    (b) Achievement of a split line by using confluent states
Fig. 7. Organization of an area 311
Fig. 8. The need for both even and odd delays 311
Fig. 9. The 29 states and their transition rule 312
Fig. 10. Succession of states in the direct process 313
Fig. 11. Illustrations of transmission and confluent states 314
    (a) Realization of f(t + 3) = [a(t) + b(t + 1)]
    (b) Realization of g(t + 3) = [c(t) · d(t)]
    (c) Conversion of ordinary stimuli into special stimuli by confluent states, and wire-branching
Fig. 12. Illustration of the direct process 315
Fig. 13. Examples of the reverse process 316
    (a) Special transmission state killing a confluent state
    (b) Special and ordinary transmission states killing each other
    (c) Killing dominates reception but not emission
Fig. 14. Procedure for modifying a remote cell and returning the constructing path to its original unexcitable state 317
Chapter 3
Fig. 15. Two pulsers 318
Fig. 16. Design of a pulser 319
Fig. 17. Two periodic pulsers 321
    (a) Periodic pulser PP(10010001)
    (b) Special periodic pulser PP(1)
Fig. 18. Design of a periodic pulser: initial construction 322
Fig. 19. Design of a periodic pulser: equalization of delays 325
Fig. 20. Alternate periodic pulser PP(1) 329
Fig. 21. Decoding organ D(100100001) 329
Fig. 22. Design of a decoder 330
Fig. 23. Triple-return counter 332
Fig. 24. Design of triple-return counter
68 3188 5180 404 304 2664 4632 -0.61 -0.61 20.33 true 107 LEFT Normal true -90 3744 5616 650 268 75 3188 5180 8.03 25 23 3531 5561 25 3555 15 55930 3584 0 5592528 3200 328 496676 76 3496 5460 268 76 3188 5180 528 320 2668 4640 -0.56 -0.56 25.21 true 108 RIGHT Normal true 90 3744 5616 650 188 67 3188 5180 8.80 125 16 3590 5561 125 3714 15 557880 3736 0 5592449 3123 301 4928124 68 3540 5460 188 68 3188 5180 448 296 2668 4620 -0.59 -0.59 24.26 true 109 LEFT Normal true -90 3744 5616 650 216 75 3188 5180 8.53 4 22 3563 5561 26 3566 15 55910 3592 0 5592478 3148 312 493356 72 3512 5460 216 76 3188 5180 476 300 2668 4644 -0.58 -0.58 20.5 true 110 RIGHT Normal true 90 3744 5616 650 220 67 3188 5180 8.67 123 14 3594 5561 123 3716 14 557480 3736 0 5584124 3155 273 4905124 64 3544 5460 220 68 3188 5180 476 276 2676 4624 -0.56 -0.56 22.24 true 111 LEFT Normal true -90 3744 5616 650 206 71 3188 5180 9.03 12 21 3563 5561 18 3574 11 55930 3592 0 5592467 3143 300 504164 72 3512 5460 208 72 3188 5180 464 296 2672 4740 -0.41 -0.41 19.64 true 112 RIGHT Normal true 90 3744 5616 650 220 63 3188 5180 8.23 119 12 3603 5561 119 3721 11 557480 3736 0 5592120 3147 267 4998120 64 3552 5460 220 64 3188 5180 484 260 2660 4732 -0.45 -0.45 22.98 true 113 LEFT Normal true -90 3744 5616 650 206 75 3188 5180 8.54 28 22 3540 5561 28 3567 16 55900 3592 0 5592470 3139 285 492480 72 3488 5460 208 76 3188 5180 468 284 2664 4632 -0.62 -0.62 20.14 true 114 RIGHT Normal true 90 3744 5616 650 198 63 3188 5180 8.21 119 10 3605 5561 119 3723 10 557180 3736 0 5592464 3130 270 4899120 60 3552 5460 200 64 3188 5180 460 260 2664 4632 -0.41 -0.41 22.05 true 115 LEFT Normal true -90 3744 5616 650 224 71 3188 5180 9.14 16 20 3563 5561 13 3578 8 55930 3592 0 5592481 3162 298 492668 72 3512 5460 224 72 3188 5180 480 292 2676 4628 -0.3 -0.3 15.51 true 116 RIGHT Normal true 90 3744 5616 650 202 59 3188 5180 7.62 112 8 3617 5561 112 3728 7 557080 3736 0 5592113 3136 261 4891132 60 3544 5460 204 60 3188 5180 464 248 2664 
4632 -0.28 -0.28 17.84 true 117 LEFT Normal true -90 3744 5616 650 212 71 3188 5180 8.80 7 21 3563 5561 22 3569 13 55900 3584 0 5592476 3145 267 490060 72 3512 5460 212 72 3188 5180 472 252 2668 4644 -0.5 -0.5 21.29 true 118 RIGHT Normal true 90 3744 5616 650 222 59 3188 5180 8.02 116 9 3609 5561 116 3724 9 556980 3736 0 5584118 3152 269 4897116 60 3556 5460 224 60 3188 5180 484 260 2664 4632 -0.38 -0.38 21.71 true 119 LEFT Normal true -90 3744 5616 650 196 75 3188 5180 8.79 9 22 3563 5561 22 3571 13 55930 3592 0 5592460 3128 293 492660 72 3512 5460 196 76 3188 5180 456 280 2668 4636 -0.48 -0.48 23.51 true 120 RIGHT Normal true 90 3744 5616 650 222 63 3188 5180 8.29 119 11 3603 5561 119 3721 11 557280 3736 0 5592121 3155 276 4885120 64 3552 5460 224 64 3188 5180 480 268 2672 4620 -0.45 -0.45 24.97 true 121 LEFT Normal true -90 3744 5616 650 186 71 3188 5180 8.25 34 21 3526 5561 34 3559 20 55840 3592 0 5592449 3119 277 496984 72 3476 5460 188 72 3188 5180 448 264 2664 4696 -0.78 -0.78 20.6 true 122 RIGHT Normal true 90 3744 5616 650 216 63 3188 5180 8.04 117 10 3607 5559 117 3723 10 556880 3736 0 5584119 3148 269 4949116 60 3556 5456 216 64 3188 5180 476 256 2668 4688 -0.41 -0.41 20.58 true 123 LEFT Normal true -90 3744 5616 650 214 75 3188 5180 8.52 28 22 3538 5561 28 3565 17 55890 3592 0 559270 3148 273 490680 72 3488 5460 216 76 3188 5180 472 260 2672 4640 -0.64 -0.64 21.3 true 124 RIGHT Normal true 90 3744 5616 650 186 63 3188 5180 8.47 121 12 3599 5561 121 3719 12 557380 3736 0 5592122 3116 258 4874120 64 3548 5460 188 64 3188 5180 448 252 2664 4616 -0.48 -0.48 25.23 true 125 LEFT Normal true -90 3744 5616 650 244 75 3188 5180 8.62 6 23 3563 5561 25 3568 15 55920 3592 0 5592505 3178 285 494656 76 3512 5460 244 76 3188 5180 504 272 2668 4668 -0.56 -0.56 23.34 true 126 RIGHT Normal true 90 3744 5616 650 179 67 3188 5180 8.69 123 14 3594 5561 123 3716 14 557580 3736 0 5592124 3086 276 4899124 64 3544 5460 180 68 3188 5180 412 268 2668 4624 -0.55 -0.55 25.19 true 
127 LEFT Normal true -90 3744 5616 650 260 71 3188 5180 9.14 16 20 3563 5561 13 3578 8 55930 3592 0 5592516 3199 293 492768 72 3512 5460 260 72 3188 5180 516 276 2676 4640 -0.28 -0.28 16.47 true 128 RIGHT Normal true 90 3744 5616 650 178 67 3188 5180 9.01 128 17 3563 5561 128 3712 16 557980 3736 0 5592406 3082 285 4912128 68 3512 5460 180 68 3188 5180 404 268 2672 4640 -0.66 -0.66 24.06 true 129 LEFT Normal true -90 3744 5616 650 256 75 3188 5180 8.69 8 24 3563 5561 16 3570 10 56000 3584 0 5600515 3194 299 498360 76 3512 5460 256 76 3188 5180 512 288 2676 4692 -0.36 -0.36 20.48 true 130 RIGHT Normal true 90 3744 5616 650 189 71 3188 5180 8.90 130 18 3563 5561 130 3713 15 558388 3736 0 5600420 3088 269 3409132 68 3512 5460 188 72 3188 5180 420 260 2664 3140 -0.62 -0.62 23.18 true 131 LEFT Chapter true -90 3744 5616 650 252 351 3188 5180 9.23 15 20 3563 5561 15 3577 9 55930 3592 0 5592516 3188 690 511268 72 3512 5460 252 352 3188 5180 512 688 2668 4420 -0.33 -0.33 18.84 true 132 RIGHT Normal true 90 3744 5616 650 184 67 3188 5180 9.23 135 14 3563 5561 135 3715 14 557488 3736 0 5592417 3090 265 4916136 64 3512 5460 184 68 3188 5180 416 268 2668 4640 -0.58 -0.58 17.44 true 133 LEFT Normal true -90 3744 5616 650 254 71 3188 5180 9.17 14 21 3563 5561 16 3576 10 55930 3592 0 5592513 3190 295 493464 72 3512 5460 256 72 3188 5180 512 284 2672 4644 -0.36 -0.36 16.02 true 134 RIGHT Normal true 90 3744 5616 650 147 63 3188 5180 8.95 132 12 3588 5561 132 3719 12 557388 3736 0 5592380 3055 260 4890132 64 3536 5460 148 64 3188 5180 380 264 2672 4624 -0.5 -0.5 25.09 true 135 LEFT Normal true -90 3744 5616 650 282 71 3188 5180 9.23 15 20 3563 5561 15 3577 9 55930 3592 0 5592541 3217 285 497468 72 3512 5460 284 72 3188 5180 540 272 2672 4696 -0.34 -0.34 17.39 true 136 RIGHT Normal true 90 3744 5616 650 140 67 3188 5180 9.20 135 14 3563 5560 135 3715 14 557388 3728 0 5592373 3043 275 4914136 64 3512 5456 140 68 3188 5180 372 276 2668 4632 -0.58 -0.58 18.83 true 137 LEFT Normal true 
-90 3744 5616 650 282 75 3188 5180 8.79 9 22 3563 5561 22 3571 13 55930 3592 0 5592540 3215 289 491760 72 3512 5460 284 76 3188 5180 540 272 2672 4636 -0.48 -0.48 24.72 true 138 RIGHT Normal true 90 3744 5616 650 144 67 3188 5180 9.20 130 17 3563 5560 130 3710 17 557680 3728 0 5592371 3045 291 4949132 68 3512 5456 144 68 3188 5180 368 292 2672 4648 -0.7 -0.7 18.83 true 139 LEFT Normal true -90 3744 5616 650 272 75 3188 5180 8.74 10 24 3563 5561 22 3572 13 55960 3592 0 5600525 3209 318 494960 76 3512 5460 272 76 3188 5180 528 308 2676 4640 -0.48 -0.48 17.22 true 140 RIGHT Normal true 90 3744 5616 650 171 71 3188 5180 9.73 147 20 3563 5561 147 3706 19 558388 3728 0 5600390 3069 276 4920148 72 3512 5460 172 72 3188 5180 388 284 2676 4632 -0.8 -0.8 17.01 true 141 LEFT Normal true -90 3744 5616 650 270 59 3188 5180 8.74 8 7 3563 5595 12 3570 7 56010 3592 0 5600531 3205 314 497460 60 3512 5492 272 60 3188 5180 528 304 2672 4664 -0.27 -0.27 14.31 true 142 RIGHT Normal true 90 3744 5616 650 200 71 3188 5180 9.47 138 19 3563 5561 138 3710 17 558288 3728 0 5600420 3100 268 4914140 72 3512 5460 200 72 3188 5180 424 264 2672 4644 -0.7 -0.7 19.71 true 143 LEFT Normal true -90 3744 5616 650 260 71 3188 5180 7.11 13 20 3515 5561 13 3527 8 55930 3576 0 5592518 3196 306 497864 72 3508 5460 260 72 3188 5180 516 292 2676 4680 -0.28 -0.28 17.0 true 144 RIGHT Normal true 90 3744 5616 650 177 67 3188 5180 8.91 128 15 3587 5559 128 3714 15 557388 3720 0 5592409 3079 256 4963128 68 3536 5456 176 68 3188 5180 408 256 2668 4700 -0.59 -0.59 17.62 true 145 LEFT Normal true -90 3744 5616 650 260 75 3188 5180 8.65 7 23 3563 5561 24 3569 14 55930 3592 0 5592520 3194 288 496260 76 3512 5460 260 76 3188 5180 520 280 2668 4676 -0.53 -0.53 20.8 true 146 RIGHT Normal true 90 3744 5616 650 163 71 3188 5180 9.51 133 19 3563 5561 133 3707 19 558080 3712 0 5592384 3060 277 4920132 72 3512 5460 164 72 3188 5180 384 288 2672 4624 -0.77 -0.77 20.25 true 147 LEFT Normal true -90 3744 5616 650 280 75 3188 
5180 8.59 6 23 3563 5561 25 3568 15 55930 3592 0 5592540 3216 298 481856 76 3512 5460 280 76 3188 5180 540 288 2668 4532 -0.56 -0.56 10.45 true 148 RIGHT Normal true 90 3744 5616 650 172 71 3188 5180 9.88 145 19 3563 5561 145 3706 19 558188 3712 0 5592392 3067 253 4760144 72 3512 5460 172 72 3188 5180 392 256 2672 4520 -0.78 -0.78 9.8 true 149 LEFT Normal true -90 3744 5616 650 264 75 3188 5180 8.37 30 24 3536 5561 30 3565 18 55920 3592 0 5592523 3200 264 492180 76 3484 5460 264 76 3188 5180 520 252 2676 4664 -0.67 -0.67 13.54 true 150 RIGHT Normal true 90 3744 5616 650 211 71 3188 5180 9.88 143 20 3563 5561 143 3704 20 558288 3720 0 5600433 3102 272 4914144 72 3512 5460 212 72 3188 5180 432 276 2664 4632 -0.83 -0.83 19.62 true 151 LEFT Normal true -90 3744 5616 650 232 75 3188 5180 8.52 4 24 3563 5561 16 3566 10 56000 3592 0 5600486 3175 295 491756 76 3512 5460 232 76 3188 5180 484 288 2684 4636 -0.36 -0.36 20.88 true 152 RIGHT Normal true 90 3744 5616 650 194 67 3188 5180 9.42 130 16 3563 5561 130 3704 15 557988 3712 0 5600423 3099 267 4904132 68 3512 5460 192 68 3188 5180 424 264 2668 4636 -0.61 -0.61 18.76 true 153 LEFT Normal true -90 3744 5616 650 224 75 3188 5180 8.41 3 24 3563 5561 19 3565 11 55990 3592 0 5600485 3158 282 490256 76 3512 5460 224 76 3188 5180 484 268 2668 4640 -0.42 -0.42 21.42 true 154 RIGHT Normal true 90 3744 5616 650 171 67 3188 5180 9.76 135 15 3563 5561 135 3701 14 557888 3704 0 5600406 3697 274 4914136 68 3512 5460 172 68 3188 5180 404 276 2668 4628 -0.56 -0.56 22.64 true 155 LEFT Normal true -90 3744 5616 650 232 75 3188 5180 8.51 28 22 3537 5561 28 3564 17 55880 3592 0 5592491 3165 271 -180 72 3484 5460 232 76 3188 5180 492 256 2668 2520 -0.64 -0.64 18.85 true 156 RIGHT Chapter true 90 3744 5616 650 183 347 3188 5180 9.08 149 23 3563 5553 149 3697 23 557588 3712 0 5592398 3688 676 5078148 76 3512 5452 184 348 3188 5180 400 676 2672 4392 -0.98 -0.98 15.38 true 157 LEFT Normal true -90 3744 5616 650 234 55 3188 5180 8.48 20 5 3563 
5596 9 3582 5 56000 3592 0 5600492 3581 307 482072 56 3512 5492 236 56 3188 5180 488 292 2680 4524 -0.19 -0.19 15.42 true 158 RIGHT Normal true 90 3744 5616 650 152 75 3188 5180 9.07 148 22 3563 5561 148 3691 21 558488 3696 0 5600368 3690 287 4819148 72 3512 5460 152 76 3188 5180 368 276 2672 4536 -0.86 -0.86 13.12 true 159 LEFT Normal true -90 3744 5616 650 260 59 3188 5180 8.16 0 8 3563 5594 13 3557 8 56010 3592 0 5600519 3201 304 492560 60 3504 5492 260 60 3188 5180 516 288 2676 4632 -0.28 -0.28 16.54 true 160 RIGHT Normal true 90 3744 5616 650 149 71 3188 5180 9.14 146 20 3563 5561 146 3690 20 558288 3696 0 5600362 3687 274 4910148 72 3512 5460 148 72 3188 5180 360 268 2684 4636 -0.84 -0.84 13.02 true 161 LEFT Normal true -90 3744 5616 650 270 75 3188 5180 8.68 7 22 3563 5561 24 3569 14 55920 3592 0 5592526 3207 281 491560 72 3512 5460 272 76 3188 5180 528 264 2672 4640 -0.53 -0.53 14.75 true 162 RIGHT Normal true 90 3744 5616 650 167 67 3188 5180 9.61 139 16 3563 5561 139 3693 16 557788 3696 0 5600393 3688 248 4879140 68 3512 5460 168 68 3188 5180 392 240 2672 4632 -0.67 -0.67 15.74 true 163 LEFT Normal true -90 3744 5616 650 246 75 3188 5180 8.53 3 22 3563 5561 22 3565 13 55930 3592 0 5592506 3183 267 493256 72 3512 5460 248 76 3188 5180 504 256 2672 4668 -0.5 -0.5 14.4 true 164 RIGHT Normal true 90 3744 5616 650 182 71 3188 5180 8.91 147 21 3539 5561 147 3685 21 558288 3688 0 5600401 3681 291 4980148 72 3488 5460 180 72 3188 5180 400 280 2668 4692 -0.86 -0.86 14.97 true 165 LEFT Normal true -90 3744 5616 650 238 75 3188 5180 8.57 4 22 3563 5561 22 3566 13 55930 3592 0 5592498 3175 302 485356 72 3512 5460 240 76 3188 5180 496 284 2672 4560 -0.5 -0.5 17.36 true 166 RIGHT Normal true 90 3744 5616 650 143 71 3188 5180 8.89 145 20 3540 5561 145 3684 19 558288 3688 0 5600365 3037 275 4801144 72 3488 5460 144 72 3188 5180 364 268 2668 4528 -0.8 -0.8 14.41 true 167 LEFT Normal true -90 3744 5616 650 270 75 3188 5180 8.18 0 23 3563 5561 16 3556 10 55970 3592 0 
5600531 3208 294 480160 76 3504 5460 272 76 3188 5180 520 280 2688 4516 -0.36 -0.36 13.27 true 168 RIGHT Normal true 90 3744 5616 650 135 63 3188 5180 9.26 135 13 3563 5561 135 3681 13 557488 3696 0 5600358 3677 286 4789136 64 3512 5460 136 64 3188 5180 360 272 2660 4512 -0.55 -0.55 18.51 true 169 LEFT Normal true -90 3744 5616 650 280 75 3188 5180 7.17 18 25 3520 5561 18 3537 11 56010 3584 0 5600541 3215 291 496568 76 3504 5460 280 76 3188 5180 540 280 2668 4688 -0.41 -0.41 10.81 true 170 RIGHT Normal true 90 3744 5616 650 129 71 3188 5180 9.06 131 20 3563 5561 131 3679 16 558688 3688 0 5608350 3677 292 4834132 72 3512 5460 128 72 3188 5180 348 280 2672 4636 -0.64 -0.64 12.87 true 171 LEFT Normal true -90 3744 5616 650 270 75 3188 5180 8.41 28 23 3537 5561 28 3564 17 55910 3592 0 5600527 3210 293 492380 76 3484 5460 272 76 3188 5180 524 284 2680 4632 -0.64 -0.64 15.32 true 172 RIGHT Normal true 90 3744 5616 650 153 75 3188 5180 8.05 151 25 3521 5559 151 3671 25 558388 3680 0 5600361 3037 308 4944152 76 3468 5456 152 76 3188 5180 360 300 2672 4576 -1.05 -1.05 13.33 true 173 LEFT Normal true -90 3744 5616 650 246 55 3188 5180 7.61 5 3 3536 5599 5 3540 3 56010 3592 0 5600504 3182 319 495756 56 3516 5496 248 56 3188 5180 504 312 2672 4640 -0.11 -0.11 9.95 true 174 RIGHT Normal true 90 3744 5616 650 151 75 3188 5180 7.72 153 24 3513 5561 153 3665 23 558688 3672 0 5608356 3032 290 4914152 76 3460 5460 152 76 3188 5180 352 276 2676 4632 -0.97 -0.97 8.6 true 175 LEFT Normal true -90 3744 5616 650 260 55 3188 5180 7.53 5 3 3534 5599 5 3538 3 56010 3592 0 5600512 3200 301 492956 56 3516 5496 260 56 3188 5180 512 288 2684 4636 -0.11 -0.11 12.3 true 176 RIGHT Normal true 90 3744 5616 650 174 87 3188 5180 5.98 172 37 3490 5536 172 3661 37 557288 3664 0 5608348 3043 311 4937132 88 3476 5432 176 88 3188 5180 348 304 2688 4624 -1.58 -1.58 11.52 true 177 LEFT Normal true -90 3744 5616 650 264 51 3188 5180 7.45 1 1 3533 5600 1 3533 1 56000 3592 0 5600521 3199 301 494652 52 3520 
5496 264 52 3188 5180 520 284 2676 4660 0.02 0.02 11.13 true 178 RIGHT Normal true 90 3744 5616 650 199 87 3188 5180 6.44 173 36 3500 5537 173 3672 36 557288 3720 0 5600381 3060 288 4917132 88 3488 5436 200 88 3188 5180 380 276 2672 4632 -1.58 -1.58 12.37 true 179 LEFT Normal true -90 3744 5616 650 232 71 3188 5180 6.97 9 18 3514 5561 9 3522 5 55930 3592 0 5600487 3170 298 493560 68 3512 5460 232 72 3188 5180 488 280 2676 4648 0.2 0.2 19.7 true 180 RIGHT Normal true 90 3744 5616 650 168 79 3188 5180 7.89 154 26 3518 5558 154 3671 26 558388 3696 0 5608372 3048 306 4932132 76 3488 5456 168 80 3188 5180 372 292 2672 4636 -1.11 -1.11 14.3 true 181 LEFT Normal true -90 3744 5616 650 242 71 3188 5180 9.17 14 21 3563 5561 16 3576 10 55930 3592 0 5592499 3572 322 493864 72 3512 5460 244 72 3188 5180 496 308 2680 4620 -0.36 -0.36 13.76 true 182 RIGHT Normal true 90 3744 5616 650 168 83 3188 5180 7.66 159 32 3523 5544 159 3681 32 557580 3696 0 5600362 3680 308 4934132 84 3496 5440 168 84 3188 5180 360 292 2672 4636 -1.36 -1.36 16.8 true 183 LEFT Normal true -90 3744 5616 650 238 71 3188 5180 9.10 17 20 3563 5561 13 3579 8 55930 3592 0 5592494 3578 317 495468 72 3512 5460 240 72 3188 5180 492 312 2680 4636 -0.3 -0.3 8.47 true 184 RIGHT Normal true 90 3744 5616 650 124 59 3188 5180 8.69 123 7 3536 5561 123 3658 6 557088 3680 0 5600356 3029 269 4902124 60 3484 5460 124 60 3188 5180 356 260 2668 4636 -0.25 -0.25 14.71 true 185 LEFT Normal true -90 3744 5616 650 244 83 3188 5180 6.20 54 31 3485 5550 54 3538 31 55800 3592 0 5600497 3185 292 4934104 84 3468 5448 244 84 3188 5180 496 280 2684 4644 -1.23 -1.23 10.23 true 186 RIGHT Normal true 90 3744 5616 650 118 63 3188 5180 8.58 119 13 3535 5561 119 3653 11 557780 3664 0 5608120 3014 295 4923120 64 3484 5460 120 64 3188 5180 340 284 2668 4632 -0.45 -0.45 12.51 true 187 LEFT Normal true -90 3744 5616 650 258 75 3188 5180 7.83 33 24 3520 5561 33 3552 20 55890 3592 0 5600515 3194 298 493384 76 3488 5460 260 76 3188 5180 516 284 2672 
4644 -0.75 -0.75 13.1 true 188 RIGHT Normal true 90 3744 5616 650 124 59 3188 5180 8.37 123 8 3528 5561 123 3650 7 557088 3672 0 5608317 2988 238 4856124 60 3476 5460 124 60 3188 5180 316 232 2664 4616 -0.28 -0.28 9.85 true 189 LEFT Normal true -90 3744 5616 650 264 75 3188 5180 7.96 37 24 3520 5561 37 3556 22 55870 3592 0 5600521 3204 262 489888 76 3484 5460 264 76 3188 5180 520 244 2676 4648 -0.84 -0.84 14.7 true 190 RIGHT Normal true 90 3744 5616 650 131 71 3188 5180 7.95 131 19 3519 5561 131 3649 18 558280 3664 0 5608331 3007 279 4911132 72 3468 5460 132 72 3188 5180 332 272 2676 4636 -0.72 -0.72 12.19 true 191 LEFT Normal true -90 3744 5616 650 254 75 3188 5180 7.76 40 24 3513 5561 40 3552 24 55840 3592 0 5592513 3189 276 491692 76 3480 5460 256 76 3188 5180 512 268 2672 4640 -0.92 -0.92 15.09 true 192 RIGHT Normal true 90 3744 5616 650 137 67 3188 5180 7.88 136 15 3516 5561 136 3651 15 557788 3672 0 5608318 2990 272 4897136 68 3464 5460 136 68 3188 5180 316 268 2668 4628 -0.59 -0.59 11.12 true 193 LEFT Normal true -90 3744 5616 650 294 75 3188 5180 8.02 30 22 3525 5561 30 3554 18 55880 3592 0 5592551 3229 289 491080 72 3492 5460 296 76 3188 5180 552 280 2672 4636 -0.69 -0.69 11.22 true 194 RIGHT Normal true 90 3744 5616 650 151 75 3188 5180 7.30 151 25 3503 5560 151 3653 25 558488 3672 0 5600302 2997 298 4896152 76 3452 5456 152 76 3188 5180 300 296 2692 4592 -1.06 -1.06 4.91 true 195 LEFT Normal true -90 3744 5616 650 310 55 3188 5180 8.53 15 3 3563 5596 5 3577 3 55980 3592 0 5600562 3253 310 483468 56 3512 5492 312 56 3188 5180 560 300 2688 4528 -0.11 -0.11 12.17 true 196 RIGHT Normal true 90 3744 5616 650 123 59 3188 5180 9.48 122 7 3563 5552 122 3680 7 555888 3688 0 5576367 3679 281 4679124 60 3512 5448 124 60 3188 5180 364 272 2672 4496 -0.27 -0.27 11.83 true 197 LEFT Normal true -90 3744 5616 650 262 67 3188 5180 8.88 5 16 3563 5561 1 3567 1 55930 3592 0 5592519 3200 293 497556 68 3512 5460 264 68 3188 5180 520 280 2672 4692 -0.05 -0.05 13.72 true 198 
RIGHT Normal true 90 3744 5616 650 166 63 3188 5180 9.01 127 12 3563 5551 127 3675 12 556288 3688 0 5576399 3076 231 4866128 64 3512 5448 164 64 3188 5180 400 228 2672 4632 -0.48 -0.48 13.59 true 199 LEFT Normal true -90 3744 5616 650 220 71 3188 5180 9.23 15 20 3563 5561 15 3577 9 55930 3592 0 5600483 3151 276 -168 72 3512 5460 220 72 3188 5180 484 260 2660 1456 -0.33 -0.33 14.72 true 200 RIGHT Chapter true 90 3744 5616 650 167 331 3188 5180 9.12 125 9 3563 5553 125 3674 9 556188 3680 0 5576416 3084 650 5051124 60 3512 5452 168 332 3188 5180 416 652 2660 4396 -0.36 -0.36 16.79 true 201 LEFT Normal true -90 3744 5616 650 212 71 3188 5180 9.02 9 20 3563 5561 19 3571 12 55890 3600 0 5592471 3146 294 493460 72 3512 5460 212 72 3188 5180 472 288 2668 4640 -0.44 -0.44 23.35 true 202 RIGHT Normal true 90 3744 5616 650 165 59 3188 5180 9.40 119 9 3563 5551 119 3676 9 555988 3680 0 5576412 3077 257 4885120 60 3512 5448 164 60 3188 5180 412 248 2660 4632 -0.34 -0.34 11.38 true 203 LEFT Normal true -90 3744 5616 650 272 71 3188 5180 9.17 13 20 3563 5561 15 3575 9 55920 3592 0 5592532 3201 289 493264 72 3512 5460 272 72 3188 5180 532 284 2668 4644 -0.33 -0.33 14.75 true 204 RIGHT Normal true 90 3744 5616 650 136 63 3188 5180 9.00 124 10 3563 5552 124 3671 10 556188 3680 0 5576377 3048 248 4874124 60 3512 5448 136 64 3188 5180 376 236 2668 4624 -0.41 -0.41 16.03 true 205 LEFT Normal true -90 3744 5616 650 234 71 3188 5180 9.31 15 19 3563 5561 16 3577 10 55900 3592 0 5592494 3166 266 491768 72 3512 5460 236 72 3188 5180 496 256 2664 4652 -0.36 -0.36 10.5 true 206 RIGHT Normal true 90 3744 5616 650 187 63 3188 5180 8.71 127 12 3563 5551 127 3668 12 556288 3672 0 5576428 3667 255 4943128 64 3512 5448 188 64 3188 5180 428 244 2660 4692 -0.47 -0.47 12.69 true 207 LEFT Normal true -90 3744 5616 650 200 71 3188 5180 9.15 12 20 3563 5561 19 3574 12 55890 3592 0 5592460 3136 289 491164 72 3512 5460 200 72 3188 5180 460 276 2668 4628 -0.44 -0.44 17.29 true 208 RIGHT Normal true 90 3744 
5616 650 185 63 3188 5180 8.99 123 11 3563 5553 123 3669 11 556388 3672 0 5576424 3099 283 4896124 64 3512 5452 184 64 3188 5180 424 272 2668 4624 -0.42 -0.42 15.3 true 209 LEFT Normal true -90 3744 5616 650 206 71 3188 5180 8.94 7 20 3563 5561 13 3569 8 55930 3592 0 5592465 3142 324 494360 72 3512 5460 208 72 3188 5180 464 308 2672 4644 -0.28 -0.28 18.09 true 210 RIGHT Normal true 90 3744 5616 650 167 59 3188 5180 9.12 120 9 3563 5553 120 3669 9 556188 3672 0 5576413 3668 271 4898120 60 3512 5452 168 60 3188 5180 412 264 2664 4628 -0.34 -0.34 16.33 true 211 LEFT Normal true -90 3744 5616 650 224 71 3188 5180 9.34 15 19 3563 5561 16 3577 10 55890 3592 0 5592482 3160 290 492268 72 3512 5460 224 72 3188 5180 480 280 2676 4632 -0.36 -0.36 16.09 true 212 RIGHT Normal true 90 3744 5616 650 121 63 3188 5180 8.74 120 12 3563 5552 120 3661 12 556380 3664 0 5576121 3028 275 4890120 64 3512 5448 120 64 3188 5180 352 268 2672 4616 -0.47 -0.47 18.72 true 213 LEFT Normal true -90 3744 5616 650 276 67 3188 5180 8.86 11 17 3563 5561 5 3573 3 55930 3576 0 5592534 3209 297 492664 68 3512 5460 276 68 3188 5180 536 284 2668 4636 -0.11 -0.11 11.29 true 214 RIGHT Normal true 90 3744 5616 650 127 63 3188 5180 9.47 128 11 3563 5553 128 3685 11 556388 3688 0 5576313 3663 301 4929128 64 3512 5452 128 64 3188 5180 312 288 2664 4636 -0.44 -0.44 20.79 true 215 LEFT Normal true -90 3744 5616 650 296 67 3188 5180 9.32 16 15 3563 5561 12 3578 7 55850 3592 0 5584557 3233 301 493468 68 3512 5460 296 68 3188 5180 556 292 2668 4636 -0.27 -0.27 16.24 true 216 RIGHT Normal true 90 3744 5616 650 128 67 3188 5180 9.55 128 14 3563 5549 128 3692 14 556288 3696 0 5568304 3690 266 4904128 64 3512 5448 128 68 3188 5180 304 268 2668 4628 -0.58 -0.58 17.46 true 217 LEFT Normal true -90 3744 5616 650 312 67 3188 5180 9.39 13 16 3563 5561 15 3575 9 55850 3600 0 5584575 3244 278 491864 68 3512 5460 312 68 3188 5180 576 268 2660 4644 -0.33 -0.33 12.53 true 218 RIGHT Normal true 90 3744 5616 650 128 67 3188 5180 
9.27 129 15 3563 5548 129 3685 15 556288 3688 0 5568312 3674 244 4875128 68 3512 5444 128 68 3188 5180 312 244 2672 4624 -0.61 -0.61 16.16 true 219 LEFT Normal true -90 3744 5616 650 270 71 3188 5180 8.50 22 18 3563 5561 7 3584 4 55930 3592 0 5592530 3583 277 490872 68 3500 5460 272 72 3188 5180 532 260 2664 4644 -0.14 -0.14 11.06 true 220 RIGHT Normal true 90 3744 5616 650 142 71 3188 5180 8.41 142 19 3563 5543 142 3682 19 556188 3696 0 5576343 3674 265 4913144 72 3512 5440 140 72 3188 5180 332 264 2676 4636 -0.77 -0.77 8.11 true 221 LEFT Normal true -90 3744 5616 650 351 63 3188 5180 7.55 11 11 3515 5561 11 3525 6 55770 3592 0 5576624 3295 269 490364 64 3508 5460 352 64 3188 5180 624 256 2664 4640 0.23 0.23 11.84 true 222 RIGHT Normal true 90 3744 5616 650 135 67 3188 5180 7.73 168 14 3563 5513 168 3714 14 552688 3736 0 5536332 3027 225 4872132 64 3544 5412 136 68 3188 5180 332 232 2688 4632 -0.59 -0.59 12.02 true 223 LEFT Normal true -90 3744 5616 650 350 67 3188 5180 6.43 10 5 3511 5602 10 3520 5 56060 3592 0 5608620 3294 338 497960 56 3512 5500 348 68 3188 5180 620 340 2668 4632 0.22 0.22 12.58 true 224 RIGHT Normal true 90 3744 5616 650 120 75 3188 5180 7.50 121 25 3601 5561 121 3721 11 560080 3736 0 5608353 3024 305 4932120 76 3548 5460 120 76 3188 5180 352 292 2664 4632 -0.45 -0.45 14.95 true 225 LEFT Normal true -90 3744 5616 650 340 55 3188 5180 8.47 16 5 3563 5602 9 3578 5 56060 3592 0 5608610 3283 329 496668 56 3512 5500 340 56 3188 5180 608 312 2672 4648 0.2 0.2 11.68 true 226 RIGHT Normal true 90 3744 5616 650 131 59 3188 5180 6.74 114 8 3614 5596 114 3727 8 560380 3736 0 5608115 3044 313 4942116 60 3564 5492 132 60 3188 5180 380 300 2660 4636 -0.31 -0.31 17.43 true 227 LEFT Normal true -90 3744 5616 650 341 71 3188 5180 3.61 1 1 3439 5606 1 3439 1 56060 3640 0 5608602 3277 358 498552 52 3520 5504 340 72 3188 5180 600 344 2672 4636 0.02 0.02 10.41 true 228 RIGHT Normal true 90 3744 5616 650 139 59 3188 5180 6.46 113 6 3619 5598 113 3731 6 560380 3736 
0 5608393 3058 329 4957132 56 3548 5496 140 60 3188 5180 392 312 2660 4636 -0.22 -0.22 14.24 true 229 LEFT Normal true -90 3744 5616 650 326 71 3188 5180 6.73 13 8 3515 5598 13 3527 8 56050 3592 0 5608587 3258 351 498164 60 3508 5496 328 72 3188 5180 588 344 2664 4628 -0.3 -0.3 12.93 true 230 RIGHT Normal true 90 3744 5616 650 190 55 3188 5180 6.07 106 3 3628 5598 106 3733 3 560080 3736 0 5608107 3126 297 4909132 56 3548 5496 192 56 3188 5180 448 284 2672 4624 -0.12 -0.12 10.66 true 231 LEFT Normal true -90 3744 5616 650 276 55 3188 5180 7.15 8 5 3529 5604 8 3536 5 56080 3576 0 5608539 3212 334 497060 56 3512 5500 276 56 3188 5180 536 320 2668 4640 -0.17 -0.17 11.62 true 232 RIGHT Normal true 90 3744 5616 650 208 51 3188 5180 8.36 158 1 3563 5597 158 3733 1 559788 3736 0 5608470 3138 290 4914132 52 3536 5496 208 52 3188 5180 468 284 2668 4624 -0.03 -0.03 15.67 true 233 LEFT Normal true -90 3744 5616 650 250 79 3188 5180 6.26 20 26 3499 5561 20 3518 12 56020 3576 0 5608507 3187 335 497272 76 3500 5460 252 80 3188 5180 508 328 2672 4636 -0.44 -0.44 17.24 true 234 RIGHT Normal true 90 3744 5616 650 231 55 3188 5180 8.54 162 3 3563 5597 162 3733 3 559988 3736 0 5608486 2903 280 5042132 56 3540 5496 232 56 3188 5180 488 284 2456 4756 -0.09 -0.09 9.39 true 235 LEFT Normal true -90 3744 5616 650 300 79 3188 5180 6.78 20 26 3511 5561 20 3530 12 56020 3592 0 560869 3233 324 498772 76 3500 5460 300 80 3188 5180 560 324 2668 4588 -0.44 -0.44 7.9 true 236 RIGHT Normal true 90 3744 5616 650 144 55 3188 5180 6.50 115 3 3619 5597 115 3733 3 559980 3736 0 5608399 3731 299 4917116 56 3568 5496 144 56 3188 5180 400 284 2664 4624 -0.12 -0.12 15.3 true 237 LEFT Normal true -90 3744 5616 650 278 51 3188 5180 7.51 1 1 3587 5609 1 3587 1 56090 3592 0 5608539 3213 331 496352 52 3520 5508 280 52 3188 5180 536 316 2672 4640 0.05 0.05 13.72 true 238 RIGHT Normal true 90 3744 5616 650 204 59 3188 5180 6.83 113 6 3612 5596 113 3724 6 560180 3736 0 5608466 3137 305 4916132 56 3540 5492 204 60 
Fig. 25. 1 vs 10101 discriminator 335
Fig. 26. Recognizer R(101001) 335
Fig. 27. A coded channel 336
Fig. 28. Arranging inputs and outputs of a coded channel 337
Fig. 29. Pulsers and decoders of a coded channel 340
Fig. 30. Cyclicity in a coded channel 341

Chapter 4

Fig. 31. The linear array L, the connecting loop C1, and the timing loop C2 342
Fig. 32. Lengthening the timing loop C2 345
Fig. 33. Lengthening the connecting loop C1 346
Fig. 34. Shortening the timing loop C2 348

THEORY OF SELF-REPRODUCING AUTOMATA

Fig. 35. Shortening the connecting loop C1 350
Fig. 36. Writing "zero" in cell xn of the linear array L when lengthening C1 352
Fig. 37. Tape unit with unlimited memory capacity 353
Fig. 38. The logical structure of the procedure followed by the memory control MC 354
Fig. 39. Read-write-erase unit RWE (Note: parts (a) and (b) constitute one continuous drawing.) 355
(a) Upper part of RWE
(b) Lower part of RWE
Fig. 40. Control organ CO 357
Fig. 41.
Read-write-erase control RWEC (Note: parts (a)-(f) constitute one continuous drawing.) 358
(a) Control organs for lengthening C2 (CO1 and CO2) and for lengthening the lower part of C1 (CO3 and CO4)
(b) PP(1) to store the fact that a "zero" is to be written in xn, and control organs for writing a zero and lengthening the upper part of C1
(c) PP(1) to store the fact that a "one" is to be written in xn, and control organs for leaving a one in cell xn and lengthening the upper part of C1
(d) Control organs for shortening C2
(e) Control organs and PP(1) for shortening the lower part of C1 and writing in cell xn
(f) Control organs for shortening the upper part of C1

Chapter 5

Fig. 42. Crossing organ 364
(a) Crossing organ
(b) Initial state of clock
Fig. 43. State organ SO of a finite automaton FA 365
Fig. 44. Constructing arm 366
Fig. 45. Horizontal advance of constructing arm 366
Fig. 46. Vertical advance of constructing arm 367
Fig. 47. Horizontal retreat of constructing arm with construction of 7 and 8 368
Fig. 48. Vertical retreat of constructing arm with construction of 7 and 8 369
Fig. 49. Injection of starting stimulus into the secondary automaton 370
Fig. 50. Operation of the constructing arm 371
Fig. 51. New method of operating the linear array L 372
Fig. 52. Writing "one" in cell xn and lengthening the reading loop 373
Fig. 53. Static-dynamic converter 374
Fig. 54. The universal constructor Mc* 375
Fig. 55. Self-reproduction 376
Fig. 56. Self-reproduction of a universal computer-constructor 377

PREFACE

In the late 1940's John von Neumann began to develop a theory of automata. He envisaged a systematic theory which would be mathematical and logical in form and which would contribute in an essential way to our understanding of natural systems (natural automata) as well as to our understanding of both analog and digital computers (artificial automata).
To this end von Neumann produced five works, in the following order:

(1) "The General and Logical Theory of Automata." Read at the Hixon Symposium in September, 1948; published in 1951. Collected Works 5.288-328.¹

(2) "Theory and Organization of Complicated Automata." Five lectures delivered at the University of Illinois in December, 1949. This is Part I of the present volume.

(3) "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." Lectures given at the California Institute of Technology in January, 1952. Collected Works 5.329-378.

(4) "The Theory of Automata: Construction, Reproduction, Homogeneity." Von Neumann started this manuscript in the fall of 1952 and continued working on it for about a year. This is Part II of the present volume.

(5) The Computer and the Brain. Written during 1955 and 1956; published in 1958.

The second and fourth of these were left at his death in a manuscript form which required extensive editing. As edited they constitute the two parts of the present volume, which thus concludes von Neumann's work on the theory of automata. As a background for this editorial work I made a study of all of von Neumann's contributions on computers, including the theory of automata. I have summarized his contributions in the "Introduction" to the present volume.

Von Neumann was especially interested in complicated automata, such as the human nervous system and the tremendously large computers he foresaw for the future. He wanted a theory of the logical organization of complicated systems of computing elements and believed that such a theory was an essential prerequisite to constructing very large computers. The two problems in automata theory that von Neumann concentrated on are both intimately related to complexity.

¹ Complete references are given in the bibliography. "Collected Works 5.288-328" refers to pp. 288-328 of Vol. V of von Neumann's Collected Works.
These are the problems of reliability and self-reproduction. The reliability of components limits the complexity of the automata we can build, and self-reproduction requires an automaton of considerable complexity. Von Neumann discussed reliability at length in his "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." His work on self-reproducing automata is found chiefly in the present volume. Part II, which constitutes the bulk of the present volume, treats the logical design of a self-reproducing cellular automaton. Though the shorter Part I is devoted to complicated automata in general, its high point is the kinematic model of self-reproduction (Fifth Lecture). It therefore seemed appropriate to use the title "Theory of Self-Reproducing Automata" for the whole work.

It is unfortunate that, because of his premature death, von Neumann was unable to put in final form any of the research he was doing in automata theory. The manuscripts for both parts of the present volume were unfinished; indeed, they were both, in a sense, first drafts. There is one compensation in this: one can see von Neumann's powerful mind at work. Early drafts by a thinker of von Neumann's ability are not often available. For this reason, I have tried hard to preserve the original flavor of von Neumann's manuscripts, while yet rendering them easily readable. So that the reader will know what the raw manuscripts are like, I will describe them and the editorial changes I have made in them.

Von Neumann agreed to write a book on automata theory in connection with his five lectures at the University of Illinois in December, 1949. A tape recording of the lectures was made to aid him in writing the book. Unfortunately, the recording and the typescript of it turned out badly, with gaps in the text, unintelligible passages, and missing words.
Von Neumann himself never edited this typescript, but instead planned to use the manuscript "The Theory of Automata: Construction, Reproduction, Homogeneity" for the promised book. The recording itself is not extant. Despite these circumstances, the Illinois lectures deserve publication, and the recorded version, highly edited of necessity, constitutes Part I of this volume.

Von Neumann prepared a detailed outline of the lectures in advance of delivery, and the content of the lectures corresponded roughly to this outline. The outline bore the title "Theory and Organization of Complicated Automata," and began with the following three lines:

The logical organization and limitations of high-speed digital computers.
Comparison of these and other complicated automata, both artificial and natural.
Inference from the comparison of the nervous systems found in nature.

There followed the title of each lecture and a list of topics to be covered in that lecture; these are reproduced verbatim at the beginning of each lecture below, even though the lecture materials do not correspond exactly to the list of topics.

Because of the state of the manuscript it has been necessary to do much editing. The typescript of the recording is especially poor in the more formal portions of the lectures, where von Neumann used the blackboard. For these portions, particularly, I have found two sets of notes taken at the lectures to be helpful. I have preserved von Neumann's phraseology where feasible, but I have frequently found it necessary to use my own words. I have sometimes felt it best to summarize what von Neumann was saying rather than to attempt to reconstruct the text. Several of the points von Neumann made in the Illinois lectures also appear in his published writings, or are well known, and in these cases I have often summarized what von Neumann said or given references to his published works. Where the writing is strictly my own, it appears in brackets.
The reconstructed edition of von Neumann's words is not bracketed, but it should be kept in mind that much of this unbracketed text is heavily edited.

The manuscript "The Theory of Automata: Construction, Reproduction, Homogeneity" was in a much better state. It seems to have been a first draft, with the exception that there is an earlier outline (with figures) of the procedure whereby the memory control MC lengthens and shortens the connecting loop C1 and the timing loop C2 under the direction of the constructing unit CU (cf. Secs. 4.1 and 4.2). Despite its being a first draft, the manuscript was publishable as it stood except for deficiencies of the following three types.

(1) First, the manuscript lacked many of those simple mechanisms which make for easy reading. There were no figure titles. Formulas, sections, and figures were referred to by number only, without explicit indication as to whether the item referred to is a formula, section, or figure. Section titles were listed on a separate sheet. Also on a separate sheet, von Neumann gave only brief indications for the footnotes he planned. Organs were referred to by letter alone. For example, von Neumann merely used "A" and "B" to refer to the organs I have called the "constructing unit CU" and the "memory control MC," respectively. I have worked through the manuscript several times and each time I have been amazed at how von Neumann could keep track of what he was doing with so few mnemonic devices. In editing the manuscript I have endeavored to supply these devices. For example, where von Neumann wrote "CO" I often put "control organ CO." I have added titles to the figures and completed the footnote references. Von Neumann wrote some explanatory remarks on the figures; these have been moved to the text. Similar and related changes have been made, all without any indication in the text.
In addition, I have inserted footnotes, commentaries, explanations, and summaries at various places in the text, and have added a concluding chapter (Ch. 5). All such additions are in brackets. Von Neumann's brackets have been changed to braces, except for his usage "[0]" and "[1]" to refer to ordinary and special symbols. In connection with my bracketed additions, I have added Tables I and V and many figures. Figures 1-8, 16, 18, 19, 22, 24, 28-36, 38, 39, and 41 are von Neumann's; the remaining figures are mine.

(2) Second, the manuscript "Theory of Automata: Construction, Reproduction, Homogeneity" contained many errors. These range from minor slips (which I have corrected without any specific indication), through errors of medium significance (which I have corrected or commented on in bracketed passages), to major errors requiring considerable redesign (which I have discussed in Sections 5.1.1 and 5.1.2). All of these errors are correctable, but because organs designed in the earlier parts of the manuscript are used in later parts, many of these errors propagate and "amplify." In this connection, it should be kept in mind that the manuscript was an early draft, and that von Neumann was working out the design as he proceeded, leaving many design parameters for later specification.

(3) Third, the manuscript "Theory of Automata: Construction, Reproduction, Homogeneity" is incomplete. The construction stops before the tape unit is quite finished. In Chapter 5 I show how to complete the design of von Neumann's self-reproducing automaton.

The technical development of the manuscript is extremely complicated and involved. The deficiencies just mentioned add to its difficulty. In some respects it would have been editorially easier not to edit the manuscript after Chapter 2 and instead work out the design of von Neumann's self-reproducing automaton along the lines he last envisaged.
But this was not a real alternative because of the historical importance of the manuscript, and the opportunity it gives to observe a powerful mind at work. I have therefore endeavored to make corrections and add comments so as to preserve the original style of the manuscript while making it relatively easy to read.

I am indebted to a number of people for their assistance. The late Mrs. Klara von Neumann-Eckardt gave me information about her husband's manuscripts. Several people who worked with von Neumann on computers gave me firsthand information: Abraham Taub, Herman Goldstine, the late Adele Goldstine, and especially Julian Bigelow and Stan Ulam; von Neumann often discussed his work on automata theory with Bigelow and with Ulam. John Kemeny, Pierce Ketchum, E. F. Moore, and Claude Shannon heard lectures by or had discussions with von Neumann on automata theory. Kurt Gödel's letter at the end of the Second Lecture of Part I is reproduced with his kind permission. Thanks go to many of my graduate students and research associates for technical assistance, particularly Michael Faiman, John Hanne, James Thatcher, Stephen Hedetniemi, Frederick Suppe, and Richard Laing. Alice Finney, Karen Brandt, Ann Jacobs, and Alice R. Burks have provided editorial assistance. M. Elizabeth Brandt drew the figures. My editorial work was supported by the National Science Foundation. None of these share any responsibility for the editing.

Arthur W. Burks
Ann Arbor, 1965

EDITOR'S INTRODUCTION

Von Neumann's Work on Computers

John von Neumann was born on December 28, 1903 in Budapest, Hungary, and died in Washington, D.C., February 8, 1957. 1 He earned a doctorate in mathematics from the University of Budapest and an undergraduate chemistry degree from the Eidgenossische Technische Hochschule in Zurich, Switzerland. He became a Privatdocent at the University of Berlin in 1927 and a Privatdocent at the University of Hamburg in 1929.
In 1930 he came to the United States as a visiting lecturer at Princeton University, where he was made full professor in 1931. In 1933 he joined the newly formed Institute for Advanced Study as a professor and retained that post for the rest of his life. 2

In later life, while retaining his theoretical interests and productivity, von Neumann developed strong interests in the applications of mathematics. During the Second World War he became heavily involved in scientific research on problems of defense. He played a major role in the development of the atomic bomb, contributing particularly to the method of implosion. He was a consultant to many government laboratories and organizations and a member of many important scientific advisory committees. After the war he continued these consulting and advisory activities. Altogether he was involved in such diverse fields as ordnance, submarine warfare, bombing objectives, nuclear weapons (including the hydrogen bomb), military strategy, weather prediction, intercontinental ballistic missiles, high-speed electronic digital computers, and computing methods. In October, 1954, the President of the United States appointed him to the United States Atomic Energy Commission, a position he held at the time of his death. He received many awards and honors during his lifetime, including membership in the National Academy of Sciences, two Presidential Awards, and the Enrico Fermi Award of the Atomic Energy Commission. The latter was given especially for his contributions to the development of electronic computers and their uses.

1 See Ulam, "John von Neumann," and Mrs. von Neumann's preface to The Computer and the Brain.
2 See his Collected Works, edited by A. Taub. An excellent summary of von Neumann's accomplishments is presented in the Bulletin of the American Mathematical Society, Vol. 64, No. 3, Part 2, May, 1958.

Von Neumann the Mathematician.
During the last years of his life John von Neumann devoted considerable effort to developing a theory of automata. The present volume, edited from two unfinished manuscripts, is his last work on this subject. Because of his premature death he was unable to finish a volume which would present a complete picture of what he wished to accomplish. It is therefore appropriate to summarize here the main features of his projected theory of automata. Since his conception of automata theory arose out of his work in mathematics and computers, we will begin by describing that work.

Von Neumann was a very great mathematician. He made many important contributions in a wide range of fields. Von Neumann himself thought his most important mathematical achievements were in three areas: the mathematical foundations of quantum theory, the theory of operators, and ergodic theory. His contributions in other areas bear more directly on his computer work. In the late 1920's he wrote on symbolic logic, set theory, axiomatics, and proof theory. In the middle thirties he worked on lattice theory, continuous geometry, and Boolean algebra. In a famous paper of 1928 and in a book of 1944 3 he founded the modern mathematical theory of games. Starting in the late thirties and continuing through and after the war he did much research in fluid dynamics, dynamics, problems in the mechanics of continua arising out of nuclear technology, and meteorology. During the war he became involved in computing and computers, and after the war this became his main interest.

Von Neumann and Computing. Von Neumann was led into computing by his studies in fluid dynamics. Hydrodynamical phenomena are treated mathematically by means of non-linear partial differential equations. Von Neumann became especially interested in hydrodynamical turbulence and the interaction of shock waves.
He soon found that existing analytical methods were inadequate for obtaining even qualitative information about the solutions of non-linear partial differential equations in fluid dynamics. Moreover, this was so of non-linear partial differential equations generally. Von Neumann's response to this situation was to do computing. 4

During the war he found computing necessary to obtain answers to problems in other fields, including nuclear technology. Hence, when the new high-speed electronic digital general-purpose computers were developed during and after the war, he was quick to recognize their potentialities for hydrodynamics as well as other fields. In this connection he developed a general method for using computers which is of very great importance because it is applicable to a wide variety of problems in pure and applied mathematics.

The procedure which he pioneered and promoted is to employ computers to solve crucial cases numerically and to use the results as a heuristic guide to theorizing. Von Neumann believed experimentation and computing to have shown that there are physical and mathematical regularities in the phenomena of fluid dynamics and important statistical properties of families of solutions of the non-linear partial differential equations involved. These regularities and general properties could constitute the basis of a new theory of fluid dynamics and of the corresponding non-linear equations. Von Neumann believed that one could discover these regularities and general properties by solving many specific equations and generalizing the results.

3 "Zur Theorie der Gesellschaftsspiele." Theory of Games and Economic Behavior, with Oskar Morgenstern.
4 See Ulam, "John von Neumann," pp. 7-8, 28 ff., and Birkhoff, Hydrodynamics, pp. 5, 25.
From the special cases one would gain a feeling for such phenomena as turbulence and shock waves, and with this qualitative orientation could pick out further critical cases to solve numerically, eventually developing a satisfactory theory. See the First Lecture of Part I of this volume.

This particular method of using computers is so important and has so much in common with other, seemingly quite different, uses of computers that it deserves extended discussion. It is of the essence of this procedure that computer solutions are not sought for their own sake, but as an aid to discovering useful concepts, broad principles, and general theories. It is thus appropriate to refer to this as the heuristic use of computers. 5

The heuristic use of computers is similar to and may be combined with the traditional hypothetical-deductive-experimental method of science. In that method one makes an hypothesis on the basis of the available information, derives consequences from it by means of mathematics, tests the consequences experimentally, and forms a new hypothesis on the basis of the findings; this sequence is iterated indefinitely. In using a computer heuristically one proceeds in the same way, with computation replacing or augmenting experimentation. One makes an hypothesis about the equations under investigation, attempts to pick out some crucial special cases, uses a computer to solve these cases, checks the hypothesis against the results, forms a new hypothesis, and iterates the cycle. The computations may also be compared with experimental data. When this is done the heuristic use of computers becomes simulation.

Computation in itself can only provide answers to purely mathematical questions, so when no comparison is made with empirical fact the heuristic use of computers contributes to pure mathematics.

5 See also Ulam, A Collection of Mathematical Problems, Ch. 8, "Computing Machines as a Heuristic Aid."
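[The cycle just described — compute crucial special cases, form a hypothesis, test it against further cases, revise — can be made concrete with a deliberately trivial modern sketch. The example is mine, not from the text: the "equation" under investigation is just the sum of the first n odd numbers, and the computed cases suggest, and then corroborate, the closed form n².

```python
# Toy illustration of the "heuristic use of computers":
# solve special cases numerically, conjecture a general law, test further cases.

def compute_case(n):
    """Numerically 'solve' one special case: the sum of the first n odd numbers."""
    return sum(2 * k - 1 for k in range(1, n + 1))

# Step 1: compute a few crucial special cases.
cases = {n: compute_case(n) for n in range(1, 6)}
print(cases)  # {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

# Step 2: the pattern suggests a hypothesis: the sum equals n squared.
def hypothesis(n):
    return n ** 2

# Step 3: check the hypothesis against further, larger cases; on a failure
# one would revise the hypothesis and iterate the cycle.
for n in range(6, 50):
    assert compute_case(n) == hypothesis(n), f"hypothesis fails at n={n}"
print("hypothesis survives all tested cases")
```

Here the machine supplies only the computed instances; the conjecture itself comes from the human looking at them, which is exactly the division of labor Burks emphasizes below.]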
Von Neumann thought that the main difficulties in fluid dynamics stemmed from inadequate mathematical knowledge of non-linear partial differential equations, and that the heuristic use of computers would help mathematicians to construct an adequate and useful theory for this subject. He pointed out that while much progress had been made by means of wind tunnels, since the equations governing the phenomena were known, these wind tunnels were being used as analog computers rather than as experimental apparatus.

. . . many branches of both pure and applied mathematics are in great need of computing instruments to break the present stalemate created by the failure of the purely analytical approach to non-linear problems. . . . really efficient high-speed computing devices may, in the field of non-linear partial differential equations as well as in many other fields, which are now difficult or entirely denied access, provide us with those heuristic hints which are needed in all parts of mathematics for genuine progress. 6

Von Neumann's suggestion that powerful computers may provide the mathematician "with those heuristic hints which are needed in all parts of mathematics for genuine progress" is connected to his strong conviction that pure mathematics depends heavily on empirical science for its ideas and problems: ". . . the best inspirations of modern mathematics . . . originated in the natural sciences." 7 He recognized that mathematics is not an empirical science and held that the mathematician's criteria of selection of problems and of success are mainly aesthetical.

I think that it is a relatively good approximation to truth — which is much too complicated to allow anything but approximations — that mathematical ideas originate in empirics, although the genealogy is sometimes long and obscure.
But once they are so conceived, the subject begins to live a peculiar life of its own and is better compared to a creative one, governed by almost entirely aesthetical motivations, than to anything else and, in particular, to an empirical science. There is, however, a further point which, I believe, needs stressing. . . . at a great distance from its empirical source, or after much "abstract" inbreeding, a mathematical subject is in danger of degeneration. . . . whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.

The role that empirical science plays in pure mathematics is a heuristic one: empirical science supplies problems to investigate and suggests concepts and principles for their solution. While von Neumann never said so, I think it likely that he thought the computations produced by the heuristic use of computers can play the same role in some areas of mathematics. In the First Lecture of Part I below he said that powerful methods in pure mathematics depend for their success on the mathematicians having an intuitive and heuristic understanding of them, and suggested that one can build this intuitive familiarity with non-linear differential equations by using computers heuristically. 8

It should be noted that in the heuristic use of computers the human, not the machine, is the main source of suggestions, hypotheses, heuristic hints, and new ideas. Von Neumann wished to make the machine as intelligent as possible, but he recognized that human powers of intuition, spatial imagery, originality, etc., are far superior to those of present or immediately foreseeable machines.

6 Von Neumann and Goldstine, "On the Principles of Large Scale Computing Machines," Collected Works 5.4.
7 "The Mathematician," Collected Works 1.2. The next quotation is from the same article, 1.9.
He wished to augment the ability of a skilled, informed, creative human by the use of a digital computer as a tool. This procedure would involve considerable interaction between man and the machine and would be facilitated by automatic programming and by input-output equipment designed for direct human use.

Once he became interested in computing, von Neumann made important contributions to all aspects of the subject and its technology. The extant methods of computation had been developed for hand computation and punched card machines and hence were not well suited to the new electronic computers, which were several orders of magnitude faster than the old. New methods were needed, and von Neumann developed many of them. He contributed at all levels. He devised algorithms and wrote programs for computations ranging from the calculation of elementary functions to the integration of non-linear partial differential equations and the solutions of games. He worked on general techniques for numerical integration and inverting matrices. He obtained results in the theory of numerical stability and the accumulation of round-off errors. He helped develop the Monte Carlo method for solving integro-differential equations, inverting matrices, and solving linear systems of equations by random sampling techniques. 9 In this method the problem to be solved is reduced to a statistical problem which is then solved by computing the results for a sufficiently large sample of instances.

Von Neumann also made important contributions to the design and programming of computers, and to the theory thereof. We will survey his work in these areas next.

8 In view of von Neumann's emphasis on the role of intuition in mathematical discovery it is of interest to note that von Neumann's own intuition was auditory and abstract rather than visual. See Ulam, "John von Neumann, 1903-1957," pp. 12, 23, and 38-39.

Logical Design of Computers.
With his strong interest in computing and his background in logic and physics it was natural for von Neumann to become involved in the development of high-speed electronic digital computers. The first such computer was the ENIAC, designed and built at the Moore School of Electrical Engineering of the University of Pennsylvania during the period 1943 to 1946. 10 Von Neumann had some contacts with this machine, and so a few words about it are in order.

The idea of constructing a general purpose high-speed computer of electronic components originated with John Mauchly, who suggested to H. H. Goldstine of the Ordnance Department that the United States Army support the development and construction of such a machine, to be used primarily for ballistics computations. This support was given, the Army being impressed especially with the great speed with which an electronic computer could prepare firing tables. The ENIAC was designed and constructed by a number of people, including the writer, under the technical direction of Mauchly and J. P. Eckert. Von Neumann came to visit us while we were building the ENIAC, and he immediately became interested in it. By this time the design of the ENIAC was already fixed, but after the ENIAC was completed von Neumann showed how to modify it so that it was much simpler to program. In the meantime he developed the logical design for a radically new computer, which we will describe later.

The ENIAC was, of course, radically different from any earlier computer, but interestingly enough, it was also quite different from its immediate successors. It differed from its immediate successors in two fundamental respects: the use of several semiautonomous computing units working simultaneously and semi-independently, and the exclusive reliance on vacuum tubes for high-speed storage. Both of these design features resulted from the electronic technology of the time.

The basic pulse rate of the ENIAC circuits was 100,000 pulses per second. To obtain a high computation speed all 10 (or 20) decimal digits were processed in parallel, and, moreover, a large number of computing units were constructed, each with some local programming equipment, so that many computations could proceed simultaneously under the overall direction of a master programming unit. There were 30 basic units in the ENIAC: 20 accumulators (each of which could store and add a 10-digit number), 1 multiplier, 1 divider and square-rooter, 3 function table units, an input unit, an output unit, a master programmer, and 2 other units concerned with control. All of these basic units could operate at the same time.

At that time the vacuum tube was the only reliable high-speed storage device — acoustic delay lines, electrostatic storage systems, magnetic cores, etc., all came later — and so of necessity vacuum tubes were used for high-speed storage as well as for arithmetic and for logical control. This entailed a severe limitation on the high-speed store, as vacuum tubes are an expensive and bulky storage medium — the ENIAC contained 18,000 vacuum tubes as it was, a sufficient number for the skeptics to predict that it would never operate properly.

9 Ulam, "John von Neumann," pp. 33-34. Von Neumann Collected Works 5.751-764. The method is described in Metropolis and Ulam, "The Monte Carlo Method."
10 See Burks, "Electronic Computing Circuits of the ENIAC" and "Super Electronic Computing Machine," Goldstine and Goldstine, "The Electronic Numerical Integrator and Computer (ENIAC)," and Brainerd and Sharpless, "The ENIAC."
The limited high-speed storage of 20 10-digit decimal numbers was augmented by large quantities of low-speed storage of various types: electromagnetic relays for input and output, hand-operated mechanical switches controlling resistor matrices in the function table units for the storage of arbitrary numerical functions and of program information, and hand-operated mechanical switches and flexible plug-in cables for programming.

A general purpose computer must be programmed for each particular problem. This was done on the ENIAC by hand: by setting mechanical switches of the program controls of each of the computing units used in the problem, interconnecting these program controls with cables, and setting the switches of the function tables. This programming procedure was long, laborious, and hard to check, and while it was being done the machine stood idle. After the ENIAC was completed, von Neumann showed how to convert it into a centrally programmed computer in which all the programming could be done by setting switches on the function tables. Each of the three function table units had a switch storage capacity of 104 entries, each entry consisting of 12 decimal digits and 2 sign digits. However, the pulses used to represent numbers were the same size and shape as the pulses used to stimulate program controls, so that the function table units could also be used to store program information. On von Neumann's scheme the outputs of the function tables were connected to the program controls of the other units through some special equipment and the master programmer, and the switches on the program controls of these units were set. All of this was done in such a way that it need not be changed from problem to problem. Programming was thus reduced to setting switches by hand on the function table units.

In the meantime we were all concerned with the design of much more powerful computers.
As mentioned earlier, the greatest weakness of the ENIAC was the smallness of its high-speed storage capacity, resulting from the technological fact that at the time the design of the ENIAC was fixed the vacuum tube was the only known reliable high-speed storage component. This limitation was overcome and the technology of computers changed abruptly when J. P. Eckert conceived of using an acoustic delay line as a high-speed storage device. Acoustic delay lines made of mercury had been used to delay pulses in wartime radar equipment. Eckert's idea was to feed the output of a mercury delay line (through an amplifier and pulse reshaper) back into its input, thereby storing a large number of pulses in a circulating memory. A circulating memory of, say, 1000 bits could be built with a mercury delay line and a few tubes, in contrast to the ENIAC where a double triode was required for each bit.

In the ENIAC the few numbers being processed were stored in circuits that could be changed both automatically and rapidly; all other numbers and the program information were stored in electromagnetic relays, switches, and cable interconnections. It now became possible to store all this information in mercury delay lines, where it would be quickly and automatically accessible. The ENIAC was a mixed synchronous, asynchronous machine. The use of pulses in the mercury delay lines made it natural to build a completely synchronous machine timed by a central source of pulses called the "clock." Eckert and Mauchly designed circuits capable of operating at a pulse rate of 1 megacycle, 10 times the basic pulse rate of the ENIAC, and gave considerable thought to the design of a mercury delay line machine. Goldstine brought von Neumann in as a consultant, and we all participated in discussions of the logical design of such a machine. It was decided to use the binary system.
Since the delay lines operated serially, the simplest way to process the bits was seriatim. All of this made it possible to build a machine much smaller than the ENIAC and yet much more powerful than the ENIAC. The proposed machine was to be called the EDVAC. It was estimated that it could be built with about 3000 vacuum tubes.

Von Neumann then worked out in considerable detail the logical design of this computer. The result appeared in his First Draft of a Report on the EDVAC, 11 which was never published. Since this report contained the first logical design of an electronic computer in which the program could be stored and modified electronically, I will summarize its contents. Of particular interest to us here are the following features of his design: the separation of logical from circuit design, the comparison of the machine to the human nervous system, the general organization of the machine, and the treatment of programming and control.

Von Neumann based his construction on idealized switch-delay elements derived from the idealized neural elements of McCulloch and Pitts. 12 Each such element has one to three excitatory inputs, possibly one or two inhibitory inputs, a threshold number (1, 2, 3), and a unit delay. It emits a stimulus at time t + 1 if and only if two conditions are satisfied at time t: (1) no inhibitory input is stimulated, (2) the number of excitatory inputs stimulated is at least as great as the threshold number. 13

The use of idealized computing elements has two advantages. First, it enables the designer to separate the logical design from the circuit design of the computer. When designing the ENIAC, we developed logical design rules, but these were inextricably tied in with rules governing circuit design.
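[The firing rule of the switch-delay element stated above is simple enough to simulate directly. The following sketch is mine, not von Neumann's; the function name `fires` is invented for illustration. It applies exactly the two conditions given: a stimulus is emitted at t + 1 only if, at t, no inhibitory input is stimulated and the number of stimulated excitatory inputs is at least the threshold.

```python
# A minimal model of the idealized switch-delay element. The element
# fires at time t+1 iff, at time t, (1) no inhibitory input is stimulated
# and (2) the count of stimulated excitatory inputs meets the threshold.

def fires(excitatory, inhibitory, threshold):
    """excitatory, inhibitory: lists of 0/1 stimulation values at time t.
    Returns 1 if the element emits a stimulus at time t+1, else 0."""
    if any(inhibitory):  # condition (1): any inhibitory pulse vetoes firing
        return 0
    # condition (2): threshold test on the excitatory inputs
    return 1 if sum(excitatory) >= threshold else 0

# With threshold 2, a two-input element acts as an AND gate:
print(fires([1, 1], [], 2))   # 1
print(fires([1, 0], [], 2))   # 0
# With threshold 1 it acts as OR, and an inhibitory pulse vetoes it:
print(fires([0, 1], [], 1))   # 1
print(fires([1, 1], [1], 1))  # 0
```

As the examples suggest, varying the threshold alone yields AND- and OR-like behavior, which is why a small stock of such elements suffices for logical design.]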
With idealized computing elements one can distinguish the purely logical (memory and truth-functional) requirements for a computer from the requirements imposed by the state of technology and ultimately by the physical limitations of the materials and components from which the computer is made. Logical design is the first step; circuit design follows. The elements for logical design should be chosen to correspond roughly with the resultant circuits; that is, the idealization should not be so extreme as to be unrealistic.

Second, the use of idealized computing elements is a step in the direction of a theory of automata. Logical design in terms of these elements can be done with the rigor of mathematical logic, whereas engineering design is necessarily an art and a technique in part. Moreover, this approach facilitates a comparison and contrast between different types of automata elements, in this case, between computer elements on the one hand and neurons on the other. Von Neumann made such comparisons in First Draft of a Report on the EDVAC, noting the differences as well as the similarities.

11 The initials abbreviate "Electronic Discrete Variable Automatic Computer." The machine of this name actually constructed at the Moore School of Electrical Engineering was built after the people mentioned above were no longer connected with the Moore School. The logical design of the Cambridge University EDSAC was based on this report. Wilkes, "Progress in High-Speed Calculating Machine Design" and Automatic Digital Computers.
12 "A Logical Calculus of the Ideas Immanent in Nervous Activity."
13 The threshold elements of "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Collected Works 5.332, are similar, but differ with respect to the operation of inhibitory inputs.
Thus he observed that the circuits of the EDVAC were to be synchronous (timed by a pulse clock) while the nervous system is presumably asynchronous (timed autonomously by the successive reaction times of its own elements). He also noted the analogy between the associative, sensory, and motor neurons of the human nervous system on the one hand, and the central part of the computer, its input, and its output, respectively. This comparison of natural and artificial automata was to become a strong theme of his theory of automata.

The organization of the EDVAC was to be radically different from that of the ENIAC. The ENIAC had a number of basic units, all capable of operating simultaneously, so that many streams of computation could proceed at the same time. In contrast, the proposed EDVAC had only one basic unit of each kind, and it never performed two arithmetical or logical operations simultaneously. These basic units were a high-speed memory M, a central arithmetic unit CA, an outside recording medium R, an input organ I, an output organ O, and a central control CC.

The memory M was to be composed of possibly as many as 256 delay lines each capable of storing 32 words of 32 bits each, together with the switching equipment for connecting a position of M to the rest of the machine. The memory was to store initial conditions and boundary conditions for partial differential equations, arbitrary numerical functions, partial results obtained during a computation, etc., as well as the program (sequence of orders) directing the computation. The outside recording medium R could be composed of punched cards, paper tape, magnetic wire or tape, or photographic film, or combinations thereof. It was to be used for input and output, as well as for auxiliary low-speed storage. The input organ I transferred information from R to M; the output organ O transferred information from M to R. The notation of M was binary; that of R was decimal.
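[Taking the figures above at face value, the capacity of the proposed memory M is easy to work out; the following sketch of mine does the arithmetic, purely for orientation.

```python
# Capacity of the proposed EDVAC memory M, from the figures in the text:
# up to 256 delay lines, each holding 32 words of 32 bits.

lines = 256
words_per_line = 32
bits_per_word = 32

words = lines * words_per_line   # 256 * 32
bits = words * bits_per_word     # words * 32
print(words)  # 8192 words
print(bits)   # 262144 bits
```

That is, at full complement M would hold 8192 words (262,144 bits), against the ENIAC's high-speed store of only 20 ten-digit numbers.]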
The central arithmetic unit CA was to contain some auxiliary registers (one-word delay lines) for holding numbers. Under the direction of the central control CC it was to add, subtract, multiply, divide, compute square roots, perform binary-decimal and decimal-binary conversions, transfer numbers among its registers and between its registers and M, and choose one of two numbers according to the sign of a third number. The last operation was to be used for transfer of control (jumping conditionally) from one order in the program to another. Numbers were processed in CA serially, the least significant bits being treated first, and only one operation was performed at a time.

The first bit of each word was zero for a number, one for an order. Eight bits of an order were allotted to the specification of the operation to be performed and, if a reference to M was required, thirteen bits to an address. A typical sequence would go like this. Suppose an addition order with memory address x was located in position y of M, the addend in the next position y + 1, and the next order to be executed in the next position y + 2. The order at y would go into CC, the addend at y + 1 into CA, and the augend would be found in CA; the sum would be placed in position x of M. The order at position y + 2 would be executed next. Normally orders were taken from the delay lines in sequence, but one order with address z provided for CC to take its next order from memory position z.

When a number was transferred from CA to address w of M, account was taken of the contents of w; if w contained an order (i.e., a word whose first bit was one), then the 13 most significant bits of the result in CA were substituted for the 13 address bits located in w. The addresses of orders could be modified automatically by the machine in this way.
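The typical sequence just described, including the address-substitution rule, can be rendered schematically. This is a hypothetical sketch, not von Neumann's design: words are tagged "order"/"number" in place of the first-bit convention, and the serial bit-by-bit arithmetic is collapsed into ordinary addition.

```python
# Schematic rendering of the order sequence described above: an addition
# order at y, the addend at y + 1, the sum to address x, control to y + 2.

def store(memory, w, value):
    """Transfer a CA result to address w, honoring the substitution rule:
    if w holds an order, only its address field is replaced."""
    kind, contents = memory[w]
    if kind == "order":
        opcode, _old_address = contents
        memory[w] = ("order", (opcode, value))
    else:
        memory[w] = ("number", value)

def step(memory, y, augend):
    """Execute the addition order at y; return the position of the next order."""
    _, (opcode, x) = memory[y]          # CC takes the order at y
    assert opcode == "add"
    _, addend = memory[y + 1]           # the addend at y + 1 goes into CA
    store(memory, x, augend + addend)   # the sum is placed in position x of M
    return y + 2                        # control passes to the order at y + 2

memory = {
    0: ("order", ("add", 5)),   # y = 0: add, result to address x = 5
    1: ("number", 7),           # y + 1: the addend
    2: ("order", ("add", 5)),   # y + 2: the next order
    5: ("number", 0),           # x = 5: destination
}
next_y = step(memory, 0, 3)     # augend 3 already held in CA
# memory[5] is now ("number", 10); next_y is 2
```

Writing a result into a cell that holds an order replaces only the order's address field, which is exactly the automatic address-modification mechanism described above.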
This provision, together with the order for shift of control to an arbitrary memory position z and the power of CA to choose one of two numbers according to the sign of a third, made the machine a fully automatic stored program computer.

At the same time that he worked out the logical design of the EDVAC, von Neumann suggested the development of a high-speed memory incorporating the principle of the iconoscope.[14] Information is placed on the iconoscope by means of light and sensed by an electron beam. Von Neumann suggested that information could also be placed on the inside surface of such a tube by means of an electron beam. The net result would be storage in the form of electrostatic charges on a dielectric plate inside a cathode-ray tube. He predicted that such a memory would prove superior to the delay line memory. It soon became apparent that this was so, and von Neumann turned his attention to an even more powerful computer based on such a memory.

The new computer was to be much faster than any other machine under consideration, mainly for two reasons. First, in an electrostatic storage system each position is immediately accessible, whereas a bit or word stored in a delay line is not accessible until it travels to the end of the line. Second, it was decided to process all (40) bits of a word in parallel, thereby reducing the computation time. The logical design is given in Preliminary Discussion of the Logical Design of an Electronic Computing Instrument.[15] The proposed computer was built at the Institute for Advanced Study by a number of engineers under the direction of Julian Bigelow, and was popularly known as the JONIAC.

[14] First Draft of a Report on the EDVAC, Section 12.8.
While the machine was still under construction, its logical and circuit design was influential on many computers constructed in the United States, including computers at the University of Illinois, Los Alamos National Laboratory, Argonne National Laboratory, Oak Ridge National Laboratory, and the Rand Corporation, as well as some machines produced commercially. The JONIAC played an important role in the development of the hydrogen bomb.[17]

Programming and Flow Diagrams. Von Neumann immediately recognized that these new computers could solve large problems so fast that new programming procedures would be needed to enable mathematicians and programmers to make full use of the powers of these machines. With the order code of the proposed Institute for Advanced Study computer in mind he proceeded to develop new programming methods. The results were presented in the influential series of reports Planning and Coding of Problems for an Electronic Computing Instrument.[18]

One generally begins with a mathematical formulation of a problem and then decides what explicit computational methods he will employ. These methods are almost always highly inductive, involving recursions within recursions many times over. What one has at this stage is a general description of the desired computation, expressed in the ordinary language and mathematical symbolism of the mathematician. The task is now to transform this description into a program expressed in machine language. This is not a simple, straightforward translation task, however, partly because of the generality of the description of the computation and partly because of the nature of recursive procedures.

Recursive procedures, particularly when complicated, are better understood dynamically (in terms of their step by step effects) rather than statically (in terms of the static sequence of symbols defining them). The corresponding aspect of the machine language is that the effect of an order is dependent on the very computation which it itself is helping to direct: whether and how often an order is used and to what memory position it refers. All of these are a function of the whole program and the numbers being processed. Thus a program, though a static sequence of symbols, is usually best understood in terms of its dynamic effects, that is, its control of the actual sequential computational process.

To help bridge this gap between the mathematician's description of the desired computation in his own language and the corresponding program in the machine language, von Neumann invented the flow diagram. A flow diagram is a labeled graph composed of enclosures and points connected by lines.

[15] This was written in collaboration with H. H. Goldstine and the present writer. It was typical of von Neumann that he wanted the patentable material in this report to belong to the public domain, and at his suggestion we all signed a notarized statement to this effect.
[16] It is described by Estrin, "The Electronic Computer at the Institute for Advanced Study." The original plan was to use the memory described by Rajchman in "The Selectron — a Tube for Selective Electrostatic Storage," but the actual memory consisted of cathode-ray tubes operated in the manner described by Williams in "A Cathode-Ray Digit Store."
[17] New York Times, Feb. 9, 1957, p. 19.
[18] Written in collaboration with H. H. Goldstine.
The enclosures are of various kinds: operation boxes (specifying non-recursive fragments of the computation as symbolized in the box), alternative boxes (corresponding to conditional transfer of control orders and being labeled with the condition for transfer), substitution and assertion boxes (indicating the values of the indices of the recursions), storage boxes (giving the contents of crucial parts of the memory at certain stages of the computation), and labeled circles representing the beginning and terminus and interconnections. In executing the program corresponding to a given flow diagram, the computer in effect travels through the flow diagram, starting at the beginning circle, executing sequences of orders described in operation boxes, cycling back or branching off to a new part of the diagram according to the criteria stated in alternative boxes, leaving an exit circle in one part of the graph to enter an entrance circle in another part of the graph, and finally stopping at the terminal circle. Directed lines are used to represent the direction of passage through the graph, converging lines meeting at points of the graph. An undirected line is used to connect a storage box to that point of the graph which corresponds to the stage of computation partly described by the contents of the storage box.

It is unnecessary for the programmer to prepare and code a complete flow diagram for a complicated problem. A problem of any considerable complexity is composed of many subproblems, and flow diagrams and subroutines can be prepared for these in advance. It was planned to code subroutines corresponding to a large number of basic algorithms employed in solving problems on a digital computer: decimal-to-binary and binary-to-decimal conversion, double precision arithmetics, various integration and interpolation methods, meshing and sorting algorithms, etc. These subroutines would be available in a library of tapes.
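The flow-diagram elements enumerated above map naturally onto modern control-flow constructs. A toy rendering (the summation computed here is invented for illustration, not an example from the text; comments name the corresponding diagram elements):

```python
# A trivial flow diagram rendered as modern control flow.

def sum_to_n(n):
    # -- beginning circle --
    i, total = 1, 0          # operation box: initialize the induction variable
    while True:
        total += i           # operation box: one non-recursive fragment
        if i == n:           # alternative box: the condition for transfer
            break            # exit toward the terminal circle
        i += 1               # substitution box: step the index of the recursion
    # -- terminal circle --
    return total
```

The dynamic reading von Neumann emphasized corresponds to tracing this loop step by step rather than reading the static text of the function.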
To solve a particular problem, the programmer would merely write a "combining routine" which would direct the computer to take the proper subroutines from the tape and modify them appropriately to that particular problem. The use of combining routines and a library of subroutines was a first step in the direction of using a computer to help prepare programs for itself.

Still, in this system, everything written by the programmer must be in the clumsy "machine language." A better procedure is to construct a "programmer's language" in which the programmer will write programs, and then to write a translation program in machine language which directs the machine to translate a program written in the programmer's language into a program stated in machine language. The programming language would be close to the natural and mathematical language normally used by mathematicians, scientists, and engineers, and hence would be easy for the programmer to use. This approach is currently being developed under the name of automatic programming. Von Neumann discussed it under the names "short code" (programmer's language) and "complete code" (machine language).[19]

Von Neumann recognized that the idea of automatic programming is a practical application of Turing's proof that there exists a universal computing machine. A Turing machine is a finite automaton with an indefinitely expandable tape. Any general purpose computer, together with an automatic factory which can augment its tape store without limit, is a Turing machine. Turing's universal computer U has this property: for any Turing machine M there is a finite program P such that machine U, operating under the direction of P, will compute the same results as M. That is, U with P simulates (imitates) M.

Automatic programming also involves simulation. Let Uc be a computer which operates with a machine language inconvenient for the programmer to use. The programmer uses his more convenient programmer's language. It is theoretically possible to build a machine which will understand the programmer's language directly; call this hypothetical computer Mp. Let Pt be the program (written in the language of machine Uc) which translates from the programmer's language to the machine language of Uc. Then Uc, operating under the direction of Pt, will compute the same results as Mp. That is, Uc with Pt simulates Mp, which is a special case of Turing's universal U with P simulating M.

Note that two languages are employed inside Uc: a machine language which is used directly and a programmer's language which is used indirectly via the translation routine Pt. Von Neumann referred to these as the "primary" and "secondary" language of the machine, respectively. The primary language is the language used for communication and control within the machine, while the secondary language is the language we humans use to communicate with the machine. Von Neumann suggested that by analogy there may be a primary and secondary language in the human nervous system, and that the primary language is very different from any language we know.

Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics. . . . whatever language the central nervous system is using, it is characterized by less logical and arithmetical depth than what we are normally used to. Thus logics and mathematics in the central nervous system, when viewed as languages, must be structurally essentially different from those languages to which our common experience refers. . . . when we talk mathematics, we may be discussing a secondary language, built on the primary language truly used by the central nervous system.[20]

He thought that the primary language of the nervous system was statistical in character.

[19] The Computer and the Brain, pp. 70-73.
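The simulation relation described above, Uc running the translation program Pt to imitate the hypothetical machine Mp, can be sketched with toy languages (the opcodes and phrases below are invented for this illustration):

```python
# Toy illustration of Uc (with translation program Pt) simulating Mp.
# Both the "machine language" and the "programmer's language" are invented.

def run_uc(program, acc=0):
    """Uc: executes its machine language directly (opcode, operand pairs)."""
    for opcode, operand in program:
        if opcode == "ADD":
            acc += operand
        elif opcode == "MUL":
            acc *= operand
    return acc

def translate(source):
    """Pt: renders the programmer's language into Uc's machine language."""
    table = {"multiply by": "MUL", "add": "ADD"}
    program = []
    for line in source:
        for phrase, opcode in table.items():
            if line.startswith(phrase):
                program.append((opcode, int(line[len(phrase):])))
                break
    return program

def run_mp(source, acc=0):
    """Mp: the hypothetical machine understanding the programmer's
    language directly."""
    for line in source:
        if line.startswith("multiply by"):
            acc *= int(line[len("multiply by"):])
        elif line.startswith("add"):
            acc += int(line[len("add"):])
    return acc

source = ["add 4", "multiply by 3"]
# Uc operating under the direction of Pt computes the same result as Mp:
assert run_uc(translate(source)) == run_mp(source) == 12
```

The equality asserted at the end is the special case of Turing's universality claim: Uc with Pt simulates Mp.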
Hence his work on probabilistic logics was relevant to this language. See his discussion of probabilistic logics and reliability in the Third and Fourth Lectures of Part I below and in "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components."

Computer Circuits. From the beginning von Neumann had an interest in the circuits and components of electronic digital computers. He analyzed the basic physical and chemical properties of matter with the purpose of developing improved computer components.[21] In his lectures on automata theory he compared natural and artificial components with respect to speed, size, reliability, and energy dissipation, and he computed the thermodynamic minimum of energy required for a binary decision. See the Fourth Lecture of Part I of the present volume.

His search for physical phenomena and effects that could be used for computing led to his invention of a new component. This is a subharmonic generator which is driven by an excitation (power) source at frequency nf (n = 2, 3, 4, ...) and which oscillates at the subharmonic frequency f.[22] The subharmonic generator circuit incorporates an inductance and capacitance circuit tuned to the frequency f. Either the capacitance or inductance is non-linear, and its value varies periodically under the influence of the exciting signal (of frequency nf). The oscillation at frequency f can occur in any of n distinct phases. Each oscillation phase is highly stable when established, but, when the oscillation begins, the choice of phase can easily be controlled by a small input signal of frequency f and of the desired phase.

[20] The Computer and the Brain, pp. 79-82.
Modulating (turning off and on) the exciting source (of frequency nf) with a square wave (clock signal) of much lower frequency produces alternate passive and active periods, and an input of frequency f can select one of the n phases of oscillation as the exciting signal appears.

To transfer the phase state of one subharmonic generator (a transmitter) to another (a receiver), the transmitter and receiver are coupled through a transformer. The square-wave modulations into transmitter and receiver are of the same frequency but of different phase, so that the transmitter is still on while the receiver is just coming on. As a result the receiver begins oscillating at frequency f in phase with the transmitter.

[21] Most of his ideas in this area were only discussed with others and never published. A brief reference occurs in Preliminary Discussion of the Logical Design of an Electronic Computing Instrument, Collected Works 5.39. Booth, "The Future of Automatic Digital Computers," p. 341, mentions a superconducting storage element discussed with von Neumann in 1947. Von Neumann also did some early work on the MASER. See Collected Works 5.420, Scientific American (February, 1963) p. 12, and Scientific American (April, 1963) pp. 14-15.
[22] "Non-Linear Capacitance or Inductance Switching, Amplifying and Memory Devices." Von Neumann's ideas are also described by Wigington, "A New Concept in Computing." The parametron, invented independently by E. Goto, embodies essentially the same idea, but is far different in the suggested speed of implementation. See Goto, "The Parametron, a Digital Computing Element which Utilizes Parametric Oscillation." The highest frequencies Goto reports are an exciting frequency (2f) of 6 × 10^6 cycles per second and a clock frequency of 10^5 cycles. According to Wigington, op. cit., von Neumann estimated that an exciting frequency (2f) of 5 × 10^10 and a clock rate of 10^9 were feasible.
The receiver can later transmit its state to another subharmonic generator, and so on down the line. One may use three clock signals, all of the same frequency but of three different phases, and, by exciting interconnected generators with the proper clock signals, transfer information around a system of generators. Each such generator then has an input and an output operating at frequency f, besides the exciting input of frequency nf; the phasing of the two different clock signals to two interconnected generators determines which generator is the receiver and which is the transmitter.

The output signal (at f) has much more power than is required for the input signal (at f) to control the phase of the oscillation, and so the subharmonic generator is an amplifier at frequency f, the power for amplification coming from the exciting signal of frequency nf. Since the oscillation of the subharmonic generator is stable and continues after the subharmonic input from another generator terminates, the device clearly has memory capacity.

Switching can also be done with subharmonic generators, in the following way. Let n = 2; i.e., let there be two distinct phases of subharmonic oscillation at frequency f, so that the system is binary. Connect the outputs of three transmitting generators to the primary of a transformer so that the voltages of these outputs add; connect a receiver generator to the secondary of this transformer. The voltage of the transformer secondary will then have the phase of the majority of the transmitting generators, so that the receiving generator will oscillate in this phase. This arrangement realizes a majority element, that is, a three-input switch with delay whose output state is "1" if and only if two or more inputs are in state "1".[23] A negation element may be realized by connecting the output of one generator to the input of another and reversing the direction of the transformer winding.
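In boolean form, writing the two oscillation phases as 0 and 1, the elements just described behave as follows. The AND/OR constructions at the end are the standard way of seeing that majority plus negation plus constants suffice for all switching (illustrative code, not from the text):

```python
# The switching behavior described above, in boolean form.  Phases are
# bits; the transformer's voltage addition becomes a majority count.

def majority(a, b, c):
    """Three-input majority element: 1 iff two or more inputs are 1."""
    return 1 if a + b + c >= 2 else 0

def negation(a):
    """Negation element (reversed transformer winding)."""
    return 1 - a

# Standard constructions: with the constant sources "0" and "1",
# majority yields AND and OR, making the element set complete.
def and_element(a, b):
    return majority(a, b, 0)

def or_element(a, b):
    return majority(a, b, 1)
```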
The constants "0" and "1" are realized by sources of the two different phases of the signal of frequency f. The majority element, negation element, and the constant sources "0" and "1" are sufficient to do all computing, so that the central part of a computer can be completely constructed from subharmonic generators.[24]

[23] "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Collected Works 5.339.
[24] Many computers are so constructed. See Goto, op. cit.

Von Neumann's Theory of Automata

Introduction. On reviewing the preceding sketch of von Neumann's research accomplishments, one is immediately struck by the tremendous combination of breadth and depth revealed in those accomplishments. Particularly notable is the extent to which von Neumann's achievements range from the purely theoretical to the extremely practical. It should be added in the latter connection that he was among the first to recognize and promote the tremendous potentialities in computers for technological revolution and the prediction and control of man's environment, such as the weather.

Von Neumann was able to make substantial contributions to so many different fields because he possessed a rare combination of different abilities along with wide interests. His quick understanding and powerful memory enabled him to absorb, organize, retain, and use large quantities of information. His wide interests led him to work in and keep contact with many areas. He was a virtuoso at solving difficult problems of all kinds and at analyzing his way to the essence of any situation.

This wide range of interests and abilities was one of von Neumann's great strengths as a mathematician and made him an applied mathematician par excellence. He was familiar with the actual problems of the natural and engineering sciences, on the one hand, and the abstract methods of pure mathematics on the other.
He was rare among mathematicians in his ability to communicate with scientists and engineers. This combination of theory and practice was deliberately cultivated by von Neumann. He was a careful student of the history and nature of scientific method and its relation to pure mathematics[25] and believed that mathematics must get its inspiration from the empirical sciences.

Given his background and type of mind, it was natural for von Neumann to begin to construct a general theory of computers. Being aware of the important similarities between computers and natural organisms, and of the heuristic advantages in comparing such different but related systems, he sought a theory that would cover them both. He called his proposed systematic theory the "theory of automata." This theory of automata was to be a coherent body of concepts and principles concerning the structure and organization of both natural and artificial systems, the role of language and information in such systems, and the programming and control of such systems. Von Neumann discussed the general nature of automata theory at several places in Part I and in Chapter 1 of Part II of the present volume.

Von Neumann's early work on computer design and programming led him to recognize that mathematical logic would play a strong role in the new theory of automata. But for reasons to be mentioned later, he thought that mathematical logic in its present form, though useful in treating automata, is not adequate to serve as "the" logic of automata. Instead, he believed that a new logic of automata will arise which will strongly resemble and interconnect with probability theory, thermodynamics, and information theory.

[25] See Chapter 1 of Theory of Games and Economic Behavior; "The Mathematician," Collected Works 1.1-9; and "Method in the Physical Sciences," Collected Works 6.491-498.
It is obvious from all this that von Neumann's theory of automata will, in the beginning at least, be highly interdisciplinary.

Unfortunately, because of his premature death, von Neumann was unable to put in final form any of the research he was doing in automata theory. In his last work on this subject he said that "it would be very satisfactory if one could talk about a 'theory' of such automata. Regrettably, what at this moment exists . . . can as yet be described only as an imperfectly articulated and hardly formalized 'body of experience'."[26] Von Neumann's accomplishments in this area were nevertheless substantial. He outlined the general nature of automata theory: its structure, its materials, some of its problems, some of its applications, and the form of its mathematics. He began a comparative study of artificial and natural automata. Finally, he formulated and partially answered two basic questions of automata theory: How can reliable systems be constructed from unreliable components? What kind of logical organization is sufficient for an automaton to be able to reproduce itself? The first of these questions is discussed in his "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." The second question is discussed in the Fifth Lecture of Part I and in Part II of the present volume.

I do not know how von Neumann was led to these two problems, but on the basis of his interests and what he has written it is plausible that they arose out of his actual work with computers in the following way. The new electronic computers were revolutionary because in contrast to earlier computing systems (humans, mechanical and electromechanical machines, and combinations thereof) they could do large quantities of computation automatically and rapidly. The advances through the ENIAC, the proposed EDVAC, and the Institute for Advanced Study computer were all big steps in the direction of more powerful computers.
His interest in solving non-linear partial differential equations in general, and in the equations for predicting the weather in particular, would naturally lead him to desire ever more powerful machines and to look for and try to remove the basic limitations blocking the construction of such machines. As a consultant for government and industry he was very influential in promoting the design and construction of larger computers.

[26] The Computer and the Brain, p. 2.

Von Neumann compared the best computers that could be built at the time with the most intelligent natural organisms and concluded that there were three fundamental factors limiting the engineer's ability to build really powerful computers: the size of the available components, the reliability of these components, and a lack of a theory of the logical organization of complicated systems of computing elements. Von Neumann's work on componentry was directed toward the first limitation, and his results on reliability and self-reproduction each contribute toward removing both the second and the third limitations. In his "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," he gave two methods of overcoming the unreliability of the components, not by making them more reliable, but by organizing them so that the reliability of the whole computer is greater than the reliability of its parts. He regarded his work on probabilistic logics as a step in the direction of the new logic of automata.

His work on self-reproduction also belongs to the theory of complicated automata. He felt that there are qualitatively new principles involved in systems of great complexity and searched for these principles in the phenomenon of self-reproduction, which clearly depends on complexity.
It is also to be expected that because of the close relation of self-reproduction to self-repair, results on self-reproduction would help solve the reliability problem. Thus von Neumann was especially interested in complex automata; he wanted a theory of the logical organization of complicated systems of computing elements. His questions about reliability and self-reproduction are particularly germane to complex automata.

Two further points are relevant. First, von Neumann believed that in starting a new science one should begin with problems that can be described clearly, even though they concern everyday phenomena and lead to well known results, for the rigorous theory developed to explain these phenomena can provide a base for further advances.[27] His problems of reliability and self-reproduction are of this kind. Second, von Neumann believed that the lack of an adequate theory of complicated automata is an important practical barrier to building more powerful machines. He explicitly stated that until an adequate theory of automata exists there is a limit in the complexity and capacity of the automata we can fabricate.[28]

[27] Theory of Games and Economic Behavior, Secs. 1.3 and 1.4.
[28] "The General and Logical Theory of Automata," Collected Works 5.302-306.

Natural and Artificial Automata. The scope of the theory of automata and its interdisciplinary character are revealed by a consideration of the two main types of automata: the artificial and the natural. Analog and digital computers are the most important kinds of artificial automata, but other man-made systems for the communication and processing of information are also included, for example, telephone and radio systems. Natural automata include nervous systems, self-reproductive and self-repairing systems, and the evolutionary and adaptive aspects of organisms.
Automata theory clearly overlaps communications and control engineering on the one hand, and biology on the other. In fact, artificial and natural automata are so broadly defined that one can legitimately wonder what keeps automata theory from embracing both these subjects. Von Neumann never discussed this question, but there are limits to automata theory implicit in what he said. Automata theory differs from both subjects in the central role played by mathematical logic and digital computers. Though it has important engineering applications, it itself is a theoretical discipline rather than a practical one. Finally, automata theory differs from the biological sciences in its concentration on problems of organization, structure, language, information, and control.

Automata theory seeks general principles of organization, structure, language, information, and control. Many of these principles are applicable to both natural and artificial systems, and so a comparative study of these two types of automata is a good starting point. Their similarities and differences should be described and explained. Mathematical principles applicable to both types of automata should be developed. Thus truth-functional logic and delay logic apply to both computer components and neurons, as does von Neumann's probabilistic logic. See the Second and Third Lectures of Part I of the present volume. Similarly, von Neumann's logical design of a self-reproducing cellular automaton provides a connecting link between natural organisms and digital computers. There is a striking analogy with the theory of games at this point. Economic systems are natural; games are artificial. The theory of games contains the mathematics common to both economic systems and games,[29] just as automata theory contains the mathematics common to both natural and artificial automata.

Von Neumann himself devoted considerable attention to the comparison of natural and artificial automata.[30] Scientific knowledge of the automata aspects of natural organisms has advanced very rapidly in recent years, and so there is now a much more detailed basis for the comparison than at the time von Neumann wrote, but his general approach and conclusions are nevertheless of interest. We will outline his reflections under the following headings: (1) the analog-digital distinction, (2) the physical and biological materials used for components, (3) complexity, (4) logical organization, and (5) reliability.

(1) Von Neumann discussed the analog-digital distinction at length and found it to be an illuminating guide in his examination of natural automata. See the First and Fourth Lectures of Part I. His most general conclusion was that natural organisms are mixed systems, involving both analog and digital processes. There are many examples, of which two will suffice here. Truth-functional logic is applicable to neurons as a first approximation, but such neural phenomena as refraction and spatial summation are continuous rather than discrete. In complicated organisms digital operations often alternate with analog processes. For example, the genes are digital, while the enzymes they control function analogically. Influenced by his knowledge of natural automata von Neumann proposed a combined analog-digital computing scheme.[31] This is a good example of the effect of the study of natural systems on the design of artificial ones.

(2) Von Neumann compared the components in existing natural and artificial automata with respect to size, speed, energy requirements, and reliability, and he related these differences to such factors as the stability of materials and the organization of automata. Computer components are much larger and require greater energy than neurons, though this is compensated for in part by their much greater speed.

[29] Theory of Games and Economic Behavior, Secs. 1.1.2 and 4.1.3.
These differences influence the organization of the system: natural automata are more parallel in operation, digital computers more serial. Part of the difference in size between a vacuum tube and a neuron can be accounted for in terms of the mechanical stability of the materials used. It is relatively easy to injure a vacuum tube and difficult to repair it. In contrast, the neuron membrane when injured is able to restore itself. Von Neumann calculated the thermodynamical minimum of energy that must be dissipated by a computing element and concluded that in theory computing elements could be of the order of 10¹⁰ times more efficient in the use of energy than neurons. See the Fourth Lecture of Part I. His comparison of natural and artificial components no doubt influenced his work on computer components.

(3) Man is, inter alia, a natural automaton which is obviously very much more complex than any artificial automaton he has so far constructed. Because of this complexity he understands the details of his own logical design much less than that of the largest computer he has built. Von Neumann thought that the chief problems of automata theory center around the concept of complexity. This very concept needs rigorous definition. Automata theory should relate the logical organization of complex automata to their behavior.

³⁰ Norbert Wiener also made valuable comparisons of natural and artificial systems in his Cybernetics, though in a somewhat different way. The two men were aware of each other's work; see Cybernetics (particularly the "Introduction") and von Neumann's review of it.
³¹ Sec. 12 of "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Collected Works 5.372-377.
A theory which did this would enable us to develop the logical design of artificial automata capable of carrying out some of the most difficult and advanced functions performed by humans, as well as many other complex functions that humans cannot perform, such as solving large systems of non-linear partial differential equations. The problem of reliability is especially crucial in complex systems. Von Neumann speculated that extremely complex systems involve new principles. He thought, for example, that below a certain level, complexity is degenerative, and self-reproduction is impossible. He suggested that, generally speaking, in the case of simple automata a symbolic description of the behavior of an automaton is simpler than the automaton itself, but that in the case of exceedingly complex automata the automaton is simpler than a symbolic description of its behavior. See the Second Lecture of Part I.

(4) In discussing the relative speeds of natural and artificial components we noted that natural automata tend to be more parallel in operation and artificial automata tend to be more serial in operation. When planning an automaton or a computation, one can choose somewhat the extent to which it is parallel or serial, but there are definite limits to this; e.g., in a serial computation a later operation may depend on an earlier one and hence cannot proceed simultaneously with it. Moreover, this choice affects other aspects of the machine, particularly the memory requirements, for a datum that is to be operated on later must be stored until it is needed. The memory of an artificial automaton is generally organized in a hierarchy, different levels of the hierarchy operating at different speeds. In a typical computer there are high-speed electronic registers, slower speed magnetic cores, and much slower magnetic tape units. In addition there is the wiring of the machine itself, which provides the unalterable organization of the system.
Von Neumann discussed machine memory hierarchies, and said that we should look for similar hierarchies in natural automata. Pulses circulating in neuron cycles, the change of neural thresholds with use, the organization of the nervous system, and the coding of the genes together constitute such a hierarchy.

The organization of an automaton is to be distinguished from the organization of a particular computation in that automaton. When both are taken into account, the difference between natural and artificial automata with respect to serial vs. parallel operation seems to be accentuated. Von Neumann spoke in this connection of the "logical depth" of a computation.³² A computation consists of a large number of basic logical steps (switching and delay), the result of each step depending on certain prior steps. We will call any sequence of steps, each of which depends critically on its predecessor, a "calculation chain." The logical depth of a computation is the number of logical steps in its longest calculation chain. Because of their great speed, digital computers are used to perform computations of exceedingly great logical depth. For the final answer to be useful its error must be kept small, and this results in a very strong reliability requirement on each logical step. This brings us to von Neumann's fifth and last main point of comparison between natural and artificial automata.

(5) The first electronic digital computers had little equipment for the automatic detection of failure. They were designed and wired with extreme care and were constructed of components especially selected for great reliability. Programs were written with care and laboriously checked. Diagnostic programs were used to detect machine errors, and various procedures (e.g., differencing) were employed to check the computed results.
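[The notions of "calculation chain" and "logical depth" defined above can be made concrete with a small sketch. The dependency-graph encoding and the function name are illustrative, not from the text: logical depth is simply the longest path through the dependency graph of basic steps.]

```python
# Logical depth of a computation, in the sense defined above: the
# number of steps in the longest "calculation chain", i.e. the longest
# path through the dependency graph of basic logical steps.
# The encoding (step -> list of prerequisite steps) is illustrative.
from functools import lru_cache

def logical_depth(deps):
    """deps: dict mapping each step to the steps it depends on."""
    @lru_cache(maxsize=None)
    def depth(step):
        # A step with no prerequisites has depth 1; otherwise it sits
        # one step beyond its deepest prerequisite.
        return 1 + max((depth(p) for p in deps.get(step, [])), default=0)
    return max(depth(s) for s in deps)

# Four steps where each depends on the previous: chain a,b,c,d.
serial = {"a": [], "b": ["a"], "c": ["b"], "d": ["c"]}
# The same four steps, but b and c depend only on a, so they can
# proceed simultaneously: greater width, smaller depth.
parallel = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

print(logical_depth(serial))    # 4
print(logical_depth(parallel))  # 3
```

[The serial and parallel examples perform the same four steps; only the depth differs, which is the distinction drawn above between the length and the width of a computation.]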
Thus these machines were designed, built, and used in such a way that, hopefully, a single malfunction would be noted before a second occurred. The machine would then be stopped and the fault isolated by an analytic procedure. As von Neumann pointed out in the Fourth Lecture of Part I, this method of handling errors would obviously not be satisfactory for extremely complicated automata. The very design and construction of such large automata would result in many mistakes. Moreover, the large number of components would result in a very short mean free path between errors and make localization of failures too difficult. Natural automata are clearly superior to artificial ones in this regard, for they have strong powers of self-diagnosis and self-repair. For example, the human brain can suffer great damage from mechanical injury or disease and still continue to function remarkably well. Natural and artificial automata are thus organized in very different ways for protection against errors. Von Neumann's work on reliability serves to link these two types of automata in this respect.

Mathematics of Automata Theory. Von Neumann intended the theory of automata to be highly mathematical and logical. The study of actual automata, both natural and artificial, and of their operation and interaction, provides the empirical source of this formal component of automata theory. This is in keeping with von Neumann's belief that mathematics derives inspiration and ideas from empirical subject matter. The close connection between mathematical logic and automata was well known to von Neumann when he wrote on automata theory.

³² The Computer and the Brain, pp. 27, 79. He also spoke of the logical depth of a language. See ibid., pp. 81-82 and the discussion of the primary language of the nervous system at p. 15 above.
Kurt Gödel had reduced mathematical logic to computation theory by showing that the fundamental notions of logic (such as well-formed formula, axiom, rule of inference, proof) are essentially recursive (effective).³³ Recursive functions are those functions which can be computed on Turing machines, and so mathematical logic may be treated from the point of view of automata.³⁴ Conversely, mathematical logic may be applied to the analysis and synthesis of automata. The logical organization of an automaton can be represented by a structure of idealized switch-delay elements and then translated into logical symbolism. See the Second Lecture of Part I.

Because of the intimate connection between automata and logic, logic will be at the heart of the mathematics of automata theory. Indeed, von Neumann often spoke of a "logical theory of automata" rather than merely a "theory of automata." Nevertheless, he felt that the mathematics of automata theory would also have some formal characteristics very different from those of logic. Roughly speaking, mathematics can be divided into the discrete and the continuous. Logic is a branch of discrete mathematics and is highly combinatorial. Von Neumann thought that automata mathematics should be closer to the continuous and should draw heavily on analysis. He thought that the specific problems of automata theory require this, and he felt that there is a general advantage in an analytical as opposed to a combinatorial approach in mathematics.

³³ "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I." The notion of theoremhood is not in general recursive, but the theorems of a formal language are always recursively enumerable.
³⁴ Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem" and "Computability and λ-Definability."

There is an important topic in the theory of automata that requires
a more analytical treatment than is usual in logic. Automata theory must encompass the probability of failure of a component. Mathematical logic treats only the perfect or deterministic operation of idealized switch-delay elements; it provides no theoretical treatment of error. Hence in using mathematical logic for actual design one must supplement it by considerations that lie outside the subject itself. Von Neumann wanted a probabilistic logic which would handle component malfunction as an essential and integral part of automata operation. While probability theory is strongly combinatorial, it also makes important contacts with analysis.

Including the probability of failure in the logic of automata forces one to consider the size of a computation. The usual approach in mathematical logic is to consider whether or not something can be accomplished by an automaton in a finite number of steps, regardless of how large the number is. But on any realistic assumption about component failure, the larger the calculation the more likely the machine will err during it, and the less likely the result will be correct. This concern for the size of a computation also arises from our practical interests in automata. Computers are built in order to produce certain results in the available time. Since many of the functions we desire computers to perform are now performed by humans, it should be kept in mind in this connection that man is a finite automaton, not a Turing machine. Von Neumann did not suggest how to construct a theory of the sizes of computations. Presumably this theory would be based on a quantitative notion of "amount of computation" which would take into account both the length of a calculation (the "logical depth" of p. 24 above) and its width (the amount of parallelism in it). Thus a theory of the quantity of computation and the likelihood of its being wrong must include continuous as well as discrete mathematics.
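[The point that the size of a computation forces reliability into the logic can be sketched numerically. Under the simplest assumption, not made explicit in the text, that each basic step fails independently with probability p, an n-step calculation is entirely error-free only with probability (1 - p)^n; the figures below are illustrative.]

```python
# If each basic step fails independently with probability p, an
# n-step computation runs entirely error-free with probability
# (1 - p)**n. The numbers are illustrative, not von Neumann's.

def p_all_correct(p_step_error, n_steps):
    return (1.0 - p_step_error) ** n_steps

# A per-step error rate of one in a million looks excellent in
# isolation, but not over a computation of great logical depth:
print(p_all_correct(1e-6, 1_000))        # ~0.999
print(p_all_correct(1e-6, 10_000_000))   # ~0.000045

# Conversely, to finish n steps correctly with probability q, each
# step may err with probability at most 1 - q**(1/n):
n, q = 10_000_000, 0.99
print(1 - q ** (1 / n))                  # ~1e-9 per step
```

[This is why the usual "finite number of steps, however large" standard of mathematical logic is inadequate here: the admissible per-step error rate shrinks roughly in proportion to the size of the computation.]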
All of this will lead to theories which are much less rigidly of an all-or-none nature than past and present formal logic. They will be of a much less combinatorial, and much more analytical, character. In fact, there are numerous indications to make us believe that this new system of formal logic will move closer to another discipline which has been little linked in the past with logic. This is thermodynamics, primarily in the form it was received from Boltzmann, and is that part of theoretical physics which comes nearest in some of its aspects to manipulating and measuring information. Its techniques are indeed much more analytical than combinatorial, which again illustrates the point that I have been trying to make above.³⁵

Von Neumann also held that there is a methodological advantage in employing analysis in the mathematics of automata.

Everybody who has worked in formal logic will confirm that it is one of the technically most refractory parts of mathematics. The reason for this is that it deals with rigid, all-or-none concepts, and has very little contact with the continuous concept of the real or of the complex number, that is, with mathematical analysis. Yet analysis is the technically most successful and best-elaborated part of mathematics. Thus formal logic is, by the nature of its approach, cut off from the best cultivated portions of mathematics, and forced onto the most difficult part of the mathematical terrain, into combinatorics.

This comment is particularly significant since von Neumann made important contributions to discrete mathematics. In Theory of Games and Economic Behavior he stated that the mathematics to be developed for social theory should emphasize combinatorics and set theory rather than differential equations.

³⁵ "The General and Logical Theory of Automata," Collected Works 5.304. The next quotation is from the same article, 5.303.
In his own work in automata theory von Neumann moved from the discrete toward the continuous. His probabilistic logic is an example. After presenting this logic, he proposed a mixed analog-digital computing system closely related to it.³⁷ His first models of self-reproduction were discrete, but he hoped later to develop a continuous model of self-reproduction. See Section 1.1.2.3 of Part II of the present volume.

We noted before that von Neumann often referred to his theory of automata as a "logical theory of automata." He also called it "theory of automata and information" and sometimes just "theory of information," indicating the strong role that he expected information theory to play in the subject. He divided the theory of control and information into two parts: a strict part and a probabilistic part. The rigorous or strict part includes mathematical logic as extended to cover finite automata and Turing machines. The statistical or probabilistic part includes the work of Shannon on information theory³⁸ and von Neumann's probabilistic logic. Von Neumann regarded his probabilistic logic as an extension of rigorous logic.

There is a close connection between information theory and thermodynamics, both subjects employing the concept of probability in very much the same way. See the Third Lecture of Part I, especially the quotation from von Neumann's review of Wiener's Cybernetics. Von Neumann mentioned two further connections between thermodynamics and automata theory.

³⁶ Sec. 4.8.3. Cf. Sec. 1.2.5.
³⁷ Sec. 12 of "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Collected Works 5.372-377.
³⁸ "A Mathematical Theory of Communication."
First, he found an analog of thermodynamic degeneration in the theory of self-reproducing automata: below a certain minimum level, complexity and degree of organization are degenerative, but above that level they are not degenerative and may even increase. Second, he discussed the thermodynamic aspect of the concept of balance in computing machine design. The efficiency of a computer depends on the proper balance of its different parts with respect to speed and size. For example, in the memory hierarchy the different kinds of memory (e.g., transistor, core, tape) should be matched to one another in size and speed. A computer in which the arithmetic unit is too fast for the memory, or the memory is too small, is like a heat engine which is inefficient because large temperature differences exist between two parts of it. The efficiency of a computer must be defined relative to its environment (i.e., the problems it is to solve), just as the efficiency of a heat engine depends on its environment. These problems of balance and matching are handled empirically by engineers. Von Neumann wanted a quantitative theory of balance akin to thermodynamics.

To conclude, von Neumann thought that the mathematics of automata theory should start with mathematical logic and move toward analysis, probability theory, and thermodynamics. When it is developed, the theory of automata will enable us to understand automata of great complexity, in particular, the human nervous system. Mathematical reasoning is performed by the human nervous system, and the "primary" language in which mathematical reasoning takes place is analogous to the primary language of a computing machine (p. 15 above). It is thus quite possible that automata theory will affect logic and our fundamental concepts of mathematics.

I suspect that a deeper mathematical study of the nervous system . . . will affect our understanding of the aspects of mathematics itself that are involved.
In fact, it may alter the way in which we look on mathematics and logics proper.³⁹

Now logic lies at the foundation of mathematics; therefore, if von Neumann's suggestion is true, automata theory will move full circle: starting at the foundation of mathematics and ending there.

Arthur W. Burks

³⁹ The Computer and the Brain, p. 2; cf. pp. 70-82. See also Ulam, "John von Neumann, 1903-1957," p. 12.

PART ONE: THEORY AND ORGANIZATION OF COMPLICATED AUTOMATA

EDITORIAL NOTE

[The editor's writing is in brackets. The reconstructed edition of von Neumann's work is not bracketed, but much of the unbracketed text is heavily edited. See the Preface.]

First Lecture: COMPUTING MACHINES IN GENERAL

Conceptual and numerical methods in mathematics. The role of the latter in applied mathematics and in mathematical physics. Their role in pure mathematics. The situation in analysis. Numerical procedures as heuristic tools.

Various forms of the numerical approach: Analog and digital. The analog procedure: The use of the physical experiment as a substitute for computing. Analog computing machines. The digital procedure: Manual computing. Simple machines. Fully automatic computing.

The present status of computing machines. Present roles of analog and digital machines. Questions of speed, programming, and precision.

The concept of an elementary operation in a computing machine. Its role in analog machines and in digital machines. Observations on analog componentry. Observations on digital componentry. The relay organ. Main forms: The electro-mechanical relay. The vacuum tube. Other possible relay organs.

Measurement of the length or complexity of a numerical calculation. Logical and arithmetical operations. Linear and non-linear arithmetical operations. The role of the number of multiplications. Stability of the statistical characteristics of various parts of mathematics. The special role of analysis. Various characteristic levels of length or complexity.
Characteristic problem lengths for automatic digital machines. Precision requirements.

Memory requirements: Measurement of memory capacity. The decisive characteristics of a memory: Access time and capacity. Reasons for a hierarchic organization of memory. Actual memory requirements of an automatic digital machine. Input-output: Main available media.

The concept of balance: Speed balance of various components. Balance between memory capacity in various stages of the hierarchy and speeds. Balance between speed and precision. Balance between speed, memory capacity, and programming capacity. Thermodynamical aspects of the concept of balance. Thermodynamical aspects of the memory capacity. Need for a quantitative theory, contrasted with the present empirical procedures. Preliminary remarks on reliability and errors.

Ladies and gentlemen, I wish to thank you for your very friendly welcome on my five occasions to talk to you, and I hope that I will be able to offer something for the variety of interests which are represented here. I will talk about automata: the behavior of very complicated automata and the very specific difficulties caused by high complication. I shall discuss briefly the very plausible, very obvious analogies which come to mind between artificial automata and organisms, which within a certain limit of their functioning are natural automata. We must consider the similarities, the dissimilarities, the extent to which the dissimilarities are due to our skill or clumsiness (the latter being the more normal phenomenon), and the extent to which these dissimilarities are really matters of principle. Today I will talk chiefly about artificial automata, and specifically about one variety of artificial automata, namely, computing machines. I will talk about their role in the near past and present and about what to expect from them in the future.
I am talking about computing machines partly because my interests in the subject of automata are mathematical and, from the mathematical point of view, computing machines are the most interesting and most critical automata. But quite apart from this ex parte argument from the mathematical side, there is the important question of automata of very, very high complexity. Of all automata of high complexity, computing machines are the ones which we have the best chance of understanding. In the case of computing machines the complications can be very high, and yet they pertain to an object which is primarily mathematical and which we understand better than we understand most natural objects. Therefore, by considering computing machines, we can discuss what we know and what we do not know, what is right and what is wrong, and what the limitations are, much more clearly than if we discussed other types of automata. You will see that our discussion of complex automata is very far from perfect and that one of our main conclusions is that we need very badly a theory which we do not at this moment possess.

Let me first say something from the properly mathematical side, namely, the role which computing machinery has played or might play in mathematics and in adjacent subjects. Speaking of numerical computing in general, it is not necessary to discuss what role it can play in many applications of mathematical methods. It's perfectly clear that numerical computing plays a large role in engineering. If more computing and faster computing could be done, one would have even more uses for computing in engineering.

Let me come now to the less obvious things. In physics, particularly in theoretical physics, it is clear that mathematical methods play a great role and that to a large extent they are of the same variety as pure mathematics, that is, they are abstract and analytical.
However, effective computing plays a role in physics which is larger than the role one would expect it to have in mathematics proper. For instance, there are large areas of modern quantum theory in which effective iterative computing could play a large role. A considerable segment of chemistry could be moved from the laboratory field into the purely theoretical and mathematical field if one could integrate the applicable equations of quantum theory. Quantum mechanics and chemistry offer a continuous spectrum of problems of increasing difficulty and increasing complexity, treating, for example, atoms with increasing numbers of electrons and molecules with increasing numbers of valence electrons. Almost any improvement in our standards of computing would open important new areas of application and would make new areas of chemistry accessible to strictly theoretical methods.

However, I will not go into great detail on this subject either but would like to give you a brief indication of what role this kind of computing might play in mathematics proper, that is, in pure mathematics. In pure mathematics the really powerful methods are only effective when one already has some intuitive connection with the subject, when one already has, before a proof has been carried out, some intuitive insight, some expectation which, in a majority of cases, proves to be right. In this case one is already ahead of the game and suspects the direction in which the result lies. A very great difficulty in any new kind of mathematics is that there is a vicious circle: you are at a terrible disadvantage in applying the proper pure mathematical methods unless you already have a reasonably intuitive heuristic relation to the subject and unless you have had some substantive mathematical successes in it already. In the early stages of any discipline this is an enormous difficulty; progress has an autocatalytic feature.
This difficulty may be overcome by some exceptionally lucky or exceptionally ingenious performance, but there are several outstanding instances where this has failed to happen for two, three, or four generations. One of these areas which has been conspicuous for some time is the area of non-linear problems.

The great successes of the nineteenth century, as well as of modern analysis, were in linear problems. We have much less experience with non-linear problems, and we can say practically nothing about the majority of non-linear partial differential equations. We have never been successful with these at all, and therefore we have absolutely no idea what the difficulties are. In those few domains where some progress had been made it was usually for different reasons, for instance, because some very usual physical phenomenon was tied up with the mathematical problems and therefore one had a non-mathematical, physical approach. In these domains scientists discovered the most surprising types of singularities, which have absolutely no analogs in the linear domain we know so well, that is, absolutely no analogs in those parts of mathematical analysis like complex number theory, and so on. These experiences make a fairly convincing case that completely new methods will be needed for non-linear problems. The classical example for this is a non-linear partial differential equation for compressible, non-viscous flow, which led to the discovery of the phenomenon of shocks. In a problem in which it seemed that only continuous solutions should exist, discontinuous solutions suddenly play an enormous role, and without proper regard for these one cannot prove the uniqueness or the existence of solutions. Furthermore, these irregular solutions behave in a very peculiar manner and violate a number of the regularities which we had reason to believe, from other forms of analysis, were well established.
Another good example is the phenomenon of turbulence in the viscous case, where one suddenly discovers that the really important solutions to a problem which has very high symmetry do not possess that symmetry. From a heuristic point of view, the important thing is not to find the simplest solution of the problem, but rather to analyze statistically certain large families of solutions which have nothing in common with each other except certain statistical traits. These prevalent statistical traits are the real roots of the problem and cause very peculiar singularities in many individual solutions.

In all these cases there is reason to believe that we will have great difficulty in making analytical progress. The problem of turbulence has been around for 60 years, and analytical progress in solving it has been very small.¹ Almost all of the correct mathematical surmises in this area have come in a very hybrid manner from experimentation. If one could calculate solutions in certain critical situations like those we have mentioned, one would probably get much better heuristic ideas. I will try to give some indications of this later, but I wanted to point out that there are large areas in pure mathematics where we are blocked by a peculiar inter-relation of rigor and intuitive insight, each of which is needed for the other, and where the unmathematical process of experimentation with physical problems has produced almost the only progress which has been made. Computing, which is not too mathematical either in the traditional sense but is still closer to the central area of mathematics than this sort of experimentation is, might be a more flexible and more adequate tool in these areas than experimentation.

Let me come to the subject proper and first say a few things about the general traits of computing processes and computing machines.

¹ [See further von Neumann's Collected Works 5.2-5 and Birkhoff's Hydrodynamics.]
As you probably know, the main types of computing machines existing or being discussed or planned at this moment fall into two large classes: analog devices and digital devices. Let me first describe the analog devices, or the wider class, inasmuch as a proper definition is usually given for the digital class, and analogs are essentially everything else.

Roughly speaking, an analog calculation is one in which you look at some physical process which happens to have the same mathematical equations as the process you're interested in, and you investigate this physical process physically. You do not take the physical process which you are interested in, because that is your whole reason to calculate. You always look for something which is like it but not exactly the same thing. The smallest modification you may make is to use a different scale, which is possible in certain problems.

A slightly larger modification is to use a different scale and also change certain things which are not exactly scales. For instance, when you try an aerodynamical experiment in a wind tunnel you scale it, but you scale not only the linear dimensions but also the velocity of sound. The only way to scale the velocity of sound is to go to a lower temperature, and there you really need insight. You must know that the phenomenon you're concerned with does not depend on temperature. You then discover that it is easier to try it at a lower temperature and with much smaller dimensions than to carry out the actual process you are interested in. In this way a wind tunnel for aerodynamical experimentation is in a sense an analog computing device. This is not a completely fair comparison because a wind tunnel does a good deal beside computing, but still in a large range of application (which is certainly not much less than 50 per cent) it is just an analog computing device.
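[The temperature scaling just described can be sketched numerically. Using the standard ideal-gas relation for the speed of sound in air (a = sqrt(gamma R T), which the text does not give; the flight and tunnel figures below are illustrative), a colder tunnel lets the experimenter reproduce the same ratio of flow speed to sound speed at a lower test velocity.]

```python
# Wind-tunnel temperature scaling: the speed of sound in an ideal gas
# goes as the square root of absolute temperature, so a test at lower
# temperature preserves the ratio of flow speed to sound speed at a
# proportionally lower flow speed. All figures are illustrative.
from math import sqrt

GAMMA, R_AIR = 1.4, 287.05  # ideal-gas constants for air, R in J/(kg K)

def speed_of_sound(temp_kelvin):
    return sqrt(GAMMA * R_AIR * temp_kelvin)

# Full-scale flow: 250 m/s in air at 288 K.
ratio = 250.0 / speed_of_sound(288.0)
# Cold tunnel at 150 K: the test speed that reproduces the same ratio.
v_test = ratio * speed_of_sound(150.0)
print(round(ratio, 3), round(v_test, 1))
```

[This is also why "you really need insight": the scaling is valid only if the phenomenon under study does not itself depend on temperature.]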
You come very quickly then to cases in which you will not do exactly this because it is not possible or not convenient to find a physical process which has exactly the same equations as the problem you are interested in. But you may still find, for example, three different processes which have the same equations as three parts of the problem and which can be aligned in such a manner that if you perform one after the other you get the complete answer. From this there is a continuous transition to situations where you actually break up the problem mathematically into the elementary operations of arithmetic: multiplication, addition, subtraction, and division.

[Von Neumann next discussed physical analog processes for adding, multiplying, subtracting, and dividing. He covered both electrical and mechanical analog processes, and the way numbers are represented in each. He said, "What passes in any axiomatic treatment of mathematics for the elementary operations of arithmetic, the four species of arithmetical addition, subtraction, etc., need not be the elementary operations of a computing machine, specifically of an analog computing machine." He explained how a differential analyzer multiplies two constants by integrating and subtracting. See The Computer and the Brain 8-5, Collected Works 5.293.]

[Von Neumann then took up digital machines. He remarked that in the last 10 years purely digital devices had become relatively much more important than analog devices. He discussed the components of digital machines (toothed wheels, electromechanical relays, vacuum tubes, and nerve cells), the speeds of these components (including both response time and recovery time), and the need for power amplification in these components. He stressed the role of the basic logical operations (such as sensing a coincidence) in control mechanisms, including "the most elaborate control mechanism known, namely, the human nervous system."
See The Computer and the Brain 7-10, 30, 39-47. He next turned to the problem of measuring the complexity of automata.]

It is not completely obvious how to measure the complexity of an automaton. For computing machines, probably the reasonable way is to count how many vacuum tubes are involved. This is somewhat ambiguous, because certain current types of vacuum tubes are in reality two vacuum tubes inside one envelope, in which case one is never quite sure which one of the two he is talking about. Another reason is that a great deal enters into computing machine circuitry aside from vacuum tubes: electrical equipment like resistors, capacitances, and possibly inductances. Nevertheless, the ratio of these to the vacuum tubes is tolerably constant, and therefore the number of tubes is probably a reasonable measure of complexity.

The largest calculating machine ever used to date contains twenty thousand vacuum tubes.² Now the design of this machine is very different from what any vacuum tube machine of the future is likely to be like, and so this machine is not quite typical. The computing machines which most people are thinking about as the computing machines of the immediate future are to be smaller than this, probably having 2 to 5 thousand tubes. So, roughly speaking, the order of magnitude of the complexity of these machines is 10 thousand.

To give you a comparison with natural organisms, the number of nerve cells in a natural organism can be very different from this. The number of nerve cells in the human central nervous system has been estimated to be 10 billion. This number is so large that of course one has absolutely no experience with such orders of magnitude. It's terribly difficult to form any reasonable guess as to whether things which are as complex as the behavior of a human being can or cannot be administered by 10 billion switching organs.
No one knows exactly what a human being is doing, and nobody has seen a switching organ of 10 billion units; therefore one would be comparing two unknown objects.

Let me say a few things which relate more specifically to computing machines. If you can repeat an elementary act like switching with a vacuum tube 1 million times per second, that does not mean of course that you will perform anything that is mathematically relevant 1 million times per second. In estimating how fast a computing machine can operate, there are all kinds of standards. There's a reasonable agreement that one ought to count the number of multiplications performed in a second. By multiplications I mean the multiplication of two full sized numbers with the precision with which the machine is running. There is good reason to believe that the precision with which these things ought to run is of the order of 10, 12, or 14 decimal digits. A machine of reasonable design in which the elements have a speed of about 1 million per second will probably multiply somewhere in the neighborhood of 1 millisecond.

No matter how you organize a computing machine, you simply cannot count on using it to get 100 per cent efficiency. By that I mean that it's impossible, with our present information on the subject, to organize the machine in such a manner that a multiplier which can multiply in one thousandth of a second will really be fed the necessary data for multiplication.

² [This is the ENIAC, which is described in Burks, "Electronic Computing Circuits of the ENIAC" and "Super Electronic Computing Machine," Goldstine and Goldstine, "The Electronic Numerical Integrator and Computer (ENIAC)," and Brainerd and Sharpless, "The ENIAC."]
There is a good deal else to be done, namely, making up your mind what numbers you want, getting the numbers, disposing of the result, deciding whether you should do the same thing once again or whether you should do something else, and so on. This is remarkably similar to paper pushing and adding on paper directly, except that what you're pushing is not on paper.

From a logical point of view the efficiency is probably of the order of 1 in 10 or a little better. By that I mean that in any reasonable, logical description of what you are doing, in a code which corresponds to prevalent procedures in formal logics, somewhere between a fifth and a tenth of the orders will be orders to multiply. Since multiplication is somewhat slower than the other operations, in what most people think is a well integrated, well balanced machine, you will probably spend something like one quarter to one half of the time multiplying. So, if you have a multiplier which can effect a multiplication in 1 millisecond, you are doing fairly well to get 500 multiplications per second. In human computing aided by a desk machine the same number will be, perhaps, 2 multiplications per minute. So the discrepancy, the acceleration factor, could probably be pushed to 100 thousand or something like that. But to get out of this range we'll probably have to depart from present techniques quite radically.

From the mathematical point of view the question arises whether anything could be done with this speed if one had it. I would like to point out very emphatically that there are very good reasons for asking for anything the traffic will bear: for this speed, 10 times more, a hundred times, a thousand times, or a million times. Problems that there are good reasons to solve would justify a great deal more speed than anyone can think of at this moment.
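The throughput estimate above can be restated as a few lines of arithmetic. This is only a back-of-envelope check; the figures are the ones quoted in the lecture, not measurements.

```python
# Von Neumann's back-of-envelope throughput estimate, restated.
# All figures below are the ones quoted in the lecture.
multiply_time = 1e-3      # seconds per multiplication (a 1-millisecond multiplier)
time_multiplying = 0.5    # upper end of the "one quarter to one half" estimate

machine_rate = time_multiplying / multiply_time   # multiplications per second
human_rate = 2 / 60                               # desk-machine work: ~2 per minute

print(machine_rate)                       # 500.0 multiplications per second
print(round(machine_rate / human_rate))   # base acceleration factor: 15000
```

The lecture's figure of 100 thousand corresponds to improving on this base factor of 15 thousand by roughly a further factor of 7.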
[Von Neumann gave as examples quantum mechanical calculations on atomic and molecular wave functions (where the combinatorial difficulties go up very fast as the number of electrons goes up), and the problem of turbulence.]

Although it doesn't belong strictly to the subject, let me point out that we will probably not want to produce vast amounts of numerical material with computing machines, for example, enormous tables of functions. The reason for using a fast computing machine is not that you want to produce a lot of information. After all, the mere fact that you want some information means that you somehow imagine that you can absorb it, and, therefore, wherever there may be bottlenecks in the automatic arrangement which produces and processes this information, there is a worse bottleneck at the human intellect into which the information ultimately seeps.

The really difficult problems are of such a nature that the number of data which enter is quite small. All you may want to know is a few numbers, which give a rough curve, or one number. All you may want in fact is a "yes" or a "no," the answer as to whether something is or is not stable, or whether turbulence has or has not set in. The point is that you may not be able to get from an input of, say, 80 numbers to an output of 20 numbers without having, in the process, produced a few billion numbers in which nobody is interested. But the process is such that the volume of numerical material handled first expands and then contracts again, and, while it starts on a low level, say with 100 numbers, and ends on a low level, say with 10 numbers, its maximum in between is large, say a few thousand, and the number of successive generations is large, so that you have handled 10 billion numbers before you are through. These figures are quite realistic; it would be easy to find problems which have about this numerical makeup.
You may have noticed that I have already introduced one distinction, namely, the total numerical material produced in a process. The other thing which matters is how much you need simultaneously. This is probably the most vexing problem in modern computing machine technology. It's also quite a problem from the point of view of the human organism, namely, the problem of memory.

You see, all these automata really consist of two important parts: the general switching part (an active part which effects the logical operations the automaton is supposed to perform), and the memory (which stores information, chiefly intermediate results which are needed for a while and are then discarded and replaced by others). In computing machines, the methods to do the active part, the arithmetical and control circuits, have been well known for years. The memory questions were much more critical and much more open throughout the last decade, and are even more critical and more open now. In the human organism, we know that the switching part is composed of nerve cells, and we know a certain amount about their functioning. As to the memory organs, we haven't the faintest idea where or what they are. We know that the memory requirements of the human organism are large, but on the basis of any experience that one has with the subject, it's not likely that the memory sits in the nervous system, and it's really very obscure what sort of thing it is.³ Hence in both the computer and the human nervous system, the dynamic part (the switching part) of the automaton is simpler than the memory.

³ [This point is discussed further in The Computer and the Brain 63-69.]

[Von Neumann next discussed how to measure memory capacity. He suggested using the logarithm (to the base two) of the configuration number (i.e., the number of alternatives). See Collected Works 5.341-342.
He then estimated the memory capacity of an ordinary printed page to be about 20 thousand units, and remarked that this is about the memory capacity of the digital computers under consideration at that time.]

This shows where the real price of speed lies. A large modern computing machine is a very expensive object, an object which it takes a long time to build and which is a very tricky thing after you've got it. Yet it is supposed to get along on a memory which is equivalent to a printed page! When such a machine is properly run it will, in half an hour, do the work which a computing group of 20 people would do in about 2 or 3 years. Yet it's supposed to get along on a memory of one printed page. Imagine that you take 20 people, lock them up in a room for 3 years, provide them with 20 desk multipliers, and institute this rule: during the entire proceedings all of them together may never have more than one page written full. They can erase any amount, put it back again, but they are entitled at any given time only to one page. It's clear where the bottleneck of this process lies. The planning may be difficult, input and output may be cumbersome, and so on, but the main trouble is that it has a phenomenally low memory for the computing to be done. The whole technique of computing will be completely distorted by this modus operandi.

This is an extremely abnormal economy. By going to high speed, you cut yourself off from the efficient methods of storing information and push yourself into an inefficient one. A thousand-number computer memory is a very large object, an object which it took years to develop; all existing types are very much in an experimental stage at this moment, and none of them are small or cheap. Yet they are the equivalent of one printed page. The reason why one is forced to use these memories is this. [Each multiplication requires certain numbers from the memory and the product often goes to the memory.
There are other arithmetic operations, and these require access to the memory. The orders to control these arithmetic operations come from the memory.] One probably needs anywhere between five and eight accesses to the memory for each multiplication. Thus it is unreasonable to get a millisecond multiplier unless you have a memory to which you can get access in something of the order of a tenth of a millisecond. Now to get access to a printed book takes seconds, to get access to anything punched or written on paper takes a fraction of a second. Since one needs an access time of something like one ten-thousandth of a second, one is forced out of these efficient techniques of storing information, into a highly inefficient and expensive technique.

In comparing artificial with natural automata there is one very important thing we do not know: whether nature has ever been subject to this handicap, or whether natural organisms involve some much better memory device. That the secondary memory devices which humans have developed, namely, libraries, etc., are so vastly more efficient than this, is some reason to suspect that natural mechanisms for memory may also be quite as clumsy as the high speed memories with which we think we have to operate. But we know practically nothing about this.

Let me, before I close today, mention one more thing. In any fast machine the memory you need is characterized by two data: the capacity and the access time. [Von Neumann said that at that moment there was no technique for building a memory with both an adequate capacity and a sufficiently good access time. What is done is to construct a hierarchy of memories. The first memory has the required speed and is as large as you can make it, but it is not large enough. The second memory is much larger, but slower. Numbers are transferred from the second memory to the first memory when needed. There may be a third memory which is larger but slower, and so on.
An electrostatic memory tube, a magnetic tape, and a card file would constitute such a hierarchy of memories. See The Computer and the Brain 33-37.]

Second Lecture

RIGOROUS THEORIES OF CONTROL AND INFORMATION

Theory of information: The strict part. The concept of information. The corresponding mathematical-logical concept of sets and partitions. Close connection with formal logics. Alternative approach by way of model-automata. Common traits in these two approaches: All-or-none character. The work which connects these two approaches. Methods of describing automata: Syntheses from components or treatment as a whole. Synthetic approach: Nature of the element-organs. Their similarity to neurons. The program of McCulloch and Pitts: Formal neural networks. Their main result. Treatment as a whole: Turing's theory of automata. The relationship of an automaton and of the mathematical problems that can be solved with its help. The concept of a universal automaton. Turing's main result. Limitations of the McCulloch-Pitts and Turing automata. Input and output organs. Generalizations of these. Interpretation as sensor and motor organs.

[Von Neumann said that there are two parts to information theory: the rigorous and the probabilistic. The probabilistic part is probably more important for modern computing machinery, but the rigorous part is a necessary preliminary to it. The rigorous part of information theory is just a different way of dealing with formal logics.]

[He then explained some of the basic ideas of formal logics. He discussed briefly truth-functional connectives such as "and," "not," "if . . . then," and "not both," and their interdefinability. He explained the idea of a variable and the quantifiers "all" and "some."
He concluded: "If you have this machinery you can express anything that is dealt with in mathematics; or that is dealt with in any subject, for that matter, as long as it's dealt with sharply."]

I am not going to go into this subject, because in order to make a theory of information, another machinery which is quite closely related to this but looks somewhat different, is more cogent. This is connected with the work of McCulloch and Pitts,¹ on the one hand, and the logician Turing on the other.² Both endeavors in the subject replace formal logics as indicated here and as classically pursued, by the discussion of certain fictitious mechanisms or axiomatic paper automata, which are merely outlined, but which nobody is particularly concerned to build. Both of them show that their fictitious mechanisms are exactly co-extensional with formal logics; in other words, that what their automata can do can be described in logical terms and, conversely, that anything which can be described rigorously in logical terms can also be done by automata. [Von Neumann was assuming that a finite McCulloch-Pitts neuron net is supplied with an infinite blank tape. The result to which he referred is the equivalence of Turing computability, λ-definability, and general recursiveness. See Turing's "Computability and λ-Definability."]

I'm going to describe both the work of McCulloch and Pitts and the work of Turing, because they reflect two very important ways to get at the subject: the synthetic way, and the integral way. McCulloch and Pitts described structures which are built up from very simple elements, so that all you have to define axiomatically are the elements, and then their combination can be extremely complex. Turing started by axiomatically describing what the whole automaton is supposed to be, without telling what its elements are, just by describing how it's supposed to function.
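The interdefinability of the truth-functional connectives mentioned in the synopsis above can be made concrete. The following sketch is a modern illustration, not anything from the lecture: it recovers "not," "and," "or," and "if . . . then" from "not both" (the Sheffer stroke) alone.

```python
# "Not both" (the Sheffer stroke, NAND) suffices to express the other
# truth-functional connectives, illustrating their interdefinability.
def nand(p: bool, q: bool) -> bool:
    return not (p and q)

def not_(p):        return nand(p, p)
def and_(p, q):     return not_(nand(p, q))
def or_(p, q):      return nand(not_(p), not_(q))
def implies(p, q):  return nand(p, not_(q))

# Check against the built-in connectives over all truth assignments:
for p in (False, True):
    for q in (False, True):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
        assert implies(p, q) == ((not p) or q)
print("all connectives recovered from 'not both'")
```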
The work of McCulloch and Pitts was definitely meant as a simple mathematical, logical model to be used in discussions of the human nervous system. That it wound up with something which is actually an equivalent of formal logics is very remarkable and was part of the point McCulloch and Pitts wanted to drive home, but only part of that point. Their model also has a meaning which concerns me at this moment a little less, but about which I will tell, without immediately stating where it ties in to formal logics. They wanted to discuss neurons. They took the position that they did not want to get tied up in the physiological and chemical complexities of what a neuron really is. They used what is known in mathematics as the axiomatic method, stating a few simple postulates and not being concerned with how nature manages to achieve such a gadget.

¹ [McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity." See also Secs. 1-7 of von Neumann, "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Burks and Wright, "Theory of Logical Nets," and Kleene, "Representation of Events in Nerve Nets and Finite Automata."]

² [Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem."]

They went one step further. This has been emphasized very strongly by those who criticize their work, although it seems to me that the extent to which they went further can be justified. They said that they did not want to axiomatize the neuron as it actually exists, but they wanted to axiomatize an idealized neuron, which is much simpler than the real one. They believed that the extremely amputated, simplified, idealized object which they axiomatized possessed the essential traits of the neuron, and that all else are incidental complications, which in a first analysis are better forgotten.
Now, I am quite sure that it will be a long time before this point is generally agreed to by everybody, if ever; namely, whether or not what one overlooks in this simplification had really better be forgotten or not. But it's certainly true that one gets a quick understanding of a part of the subject by making this idealization.

The definition of what we call a neuron is this. One should perhaps call it a formal neuron, because it certainly is not the real thing, though it has a number of the essential traits of the real thing. A neuron will be symbolically designated by a circle, which symbolizes the body of the neuron, and a line branching out from the circle, which symbolizes the axon of the neuron. An arrow is used to indicate that the axon of one neuron is incident on the body of another. A neuron has two states: it's excited or not. As to what excitation is, one need not tell. Its main characteristic is its operational characteristic and that has a certain circularity about it: its main trait is that it can excite other neurons. Somewhere at the end of an involved network of neurons the excited neuron excites something which is not a neuron. For instance, it excites a muscle, which then produces physical motion; or it excites a gland which can produce a secretion, in which case you get a chemical change. So, the ultimate output of the excited state really produces phenomena which fall outside our present treatment. These phenomena will, for the sake of the present discussion, be entirely disregarded.

[Von Neumann stated the axioms governing the interaction of neurons. Following McCulloch and Pitts he assumed a uniform delay and for the time being disregarded "the important phenomenon of fatigue, the fact that after a neuron has been excited it is not usable for a while." Fatigue plays an important role in the functioning of an organism (see p. 48 below), but in spite of fatigue one can get continuous action by using a chain of neurons, each feeding its successor. Von Neumann defined the threshold of a neuron and introduced inhibitory synapses, symbolized by a circle (instead of an arrowhead).]

[Von Neumann next presented what he called "the important result of McCulloch and Pitts." Imagine a black box with a number of inputs and a single output. Select two times, t₁ and t₂. Specify which patterns of inputs from time t₁ to time t₂ are to produce an output and which are not.] No matter how you formulate your conditions, you can always put a neural network in the box which will realize these conditions, which means that the generality of neural systems is exactly the same as the generality of logics. The fact that something has been done with the system means not more and not less than you know what you are talking about, that you can state it in a finite number of words unambiguously and rigorously. I will not attempt to give the proof, which like all proofs in formal logics is not quite easy to render. [We will sketch the proof very briefly. It follows from the construction that von Neumann referred to that every switching function (truth function, Boolean function) can be realized by a neural network with some fixed delay. A cyclic neural memory of arbitrary finite capacity can be attached to this switching net. When this composite network is augmented by an infinite tape, the result is a Turing machine. Moreover, corresponding to each Turing machine M, there is a network of this kind which computes the same number as M.]

[Von Neumann showed how to construct a few sample networks. The first is a network in which a dominates b, b dominates c, but c dominates a, shown in Figure 1.
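A minimal sketch of this network in modern terms. The firing rule is the one stated in the text (a formal neuron fires when its excitatory input is stimulated and its inhibitory input is not); the exact wiring is reconstructed from the stated behavior, so the names and arrangement here are assumptions, not the diagram of Figure 1 itself.

```python
def formal_neuron(excitatory: bool, inhibitory: bool) -> bool:
    """Formal neuron: fires iff the excitatory input is stimulated
    and the inhibitory input is not."""
    return excitatory and not inhibitory

def network(a: bool, b: bool, c: bool) -> dict:
    """Cyclic-dominance network: a dominates b, b dominates c,
    c dominates a. Output names follow the Greek letters of Figure 1;
    the wiring is reconstructed from the described behavior."""
    return {
        "alpha": formal_neuron(a, inhibitory=c),  # a wins unless c is present
        "beta":  formal_neuron(b, inhibitory=a),  # b wins unless a is present
        "gamma": formal_neuron(c, inhibitory=b),  # c wins unless b is present
    }

print(network(True, True, False))   # a and b stimulated: only alpha fires
print(network(False, True, True))   # b and c stimulated: only beta fires
print(network(True, False, True))   # a and c stimulated: only gamma fires
```

Running the three cases reproduces the non-transitive "strength" the text goes on to describe: a beats b, b beats c, yet c beats a.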
Each neuron is excited (fires) if it is stimulated on its excitatory input (the input with an arrow) but is not stimulated on its inhibitory input (the input with a small circle). Hence if a and b are both stimulated, output α will be active but not β; if b and c are both stimulated, output β will be active but not γ; while if a and c are both stimulated, output γ will be active but not α. Von Neumann used this network to illustrate a certain point. People have made statements about the non-quantitative character of human behavior, which statements seem to imply that in any quantitative mechanism, if a is stronger than b and b is stronger than c, then a is stronger than c. But in the above neural network a is stronger than b and b is stronger than c, while c is stronger than a.]

[Von Neumann then synthesized a number of other networks: simple memories, counters, and an elementary learning circuit. These are approximately the circuits of Collected Works 5.342-345. The learning circuit has two inputs, a and b. It counts the number of times a stimulus of a is followed by a stimulus of b. When this number reaches 256, the circuit emits a pulse whenever b is stimulated, independently of whether a is stimulated or not.]

You see that you can produce circuits which look complicated, but which are actually quite simple from the point of view of how they are synthesized and which have about the same complexity that they should have, namely, the complexity that grammar has. It is no more difficult to make this drawing up than to make up a sentence which describes what you want, and the essence of the result of McCulloch and Pitts is that there really isn't much difference between the two things. The rigorous verbal description is co-extensive with the description in terms of relay organs. May I point out what follows from this from a philosophical point of view, and what does not follow.
It certainly follows that anything that you can describe in words can also be done with the neuron method. And it follows that the nerves need not be supernaturally clever or complicated. In fact, they needn't be quite as clever and complicated as they are in reality, because an object which is a considerably amputated and emasculated neuron, which has many fewer attributes and responds in a much more schematic manner than a neuron, already can do everything you can think up.

What is not demonstrated by the McCulloch and Pitts result is equally important. It does not prove that any circuit you are designing in this manner really occurs in nature. It does not follow that the other functions of the nerve cell which have been dropped from this description are not essential. It does not follow that there is not a considerable problem left just in saying what you think is to be described.

Let me try to put this in another way. If you consider certain activities of the human nervous system, you find that some of them are such that all parts of them can be described, but one is flabbergasted by the totality of what has to be described. Suppose you want to describe the fact that when you look at a triangle you realize that it's a triangle, and you realize this whether it's small or large. It's relatively simple to describe geometrically what is meant: a triangle is a group of three lines arranged in a certain manner. Well, that's fine, except that you also recognize as a triangle something whose sides are curved, and a situation where only the vertices are indicated, and something where the interior is shaded and the exterior is not. You can recognize as a triangle many different things, all of which have some indication of a triangle in them, but the more details you try to put in a description of it the longer the description becomes.
In addition, the ability to recognize triangles is just an infinitesimal fraction of the analogies you can visually recognize in geometry, which in turn is an infinitesimal fraction of all the visual analogies you can recognize, each of which you can still describe. But with respect to the whole visual machinery of interpreting a picture, of putting something into a picture, we get into domains which you certainly cannot describe in those terms. Everybody will put an interpretation into a Rorschach test, but what interpretation he puts into it is a function of his whole personality and his whole previous history, and this is supposed to be a very good method of making inferences as to what kind of a person he is.

In fine, now, all of this may seem a little arbitrary and accidental, but the basic fact involved is this, that our brains are exceedingly complicated. About one fifth of the brain is a visual brain, which, as far as we know, does nothing except make decisions about visual analogies. So, using the figures we have, which are not very good, but which are probably all right for an orientation, we conclude that apparently a network of about 2 billion relays does nothing but determine how to organize a visual picture. It is absolutely not clear a priori that there is any simpler description of what constitutes a visual analogy than a description of the visual brain.

Normally, a literary description of what an automaton is supposed to do is simpler than the complete diagram of the automaton. It is not true a priori that this will always be so. There is a good deal in formal logics to indicate that the description of the functions of an automaton is simpler than the automaton itself, as long as the automaton is not very complicated, but that when you get to high complications, the actual object is simpler than the literary description.
I am twisting a logical theorem a little, but it's a perfectly good logical theorem. It's a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically [?] infinitely longer to describe. I say that it's absolutely necessary; it's just a matter of complication when you get to this point. I think that there is a good deal of reason to suspect that this is so with things which have this disagreeably vague and fluid impression (like "What is a visual analogy?"), where one feels that one will never get to the end of the description. They may easily be in this condition already, where doing a thing is quicker than describing it, where the circuit is more quickly enumerated than a total description of all its functions in all conceivable conditions.

The insight that a formal neuron network can do anything which you can describe in words is a very important insight and simplifies matters enormously at low complication levels. It is by no means certain that it is a simplification on high complication levels. It is perfectly possible that on high complication levels the value of the theorem is in the reverse direction, that it simplifies matters because it guarantees the reverse, namely, that you can express logics in terms of these efforts and the converse may not be true. [Von Neumann returned to this point on p. 51, after his discussion of Turing machines.]

[Von Neumann next discussed two cases in which circuits of idealized neurons do not seem to provide an explanation of how the nervous system actually performs a given function. The first case concerns the transmission by a nerve of a continuous number which represents some quantity such as blood pressure. The nerve does this by emitting pulses at a frequency which is a monotone function of the blood pressure.
This behavior is explained in terms of neural fatigue: after a neuron responds, it is unable to respond for a certain period, called the refractory period, and the stronger the next stimulus the sooner it responds. He then raised the question "Why has the digital notation never been used in nature, as far as we know, and why has this pulse notation been used instead?" and said that this was the kind of question he was interested in. He suggested an answer: that the frequency modulation scheme is more reliable than the digital scheme. See Section 1.1.2.3 below, The Computer and the Brain 77-79, and Collected Works 5.306-308 and 5.375-376.]

[The second case in which circuits of idealized neurons do not seem to provide an explanation of how the nervous system actually performs a given function concerns memory. Von Neumann had earlier synthesized memory circuits from idealized neurons and he remarked that such memory circuits could be arbitrarily large. But he thought it probable that this is not the major mechanism used for memory in the nervous system.]

This is not the way to make a memory for the simple reason that to use a switching organ like a neuron, or six to a dozen switching organs, as you actually would have to use because of fatigue, in order to do as small a thing as remember one binary digit, is a terrible waste, because a switching organ can do vastly more than store. In computing machines the classical example of a machine in which the switching organs were used to remember numbers is the ENIAC, an enormous gadget which has about 20 thousand vacuum tubes in it. The ENIAC is about five times larger than later machines which will presumably be far more efficient; it is an excellent machine in many ways, but it has one phenomenal shortcoming, namely, a very small memory. It has only a memory of 20 decimal numbers at points where it matters; in spite of this it is enormous.
The reason is that vacuum tubes, in other words, switching organs, are used for that memory. All improvements on this machine postulate that some other components than standard vacuum tubes will be used for memory.

The memory requirements of the human nervous system are probably very large. Estimates have been made and they are of the order of 10^16 binary digits. I will not attempt to justify this estimate; a great deal can be said for any count. I think there is a good deal of reason to believe that 10^10 switching organs, which is about what we have, is probably not the right order of magnitude for storing the kind of memory that we use, and that it's probably best to admit that we simply do not know where the memory is. One can make all kinds of statements about it. One can surmise that the memory consists in a change of the synapses of the nerve cells, which is not decided by design. I don't know whether there is any good evidence for this, but I rather think there is not. You may suspect that the nerve cells contain a lot else than the switching trait, and that the memory sits there. It may be so, but I think that we simply know nothing. It may well be that the memory organs are of a completely different nature than the neurons. The main difficulty with the memory organ is that it appears to be nowhere in particular. It is never very simple to locate anything in the brain, because the brain has an enormous ability to re-organize. Even when you have localized a function in a particular part of it, if you remove that part, you may discover that the brain has reorganized itself, reassigned its responsibilities, and the function is again being performed. The flexibility of the brain is very great, and this makes localization difficult. I suspect that the memory function is less localized than anything else. [Cf. The Computer and the Brain 63-68.]
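The mismatch von Neumann describes can be put in figures drawn from the passage itself: roughly 10^10 switching organs, six to a dozen of them per stored binary digit, against an estimated requirement of the order of 10^16 binary digits. The following is only a sketch of that arithmetic; the figures are the ones quoted above.

```python
import math

neurons = 10 ** 10            # switching organs available, per the estimate above
organs_per_bit = 6            # the cheapest figure quoted ("six to a dozen")
memory_estimate = 10 ** 16    # estimated memory requirement, in binary digits

# Using neurons as memory the way ENIAC used vacuum tubes:
bits_storable = neurons // organs_per_bit
shortfall = memory_estimate / bits_storable

print(f"storable: about 10^{round(math.log10(bits_storable))} bits")    # 10^9
print(f"shortfall: about a factor of 10^{round(math.log10(shortfall))}")  # 10^7
```

Even on the cheapest figure, switching organs fall some seven orders of magnitude short of the estimated requirement, which is the point of the paragraph above.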
I wanted to mention these two things [fatigue and memory] as very obvious lacunae in the McCulloch and Pitts approach to the nervous system.

I want to talk next about the approach of Turing. In the McCulloch and Pitts theory the conclusion was that actual automata, properly described and axiomatized, are equivalent to formal logics. In Turing's theory the conclusion is the reverse. Turing was interested in formal logics, not in automata. He was concerned to prove certain theorems about an important problem of formal logics, the so-called Entscheidungsproblem, the problem of decision. The problem is to determine, for a class of logical expressions or propositions, whether there is a mechanical method for deciding whether an expression of this class is true or false. Turing's discussion of automata was really a formal, logical trick to deal with this problem in a somewhat more transparent and more consistent way than it had been dealt with before.

[Von Neumann then outlined Turing's definition of an automaton. Whereas McCulloch and Pitts started with components or elements, Turing started with states. At any time the automaton is in one of a finite number of states. "The outside world" is a tape. The automaton senses one square of the tape, and it can change the contents of the square and move the tape one square to the left or right. A dictionary specifies, for each state and each tape symbol, what the next state will be and what will be done to the tape. The tape has a distinguished square. A finite program may be placed on the tape initially. The binary number computed by the automaton is recorded in alternate squares, starting with the distinguished square.]

[Von Neumann next described Turing's result concerning universal automata.
There is a universal automaton A with the following properties: For each automaton B there is a sequence of instructions I_B such that for any sequence of instructions I, A supplied with both instructions I_B and I computes the same number as is computed by B supplied with instructions I.] The universal automaton is able to imitate any automaton, even a much more complicated one. Thus a lesser degree of complexity in an automaton can be compensated for by an appropriate increase of complexity of the instructions. The importance of Turing's research is just this: that if you construct an automaton right, then any additional requirements about the automaton can be handled by sufficiently elaborate instructions. This is only true if the automaton is sufficiently complicated, if it has reached a certain minimum level of complexity. In other words, a simpler thing will never perform certain operations, no matter what instructions you give it; but there is a very definite finite point where an automaton of this complexity can, when given suitable instructions, do anything that can be done by automata at all.

[Von Neumann then explained how the universal automaton simulates an arbitrary automaton B. The instructions I_B contain a representation of the automaton B in the form of a dictionary, which tells, for each state of B and each tape symbol, the next state of B and what is to be done to the tape. The universal automaton has the power to read any such dictionary and act on it. It writes on its tape, in sequence, the successive states of B and what is produced on the tape of B.] I will not go further in giving the details of this. I have gone into it to the point to which I did in order to point out that here, for the first time, one deals with something which has the attribute of universality, which has the ability to do anything that anybody can do.
You also see that there is no vicious circle in it, because of the manner in which the extra complexity is brought in (by giving more elaborate instructions). You also see that the operation which ultimately leads to universality is connected with a rigorous theory of how one describes objects and a rigorous routine of how to look up statements in a dictionary and obey them.

The formal logical investigations of Turing went a good deal further than this. Turing proved that there is something for which you cannot construct an automaton; namely, you cannot construct an automaton which can predict in how many steps another automaton which can solve a certain problem will actually solve it. So, you can construct an automaton which can do anything any automaton can do, but you cannot construct an automaton which will predict the behavior of any arbitrary automaton. In other words, you can build an organ which can do anything that can be done, but you cannot build an organ which tells you whether it can be done.

This is connected with the structure of formal logics and is specifically connected with a feature which I will not attempt to discuss, but which I would like to mention in the proper jargon for those of you who are familiar with modern formal logics. It is connected with the theory of types and with the results of Gödel. The feature is just this, that you can perform within the logical type that's involved everything that's feasible, but the question of whether something is feasible in a type belongs to a higher logical type. It's connected with the remark I made earlier (pp. 47-48): that it is characteristic of objects of low complexity that it is easier to talk about the object than produce it and easier to predict its properties than to build it. But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object.
The domain of the validity of the question is of a higher type than the question itself.

[This is the end of von Neumann's Second Lecture. I will add a commentary on his last two paragraphs, beginning with some general remarks on Turing machines.

A Turing machine is basically a finite automaton with an indefinitely extendible tape. But there are many different ways of using a Turing machine. Let the squares of the tape be numbered 0, 1, 2, 3, …, with even numbered squares reserved for working space and odd numbered squares reserved for the program or problem statement (if there is one) and for the answer. Let the answer symbols be zero and one, in addition to the blank; these could, of course, be coded sequences of two basic symbols, a blank and a mark. Assume finally that the machine writes the answer digits (zero and one) in successive answer squares.

A "concrete Turing machine" is a Turing machine which has a finite "program" or problem statement on its tape initially. An "abstract Turing machine" is the class of all concrete Turing machines which contain a given finite automaton. We can think of an abstract Turing machine as a finite automaton with an indefinitely extendible blank tape on which any program or problem may be placed initially.

Concrete Turing machines may be divided into two classes: the circular and the circle-free. A "circular" machine prints a finite sequence of binary digits and "halts." A "circle-free" machine continues to print binary digits in alternate squares forever; we will speak of it as computing an infinite sequence.

Von Neumann discussed above a universal Turing machine which consists of a finite automaton A and an indefinitely extendible tape. A universal Turing machine is an abstract Turing machine which computes every sequence computed by Turing machines.
More precisely, for each concrete Turing machine with finite automaton B and program I, there is a program I_B such that machine A with programs I_B and I computes the sequence computed by machine B with program I. A universal Turing machine can be characterized in another way. Let Γ be the class of finite and infinite sequences computed by concrete Turing machines. Then, every sequence of Γ is computed by the abstract Turing machine A + I_B + I, where I_B and I vary over all programs. Since the concatenation of two programs is a program, every sequence of Γ is computed by the abstract Turing machine A + I, where I varies over all programs.

A "decision machine" for a given class of questions is an abstract Turing machine which, when given a question of that class, prints a one if the answer to the question is "yes" and a zero if the answer to the question is "no." The "halting problem" is the problem of deciding whether an arbitrary concrete Turing machine is circular (will halt sometime) or circle-free. Turing showed that the halting problem is undecidable, that is, that there is no decision machine for halting. 3 The proof is given below in Sec. 1.6.3.2 of Part II. Turing proved as a corollary to this that there is no decision machine for deciding whether an arbitrary concrete Turing machine will ever print a given symbol (for example, a zero).

3 [Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem," Sec. 8.]

Since both halting and printing a given symbol are aspects of the behavior of a Turing machine, it follows from Turing's results that automata behavior is not completely predictable by automata. As von Neumann put it above, "you cannot construct an automaton which will predict the behavior of any arbitrary automaton."

Concrete Turing machines can be enumerated and thereby placed in one to one correspondence with the non-negative integers.
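The enumeration appealed to here can be exhibited concretely: any injective coding of a machine's "dictionary" into an integer will do. The Cantor-pairing scheme below is a standard textbook device, not Turing's own construction, and the two sample machines are invented for illustration.

```python
def pair(x, y):
    """Cantor pairing: a bijection from pairs of non-negative integers
    to non-negative integers."""
    return (x + y) * (x + y + 1) // 2 + y

def encode_list(xs):
    """Injectively pack a finite list of non-negative integers into one
    integer by repeated pairing."""
    n = 0
    for x in reversed(xs):
        n = pair(x, n) + 1   # +1 keeps the empty list (code 0) distinct
    return n

def encode_machine(table):
    """Code a machine 'dictionary' mapping (state, symbol) to
    (next state, written symbol, move) as a single integer. States,
    symbols, and moves are taken to be small non-negative integers."""
    items = []
    for (q, s) in sorted(table):
        nq, w, m = table[(q, s)]
        items.extend([q, s, nq, w, m])
    return encode_list(items)

# Two distinct one-state machines receive distinct integers.
m1 = {(0, 0): (0, 1, 1)}   # write 1, move right
m2 = {(0, 0): (0, 0, 1)}   # write 0, move right
print(encode_machine(m1) != encode_machine(m2))  # True
```

Since every dictionary gets a distinct integer, the machines can be listed in order of their codes, which is the one-to-one correspondence with the non-negative integers that the argument below uses.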
Consider all these machines and let the variable "t" range over the integers representing them. We define the number theoretic function n(t) as the number of steps which machine t takes to print its first zero. If machine t never prints a zero, then n(t) is defined to be zero. Note that a sequence of n ones followed by a zero can be interpreted as the integer n. This leads to the question: Is there an abstract Turing machine which can compute n(t) for any t? It follows immediately from Turing's corollary that there is not, for if we could compute n(t) we could decide whether or not machine t ever prints a zero. I think that this is what von Neumann had in mind when he said "Turing proved that you cannot construct an automaton which can predict in how many steps another automaton which can solve a certain problem will actually solve it."

In the last paragraph of his Second Lecture von Neumann referred to a theorem of Gödel "that you can perform within the logical type that's involved everything that's feasible, but the question of whether something is feasible in a type belongs to a higher logical type." Since I knew of no such theorem by Gödel I found this reference puzzling, as well as the earlier reference to Gödel (p. 47) and a related reference in von Neumann's Hixon Symposium paper, "The General and Logical Theory of Automata" (Collected Works 5.310-311). I wrote Professor Kurt Gödel to see whether he could throw any light on it. His answer gives, I think, the most plausible explanation of the reference, and so I include the relevant parts of our correspondence, with minor editing.

I wrote to Professor Gödel as follows: "I am engaged in editing two of John von Neumann's uncompleted manuscripts on the theory of automata. In one of these, a series of lectures he delivered at the University of Illinois in 1949, he makes a reference to your work which I have been unable to figure out.
Since there is the possibility he may have discussed this point with you, I am taking the liberty of writing you about it.

"The story begins with Johnny's Hixon Symposium talk at Pasadena in 1948. He discusses there the problem of giving a rigorous description of a visual analogy. In recognizing visual patterns, the human eye and nervous system function as a finite automaton with a certain behavior. Von Neumann seems to suggest that possibly the simplest way to describe the behavior of this finite automaton is to describe the structure of the automaton itself. This is certainly plausible. But he then expresses the point in a way I do not understand: 'It is not at all certain that in this domain a real object might not constitute the simplest description of itself. That is, any attempt to describe it by the usual literary or formal-logical method may lead to something less manageable and more involved. In fact, some results in modern logic would tend to indicate that phenomena like this have to be expected when we come to really complicated entities.' The underlined passage seems to refer to your work. I enclose a copy of the full context.

"In his Illinois lectures, given in 1949, Johnny seems to be making the same point, namely, that the simplest way to describe accurately what constitutes a visual analogy is to specify the connections of the visual part of the brain. He then proceeds to say that there is a good deal in formal logic which indicates that when an automaton is not very complicated the description of the functions of that automaton is simpler than a description of the automaton itself but that the situation is reversed with respect to complicated automata. His reference to you then appears explicitly. He says, 'I am a little twisting a logical theorem, but it's a perfectly good logical theorem.
It's a theorem of Gödel that the next logical step, the description of an object, is one class type higher than the object and is therefore asymptotically [?] infinitely longer to describe.'

"He returns to this point later after discussing Turing machines and mentioning Turing's result about the undecidability of the halting problem. He then says that all of this is connected with the theory of types and with your results. The recording transcript is mangled at this point and I will reconstruct it as best I can. 'It is connected with the theory of types and with the results of Gödel. The feature is just this, that you can perform within the logical type that's involved everything that's feasible but the question of whether something is feasible in a type belongs to a higher logical type. It's connected with the remark I made earlier: that it is characteristic of objects of low complexity that it is easier to talk about the object than produce it and easier to predict its properties than to build it. But in the complicated parts of formal logic it is always one order of magnitude harder to tell what an object can do than to produce the object. The domain of the validity of the question is of a higher type than the question itself.' I enclose copies of the relevant pages of the Illinois lectures.

"It is easy to regard the description of an object as of one type level higher than the object itself, but beyond this I do not see what von Neumann has in mind. Two possibilities occurred to me but both give a result opposite to that which Johnny needs. One may regard a Gödel number as a description of a formula. However, in some cases at least, the Gödel number of a formula may be described in fewer symbols than the formula, else the self-referring undecidable formula could not exist. 4 The other possibility concerns a theorem in your 1936 paper, "Über die Länge der Beweise." Given a system S and a larger system S₁.
The theorem says that for every recursive function F, there exists a sentence which is provable in both systems and such that the shortest proofs in these two systems satisfy the inequality that the Gödel number of the proof in the smaller system is larger than the recursive function F applied to the Gödel number of the proof in the larger system. This fits everything that von Neumann says except that the result seems to go in the opposite direction: namely, the higher the type the shorter the proof.

"I would appreciate very much any light that you could throw on these puzzling passages of von Neumann."

Professor Gödel replied as follows. "I have some conjecture as to what von Neumann may have had in mind in the passages you quote, but since I never discussed these matters with him it is only a guess.

"I think the theorem of mine which von Neumann refers to is not that on the existence of undecidable propositions or that on the lengths of proofs but rather the fact that a complete epistemological description of a language A cannot be given in the same language A, because the concept of truth of sentences of A cannot be defined in A. It is this theorem which is the true reason for the existence of undecidable propositions in the formal systems containing arithmetic. I did not, however, formulate it explicitly in my paper of 1931 but only in my Princeton lectures of 1934. 5 The same theorem was proved by Tarski in his paper on the concept of truth published in 1933 in Act. Soc. Sci. Lit. Vars., translated on pp. 152-278 of Logic, Semantics, and Metamathematics. 6

"Now this theorem certainly shows that the description of what a mechanism is doing in certain cases is more involved than the description of the mechanism, in the sense that it requires new and more abstract primitive terms, namely higher types. However, this implies nothing as to the number of symbols necessary, where the relationship may very well be in the opposite direction, as you rightly remark.

"However, what von Neumann perhaps had in mind appears more clearly from the universal Turing machine. There it might be said that the complete description of its behavior is infinite because, in view of the non-existence of a decision procedure predicting its behavior, the complete description could be given only by an enumeration of all instances. Of course this presupposes that only decidable descriptions are considered to be complete descriptions, but this is in line with the finitistic way of thinking. The universal Turing machine, where the ratio of the two complexities is infinity, might then be considered to be a limiting case of other finite mechanisms. This immediately leads to von Neumann's conjecture."]

4 [See Gödel's "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I." The undecidable formula has the Gödel number n and says 'The formula whose Gödel number is n is not a theorem.' Thus, via Gödel's coding, the undecidable formula refers to itself. It is undecidable in the sense that neither it nor its negation is a theorem of the system Gödel is studying.]
5 [Gödel, "On Undecidable Propositions of Formal Mathematical Systems."]
6 [The exact reference to Tarski's paper was added later.]

Third Lecture

STATISTICAL THEORIES OF INFORMATION

Theory of information: Probabilistic part. Relationship of strict and of probabilistic logics. Keynes' interpretation of probability theory. Exemplification of the relationship of logics to strict classical mechanics on the one hand, and to statistical mechanics on the other. Corresponding situation in quantum mechanics. The mathematical aspects of the transition from strict to probabilistic logics. Analysis and combinatorics. The thermodynamical aspect: Information and entropy. The theory of Szilard. The theory of Shannon.
Additional remarks on the thermodynamical nature of the internal balance of a computing machine.

I conclude my remarks about strict and rigorous questions of information at this point and pass on to statistical considerations involving information. That this is the important thing in dealing with automata and their functions is fairly evident, for two reasons at least. The first of these reasons may seem somewhat extraneous and accidental, although I think it is not, but the second reason is certainly not.

The first reason is that in no practical way can we imagine an automaton which is really reliable. If you axiomatize an automaton by telling exactly what it will do in every completely defined situation you are missing an important part of the problem. The axiomatization of automata for the completely defined situation is a very nice exercise for one who faces the problem for the first time, but everybody who has had experience with it knows that it's only a very preliminary stage of the problem.

The second reason for the importance of statistical considerations in the theory of automata is this. If you look at automata which have been built by men or which exist in nature you will very frequently notice that their structure is controlled only partly by rigorous requirements and is controlled to a much larger extent by the manner in which they might fail and by the (more or less effective) precautionary measures which have been taken against their failure. And to say that they are precautions against failure is to overstate the case, to use an optimistic terminology which is completely alien to the subject. Rather than precautions against failure, they are arrangements by which it is attempted to achieve a state where at least a majority of all failures will not be lethal. There can be no question of eliminating failures or of completely paralyzing the effects of failures.
All we can try to do is to arrange an automaton so that in the vast majority of failures it can continue to operate. These arrangements give palliatives of failures, not cures. Most of the arrangements of artificial and natural automata and the principles involved therein are of this sort.

To permit failure as an independent logical entity means that one does not state the axioms in a rigorous manner. The axioms are not of the form: if A and B happen, C will follow. The axioms are always of this variety: if A and B happen, C will follow with a certain specified probability, D will follow with another specified probability, and so on. In other words, in every situation several alternatives are permitted with various probabilities. Mathematically it is simplest to say that anything can follow upon anything in accordance with a probability matrix. You may put your question in this manner: If A and B have happened, what is the probability that C will follow? This probability pattern gives you a probabilistic system of logics. Both artificial and natural automata should be discussed in this system as soon as there is any degree of involvement. 1 I will come later to the question as to why it is just complexity which pushes one into this kind of axiomatization instead of a strict one. 2

Now this inclines one to view probability as a branch of logics, or rather, to view logics affected with probability as an extension of ordinary rigorous logics. The view that probability is an extension of logics is not trivial, is not generally accepted, and is not the major interpretation of probability. It is, however, the classical interpretation. The competing interpretation is the frequency interpretation, the attitude that logic is completely rigorous, and with respect to phenomena about which you are not completely informed, you can only make statements of frequencies.
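The probability-matrix axiomatization described above can be sketched concretely as a stochastic matrix over the states of a system. The three states and the numbers below are invented for illustration, not taken from the lecture.

```python
# States of the system; the names are illustrative only.
states = ["A_and_B", "C", "D"]

# A probabilistic axiom of the variety described above: "if A and B
# happen, C follows with probability 0.9, D with probability 0.1",
# plus rows for the other states. Every row sums to 1, so the table
# is a stochastic (probability) matrix.
P = {
    "A_and_B": {"A_and_B": 0.0, "C": 0.9, "D": 0.1},
    "C":       {"A_and_B": 0.2, "C": 0.7, "D": 0.1},
    "D":       {"A_and_B": 0.5, "C": 0.0, "D": 0.5},
}

def step(dist):
    """One application of the matrix: propagate a probability
    distribution over states -- the probabilistic analogue of drawing
    one strict inference."""
    out = {s: 0.0 for s in states}
    for s, p in dist.items():
        for t, q in P[s].items():
            out[t] += p * q
    return out

# "If A and B have happened, what is the probability that C will follow?"
dist = step({"A_and_B": 1.0})
print(dist["C"])  # 0.9
```

A strict axiom "if A and B happen, C will follow" is the degenerate case in which one entry of a row is 1 and the rest are 0, which is the sense in which this system extends rigorous logics.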
1 [See von Neumann's "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" for a detailed treatment of automata from this point of view.]
2 [For a given probability of malfunction of a component, the more complex the automaton the more likely it is that a lethal failure will occur.]

This distinction was, I think, quite clear to Laplace, who pointed out that there are two possible attitudes toward probability: the frequency and the logical. 3 In more recent times the distinction was emphasized strongly and made the basis of a system by the economist Keynes, who wrote his thesis on probability. 4 He analyzed this question in considerable detail and showed that, aside from the more conventional frequency viewpoint about probability, the logical one also exists. But he made no attempt to separate strict logics and probability and simply said that, if you view a sequence of events A and B, they have the quantitative characteristic, "the probability with which B follows A." The only tie to strict logics is that when the probability is one you have an implication, and when the probability is zero you have an exclusion, and when the probability is close to one or close to zero you can still make those inferences in a less rigorous domain.

There are undeniable weaknesses of the logical position. In some ways of looking at probability it is opportune not to identify zero probability with absurdity. Also, it is not quite clear in what sense a low probability means that one might expect that the thing will not happen. However, Keynes produced a self-consistent axiomatic system. There's a great deal in other modern theories, for instance, in quantum mechanics, which inclines one very strongly to take this philosophical position, although the last word about this subject has certainly not been said and is not going to be said for a long time.
Anyway, one is also tempted in the case of quantum mechanics to modify one's outlook on logics and to view probability as intrinsically tied to logics. 5

3 [A Philosophical Essay on Probabilities.]
4 [A Treatise on Probability.]
5 [In his "Quantum Logics (Strict-and-Probability-Logics)," von Neumann concluded: "Probability logics cannot be reduced to strict logics, but constitute an essentially wider system than the latter, and statements of the form P(a, b) = θ (0 < θ < 1) are perfectly new and sui generis aspects of physical reality.
"So probability logics appear as an essential extension of strict logics. This view, the so-called 'logical theory of probability,' is the foundation of J. M. Keynes's work on this subject." Compare von Neumann and Birkhoff, "The Logic of Quantum Mechanics," and von Neumann and Morgenstern, Theory of Games and Economic Behavior, Sec. 3.3.3.]

[Von Neumann discussed next two theories of probability and information "which are quite relevant in this context although they are not conceived from the strictly logical point of view." The first is the theory of entropy and information in thermodynamics; the second is Shannon's information theory.

In connection with entropy and information von Neumann referred to Boltzmann, Hartley, and Szilard. He explained at length the paradox of Maxwell's demon and how Szilard resolved it by working out the relation of entropy to information. 6 Von Neumann said that Shannon's theory is a quantitative theory of measuring the capacity of a communications channel. He explained and illustrated the concept of redundancy. He pointed out that redundancy makes it possible to correct errors, for example, to read proof. Redundancy "is the only thing which makes it possible to write a text which is longer than, say, ten pages.
In other words, a language which has maximum compression would actually be completely unsuited to conveying information beyond a certain degree of complexity, because you could never find out whether a text is right or wrong. And this is a question of principle. It follows, therefore, that the complexity of the medium in which you work has something to do with redundancy."

In his review of Wiener's Cybernetics von Neumann made an extended statement about entropy and information which it is appropriate to quote here. "Entropy for the physicist is a concept belonging to the discipline of thermodynamics where the transformations among the various forms of energy are studied. It is well known that the total energy of a complete, closed system is always conserved: energy is neither created nor lost but only transformed. This constitutes the first fundamental theorem of thermodynamics, or the energy theorem. There is, however, in addition, the second fundamental theorem of thermodynamics, or entropy theorem, which states that a hierarchy exists among the forms of energy: mechanical (kinetic or potential) energy, constituting the highest form, thermal energies constituting under it a decreasing hierarchical sequence in the order of decreasing temperature, and all other forms of energy permitting a complete classification relative to the gradations of this schema. It states, furthermore, that energy is always degraded, that is, that it always moves spontaneously from a higher form to a lower one, or if the opposite should happen in a part of the system, a compensatory degradation will have to take place in some other part. The bookkeeping that is required to account for this continuing overall degradation is effected by a certain well defined physical quantity, the entropy, which measures the hierarchic position held or the degree of degradation suffered by any form of energy.
"The thermodynamical methods of measuring entropy were known in the mid-nineteenth century. Already in the early work on statistical physics (L. Boltzmann, 1896) it was observed that entropy was closely 6 [ There is a good exposition of the work of Szilard, as well as that of Shan- non and Hamming, in Brillouin's Science and Information Theory.] STATISTICAL THEORIES OF INFORMATION 61 connected with information: Boltzmann found entropy to be propor- tional to the logarithm of the number of alternatives which are possi- ble for a physical system after all the information that one possesses about that system macroscopically (that is, on the directly, humanly observable scale) has been recorded. 7 In other words, it is proportional to the logarithm of the amount of missing information. This concept was elaborated further by various authors for various applications: H. Nyquist and R. V. L. Hartley, for transmission of information in the technical communication media (Bell System Technical Journal, Vol. 3, 1924, and Vol. 7, 1928) ; L. Szilard, for information in physics in general (Zschr. f. Phys., Vol. 53, 1929) ; and the reviewer, for quan- tum mechanics and elementary particle physics (Mathematical Foundations of Quantum Mechanics, Berlin, 1932, Chapter V). "The technically well-equipped reader is advised to consult at this point some additional literature, primarily L. Szilard's work, referred to above, which also contains a particularly instructive analysis of the famous thermodynamical paradox of "Maxwell's demon, " and C. E. Shannon's very important and interesting recent work on the "Theory of Information, " "Artificial Languages," "Codes," etc. (Bell System Technical Journal, Vol. 27, 1948). There is reason to believe that the general degeneration laws, which hold when entropy is used as a measure of the hierarchic position of energy, have valid analogs when entropy is used as a measure of information. 
On this basis one may suspect the existence of connections between thermodynamics and new extensions of logics." In the Illinois lectures von Neumann next discussed Hamming's work on error-detecting and error-correcting codes. He then showed how the digital system with a base (binary, decimal, etc.) is an application of information theory. "Digitalization is just a very clever trick to produce extreme precision out of poor precision. By writing down 30 binary digits with 30 instruments, each of which is only good enough that you can distinguish two states of it (with intrinsic errors maybe on the 10 per cent level), you can represent a number to approximately one part in a billion. The main virtue of the digital system is that we know no other trick which can achieve this. From the information point of view it is clear that this can be done, because the entropy in 30 binary instruments is 30 units, and something which is known to one part in a billion has an entropy of the logarithm of a billion (to the base two), or about 30 units." He then pointed out that while organisms use mixed analog-pulse systems for transmitting information, they never (to the best of our knowledge) use a coded digital system with a base. Rather, when "the nervous system transmits a number, it transmits it by what is essentially a frequency modulation trick and not as a coded digital aggregate." He suggested that the reason for this is that the frequency modulation method is more reliable than the digital system.] [Footnote 7: Vorlesungen über Gastheorie, Vol. I, Sec. 6. Boltzmann's result appeared originally in 1877 in "Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung respektive den Sätzen über das Wärmegleichgewicht," Wissenschaftliche Abhandlungen, Vol. II, pp. 164-223.] 
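Von Neumann's digitalization arithmetic can be checked directly. The following sketch is an editorial illustration, not part of the lectures; it reproduces the two numbers quoted above:

```python
import math

# 30 two-state instruments, each individually poor (errors "maybe on the
# 10 per cent level"), jointly distinguish 2^30 alternatives -- a little
# over one part in a billion.
alternatives = 2 ** 30
print(alternatives)        # 1073741824

# Conversely, a quantity known to one part in a billion carries
# log2(10^9) binary units of entropy -- "about 30 units" in the text.
bits = math.log2(1e9)
print(round(bits, 1))      # 29.9
```

The precision of the whole is thus exponential in the number of instruments, while the precision of each instrument stays fixed; this is the "very clever trick" the passage describes.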
I have been trying to justify the suspicion that a theory of information is needed and that very little of what is needed exists yet. Such small traces of it as do exist, and such information as one has about adjacent fields, indicate that, if found, it is likely to be similar to two of our existing theories: formal logics and thermodynamics. It is not surprising that this new theory of information should be like formal logics, but it is surprising that it is likely to have a lot in common with thermodynamics. Though this new theory of information will be similar to formal logics in many respects, it will probably be closer to ordinary mathematics than formal logics is. The reason for this is that present day formal logics has a very un-analytical, un-mathematical characteristic: it deals with absolutely all-or-none processes, where everything that either does or does not happen is finitely feasible or not finitely feasible. These all-or-none processes are only weakly connected to analysis, which is the best developed and best known part of mathematics, while they are closely connected to combinatorics, that part of mathematics of which we know the least. There is reason to believe that the kind of formal logical machinery we will have to use here will be closer to ordinary mathematics than present day logics is. Specifically, it will be closer to analysis, because all axioms are likely to be of a probabilistic and not of a rigorous character. Such a phenomenon has taken place in the foundations of quantum mechanics. Thermodynamical concepts will probably enter into this new theory of information. There are strong indications that information is similar to entropy and that degenerative processes of entropy are paralleled by degenerative processes in the processing of information. 
It is likely that you cannot define the function of an automaton, or its efficiency, without characterizing the milieu in which it works by means of statistical traits like the ones used to characterize a milieu in thermodynamics. The statistical variables of the automaton's milieu will, of course, be somewhat more involved than the standard thermodynamical variable of temperature, but they will probably be similar in character. Also, it is quite clear from the practice of building computing machines that the decisive properties of computing machines involve balance: balances between the speeds of various parts, balances between the speed of one part and the sizes of other parts, even balances between the speed ratio of two parts and the sizes of other parts. I mentioned this in the case of the hierarchic structure of memory [p. 41]. All of these requirements look like the balance requirements one makes in thermodynamics for the sake of efficiency. An automaton in which one part is too fast for another part, or where the memory is too small, or where the speed ratio of two memory stages is too large for the size of one, looks very much like a heat engine which doesn't run properly because excessively high temperature differences exist between its parts. I will not go into the details of this, but I would like to emphasize that this thermodynamical link is probably quite a close one. Fourth Lecture THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future. Estimates of size for the human central nervous system. Excursus about the "mixed" character of living organisms. Analog and digital elements. Observations about the "mixed" character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these. 
Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used. The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires. Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms — that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features. Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The "single error" principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution. After the broad general discussions of the last two lectures I would like to return to the subject of the specific automata which we know. I would like to compare artificial automata, specifically computing machines, with natural automata, particularly the human nervous system. In order to do this, I must say a few things in both cases about components and I must make certain comparisons of sizes. As I mentioned before, in estimating the size of the human nervous system one is limited to a figure which is not very well established, but which is probably right in its order of magnitude. This is the statement that there are 10^10 neurons in the human brain. The number of nerves present elsewhere in the human organism is probably much smaller than this. Also, a large number of these other nerves originate in the brain anyway. 
The largest aggregation of nerves of the periphery is on the retina, and the optic nerve going from the retina to the brain is part of the brain. Compared to this, the number of vacuum tubes involved in the computing machines we know of is very small, a million times smaller. The largest existing computing machine, the ENIAC, has 2 × 10^4 vacuum tubes. Another large computing machine, the SSEC, which belongs to the IBM Company, contains a mixture of vacuum tubes and relays, about 10 thousand of each. The fastest computing machines now under construction are designed to have several thousand vacuum tubes, perhaps 3 thousand. The reason for this difference in size between the ENIAC and the fast machines now under construction is a difference in the treatment of memory, which I will discuss later. So the human nervous system is roughly a million times more complicated than these large computing machines. The increase in complexity from these computing machines to the central nervous system is more than the increase in complexity from a single vacuum tube to these computing machines. Even measuring complexity on a logarithmic scale, which is highly generous, we have not yet come half the way. I think that in any sensible definition of complexity, it would be much less than half way. There is, however, a factor in favor of these machines: they're faster than the human brain. The time in which a human nerve can respond is about ½ millisecond. However, that time is not a fair measure of the speed of the neuron, because what matters is not the time in which the neuron responds, but the time in which it recovers, the time from one response to the next potential response. That time is, at best, 5 milliseconds. In the case of a vacuum tube it's difficult to estimate the speed, but present designs call for repetition rates which are not much in excess of a million per second. 
Thus the nervous system has a million times as many components as these machines have, but each component of the machine is about 5 thousand times faster than a neuron. Counting what can be done, hour by hour, the nervous system outperforms the machine by a factor of roughly 200. This estimate, however, favors the automaton, because an n-fold increase in size brings much more than an n-fold increase in what can be done. What can be done is a matter of the interrelationships between the components, and the number of interrelationships increases with the square of the number of components. And apart from this, what can be done depends on certain minima. Below a certain minimum level of complexity you cannot do a certain thing, but above this minimum level of complexity you can do it. [Von Neumann next compared the human nervous system and computers with respect to volume. The decisive factor is the space in which the control and amplifying functions are performed. In the case of the vacuum tube this is essentially the space between the cathode and the control grid, which is of the order of magnitude of a millimeter. In the case of the nerve cell it is the thickness of the nerve membrane, which is of the order of 1 micron. The ratio in size is about 1000 to 1, and this is also the ratio in voltage, so that the intensity of the field which is used for control and amplification is about the same in the vacuum tube and the nerve cell. This means that differences in total energy dissipation are mainly due to differences in size. "A discrepancy of 10^3 in linear size means a discrepancy of 10^9 in volume, and probably a not very different discrepancy in energy." See also Collected Works 5.299-302 and The Computer and the Brain 44-52. 
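The factor of roughly 200 quoted above follows directly from the two ratios von Neumann uses. This sketch is an editorial illustration with the round numbers from the text (assumed figures, not measurements):

```python
# Throughput scales as (number of components) x (rate per component),
# so the ratio of what the two systems can do per hour is the ratio
# of the two factors the lecture quotes.
size_factor = 1e6     # nervous system has ~10^6 times as many components
speed_factor = 5e3    # a vacuum tube is ~5,000 times faster than a neuron
                      # (one response per 5 ms recovery vs ~10^6 ops/s)

throughput_ratio = size_factor / speed_factor
print(throughput_ratio)   # 200.0 -- the nervous system leads by ~200x
```

As the text notes, this simple count still favors the machine, since capability grows faster than linearly in the number of components.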
He then calculated the energy which is dissipated "per elementary act of information, that is, per elementary decision of a two-way alternative and per elementary transmittal of 1 unit of information." He did this for three cases: the thermodynamical minimum, the vacuum tube, and the neuron. In the third lecture he said that thermodynamical information is measured by the logarithm, to the base two, of the number of alternatives involved. The thermodynamical information in the case of two alternatives is thus one, "except that this is not the unit in which you measure energy. Entropy is energy only if you specify the temperature. So, running at low temperature you can say what energy should be dissipated." He then computed the thermodynamical minimum of energy per elementary act of information from the formula kT log_e N ergs, where k is Boltzmann's constant (1.4 × 10^-16 ergs per degree), T is the temperature in absolute units, and N is the number of alternatives. For a binary act N = 2, and taking the temperature to be about 300 degrees absolute, he obtained 3 × 10^-14 ergs for the thermodynamical minimum. Von Neumann then estimated that the brain dissipates 25 watts, has 10^10 neurons, and that on the average a neuron is activated about 10 times per second. Hence the energy dissipation per binary act in a nerve cell is roughly 3 × 10^-3 ergs. He estimated that a vacuum tube dissipates 6 watts, is activated about 100,000 times per second, and thus dissipates 6 × 10^2 ergs per binary act.] So our present machinery is about 200 thousand times less efficient than the nervous system is. Computing machines will be improved in the next few years, perhaps by replacing vacuum tubes with amplifying crystals, but even then they will be of the order of 10 thousand times less efficient than the nervous system. 
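The three energy estimates can be reproduced from the figures quoted in the passage above. This sketch is an editorial illustration using the text's rough values, in cgs units (1 watt = 10^7 ergs per second); it also exhibits the gap of about 10^11 between the neuron and the thermodynamical minimum:

```python
import math

k = 1.4e-16                    # Boltzmann's constant, ergs per degree
T = 300                        # temperature, degrees absolute

# Thermodynamical minimum per binary act: kT log_e N with N = 2.
e_min = k * T * math.log(2)    # ~3 x 10^-14 ergs

# Neuron: 25 W brain = 25e7 erg/s, spread over 10^10 neurons
# each activated ~10 times per second.
e_neuron = 25e7 / (1e10 * 10)  # 2.5e-3, i.e. ~3 x 10^-3 ergs

# Vacuum tube: 6 W = 6e7 erg/s, ~10^5 activations per second.
e_tube = 6e7 / 1e5             # 6 x 10^2 ergs

print(e_min, e_neuron, e_tube)
print(e_neuron / e_min)        # ~10^11, the unexplained gap
```

The ratio e_tube / e_neuron also recovers the "200 thousand times less efficient" figure for present machinery.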
The remarkable thing, however, is the enormous gap between the thermodynamical minimum (3 × 10^-14 ergs) and the energy dissipation per binary act in the neuron (3 × 10^-3 ergs). The factor here is 10^11. This shows that the thermodynamical analysis is missing a large part of the story. Measured on a logarithmic scale, the gap between our instrumentation, which is obviously amateurish, and the procedures of nature, which show a professional touch, is about half the gap between the best devices we know about and the thermodynamical minimum. What this gap is due to I don't know. I suspect that it's due to something like a desire for reliability of operation. Thus, for an elementary act of information, nature does not use what, from the point of view of physics, is an elementary system with two stable states, such as a hydrogen atom. All the switching organs used are much larger. If nature really operated with these elementary systems, switching organs would have dimensions of the order of a few angstroms, while the smallest switching organs we know have dimensions of the order of thousands or tens of thousands of angstroms. There is obviously something which forces one to use organs several orders of magnitude larger than is required by the strict thermodynamical argument. Thus, though the observation that information is entropy tells an important part of the story, it by no means tells the whole story. There is a factor of 10^11 still to be accounted for. [Von Neumann then discussed memory components. Vacuum tubes, which are switching organs, may be used for memory. But since the standard circuit for storing a binary digit has two tubes, and additional tubes are needed for transmitting the information in and out, it is not feasible to build a large memory out of vacuum tubes. 
"The actual devices which are used are of such a nature that the store is effected, not in a macroscopic object like a vacuum tube, but in something which is microscopic and has only a virtual existence." Von Neumann describes two devices of this sort: acoustic delay line storage and cathode ray tube storage. An acoustic delay line is a tube which is filled with a medium such as mercury and which has a piezo-electric crystal at each end. When the transmitting crystal is stimulated electrically, it produces an acoustic wave that travels through the mercury and causes the receiving crystal to produce an electrical signal. This signal is amplified, reshaped, and retimed and sent to the transmitting crystal again. This acoustic-electrical cycle can be repeated indefinitely, thereby providing storage. A binary digit is represented by the presence or absence of a pulse at a given position at a given time, and since the pulses circulate around the system, the digit is not stored in any fixed position. "The thing which remembers is nowhere in particular." Information may be stored in a cathode ray tube in the form of electric charges on the inside surface of the tube. A binary digit is represented by the charge stored in a small area. These charges are deposited and sensed by means of the electron beam of the cathode ray tube. Since the area associated with a given binary digit must be recharged frequently, and since this area may be moved by changing the position of the electron beam, this memory is also virtual. "The site of the memory is really nowhere organically, and the mode of control produces the memory organ in a virtual sense, because no permanent physical changes ever occur."] There's therefore no reason to believe that the memory of the central nervous system is in the switching organs (the neurons). The size of the human memory must be very great, much greater than 10^10 binary units. 
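The recirculating character of delay-line storage ("the thing which remembers is nowhere in particular") can be mimicked with a toy model. This sketch is an editorial illustration with an arbitrary 8-bit word; it treats the mercury column as a queue of bits in flight:

```python
from collections import deque

# Bits circulate through the medium; each tick, the bit arriving at the
# receiving crystal is amplified, reshaped, retimed, and re-injected at
# the transmitting crystal.  No fixed cell ever holds a given digit.
line = deque([1, 0, 1, 1, 0, 0, 1, 0])   # eight pulses "in flight"

def tick(line):
    bit = line.popleft()   # pulse arrives at the receiving end
    line.append(bit)       # regenerated and fed back to the transmitter

# After one full circulation the stored word is intact.
for _ in range(len(line)):
    tick(line)
print(list(line))          # [1, 0, 1, 1, 0, 0, 1, 0]
```

Reading or writing a digit means waiting for its position to come around, which is why access to such a memory is serial rather than random.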
If you count the impressions which a human gets in his life or other things which appear to be critical, you obtain numbers like 10^15. One cannot place much faith in these estimates, but I think it likely that the memory capacity of the human nervous system is greater than 10^10. I don't know how legitimate it is to transfer our experience with computing machines to natural systems, but if our experience is worth anything it is highly unlikely that the natural memory should be in switching organs or should consist of anything as unsophisticated and crude as the modification of a switching organ. It has been suggested that memory consists in a change of threshold at a synapse. I don't know if this is true, but the memory of computing machines does not consist of bending a grid. A comparison between artificial automata and the central nervous system makes it probable that the memory of the latter is more sophisticated and more virtual than this. Therefore, I think that all guesses about what the memory of the human organism is, and where it sits, are premature. Another thing of which I would like to talk is this. I have been talking as if a nerve cell were really a pure switching organ. It has been pointed out by many experts in neurology and adjacent fields that the nerve cell is not a pure switching organ but a very delicate continuous organ. In the lingo of computing machinery one would say it is an analog device that can do vastly more than transmit or not transmit a pulse. There is a possible answer to this, namely, that vacuum tubes, electromechanical relays, etc. are not switching devices either, since they have continuous properties. They are all characterized by this, however, that there is at least one way to run them where they have essentially an all-or-none response. What matters is how the component runs when the organism is functioning normally. Now nerve cells do not usually run as all-or-none organs. 
For instance, the method of translating a stimulus intensity into a frequency of response depends on fatigue and the time of recovery, which is a continuous or analog response. However, it is quite clear that the all-or-none character of a neuron is a very important part of the story. The human organism is not a digital organ either, though one part of it, the nervous system, is essentially digital. Almost all the nervous stimuli end in organs which are not digital, such as a contracting muscle or an organ which causes secretions to produce a chemical. To control the production of a chemical and rely on the diffusion rate of a chemical is to employ a much more sophisticated analog procedure than we ever use in analog computing machines. The most important loops in the human system are of this nature. A system of nervous stimuli goes through a complicated network of nerves and then controls the operation of what is essentially a chemical factory. The chemicals are distributed by a very complicated hydrodynamical system, which is completely analog. These chemicals produce nervous stimuli which travel in a digital manner through the nervous system. There are loops where this change from digital into analog occurs several times. So the human organism is essentially a mixed system. But this does not decrease the necessity for understanding the digital part of it. Computing machines aren't purely digital either. The way we run them now, their inputs and outputs are digital. But it's quite clear that we need certain non-digital inputs and outputs. It's frequently desirable to display the result, not in digits, but, say, as a curve on an oscilloscope screen. This is an analog output. Moreover, I think that the important applications of these devices will come when you can use them to control complicated machinery, for example, the flight of a missile or of a plane. 
In this case the inputs will come from an analog source and the outputs will control an analog process. This whole trans-continuous alternation between digital and analog mechanisms is probably characteristic of every field. The digital aspect of automata should be emphasized at the present time, for we now have some logical tools to deal with digital mechanisms, and our understanding of digital mechanisms is behind our understanding of analog mechanisms. Also, it appears that digital mechanisms are necessary for complicated functions. Pure analog mechanisms are usually not suited for very complicated situations. The only way to handle a complicated situation with analog mechanisms is to break it up into parts and deal with the parts separately and alternately, and this is a digital trick. Let me now come to the following question. Our artificial automata are much smaller than natural automata in what they do and in the number of components they have, and they're phenomenally more expensive in terms of space and energy. Why is this so? It's manifestly hopeless to produce a true answer at the present time: We can hardly explain why two objects are different if we understand one a little and the other not at all. However, there are some obvious discrepancies in the tools with which we operate, which make it clear that we would have difficulty in going much further with these tools. The materials which we are using are by their very nature not well suited for the small dimensions nature uses. Our combinations of metals, insulators, and vacuums are much more unstable than the materials used by nature; that they have higher tensile strengths is completely incidental. If a membrane is damaged it will reconstruct itself, but if a vacuum tube develops a short between its grid and cathode it will not reconstruct itself. 
Thus the natural materials have some sort of mechanical stability and are well balanced with respect to mechanical properties, electrical properties, and reliability requirements. Our artificial systems are patchworks in which we achieve desirable electrical traits at the price of mechanically unsound things. We use techniques which are excellent for fitting metal to metal but are not very good for fitting metal to vacuum. To obtain millimeter spacings in an inaccessible vacuum space is a great mechanical achievement, and we will not be able to decrease the size by large factors here. And so the differences in size between artificial and natural automata are probably connected essentially with quite radical differences in materials. [Von Neumann proceeded to discuss what he thought was a deeper cause of the discrepancy in size between natural and artificial automata. This is that many of the components of the natural system serve to make the system reliable. As he noted in the third lecture, actual computing elements function correctly with a certain probability only, not with certainty. In small systems the probability that the whole system will behave incorrectly is relatively small and may often be neglected, but this is not the case with large systems. Thus error considerations become more important as the system becomes more complex. Von Neumann made some very rough calculations to justify this conclusion. Assuming that the system is designed in such a way that the failure of a single element would result in failure of the whole system, he calculated the error probability required for a given mean free path between system errors. For the human nervous system he used the following figures: 10^10 neurons; each neuron activated 10 times per second on the average; a mean free path between fatal errors of 60 years (the average life span). Since 60 years is about 2 × 10^9 seconds, the product of these numbers is 2 × 10^20. 
Hence an error probability of 0.5 × 10^-20 for each activation of an element is required under these assumptions. For a digital computer he used the figures: 5 × 10^3 vacuum tubes, 10^5 activations per tube per second, and a desired mean free path between system errors of 7 hours (about 2 × 10^4 seconds). An error probability of 10^-13 per tube activation is required for this degree of reliability. Compare the calculations at Collected Works 5.366-367. He pointed out that vacuum tubes, and artificial components generally, do not have an error probability as low as 10^-13, and that neurons probably do not either. We try to design computing machines so that they will stop when they make an error and the operator can then locate it and correct it. For example, a computer may perform a certain operation twice, compare the results, and stop if the results differ.] It's very likely that on the basis of the philosophy that every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond. Such a system is so well integrated that it can operate across errors. An error in it does not in general indicate a degenerative tendency. The system is sufficiently flexible and well organized that as soon as an error shows up in any part of it, the system automatically senses whether this error matters or not. If it doesn't matter, the system continues to operate without paying any attention to it. If the error seems to the system to be important, the system blocks that region out, by-passes it, and proceeds along other channels. The system then analyzes the region separately at leisure and corrects what goes on there, and if correction is impossible the system just blocks the region off and by-passes it forever. 
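The two required error probabilities follow from the figures quoted in the editorial summary above. This sketch is an editorial illustration using the text's rounded values, under the same single-error-is-fatal assumption:

```python
# With a single component error fatal to the whole system, the required
# per-activation error probability is the reciprocal of the total number
# of activations in one mean free path between system errors:
#   p = 1 / (components x activations_per_second x mean_free_path_seconds)

# Human nervous system: 10^10 neurons, ~10 activations/s each,
# mean free path ~60 years (~2 x 10^9 seconds).
p_neuron = 1 / (1e10 * 10 * 2e9)
print(p_neuron)   # 5e-21, i.e. 0.5 x 10^-20

# Digital computer: 5 x 10^3 tubes, 10^5 activations per tube per second,
# desired mean free path ~7 hours (~2 x 10^4 seconds).
p_tube = 1 / (5e3 * 1e5 * 2e4)
print(p_tube)     # 1e-13
```

The striking point is how close the two requirements are to physically unattainable values, which is what motivates the different error philosophy of natural automata discussed in the text.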
The duration of operability of the automaton is determined by the time it takes until so many incurable errors have occurred, so many alterations and permanent by-passes have been made, that finally the operability is really impaired. This is a completely different philosophy from the philosophy which proclaims that the end of the world is at hand as soon as the first error has occurred. To apply the philosophy underlying natural automata to artificial automata we must understand complicated mechanisms better than we do, we must have more elaborate statistics about what goes wrong, and we must have much more perfect statistical information about the milieu in which a mechanism lives than we now have. An automaton cannot be separated from the milieu to which it responds. By that I mean that it's meaningless to say that an automaton is good or bad, fast or slow, reliable or unreliable, without telling in what milieu it operates. The characteristics of a human for survival are well defined on the surface of the earth in its present state, though for most types of humans you must actually specialize the situation a little further than this. But it is meaningless to argue how the human would survive on the bottom of the ocean or in a temperature of 1000 degrees centigrade. Similarly, in discussing a computing machine it is meaningless to ask how fast or how slow it is, unless you specify what type of problems will be given to it. It makes an enormous difference whether a computing machine is designed, say, for more or less typical problems of mathematical analysis, or for number theory, or combinatorics, or for translating a text. We have an approximate idea of how to design a machine to handle the typical general problems of mathematical analysis. I doubt that we will produce a machine which is very good for number theory except on the basis of our present knowledge of the statistical properties of number theory. 
I think we have very little idea as to how to design good machines for combinatorics and translation. What matters is that the statistical properties of problems of mathematical analysis are reasonably well known, and as far as we know, reasonably homogeneous. Consider some problems in mathematical analysis which look fairly different from each other and which by mathematical standards are very different: finding the roots of an equation of the tenth order, inverting a matrix of the twentieth order, solving a proper value problem, solving an integral equation, or solving an integral differential equation. These problems are surprisingly homogeneous with respect to the statistical properties which matter for a computing machine: the fraction of multiplications to other operations, the number of memory references per multiplication, and the optimal hierarchic structure of the memory with respect to access time. There's vastly less homogeneity in number theory. There are viewpoints under which number theory is homogeneous, but we don't know them. So, it is true for all these automata that you can only assign them a value in combination with the milieu which they have to face. Natural automata are much better suited to their milieu than any artifacts we know. It is therefore quite possible that we are not too far from the limits of complication which can be achieved in artificial automata without really fundamental insights into a theory of information, although one should be very careful with such statements because they can sound awfully ridiculous 5 years later. [Von Neumann then explained why computing machines are designed to stop when a single error occurs. The fault must be located and corrected by the engineer, and it is very difficult for him to localize a fault if there are several of them. If there is only one fault he can often divide the machine into two parts and determine which part made the error. 
This process can be repeated until he isolates the fault. This general method becomes much more complicated if there are two or three faults, and breaks down when there are many faults.] The fact that natural organisms have such a radically different attitude about errors and behave so differently when an error occurs is probably connected with some other traits of natural organisms, which are entirely absent from our automata. The ability of a natural organism to survive in spite of a high incidence of error (which our artificial automata are incapable of) probably requires a very high flexibility and ability of the automaton to watch itself and reorganize itself. And this probably requires a very considerable autonomy of parts. There is a high autonomy of parts in the human nervous system. This autonomy of parts of a system has an effect which is observable in the human nervous system but not in artificial automata. When parts are autonomous and able to reorganize themselves, when there are several organs each capable of taking control in an emergency, an antagonistic relation can develop between the parts so that they are no longer friendly and cooperative. It is quite likely that all these phenomena are connected. Fifth Lecture RE-EVALUATION OF THE PROBLEMS OF COMPLICATED AUTOMATA — PROBLEMS OF HIERARCHY AND EVOLUTION Analysis of componentry and analysis of integration. Although these parts have to appear together in a complete theory, the present state of our information does not justify this yet. The first problem: Reasons for not going into it in detail here. Questions of principle regarding the nature of relay organs. The second problem: Coincides with a theory of information and of automata. Reconsideration of the broader program regarding a theoretical discussion of automata as indicated at the end of the second lecture. Synthesis of automata. Automata which can effect such syntheses. The intuitive concept of "complication." 
Surmise of its degenerative character: In connection with descriptions of processes by automata and in connection with syntheses of automata by automata. Qualifications and difficulties regarding this concept of degeneracy. Rigorous discussion: Automata and their "elementary" parts. Definition and listing of elementary parts. Synthesis of automata by automata. The problem of self-reproduction. Main types of constructive automata which are relevant in this connection: The concept of a general instruction. The general constructive automaton which can follow an instruction. The general copying automaton. The self-reproducing combination. Self-reproduction combined with synthesis of other automata: The enzymatic function. Comparison with the known major traits of genetic and mutation mechanisms.

The questions on which I've talked so far all bear on automata whose operations are not directed at themselves, so that they produce results which are of a completely different character than themselves. This is obvious in each of the three cases I have referred to. It is evident in the case of a Turing automaton, which is a box with a finite number of states. Its outputs are modifications of another entity, which, for the sake of convenience, I call a punched tape. This tape is not itself an object which has states between which it can move of its own accord. Furthermore, it is not finite, but is assumed to be infinite in both directions. Thus this tape is qualitatively completely different from the automaton which does the punching, and so the automaton is working into a qualitatively different medium. This is equally true for the automata discussed by McCulloch and Pitts, which are made of units, called neurons, that produce pulses. The inputs and outputs of these automata are not the neurons but the pulses. It is true that these pulses may go to peripheral organs, thereby producing entirely different reactions.
But even there one primarily thinks, say, of feeding the pulses into motor or secretory organs, so it is still true that the inputs and outputs are completely different from the automaton itself. Finally, it is entirely true for computing machines, which can be thought of as machines which are fed, and emit, some medium like punched tape. Of course, I do not consider it essentially different whether the medium is a punched card, a magnetic wire, a magnetized metal tape with many channels on it, or a piece of film with points photographed on it. In all these cases the medium which is fed to the automaton and which is produced by the automaton is completely different from the automaton. In fact, the automaton doesn't produce any medium at all; it merely modifies a medium which is completely different from it. One can also imagine a computing machine with an output of pulses which are fed to control completely different entities. But again, the automaton is completely different from the electrical pulses it emits. So there's this qualitative difference. A complete discussion of automata can be obtained only by taking a broader view of these things and considering automata which can have outputs something like themselves. Now, one has to be careful what one means by this. There is no question of producing matter out of nothing. Rather, one imagines automata which can modify objects similar to themselves, or effect syntheses by picking up parts and putting them together, or take synthesized entities apart. In order to discuss these things, one has to imagine a formal set-up like this. Draw up a list of unambiguously defined elementary parts. Imagine that there is a practically unlimited supply of these parts floating around in a large container.
One can then imagine an automaton functioning in the following manner: It also is floating around in this medium; its essential activity is to pick up parts and put them together or, if aggregates of parts are found, to take them apart. This is an axiomatically shortened and simplified description of what an organism does. It's true that this view has certain limitations, but they are not fundamentally different from the inherent limitations of the axiomatic method. Any result one might reach in this manner will depend quite essentially on how one has chosen to define the elementary parts. It is a commonplace of all axiomatic methods that it is very difficult to give rigorous rules as to how one should choose the elementary parts, so that whether the choice of the elements was reasonable is a matter of common sense judgment. There is no rigorous description of what choice is reasonable and what choice is not. First of all, one may define parts in such numbers, and each of them so large and involved, that one has defined the whole problem away. If you choose to define as elementary objects things which are analogous to whole living organisms, then you obviously have killed the problem, because you would have to attribute to these parts just those functions of the living organism which you would like to describe or to understand. So, by choosing the parts too large, by attributing too many and too complex functions to them, you lose the problem at the moment of defining it. One also loses the problem by defining the parts too small, for instance, by insisting that nothing larger than a single molecule, single atom, or single elementary particle will rate as a part. In this case one would probably get completely bogged down in questions which, while very important and interesting, are entirely anterior to our problem.
We are interested here in organizational questions about complicated organisms, and not in questions about the structure of matter or the quantum mechanical background of valency chemistry. So, it is clear that one has to use some common sense criteria about choosing the parts neither too large nor too small. Even if one chooses the parts in the right order of magnitude, there are many ways of choosing them, none of which is intrinsically much better than any other. There is in formal logics a very similar difficulty: the whole system requires an agreement on axioms, and there are no rigorous rules on how axioms should be chosen, just the common sense rules that one would like to get the system one is interested in, and would not like to state in one's axioms either things which are really terminal theorems of the theory or things which belong to vastly anterior fields. For example, in axiomatizing geometry one should assume theorems from set theory, because one is not interested in how to get from sets to numbers, or from numbers to geometry. Again, one does not choose the more sophisticated theorems of analytic number theory as axioms of geometry, because one wants to cut in at an earlier point. Even if the axioms are chosen within the common sense area, it is usually very difficult to achieve an agreement between two people who have done this independently. For instance, in the literature of formal logics there are about as many notations as there are authors, and anybody who has used a notation for a few weeks feels that it's more or less superior to any other. So, while the choice of notations, of the elements, is enormously important and absolutely basic for an application of the axiomatic method, this choice is neither rigorously justifiable nor humanly unambiguously justifiable. All one can do is to try to submit a system which will stand up under common sense criteria.
I will give an indication of how one system can be constructed, but I want to emphasize very strongly how relatively I state this system. I will introduce as elementary units neurons, a "muscle," entities which make and cut fixed contacts, and entities which supply energy, all defined with about that degree of superficiality with which the formal theory of McCulloch and Pitts describes an actual neuron. If you describe muscles, connective tissues, "disconnecting tissues," and means of providing metabolic energy, all with this degree of schematization, you wind up with a system of elements with which you can work in a reasonably uncomplicated manner. You probably wind up with something like 10 or 12 or 15 elementary parts. By axiomatizing automata in this manner, one has thrown half of the problem out the window, and it may be the more important half. One has resigned oneself not to explain how these parts are made up of real things, specifically, how these parts are made up of actual elementary particles, or even of higher chemical molecules. One does not ask the most intriguing, exciting, and important question of why the molecules or aggregates which in nature really occur in these parts are the sort of things they are, why they are essentially very large molecules in some cases but large aggregations in other cases, why they always lie in a range beginning at a few microns and ending at a few decimeters. This is a very peculiar range for an elementary object, since it is, even on a linear scale, at least five powers of ten away from the sizes of really elementary entities. These things will not be explained; we will simply assume that elementary parts with certain properties exist.
The question that one can then hope to answer, or at least investigate, is: What principles are involved in organizing these elementary parts into functioning organisms, what are the traits of such organisms, and what are the essential quantitative characteristics of such organisms? I will discuss the matter entirely from this limited point of view. [At this point von Neumann made the remarks on information, logic, thermodynamics, and balance which now appear at the end of the Third Lecture. They are placed there because that is where von Neumann's detailed outline located them. Those remarks are relevant to the present discussion because the concept of complication which von Neumann introduced next belongs to information theory.] There is a concept which will be quite useful here, of which we have a certain intuitive idea, but which is vague, unscientific, and imperfect. This concept clearly belongs to the subject of information, and quasi-thermodynamical considerations are relevant to it. I know no adequate name for it, but it is best described by calling it "complication." It is effectivity in complication, or the potentiality to do things. I am not thinking about how involved the object is, but how involved its purposive operations are. In this sense, an object is of the highest degree of complexity if it can do very difficult and involved things. I mention this because when you consider automata whose normal function is to synthesize other automata from elementary parts (living organisms and such familiar artificial automata as machine tools), you find the following remarkable thing. There are two states of mind, in each of which one can put oneself in a minute, and in each of which we feel that a certain statement is obvious. But each of these two statements is the opposite or negation of the other! Anybody who looks at living organisms knows perfectly well that they can produce other organisms like themselves.
This is their normal function; they wouldn't exist if they didn't do this, and it's plausible that this is the reason why they abound in the world. In other words, living organisms are very complicated aggregations of elementary parts, and by any reasonable theory of probability or thermodynamics highly improbable. That they should occur in the world at all is a miracle of the first magnitude; the only thing which removes, or mitigates, this miracle is that they reproduce themselves. Therefore, if by any peculiar accident there should ever be one of them, from there on the rules of probability do not apply, and there will be many of them, at least if the milieu is reasonable. But a reasonable milieu is already a thermodynamically much less improbable thing. So, the operations of probability somehow leave a loophole at this point, and it is by the process of self-reproduction that they are pierced. Furthermore, it's equally evident that what goes on is actually one degree better than self-reproduction, for organisms appear to have gotten more elaborate in the course of time. Today's organisms are phylogenetically descended from others which were vastly simpler than they are, so much simpler, in fact, that it's inconceivable how any kind of description of the later, complex organism could have existed in the earlier one. It's not easy to imagine in what sense a gene, which is probably a low order affair, can contain a description of the human being which will come from it. But in this case you can say that since the gene has its effect only within another human organism, it probably need not contain a complete description of what is to happen, but only a few cues for a few alternatives. However, this is not so in phylogenetic evolution. That starts from simple entities, surrounded by an unliving amorphous milieu, and produces something more complicated.
Evidently, these organisms have the ability to produce something more complicated than themselves. The other line of argument, which leads to the opposite conclusion, arises from looking at artificial automata. Everyone knows that a machine tool is more complicated than the elements which can be made with it, and that, generally speaking, an automaton A, which can make an automaton B, must contain a complete description of B and also rules on how to behave while effecting the synthesis. So, one gets a very strong impression that complication, or productive potentiality in an organization, is degenerative: that an organization which synthesizes something is necessarily more complicated, of a higher order, than the organization it synthesizes. This conclusion, arrived at by considering artificial automata, is clearly opposite to our earlier conclusion, arrived at by considering living organisms. I think that some relatively simple combinatorial discussions of artificial automata can contribute to mitigating this dilemma. Appealing to the organic, living world does not help us greatly, because we do not understand enough about how natural organisms function. We will stick to automata which we know completely because we made them, either actual artificial automata or paper automata described completely by some finite set of logical axioms. It is possible in this domain to describe automata which can reproduce themselves. So at least one can show that on the site where one would expect complication to be degenerative it is not necessarily degenerative at all, and, in fact, the production of a more complicated object from a less complicated object is possible. The conclusion one should draw from this is that complication is degenerative below a certain minimum level. This conclusion is quite in harmony with other results in formal logics, to which I have referred a few times earlier during these lectures.1 We do not now know

1 [See the end of the Second Lecture.]
what complication is, or how to measure it, but I think that something like this conclusion is true even if one measures complication by the crudest possible standard, the number of elementary parts. There is a minimum number of parts below which complication is degenerative, in the sense that if one automaton makes another the second is less complex than the first, but above which it is possible for an automaton to construct other automata of equal or higher complexity. Where this number lies depends upon how you define the parts. I think that with reasonable definitions of parts, like those I will partially indicate later, which give one or two dozen parts with simple properties, this minimum number is large, in the millions. I don't have a good estimate of it, although I think that one will be produced before terribly long, but to do so will be laborious. There is thus this completely decisive property of complexity: there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive; in other words, syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself. Now, none of this can get out of the realm of vague statement until one has defined the concept of complication correctly. And one cannot define the concept of complication correctly until one has seen in greater detail some critical examples, that is, some of the constructs which exhibit the critical and paradoxical properties of complication. There is nothing new about this. It was exactly the same with conservation and non-conservation properties in physics, with the concepts of energy and entropy, and with other critical concepts.
The simplest mechanical and thermodynamic systems had to be discussed for a long time before the correct concepts of energy and entropy could be abstracted from them. [Von Neumann only briefly described the kinds of elements or parts he planned to use. There are neurons like those of McCulloch and Pitts. There are elements "that have absolutely no function except that they are rigid and produce a geometrical tie between their ends." Another kind of element is called a "motor organ" and a "muscle-like affair"; it contracts to zero length when stimulated. There is an organ which, when pulsed, "can either make or break a connection." He said that less than a dozen kinds of elements are needed. An automaton composed of these parts can catch other parts which accidentally come in contact with it; "it is possible to invent a system by which it can sense" what part it has caught. In June of 1948 von Neumann gave three lectures on automata at the Institute for Advanced Study to a small group of friends. He probably did this in preparation for the Hixon Symposium which took place in September of that year.2 These lectures contained the most detailed description of the parts of his self-reproducing automaton that I know of. For this reason, I have attempted to reconstruct, from the notes and memories of the audience, what he said about these parts and how they would function. Von Neumann described eight kinds of parts. All seem to have been symbolized with straight lines; inputs and outputs were indicated at the ends and/or the middle. The temporal reference frame was discrete, each element taking a unit of time to respond. It is not clear whether he intended this list to be complete; I suspect that he had not yet made up his mind on this point. Four of the parts perform logical and information processing operations.
A stimulus organ receives and transmits stimuli; it receives them disjunctively, that is, it realizes the truth-function "p or q." A coincidence organ realizes the truth-function "p and q." An inhibitory organ realizes the truth-function "p and not-q." A stimuli producer serves as a source of stimuli. The fifth part is a rigid member, from which a rigid frame for an automaton can be constructed. A rigid member does not carry any stimuli; that is, it is an insulated girder. A rigid member may be connected to other rigid members as well as to parts which are not rigid members. These connections are made by a fusing organ which, when stimulated, welds or solders two parts together. Presumably the fusing organ is used in the following way. Suppose point a of one girder is to be joined to point b of another girder. The active or output end of the fusing organ is placed in contact with points a and b. A stimulus into the input end of the fusing organ at time t causes points a and b to be welded together at time t + 1. The fusing organ can be withdrawn later. Connections may be broken by a cutting organ which, when stimulated, unsolders a connection. The eighth part is a muscle, used to produce motion. A muscle is normally rigid. It may be connected to other parts. If stimulated at time t it will contract to length zero by time t + 1, keeping all its connections. It will remain contracted as long as it is stimulated. Presumably muscles can be used to move parts and make connections in the following way. Suppose that muscle 1 lies between point a of one girder and point b of another girder, and muscle 2 lies between point a and the active end c of a fusing organ. When both muscles are stimulated, they will contract, thereby bringing points a, b, and c together.

2 ["The General and Logical Theory of Automata." Collected Works 5.288-328. It will be recalled that the Illinois lectures were delivered in December of 1949.]
When the fusing organ is stimulated, it will weld points a and b together. Finally, when the stimuli to the muscles are stopped, the muscles will return to their original length, at least one end of muscle 1 separating from the point ab. Von Neumann does not seem to have discussed the question of how the connections between muscles and other parts are made and broken. Von Neumann conceived of an automaton constructing other automata in the following manner. The constructing automaton floats on a surface, surrounded by an unlimited supply of parts. The constructing automaton contains in its memory a description of the automaton to be constructed. Operating under the direction of this description, it picks up the parts it needs and assembles them into the desired automaton. To do this, it must contain a device which catches and identifies the parts that come in contact with it. The June, 1948 lectures contain only a few remarks on how this device might operate. Two stimulus units protrude from the constructing automaton. When a part touches them, tests can be made to see what kind of part it is. For example, a stimulus organ will transmit a signal; a girder will not. A muscle might be identified by determining that it contracts when stimulated. Von Neumann intended to disregard the fuel and energy problem in his first design attempt. He planned to consider it later, perhaps by introducing a battery as an additional elementary part. Except for this addition, von Neumann's early model of self-reproduction deals with the geometrical-kinematic problems of movement, contact, positioning, fusing, and cutting, and ignores the truly mechanical and chemical questions of force and energy. Hence I call it his kinematic model of self-reproduction. This early model is to be contrasted with his later cellular model of self-reproduction, which is presented in Part II of the present work.
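The part-identification tests sketched in the reconstruction above can be summarized as a small decision procedure. Everything below (the `Part` record, the two probe attributes, the function name) is illustrative scaffolding of my own, not von Neumann's notation; it only captures the logic of stimulating an unknown caught part and classifying it by its response.

```python
# Hypothetical sketch of the constructing automaton's identification device:
# probe an unknown caught part and classify it by its observed response.
# Per the text: a stimulus organ transmits a signal, a girder (rigid member)
# does not, and a muscle contracts when stimulated.
from dataclasses import dataclass

@dataclass
class Part:
    transmits_stimulus: bool  # does a pulse at one end appear at the other?
    contracts: bool           # does its length go to zero when pulsed?

def identify(part: Part) -> str:
    """Classify a caught part using the two probe tests described above."""
    if part.transmits_stimulus:
        return "stimulus organ"
    if part.contracts:
        return "muscle"
    return "rigid member"  # no response to either probe
```

A richer classifier would need further probes (e.g., to separate the coincidence and inhibitory organs), which the 1948 notes do not spell out; the sketch stops where the text does.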
In his June, 1948 lectures von Neumann raised the question of whether kinematic self-reproduction requires three dimensions. He suspected that either three dimensions or a Riemann surface (multiply-connected plane) would be needed. We will see in Part II that only two dimensions are required for self-reproduction in von Neumann's cellular model. This is a strong indication that two dimensions are sufficient for kinematic self-reproduction. We return now to the Illinois lectures. Von Neumann discussed the general design of a self-reproducing automaton. [He said that it is in principle possible to set up a machine shop which can make a copy of any machine, given enough time and raw materials. This shop would contain a machine tool B with the following powers. Given a pattern or object X, it would search over X and list its parts and their connections, thereby obtaining a description of X. Using this description, the tool B would then make a copy of X. "This is quite close to self-reproduction, because you can furnish B with itself."] But it is easier, and for the ultimate purpose just as effective, not to construct an automaton which can copy any pattern or specimen given to it, but to construct an automaton which can produce an object starting from a logical description. In any conceivable method ever invented by man, an automaton which produces an object by copying a pattern will go first from the pattern to a description and then from the description to the object. It first abstracts what the thing is like, and then carries it out. It's therefore simpler not to extract from a real object its definition, but to start from the definition. To proceed in this manner one must have axiomatic descriptions of automata. You see, I'm coming quite close to Turing's trick with universal automata, which also started with a general formal description of automata.
If you take those dozen elements I referred to in a rather vague and general way and give exact descriptions of them (which could be done on two printed pages or less), you will have a formal language for describing automata unambiguously. Now any notation can be expressed as a binary notation, which can be recorded on a punched tape with a single channel. Hence any automaton description could be punched on a piece of tape. At first, it is better not to use a description of the pieces and how they fit together, but rather a description of the consecutive steps to be used in building the automaton. [Von Neumann then showed how to construct a binary tape out of rigid elements. See Figure 2. A binary character is represented at each intersection of the basic chain; "one" is represented by an attached rigid element, "zero" by the absence of a side element. Writing and erasing are accomplished by adding and removing side elements.] I have simplified unnecessarily, just because of a purely mathematical habit of trying to do things with a minimum of notation. Since I'm using a binary notation, all I'm attaching here is no side chain, or a one-step side chain. Existing languages and practical notations use more symbols than the binary system. There is no difficulty in using more symbols here; you simply attach more complex side chains. In fact, the very linearity of our logical notation is completely unnecessary here. You could use more complicated looped chains, which would be perfectly good carriers for a code, but it would not be a linear code. There is reason to suspect that our predilection for linear codes, which have a simple, almost temporal sequence, is chiefly a literary habit, corresponding to our not particularly high level of combinatorial cleverness, and that a very efficient language would probably depart from linearity.
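The rigid-element tape just described can be given a minimal computational sketch: a backbone chain whose intersections either carry a one-step side chain (a "one") or do not (a "zero"). Representing the tape as a plain list of 0/1 values, and packing a textual description at eight bits per character, are my own modeling choices for illustration; the text itself specifies only the attach/remove mechanics.

```python
# Minimal sketch of the rigid-element binary tape. Each intersection of the
# backbone chain carries a one-step side chain (1) or no side chain (0).
# Writing attaches a side element; erasing removes it. The list-of-bits
# representation and 8-bit character packing are assumptions, not the text's.

def write(tape: list[int], pos: int, bit: int) -> None:
    """Attach (bit=1) or remove (bit=0) the side element at an intersection."""
    tape[pos] = bit

def read(tape: list[int], pos: int) -> int:
    """Sense whether a side chain is attached at this intersection."""
    return tape[pos]

def encode(description: str) -> list[int]:
    """Punch a textual description onto the tape as binary, 8 bits/char."""
    return [int(b) for ch in description for b in format(ord(ch), "08b")]

def decode(tape: list[int]) -> str:
    """Recover the description from the tape."""
    bytes_ = [tape[i:i + 8] for i in range(0, len(tape), 8)]
    return "".join(chr(int("".join(map(str, byte)), 2)) for byte in bytes_)
```

The point of the sketch is only that any description, however expressed, reduces to a sequence of attach-or-absent side chains, which is exactly why a single copying automaton for linear chains suffices below.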
There is no great difficulty in giving a complete axiomatic account of how to describe any conceivable automaton in a binary code. Any such description can then be represented by a chain of rigid elements like that of Figure 2. Given any automaton X, let Φ(X) designate the chain which represents X. Once you have done this, you can design a universal machine tool A which, when furnished with such a chain Φ(X), will take it and gradually consume it, at the same time building up the automaton X from the parts floating around freely in the surrounding milieu. All this design is laborious, but it is not difficult in principle, for it's a succession of steps in formal logics. It is not qualitatively different from the type of argumentation with which Turing constructed his universal automaton. Another thing which one needs is this. I stated earlier that it might be quite complicated to construct a machine which will copy an automaton that is given it, and that it is preferable to proceed, not from original to copy, but from verbal description to copy. I would like to make one exception; I would like to be able to copy linear chains of rigid elements. Now this is very easy. For the real reason it is harder to copy an existing automaton than its description is that the existing automaton does not conform with our habit of linearity, its parts being connected with each other in all possible directions, and it's quite difficult just to check off the pieces that have already been described.4 But it's not difficult to copy a linear chain of rigid elements. So I will assume that there exists an automaton B which has this property: If you provide B with a description of anything, it consumes it and produces two copies of this description. Please consider that after I have described these two elementary steps, one may still hold the illusion that I have not broken the principle of the degeneracy of complication.
It is still not true that, starting from something, I have made something more subtle and more involved. The general constructive automaton A produces only X when a complete description of X is furnished it, and on any reasonable view of what constitutes complexity, this description of X is as complex as X itself. The general copying automaton B produces two copies of Φ(X), but the juxtaposition of two copies of the same thing is in no sense of higher order than the thing itself. Furthermore, the extra unit B is required for this copying. Now we can do the following thing. We can add a certain amount of control equipment C to the automaton A + B. The automaton C dominates both A and B, actuating them alternately according to the following pattern. The control C will first cause B to make two copies of Φ(X). The control C will next cause A to construct X at the price of destroying one copy of Φ(X). Finally, the control C will tie X and the remaining copy of Φ(X) together and cut them loose from the complex (A + B + C). At the end the entity X + Φ(X) has been produced. Now choose the aggregate (A + B + C) for X. The automaton (A + B + C) + Φ(A + B + C) will produce (A + B + C) + Φ(A + B + C). Hence auto-reproduction has taken place. [The details are as follows. We are given the universal constructor (A + B + C), to which is attached a description of itself, Φ(A + B + C). Thus the process of self-reproduction starts with (A + B + C) + Φ(A + B + C).

3 [The programming language of flow diagrams, invented by von Neumann, is a possible example. See p. 13 of the Introduction to the present volume.]
4 [Compare Sec. 1.6.3 of Part II, written about 3 years later. Here von Neumann gives a more fundamental reason for having the constructing automaton work from a description of an automaton rather than from the automaton itself.]
Control C directs B to copy the description twice; the result is (A + B + C) + Φ(A + B + C) + Φ(A + B + C). Then C directs A to produce the automaton A + B + C from one copy of the description; the result is (A + B + C) + (A + B + C) + Φ(A + B + C). If B were to copy the description thrice, the process would start with one copy of (A + B + C) + Φ(A + B + C) and terminate with two copies of this automaton. In this way, the universal constructor reproduces itself.] This is not a vicious circle. It is quite true that I argued with a variable X first, describing what C is supposed to do, and then put something which involved C for X. But I defined A and B exactly, before I ever mentioned this particular X, and I defined C in terms which apply to any X. Therefore, in defining A, B, and C, I did not make use of what X is to be, and I am entitled later on to use an X which refers explicitly to A, B, and C. The process is not circular. The general constructive automaton A has a certain creative ability, the ability to go from a description of an object to the object. Likewise, the general copying automaton B has the creative ability to go from an object to two copies of it. Neither of these automata, however, is self-reproductive. Moreover, the control automaton C is far from having any kind of creative or reproductive ability. All it can do is to stimulate two other organs so that they act in certain ways, tie certain things together, and cut these things loose from the original system. Yet the combination of the three automata A, B, and C is auto-reproductive. Thus you may break a self-reproductive system into parts whose functioning is necessary for the whole system to be self-reproductive, but which are not themselves self-reproductive. You can do one more thing. Let X be A + B + C + D, where D is any automaton. Then (A + B + C) + Φ(A + B + C + D) produces (A + B + C + D) + Φ(A + B + C + D).
In other words, our constructing automaton is now of such a nature that in its normal operation it produces another object D as well as making a copy of itself. This is the normal function of an auto-reproductive organism: it creates by-products in addition to reproducing itself.

The system (A + B + C + D) can undergo processes similar to the process of mutation. One of the difficulties in defining what one means by self-reproduction is that certain organizations, such as growing crystals, are self-reproductive by any naive definition of self-reproduction, yet nobody is willing to award them the distinction of being self-reproductive. A way around this difficulty is to say that self-reproduction includes the ability to undergo inheritable mutations as well as the ability to make another organism like the original.

Consider the situation with respect to the automaton (A + B + C + D) + Φ(A + B + C + D). By a mutation I will simply mean a random change of one element anywhere. If an element is changed at random in one of the automata A, B, or C, the system will usually not completely reproduce itself. For example, if an element is changed in C, C may fail to stimulate A and B at the proper time, or it may fail to make the connections and disconnections which are required. Such a mutation is lethal. If there is a change in the description Φ(A + B + C + D), the system will produce, not itself, but a modification of itself. Whether the next generation can produce anything or not depends on where the change is. If the change is in A, B, or C, the next generation will be sterile. If the change occurs in D, the system with the mutation is exactly like the original system, except that D has been replaced by D'. This system can reproduce itself, but its by-product will be D' rather than D. This is the normal pattern of an inheritable mutation.
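The mutation argument can be illustrated in the same toy spirit: model the stored description as a flat string with an A, B, C region and a D region (this layout, and the region boundary, are illustrative assumptions of mine), flip one character, and classify the outcome:

```python
# Illustrative layout of the description Phi(A + B + C + D): the first
# part describes the constructor machinery, the rest the by-product D.
DESCRIPTION = "AAAA|BBBB|CCCC|DDDD"
D_START = DESCRIPTION.index("D")            # boundary of the D region

def mutate(description, position, new_char):
    """Random change of one element: flip one character."""
    return description[:position] + new_char + description[position + 1:]

def fate(description):
    """Lethal if the A, B, or C region is damaged; otherwise the change
    only alters the by-product, and the mutation is inheritable."""
    if description[:D_START] != DESCRIPTION[:D_START]:
        return "sterile"
    return "inheritable (D replaced by D')"

print(fate(mutate(DESCRIPTION, 2, "X")))        # damage in A: sterile
print(fate(mutate(DESCRIPTION, D_START, "X")))  # damage in D: inheritable
```

A change in the machinery region breaks reproduction; a change confined to D reproduces faithfully with a modified by-product, which is exactly the pattern of an inheritable mutation described above.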
So, while this system is exceedingly primitive, it has the trait of an inheritable mutation, even to the point that a mutation made at random is most probably lethal, but may be non-lethal and inheritable.

PART TWO

The Theory of Automata: Construction, Reproduction, Homogeneity

EDITORIAL NOTE

[The editor's insertions, commentaries, explanations, summaries, and Chapter 5 are in brackets. The figures are at the end of the volume. The reader who wishes a general view of the contents of this part should examine Sections 1.1.2.3, 1.3.3.5, 2.8.2, 2.8.3, 4.1.1, 4.3.1, and 5.3.]

Chapter 1

GENERAL CONSIDERATIONS

1.1 Introduction

1.1.1.1 The theory of automata. The formalistic study of automata is a subject lying in the intermediate area between logics, communication theory, and physiology. It implies abstractions that make it an imperfect entity when viewed exclusively from the point of view of any one of the three above disciplines — the imperfection being probably worst in the last mentioned instance. Nevertheless an assimilation of certain viewpoints from each one of these three disciplines seems to be necessary for a proper approach to that theory. Hence it will have to be viewed synoptically, from the combined point of view of all three, and will probably, in the end, be best regarded as a separate discipline in its own right.1

1.1.1.2 The constructive method and its limitations. The present paper deals with a particular and limited phase of the formalistic theory of automata. The decisive limitation is that we will establish certain existence theorems, without, however, being able to prove that the constructions on which they are based are in any sense optimal, or that the postulates that they use are in any sense minimal.
These questions of optimality and minimality could presumably be treated only if the methods for the formation of invariant quantitative concepts, and for their measurement, their evaluation, and the like, had been much further evolved in this subject of automata, control, and organization, than they are at present. We believe that such a development is possible and to be expected, and that it will to an important extent follow the patterns and the concept formations of thermodynamics.2 The methods that will be used in this paper contribute, however, only very partially to the effort that is needed in that direction, and, at any rate, we will limit ourselves at this occasion to the establishing of certain existences (by suitable, ad hoc constructions) in the sense outlined above.

1 [Von Neumann here referred to Wiener. See Wiener's Cybernetics and von Neumann's review of it.]
2 Von Neumann, "The General and Logical Theory of Automata" and "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." [See also the Third and Fourth Lectures of Part I of the present volume.]

1.1.2.1 The main questions: (A)-(E). Within the above limitations, however, we will deal with problems that are rather central — at least for the initial phases of the subject. We will investigate automata under two important, and connected, aspects: those of logics and of construction. We can organize our considerations under the headings of five main questions:

(A) Logical universality. When is a class of automata logically universal, i.e., able to perform all those logical operations that are at all performable with finite (but arbitrarily extensive) means? Also, with what additional — variable, but in the essential respects standard — attachments is a single automaton logically universal?

(B) Constructibility.
Can an automaton be constructed, i.e., assembled and built from appropriately defined "raw materials," by another automaton? Or, starting from the other end and extending the question, what class of automata can be constructed by one, suitably given, automaton? The variable, but essentially standard, attachments to the latter, in the sense of the second question of (A), may here be permitted.

(C) Construction-universality. Making the second question of (B) more specific, can any one, suitably given, automaton be construction-universal, i.e., be able to construct in the sense of question (B) (with suitable, but essentially standard, attachments) every other automaton?

(D) Self-reproduction. Narrowing question (C), can any automaton construct other automata that are exactly like it? Can it be made, in addition, to perform further tasks, e.g., also construct certain other, prescribed automata?

(E) Evolution. Combining questions (C) and (D), can the construction of automata by automata progress from simpler types to increasingly complicated types? Also, assuming some suitable definition of "efficiency," can this evolution go from less efficient to more efficient automata?

1.1.2.2 The nature of the answers to be obtained. The answer to question (A) is known.3 We will establish affirmative answers to questions (B)-(D) as well.4 An important limitation to the relevance of a similar answer to question (E) lies in the need for a more unambiguous formulation of the question, particularly of the meaning of "efficiency."

3 Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem." [See the discussion of Turing machines and universal Turing machines at p. 49 ff. above. The indefinitely extendible tape of a Turing machine is the "variable, but essentially standard, attachment" von Neumann referred to in questions (A) and (B) above.]
In addition, we will be able to treat questions (A)-(E) in this sense with much more rigid determinations as to what constitutes an automaton, namely with the imposition of what is best described as a crystalline regularity. In fact, this further result would seem to be at least as essential and instructive as the ability to answer questions (A)-(D) (and, to some extent, question (E); cf. above) affirmatively.

In the balance of Chapter 1 we carry out a heuristic and preliminary discussion of questions (A)-(E). In Chapter 2 we develop a specific model, within the terms of which we can and will deal in full detail and rigorously with questions (A)-(D). Chapter 3 contains the analysis of another more natural, but technically more refractory, model. Chapter 4 is devoted to further heuristic considerations, which are more conveniently made after (and to some extent presuppose) the detailed constructions of Chapters 2 and 3.

[1.1.2.3 Von Neumann's models of self-reproduction. The preceding paragraph gives the plan von Neumann had in mind when he wrote the present chapter. Unfortunately, von Neumann was only able to carry out his intention through part of the planned Chapter 2. To understand the plan, and the several references he makes to it in the balance of the present chapter, one must know something about the various models of self-reproduction he considered. We will describe these models briefly in the present subsection. Of necessity, much of what we say here is based on personal communications from people with whom von Neumann discussed his models of self-reproduction.

Altogether, von Neumann considered five models of self-reproduction. We will call these the kinematic model, the cellular model, the excitation-threshold-fatigue model, the continuous model, and the probabilistic model. The kinematic model deals with the geometric-kinematic problems of movement, contact, positioning, fusing, and cutting, but ignores problems of force and energy.
The primitive elements of the kinematic model are of the following kinds: logical (switch) and memory (delay) elements, which store and process information; girders, which provide structural rigidity; sensing elements, which sense objects in the environment; kinematic (muscle-like) elements, which move objects around; and joining (welding) and cutting elements, which connect and disconnect elements. The kinematic model of self-reproduction is described in the Fifth Lecture of Part I of the present work. As indicated there, von Neumann was thinking about it at least by 1948.

4 Von Neumann, "The General and Logical Theory of Automata," Collected Works 5.315-318.

Von Neumann's second model of self-reproduction is his cellular model. It was stimulated by S. M. Ulam, who suggested during a discussion of the kinematic model that a cellular framework would be more amenable to logical and mathematical treatment than the framework of the kinematic model.5 In the cellular model, self-reproduction takes place in an indefinitely large space which is divided into cells, each cell containing the same finite automaton. Von Neumann spoke of this space as a "crystalline regularity," a "crystalline medium," a "granular structure," and as a "cellular structure."6 We will use the term cellular structure.

There are many possible forms of cellular structure which may be used for self-reproduction. Von Neumann chose, for detailed development, an infinite array of square cells. Each cell contains the same 29-state finite automaton. Each cell communicates directly with its four contiguous neighbors with a delay of at least 1 unit of time. Von Neumann developed this model in a manuscript entitled "Theory of Automata: Construction, Reproduction, Homogeneity," which constitutes the present Part II of the present volume. In a letter to me, Mrs.
Klara von Neumann said of her husband's manuscript: "I am quite positive that it was started by him in late September 1952 and that he continued working on it until sometime in mid-late 1953." As far as I can tell, von Neumann did little or nothing with the manuscript after 1953.

The manuscript as left by von Neumann had two completed chapters and a long but incomplete third chapter. Chapter 1 of the manuscript is the present chapter. Chapter 2 of the manuscript states the transition rule governing the 29-state cellular system; this is Chapter 2 below. The incomplete Chapter 3 of the manuscript carries out the fundamental steps in the design of a cellular self-reproducing automaton; it appears below as Chapters 3 and 4. Von Neumann never completed the design of his cellular self-reproducing automaton; I indicate how to do this in Chapter 5 below.

Von Neumann's cellular model of self-reproduction should be compared with some work of Ulam on cellular automata. In his A Collection of Mathematical Problems, Ulam formulated a matrix problem arising out of the cellular model. In his "On Some Mathematical Problems Connected with Patterns of Growth of Figures" and "Electronic Computers and Scientific Research," Ulam studied the growth of figures in cellular automata with simple transition rules. He also studied the evolution of successive generations of individuals with simple properties, each generation producing its successor in accordance with a simple, but non-linear recursive transformation.

5 [See footnote 17 of Sec. 1.3.1.2 below. In his "Random Processes and Transformations," presented in 1950, Ulam described a cellular framework briefly and stated that it had been considered by von Neumann and him.]
6 [Moore, "Machine Models of Self-Reproduction," suggested the name "tessellation model."]

Under a covering letter dated October 28, 1952, von Neumann sent a copy of the present Chapter 1 to H. H. Goldstine.
This letter elaborates the plan of Section 1.1.2.2 above.

This is the introduction — or "Chapter 1" — that I promised you. It is tentative and incomplete in the following respects particularly: (1) It is mainly an introduction for "Chapter 2" which will deal with a model where every cell has about 30 states. It refers only very incompletely to "Chapter 3" in which a model with excitation-threshold-fatigue mechanisms alone will be discussed, and to "Chapter 4" where I hope to say something about a "continuous" rather than "crystalline" model. There, as far as I can now see, a system of non-linear partial differential equations, essentially of the diffusion type, will be used. (2) The write-up is still in a quite "unliterary" form, i.e., there are no footnotes (only their places are indicated), references, explanations of the motivation, origin of the ideas, etc.

It is clear that when von Neumann wrote the present Chapter 1 he had the following plan in mind. Chapter 2 was to contain a complete development of the cellular model of self-reproduction. Chapter 3 was to treat an excitation-threshold-fatigue model of self-reproduction. Finally, Chapter 4 was to discuss a continuous model of self-reproduction. Von Neumann finished the essential steps in the design of the cellular model and then stopped. Unfortunately, he never found time to finish the cellular model or write about the other two models.

Von Neumann delivered the Vanuxem Lectures at Princeton University on March 2 through 5, 1953. There were four lectures, entitled "Machines and Organisms." The fourth was devoted to self-reproduction; the kinematic model, the cellular model, the excitation-threshold-fatigue model, and the continuous model were all mentioned. Since he had already agreed to give the manuscript "Theory of Automata: Construction, Reproduction, Homogeneity" to the University of Illinois Press, von Neumann did not himself want to write up these lectures separately.
Instead, it was arranged that John Kemeny should write an article based on these lectures and the first two chapters of the manuscript. This was published in 1955 under the title "Man Viewed as a Machine." Much of the material of the first three Vanuxem Lectures appeared later in The Computer and the Brain.

The excitation-threshold-fatigue model7 of self-reproduction was to be based on the cellular model. Each cell of the infinite structure of the cellular model contains a 29-state automaton. Von Neumann's idea was to construct this 29-state automaton out of a neuron-like element which had a fatigue mechanism as well as a threshold. Since fatigue plays an important role in the operation of neurons, an excitation-threshold-fatigue model would be closer to actual systems than the cellular model. Von Neumann never discussed how an idealized neuron with fatigue would work, but we can design one by combining what he said about idealized neurons without fatigue with his account of the absolute and relative refractory periods of an actual neuron (cf. pp. 44-48 above and Collected Works 5.375-376).

An idealized excitation-threshold-fatigue neuron has a designated threshold and a designated refractory period. The refractory period is divided into two parts, an absolute refractory period and a relative refractory period. If a neuron is not fatigued, it becomes excited whenever the number of active inputs equals or exceeds its threshold. When the neuron becomes excited two things happen: it emits an output signal after a specified delay, and the refractory period begins. The neuron cannot be excited at all during the absolute refractory period; it can be excited during the relative refractory period, but only if the number of active inputs equals or exceeds a threshold which is higher than the normal threshold.
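The behavior just described can be sketched as a small finite automaton. The particular numbers below (normal threshold 2, elevated threshold 3, absolute period 1 step, relative period 4 steps) are illustrative assumptions of mine, not von Neumann's values:

```python
class FatigueNeuron:
    """Sketch of an idealized excitation-threshold-fatigue neuron."""

    def __init__(self, threshold=2, elevated=3, absolute=1, relative=4):
        self.threshold = threshold   # normal threshold
        self.elevated = elevated     # threshold during the relative period
        self.absolute = absolute     # length of absolute refractory period
        self.relative = relative     # length of relative refractory period
        self.refractory = 0          # steps of refractoriness remaining

    def step(self, active_inputs):
        """Advance one unit of time; return 1 if the neuron fires."""
        if self.refractory > self.relative:        # absolute period
            fired = False
        elif self.refractory > 0:                  # relative period
            fired = active_inputs >= self.elevated
        else:                                      # rested
            fired = active_inputs >= self.threshold
        if fired:
            self.refractory = self.absolute + self.relative
        elif self.refractory:
            self.refractory -= 1
        return int(fired)

n = FatigueNeuron()
# Fires when rested, is silent during the absolute period, and fires
# during the relative period only for the stronger stimulus of 3 inputs.
print([n.step(x) for x in [2, 3, 3, 2, 2]])   # [1, 0, 1, 0, 0]
```

The internal counter `refractory` is the feedback memory the text goes on to describe: the neuron must remember its own recent excitation in order to suppress or raise its threshold.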
When an excitation-threshold-fatigue neuron becomes excited, it must remember this fact for the length of the refractory period and use this information to prevent input stimuli from having their normal effect on itself. Hence this kind of neuron combines switching, delay of output, and an internal memory with feedback to control the effect of incoming signals. Such a device is, in fact, a small finite automaton, that is, a device with inputs and outputs and a finite number of internal states. In the fourth Vanuxem Lecture von Neumann suggested that a neuron with threshold 2 and fatigue period 6 might supply most of the states of the 29-state finite automaton needed in each cell of his cellular framework.

7 [It should be noted that the phenomenon that von Neumann calls fatigue is more often called refractoriness. In this more common usage, fatigue is a phenomenon involving many refractory periods. The absolute refractory period of a neuron determines a maximum rate at which it can be fired. Repeated firing of a neuron at, or close to, this rate produces an increase in threshold, making it more difficult to fire the neuron. This increase in threshold is the phenomenon commonly called "fatigue."]

The fourth model of self-reproduction which von Neumann considered was a continuous model. He planned to base this on a system of non-linear partial differential equations of the type which govern diffusion processes in a fluid. Von Neumann had worked on non-linear partial differential equations, and wanted to use automata heuristically to solve theoretical problems about such equations (cf. pp. 33-35 above). In the case of the continuous model of self-reproduction, he planned to proceed in the reverse direction, using non-linear partial differential equations to solve a problem of automata theory: the logical and mathematical nature of the process of self-reproduction.
This was part of von Neumann's general plan to employ the techniques and results of that branch of mathematics known as analysis to solve problems in automata theory (cf. pp. 25-28 above). The physics, chemistry, biology, and logic of a self-reproducing system are very complex, involving a large number of factors; for example, mass, entropy, kinetic energy, reaction rates, concentration of enzymes and hormones, transport processes, coding, and control. All the essential properties of the self-reproducing system must be represented in the equations by functions or dependent variables. Von Neumann recognized that a system of simultaneous non-linear partial differential equations adequate to account for self-reproduction would be much more complex than the systems usually studied.

Von Neumann had been trained as a chemical engineer and was therefore familiar with complex chemical reactions. He had also applied mathematics to complex physical systems of various kinds. He probably thought of the differential equations of self-reproduction in connection with his proposed excitation-threshold-fatigue model of self-reproduction. Assume that the cellular model is reduced to the excitation-threshold-fatigue model. The task then becomes that of formulating the differential equations governing the excitation, threshold, and fatigue properties of a neuron.

The following processes are involved in neural activity.8 The neuron is stimulated by inputs from other neurons. When the aggregate of these inputs reaches the threshold of the neuron, it excites the neuron by triggering a flow of sodium ions from the outside to the inside of the cell body. The flow or diffusion of ions causes the cell body to become depolarized. This diffusion and depolarization is then transmitted down the axon and constitutes the firing of the neuron.
The firing is followed by a diffusion of potassium ions from the inside of the neuron to the outside, which repolarizes the neuron. The chemical balance of sodium and potassium is restored still later.

8 [For a complete description see Eccles, The Neurophysiological Basis of Mind.]

It is clear from the preceding description of the excitation, threshold, and fatigue processes of the neuron that chemical diffusion plays a fundamental role in these processes. This explains why von Neumann chose partial differential equations of the diffusion type for his continuous model of self-reproduction. The reason for von Neumann's choice of non-linear, rather than linear, differential equations is also clear. The kinematic, cellular, and excitation-threshold-fatigue models all show that switching operations (e.g., threshold, negation) as well as control loops involving branching, feedback, and delay, are essential to the logical, informational, and organizational aspects of self-reproduction. To model these discrete phenomena in a continuous system it is necessary to use non-linear partial differential equations.

The preceding plan for constructing the continuous model starts with a discrete system and proceeds to a continuous system. The cellular model of self-reproduction is developed first, it is then reduced to the excitation-threshold-fatigue model, and finally, this model is described by non-linear partial differential equations. The reverse procedure is often followed in science, and von Neumann was, of course, familiar with it. One takes a continuous system, such as a fluid with shock waves in it, and approximates this system by dividing it up into discrete cells, treating everything in a cell as if it were in the same state. In this way, the differential equations of the continuous system are replaced by the difference equations of the discrete system.
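The passage from differential equations to difference equations can be shown with the standard explicit scheme for the one-dimensional diffusion equation u_t = k·u_xx. This is a generic textbook discretization offered for illustration, not von Neumann's own equations:

```python
# Explicit finite-difference update for u_t = k * u_xx on a line of
# cells with fixed ends: each step applies the difference equation
#   u[i] += r * (u[i+1] - 2*u[i] + u[i-1]),   r = k * dt / dx**2.
def diffuse(u, k=0.1, dt=1.0, dx=1.0, steps=1):
    r = k * dt / dx ** 2        # stability requires r <= 1/2
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

spike = [0.0, 0.0, 1.0, 0.0, 0.0]   # an initial concentration spike
print(diffuse(spike, steps=1))       # the spike spreads to its neighbors
```

Treating everything in a cell as being in one state and stepping the rule synchronously is exactly the "discrete cells" approximation described in the paragraph above; the continuous equation is recovered as dx and dt shrink.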
One may then solve the difference equations on a digital computer, and under appropriate conditions the solution will approximate the solution of the differential equations.

But whatever the order of inquiry, a system of differential equations and the corresponding difference equations represent essentially the same phenomena. The transition rule for the cellular model (Ch. 2 below) is the difference equation version of the system of partial differential equations of the continuous model. The design of the primary automaton which reproduces itself corresponds to the boundary conditions on these partial differential equations.

Another way to view the contrast between the continuous and cellular models is in terms of the difference between an analog and a digital computer. An analog computer is a continuous system, and a digital computer is a discrete system. Thus von Neumann's continuous model of self-reproduction stands in the same relation to analog computers as his cellular model of self-reproduction stands to digital computers. In Section 12 of his "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," he proposed a scheme for representing and processing digital information in an analog device. His continuous model of self-reproduction should be compared with this scheme.

Von Neumann's continuous model of self-reproduction should also be compared with some work of Turing. In "The Chemical Basis of Morphogenesis," Turing analyzed morphogenesis by solving differential equations which describe the interaction, generation, and diffusion of chemical substances. Turing confined himself almost entirely to linear differential equations, but he touched on non-linear differential equations.

Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples.
When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution.9]

9 [For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems," and "Concerning Efficient Adaptive Systems."]

1.2 The Role of Logics — Question (A)

1.2.1 The logical operations — neurons. In evaluating question (A), one must obviously consider automata which possess organs that can express the essential propositions of logics and which need not possess any other organs. This can be done by using organs each of which possesses two stable states, corresponding to the basic truth values of true and false in logics. It is convenient to use a plausible physiological analogy and to designate these organs (whatever they are or are thought to be in reality) as neurons, and the two above states as excited and quiescent, respectively. It is also convenient to attach to these states digital (arithmetical) symbols, namely 1 and 0, respectively.10

The familiar structure of logics can then be conveyed to an automaton built from such organs by connecting them with lines representing the logical implications, and by introducing a separate species of basic organs, i.e., of neurons, for each basic logical operation.11 In the usual propositional calculus these are and, or, and not, to be designated by ·, +, and −, respectively.12 The lines which control the neuron's behavior, i.e., which represent the logical variables that enter into the basic logical operation or function to which the neuron corresponds, are its inputs; the lines through which this neuron expresses its resulting behavior, i.e., which represent the value of the logical function in question, are the outputs. These are usually the inputs of other neurons. Instead of attributing to a neuron several outputs, it is preferable to allow only one, and to split it afterwards into as many branches as necessary.

The time-element in the functioning of a neuron is best expressed by stipulating that the state prescribed by the logical function corresponding to the neuron (i.e., the value of that function) is assumed a fixed delay time τ after the neurons that control this behavior have assumed their relevant states. That is, the response of a neuron (on its output line) occurs a fixed delay time τ after the stimuli (on its input lines). It is unnecessary to allow propagation delays along lines; i.e., an output may be instantaneously active wherever it is an input. It is simplest to assume that all relevant events take place at times t that are integer multiples of τ: t = nτ, n = 0, ±1, ±2, ⋯. Next, τ may be chosen as the unit of time: τ = 1, and so always t = 0, ±1, ±2, ⋯.

The basic neurons referred to above are shown in Figure 3. Their behavior is described by the following rules:

1. a, b are the input lines; c is the output line of this neuron.
2.1.
The + neuron is excited at time t if and only if either the neuron with output line a or the neuron with output line b is excited at time t − 1.
2.2. The · neuron is excited at time t if and only if both the neuron with output line a and the neuron with output line b are excited at time t − 1.
2.3. The − neuron ["minus neuron"] is excited at time t if and only if the neuron with output line a is not excited (i.e., is quiescent) at time t − 1.

10 Boolean algebra is then applicable.
11 McCulloch and Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity."
12 Von Neumann, "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Sec. 4.

The time delay caused by the operation of each neuron guarantees the effective and constructive character of the logical system arrived at in this manner.13 It is easily seen — in fact it is essentially inherent in the correspondence between the species of neurons introduced and the basic operations of logics (cf. above and Fig. 3) — that automata built from these organs can express all propositional functions in logics.14 Beyond this, the inclusion of inductive processes, and more generally, of all processes that are permissible in finitistic logics, requires a deeper analysis.15 It brings in one substantively new element: the need for an arbitrarily large (finite, but freely adjustable in size) memory. This ties in with question (B) and will be considered subsequently.

1.2.2 Neural vs. muscular functions. Question (A) involved merely logical determinations; therefore it required only (at least directly only; cf., however, the last remark in Sec. 1.2.1) organs with two states, true and false. These two states are adequately covered by the neural states of excitation and quiescence.
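Rules 2.1-2.3 and the unit-delay convention (τ = 1) can be transcribed directly. A minimal sketch, in which the output lines at time t are computed from the input lines at time t − 1:

```python
def plus_neuron(a, b):          # rule 2.1: "or"
    return int(a or b)

def dot_neuron(a, b):           # rule 2.2: "and"
    return int(a and b)

def minus_neuron(a):            # rule 2.3: negation of line a
    return int(not a)

def run(inputs):
    """inputs: states of lines (a, b) at t = 0, 1, ...; returns the three
    neurons' outputs, each delayed by one unit of time."""
    outputs = [None]            # nothing yet on the output lines at t = 0
    for a, b in inputs:
        outputs.append((plus_neuron(a, b), dot_neuron(a, b), minus_neuron(a)))
    return outputs

print(run([(1, 0), (1, 1)]))    # [None, (1, 0, 0), (1, 1, 0)]
```

Composing such unit-delay organs gives exactly the McCulloch-Pitts style networks the text describes; the per-neuron delay is what makes the resulting logical system effective and constructive.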
Question (B), on the other hand, calls for the construction of automata by automata, and it necessitates therefore the introduction of organs with other than logical functions, namely with the kinematical or mechanical attributes that are necessary for the acquisition and combination of the organs that are to make up the automata under construction. To use a physiological simile, to the purely neural functions must be added at least the muscular functions. At this point several alternatives open up.

13 McCulloch and Pitts, op. cit. Von Neumann, op. cit., Secs. 2.1 and 3.3.
14 McCulloch and Pitts, op. cit. Von Neumann, op. cit., Sec. 3.
15 McCulloch and Pitts, op. cit. Von Neumann, op. cit., Secs. 3.3 and 5.1. [Von Neumann also mentioned Kleene. He probably intended to refer to the Rand Corporation version of "Representation of Events in Nerve Nets and Finite Automata."]

1.3 The Basic Problems of Construction — Question (B)

1.3.1.1 The immediate treatment, involving geometry, kinematics, etc. The most immediate approach is this. The constituent organs are the neurons and lines necessitated by (A), plus such additional organs as (B) (i.e., the present discussion) will require. These constituent organs are to be conceived of as physical objects in actual space. Their acquisition and combination (including the establishing of rigid connections between them) must accordingly take place in actual space, i.e., 3-dimensional, Euclidean space. (Further variations on the dimensionality and geometrical character of the space are possible, but we
The functions that were described above, rather symbolically, as muscular, will now be very nearly truly muscular, etc. Different degrees of abstraction are still possible; for example, one may or may not pay attention to the truly mechanical aspects of the matter (the forces involved, the energy absorbed or dissipated, etc.). But even the simplest approach, which disregards the above-mentioned properly mechanical aspects entirely, requires quite complicated geometrical-kinematical considerations.¹⁶ Yet, one cannot help feeling that these should be avoided in a first attempt like the present one: in this situation one ought to be able to concentrate all attention on the intrinsic, logical-combinatorial aspects of the study of automata. The use of the adjective formalistic at the beginning of Section 1.1.1.1 was intended to indicate such an approach — with, as far as feasible, an avoidance of the truly geometrical, kinematical, or mechanical complications. The propriety of this desideratum becomes even clearer if one continues the above list of avoidances, which progressed from geometry, to kinematics, to mechanics. Indeed, it can be continued (in the same spirit) to physics, to chemistry, and finally to the analysis of the specific physiological, physico-chemical structures. All these should come in later, successively, and about in the above order; but a first investigation might best avoid them all, even geometry and kinematics. (A certain amount of primitive geometry and vestigial kinematics will appear even so, as will be seen later.)

1.3.1.2 The non-geometrical treatment — structure of the vacuum. A more sophisticated approach, which goes far towards meeting the desiderata expressed above, is this.¹⁷ The need to use geometry (and kinematics) merely expresses the fact that even the vacuum (the sites not now occupied, but potentially occupiable, by the constituent organs of automata) has a structure.
Now 3-dimensional Euclidean space represents (or, represents with an approximation that is sufficient in the situation envisaged) the actual "structure of the vacuum." Nevertheless, this structure involves a number of traits which are unnecessarily complicating.

¹⁶ Von Neumann, "The General and Logical Theory of Automata," Collected Works 5.315–318. [See also the Fifth Lecture of Part I above.] ¹⁷ [Von Neumann was going to refer to S. Ulam here. See Sec. 1.1.2.3 above.]

While these will have to be considered at a later stage, it is desirable to eliminate them in the first approach. We will accordingly try to do this.

1.3.2 Stationarity — quiescent vs. active states. The main complication that we wish to remove is the influence of kinematics, that is, the necessity of moving objects around. It is preferable to have stationary objects only, normally in a quiescent state, and to postulate a system which will, under suitably and precisely defined conditions, transfer them from the quiescent state into an active state — or into a particular one of several possible active states.

1.3.3.1 Discrete vs. continuous framework. Next, these stationary and normally quiescent objects could be thought of as discrete entities, or as (infinitesimal) elements of a continuously extended medium. In the first case we will have a granular or cellular structure, while in the second case we are led back to a continuous space, more or less like that of Euclidean geometry.

1.3.3.2 Homogeneity: discrete (crystalline) and continuous (Euclidean). We now make the simplifying, but also very restrictive assumption,¹⁸ that this spatial or quasi-spatial substratum be homogeneous. That is, the granular structure of the first case must have crystalline symmetry,¹⁹ while the continuous space of the second case must be Euclidean.²⁰ In both cases this degree of homogeneity falls short of absolute homogeneity.
Indeed, we have only postulated the homogeneity of the spatial (or, more broadly, combinatorial) matrix which carries the (quiescent or active) objects referred to above — the basic organs — but not the homogeneity of the population of these objects. That is, in the first (discrete) case, we have not postulated that all the cells of the crystal behave according to the same rules; and in the second (continuous) case, we have not postulated that the continuous, space-filling medium be subject everywhere to the same rules. Depending on whether this is, or is not, postulated, we will say that the system possesses, or does not possess, intrinsic, or functional, homogeneity.²¹

¹⁸ Calling for stronger results. [That is, it is more difficult to construct a self-reproducing automaton in a homogeneous medium than in an inhomogeneous medium.] ¹⁹ [A crystal is a solid body having a regular internal structure and bounded by symmetrically arranged plane surfaces intersecting at definite and characteristic angles. The regular internal structure consists of the rows and patterns of atoms of the crystal. The faces of the crystal express this regular internal structure externally.] ²⁰ [Apparently von Neumann was going to explain in this footnote why he was excluding the non-Euclidean spaces of Bolyai-Lobachevski and Riemann.] ²¹ [In the discrete (crystalline, granular, cellular) case, functional homogeneity means that each cell is occupied by the same finite automaton and each such automaton is connected to its neighbors in the same way. The particular cellular structure which von Neumann adopts in Ch. 2 is functionally homogeneous; see Sec. 1.3.3.5 below.]

Beyond this, an even more complete homogeneity would exist if all (discrete or infinitesimal) elements were in the same state, for example, in the quiescent state (cf. above). In this case we will speak of total homogeneity or of total quiescence, respectively. This can obviously not be postulated in general, since it would exclude any positive and organized functioning of the automaton, for example, all moves in the sense of questions (B)–(E) of Section 1.1.2.1 above. It is, however, quite reasonable to strive to assume total quiescence as the initial state of an automaton. This cannot be absolutely enforced, since with the usual systems of rules total quiescence is a self-perpetuating state (cf. later). It will, however, be practical to assume a totally quiescent initial state and to proceed from there with the injection of a minimum amount of external stimulation (cf. later).

1.3.3.3 Questions of structure: (P)–(R). The point made at the beginning of Section 1.3.3.2, namely, that the homogeneity assumptions of Section 1.3.3.2 are seriously restrictive, is worth elaborating somewhat further. Indeed, even the manner in which the basic organs of the discussion of question (A) (the neurons of Section 1.2.1) are ordinarily strung together (cf. later), violates the first principle of homogeneity formulated in Section 1.3.3.2, that of the underlying granular structure, i.e., its crystalline symmetry. This degree of homogeneity can, however, be attained by some rather obvious and simple tricks, as will be seen later. The systems that are thus obtained will, however, still violate the further, stricter, principle of homogeneity formulated in Section 1.3.3.2, that of functional homogeneity. This is clearly so, as long as several species of neurons are used (cf. Sec. 1.2.1, particularly Fig. 3) and these have to be distributed over the crystal lattice in an irregular (i.e., not crystalline-symmetric) manner. However, this use of several species of neurons and this distribution of them in an irregular manner are natural if one approaches the problem in the way which is the obvious one from the point of view of the ordinary logical and combinatorial techniques (cf. later). We will see that this difficulty, too, can be overcome and that functional homogeneity can be achieved. This, however, is considerably less easy, and it constitutes, in fact, one of the main results of this paper.

The homogeneity problem also raises some ancillary questions, which will successively occupy us when their respective turns come. They are the following:

(P) Which is the minimum usable dimensionality? (This question arises both in the first — discrete, crystalline — and in the second — continuous, Euclidean — cases of Secs. 1.3.3.1 and 1.3.3.2.)

(Q) In connection with functional homogeneity, can we require isotropy²² in addition to homogeneity? {In the crystalline case this is meaningful for the regular crystal class only.²³ Cf. also question (R).}

(R) In the crystalline case, which crystal classes are usable? Or, to introduce our real aim explicitly, what is the highest degree of regularity that can be used?²³

With respect to question (P), one would expect that the dimensionality 3 is usable and probably minimal. In fact even 2 is usable, while it seems unlikely that 1 should be, at least in combination with any otherwise plausible postulates.²⁴ These questions will be considered in some detail later. Question (Q) will be considered later; it will appear that isotropy can be achieved, although non-isotropic models, too, can be of considerable interest. As to question (R), we will use the maximum regularity, which is reasonably interpreted as the (body-centered) cubic class in 3 dimensions, and the corresponding quadratic class in 2 dimensions, the emphasis being, in view of what was said about question (P) above, on the latter. Some other classes, however, are also of interest, as will be seen later.

1.3.3.4 Nature of results, crystalline vs. Euclidean: statements (X)–(Z). To conclude this group of observations, we note this.
We surmise that the comparative study of the two cases of Section 1.3.3.1 {i.e., of the crystalline (discrete) and of the continuous (Euclidean) case} will prove very rewarding. The available indications point strongly towards these conclusions:

²² [A substance or space is isotropic insofar as it has the same properties in all directions. In the crystalline (discrete) case, functional isotropy means that each cell is connected to each of its immediate neighbors in the same way. The particular cellular structure which von Neumann adopts in Ch. 2 is functionally isotropic; see Sec. 1.3.3.5 below.] ²³ Crystal classes in two and three dimensions. [Crystals are divided into six groups or systems according to the number and nature of the axes of symmetry. Crystals belonging to the cubic system (also called "the isometric system" and "the regular system") are the most symmetrical of all crystals. In a crystal of the cubic system the three crystallographic axes of reference are at right angles to each other and are equal in length. Crystals having a cubic cell and crystals having an octahedral cell are the simplest of the forms of this class.] ²⁴ [Von Neumann was going to make a reference to Julian Bigelow and H. H. Goldstine here. They suggested modeling self-reproduction in 2 rather than 3 dimensions.]

(X) The general possibilities are about the same in the two cases.

(Y) The continuous case is mathematically much more difficult than the crystalline case.

(Z) If and when the appropriate analytical methods to deal with the continuous case are developed, this case will be more satisfactory and more broadly and relevantly applicable than the crystalline case.

These matters will be discussed in somewhat more detail later. We will make now only one more observation, which relates to the difficulties referred to in conclusions (Y) and (Z).
These difficulties are due to the fact that, in the case in question, the mathematical problem becomes one of a system of non-linear, partial differential equations. It may be of some significance that non-linear partial differential equations, which in many directions define and limit our mathematical horizon, should make their appearance in this context also. The difficulties referred to in (Y) and (Z) will cause us to direct our attention primarily to the crystalline case. In fact, we will from now on always have this case in mind, except where the opposite is expressly stated.

[1.3.3.5 Homogeneity, quiescence, and self-reproduction. In writing the present Part II of this volume, von Neumann was going through a process of reasoning which was to terminate in his cellular model of self-reproduction. At these early stages of the process he was exploring various possibilities, leaving specific choices until later. As the inquiry proceeded his terminology necessarily changed somewhat. Thus the "quiescence" of Section 1.3.3.2 above divides into "unexcitability" and "quiescence of an excitable cell" in Sections 1.3.4.1 and 1.3.4.2 below. A brief preview of the final outcome may help the reader to follow the development of von Neumann's thought.

Von Neumann's cellular structure is described in detail in Chapter 2 below. He chose an infinite 2-dimensional array of square cells. Each cell is occupied by the same 29-state finite automaton, and each such automaton is connected to its four immediate neighbors in exactly the same way; that is, the transition rule of Chapter 2 below is the same for each cell. Hence this cellular structure is "functionally homogeneous" in the sense of Section 1.3.3.2 above. Since each 29-state automaton is connected to each of its four neighbors in the same way, this structure is also isotropic in the sense of Section 1.3.3.3 above.
Functional homogeneity and isotropy have only to do with structure, however, and not with content or state. Consequently, if different cells of a region of the cellular structure are in different states, one part of the region may act in one way and send information in one direction, while another part of the region acts in a different way and sends information in a different direction.

The 29 states each cell is capable of assuming fall into three categories: unexcitable (1), excitable (20), and sensitized (8). These are listed later in Figure 9. The unexcitable state U is utterly quiescent. This state plays a fundamental role with respect to the information content of the cellular structure, for this structure is operated in such a way that at each moment of time only a finite number of cells of the structure are in some state other than the unexcitable state. In this respect, the unexcitable state of von Neumann's system is analogous to a blank square on the tape of a Turing machine. Indeed, von Neumann represented zero in his linear array L by the unexcitable state U; see Section 1.4.2.5 below.

The 20 excitable states fall into three classes. There are 4 confluent states Cεε′, where ε and ε′ range over 0 and 1; the value "0" symbolizes quiescence, and the value "1" symbolizes excitation. There are 8 ordinary transmission states T0αε and 8 special transmission states T1αε, where α = 0, 1, 2, 3 and ε = 0, 1 as before. Eight transmission states are quiescent and 8 are excited. The sensitized states are transient in nature, each lasting exactly 1 moment of time.

The set of 10 states consisting of the unexcitable state U and the quiescent states C00, Tuα0 (u = 0, 1; α = 0, 1, 2, 3) has this property: if every cell of the infinite cellular structure is in 1 of these 10 states, the system will never change (i.e., no cell will ever change state).
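The editor's tally of the 29 states, and of the 10-state quiescent set, can be checked mechanically. The following is an editorial bookkeeping sketch in Python; the string names for the states are ours, chosen only to mirror the notation Cεε′, Tuαε:

```python
# Tally of the 29 cell states as grouped in the editorial preview.
unexcitable = ["U"]
confluent   = [f"C{e}{e2}" for e in "01" for e2 in "01"]   # Cee': 4 states
ordinary_T  = [f"T0{a}{e}" for a in "0123" for e in "01"]  # T0ae: 8 states
special_T   = [f"T1{a}{e}" for a in "0123" for e in "01"]  # T1ae: 8 states
sensitized  = [f"S{i}" for i in range(8)]                  # 8 transient states

excitable = confluent + ordinary_T + special_T
assert len(excitable) == 20
assert len(unexcitable + excitable + sensitized) == 29

# The 10 states whose all-over configurations never change:
# U plus the 9 quiescent excitable states C00 and Tua0.
quiescent = ["U", "C00"] + [f"T{u}{a}0" for u in "01" for a in "0123"]
assert len(quiescent) == 10
assert set(quiescent) <= set(unexcitable + excitable)
```

The counts confirm the grouping: 1 unexcitable + 20 excitable + 8 sensitized = 29, with exactly 10 states in the never-changing set.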
The difference between the unexcitable state U and the 9 quiescent (but excitable) states C00, Tuα0 is in the way they respond to stimuli (excitations). Stimuli entering a cell which is in the unexcitable state U convert that cell into 1 of the 9 quiescent states C00, Tuα0. This conversion is the direct process of Section 2.6 below. The direct process takes 4 or 5 units of time, the sensitized states serving as intermediaries in the process. A stimulus entering a cell in 1 of the 20 excitable states Cεε′, Tuαε (ε, ε′, u = 0, 1; α = 0, 1, 2, 3) does one of two things. It may change the cell back to the unexcitable state U; this is the reverse process of Section 2.5 below. Alternatively, this stimulus may be switched, combined with other stimuli, and delayed in the usual way. In particular, a quiescent finite automaton may be embedded in an area A of the cellular structure by putting each of the cells of A in 1 of the 10 states U, C00, and Tuα0. If such a finite automaton is appropriately designed, it will compute in the usual way when it is stimulated (activated).

The temporal reference frame for the cellular structure consists of the times ··· −3, −2, −1, 0, 1, 2, 3, ···. Von Neumann did not say exactly how he planned to use this temporal reference frame, but the following is consistent with what he said. All cells are in the unexcitable state for negative times. An initial cell assignment is a finite list of cells together with an assignment of a state to each cell of the list. At time zero an initial cell assignment is imposed on the cellular structure from the "outside," all cells not in the assignment list being left in the unexcitable state. Thereafter the cellular system runs according to the transition rule of Chapter 2. Each initial cell assignment determines a unique history of the infinite cellular structure.
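This operating convention — all cells unexcitable except a finite initial assignment, with the history then determined by the transition rule — can be sketched with a sparse map, where an absent cell counts as U. The sketch below is an editorial illustration with a toy placeholder rule, not von Neumann's 29-state rule:

```python
U = "U"  # the utterly quiescent, unexcitable state

def von_neumann_neighbors(cell):
    """The four immediate neighbors of a square cell."""
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def run(initial_assignment, transition, neighbors, steps):
    """Run a cellular structure, all U except a finite initial assignment.

    transition(state, neighbor_states) -> new state.  It must map an
    all-U neighborhood of a U cell to U, so only finitely many cells
    ever leave U (the analogue of blank squares on a Turing tape)."""
    config = dict(initial_assignment)  # sparse: absent cell == U
    for _ in range(steps):
        candidates = set(config)       # only cells near activity can change
        for c in list(config):
            candidates.update(neighbors(c))
        new = {}
        for c in candidates:
            s = transition(config.get(c, U),
                           [config.get(n, U) for n in neighbors(c)])
            if s != U:
                new[c] = s
        config = new
    return config

# Toy rule (an assumption for illustration only): a U cell with an "A"
# neighbor becomes "A" -- inhomogeneity propagating into the quiescent
# surroundings, as described for the self-reproduction case.
def toy_rule(state, nbrs):
    return "A" if state == "A" or "A" in nbrs else U

result = run({(0, 0): "A"}, toy_rule, von_neumann_neighbors, steps=1)
assert result[(0, 0)] == "A" and result[(1, 0)] == "A" and len(result) == 5
```

The sparse-dictionary representation is exactly what makes "only finitely many cells differ from U at any time" a workable computational convention.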
We will call the infinite cellular structure together with an initial cell assignment an infinite cellular automaton. An infinite cellular automaton which models self-reproduction operates as follows. The finite automaton E of Section 1.6.1.2 below constitutes an initial cell assignment. More specifically, the initial or starting state of this finite automaton E is an initial cell assignment. We impose this initial cell assignment on the cellular structure at time zero, thereby embedding the finite automaton E in the cellular structure. Let A be the area of cells affected, so that initially all cells outside A are unexcitable. The logical structure of E is such that at some later time τ another copy of E will appear in another area A′ of the cellular structure. That is, the state of each cell of A′ at time τ is identical to the state of the corresponding cell of A at time zero. Thus E reproduces itself in area A′. In summary, this infinite cellular automaton has only one copy of E embedded in it at time zero, but two copies of E embedded in it at time τ. This is self-reproduction.

Let us look at the temporal development of an infinite cellular automaton. At negative times it is totally homogeneous in the sense of Section 1.3.3.2 above, all cells being unexcitable. At time zero this total homogeneity is modified by the introduction of inhomogeneity in a finite area. This inhomogeneity will, in general, propagate into surrounding areas. In the case of self-reproduction, the inhomogeneity of area A spreads until area A′ is organized in the same way as area A.

The above treatment of infinite cellular automata makes no essential use of negative times. Since all cells are unexcitable at these times we could use only the times 0, 1, 2, 3, ··· without any loss of generality. Von Neumann did not say why he introduced negative times.
It is possible that he planned to use them in connection with a probabilistic model of self-reproduction and evolution (see the end of Sec. 1.1.2.3 above).]

1.3.4.1 Simplification of the problems of construction by the treatment according to Section 1.3.1.2. We can now return to the original thought of Sections 1.2.2–1.3.3.1, that is, to the necessity of having organs that perform analogs of the muscular rather than of the neural function. In other words, we need organs which are concerned with the acquiring, positioning, and connecting of the basic organs of the automata under construction, rather than with purely logical operations in the sense of Section 1.2.1. Since the properly kinematic aspects of "acquiring" and "positioning" have been removed by the observations of Sections 1.3.1.2 and 1.3.2, the nature of the function, referred to above as an "analog of the muscular," must now be reconsidered. The remark at the end of Section 1.3.2 makes it clear that this function now appears under the aspect of causing an object — or, to use a terminology suggested by Section 1.3.3.1, a cell — which is in a quiescent state, to go over into a suitable active state. Now the logical functions, as discussed in Section 1.2.1, also do this, but there is a difference here or, at least, a possibility of a difference. The nature of this difference can be best illustrated by a physiological simile.

1.3.4.2 Quiescence vs. activity; excitability vs. unexcitability; ordinary and special stimuli. A neuron may be quiescent or active, but it is at any rate potentially active; that is, it is an excitable cell. Connective tissue, on the other hand, consists of unexcitable, truly passive cells. So far the difference between an excitable, but (momentarily) quiescent cell, and a (permanently) passive cell is obvious.
Now let us for a moment introduce the fiction that the growth of neurons (i.e., of excitable cells) occurs not by the formation of new cells, but by the transformation of existing, unexcitable cells into excitable ones. It should be noted that, while this is not so in reality, it is the arrangement that fits best into the picture of stationary cells introduced in Section 1.3.2. To reconcile this with reality, one may have to interpret the absence of a cell as the presence of one in a special, particularly unexcitable state. This concept is in reasonable harmony with the one relating to a "structure of the vacuum," as used in Section 1.3.1.2. Such a transformation must itself be induced by some special stimuli, i.e., by some special active states of neighboring cells. The ordinary stimuli (i.e., the ordinary active states) which control the logical functions discussed in Section 1.2.1 cannot do this. These stimuli control transitions between quiescent and ordinary active states, but in excitable cells only, and without ever changing the species (in the sense of Sec. 1.2.1) of the cell (neuron) in question. Indeed it was just with respect to these ordinary stimulations that unexcitability was defined above. In order to provide an equivalent of growth, the special stimulations referred to above must be able to cause transitions from unexcitability to excitability, and also to determine the specific species (in the sense of Sec. 1.2.1) of the excitable cell (neuron) thus created.

These concepts permit us to define the difference between quiescence (with excitability) and unexcitability; the former responds to (i.e., is removed by) ordinary stimuli, the latter only to special ones. This has, of course, only shifted the distinction into one between ordinary and special stimuli, that is, into a distinction between ordinary and special active states.
Leaving the physiological simile, and returning to the mathematical problem at hand, the matter still presents some dubious aspects.

1.3.4.3 Critique of the distinctions of Section 1.3.4.2. Indeed, one must now consider critically the usefulness of the distinction between ordinary and special stimuli. As outlined above, the underlying idea is this. Ordinary stimuli are to be used for logical operations, taking the species of the neurons that are involved as fixed; that is, ordinary stimuli are to be used for the control and utilization of already satisfactorily organized sub-assemblies. Special stimuli are to be used for growth operations, involving the introduction of excitability, together with a new determination of the neuronic species, into previously unexcitable, or otherwise different areas. In other words, special stimuli are used for the organization (according to some logically determined plan) of hitherto unorganized (or differently organized) areas.

This distinction is certainly convenient for a first approach, since it permits one to keep conceptually fairly distinct functions quite sharply distinct in their actual embodiment and performance. We will therefore adhere to it strictly in our first constructions (cf. later). However, it is quite possible to relax it by various logical and combinatorial devices to varying degrees, up to (and including) complete obliteration. These turn out subsequently to be quite desirable for various mathematical and conceptual reasons, and we will therefore introduce them in our later constructions (cf. later).

[Logical functions and growth functions are fairly distinct conceptually. In his preliminary discussion von Neumann made a sharp distinction between their respective representations in his cellular system. Logical functions are performed by ordinary stimuli, and growth functions are performed by special stimuli.
Later he relaxed the distinction somewhat by using both types of stimuli for both types of functions.

The final distinction between ordinary and special stimuli is shown in Figure 9. Both ordinary and special transmission stimuli bring about "growth" from the unexcitable state U to one of the nine quiescent states C00, Tuα0 (u = 0, 1; α = 0, 1, 2, 3); this is the direct process of Section 2.6 and Figure 10 below. Special transmission stimuli change an ordinary transmission state T0αε (α = 0, 1, 2, 3; ε = 0, 1) or a confluent state Cεε′ (ε, ε′ = 0, 1) into the unexcitable state U, while ordinary transmission stimuli change a special transmission state T1αε (α = 0, 1, 2, 3; ε = 0, 1) into the unexcitable state U; this is the reverse process of Section 2.5 below.

The logical functions of disjunction, conjunction, and delay will normally be performed by arrays of ordinary transmission and confluent states. The logical function of negation is not directly represented in von Neumann's system. Instead, negation will be accomplished by breaking a communication path and later restoring it. The breaking will be done by the reverse process and the restoring by the direct process. An example is given in Figure 17 of Section 3.2 below.]

1.4 General Construction Schemes — Question (B) Continued

1.4.1.1 Construction of cell aggregates — the built-in plan. The discussion up to this point (i.e., in Secs. 1.2.2–1.3.4.3) dealt only with the first part of question (B), the immediate problems of the construction of one automaton by another automaton. We can now pass to the second part of question (B), i.e., consider by what means a single automaton can be made to construct broad classes of other automata, and how variable, but essentially standard, attachments can be used to facilitate and extend this process. Our discussion dealt so far only with the question: By what means can a single cell of specified characteristics be created?
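The crossed killing rules of the reverse process described in the bracketed editorial note above (special stimuli return ordinary transmission and confluent states to U; ordinary stimuli return special transmission states to U) can be stated as a small function. This is an editorial sketch, with state names encoded as strings of our own choosing:

```python
def reverse_process(state, stimulus):
    """Reverse process (Sec. 2.5): which stimuli return which excitable
    states to the unexcitable state U.  States are encoded as strings:
    'C..' for confluent, 'T0..' ordinary, 'T1..' special transmission."""
    ordinary_or_confluent = state.startswith("T0") or state.startswith("C")
    special_transmission = state.startswith("T1")
    if stimulus == "special" and ordinary_or_confluent:
        return "U"
    if stimulus == "ordinary" and special_transmission:
        return "U"
    return state  # otherwise the stimulus is processed normally

assert reverse_process("T000", "special") == "U"    # ordinary T killed by special
assert reverse_process("C00", "special") == "U"     # confluent killed by special
assert reverse_process("T100", "ordinary") == "U"   # special T killed by ordinary
assert reverse_process("T000", "ordinary") == "T000"  # otherwise unaffected
```

The crossing (each kind of stimulus erases the other kind of state) is what lets negation be realized by breaking a path with the reverse process and rebuilding it with the direct process.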
In this respect we developed some orienting principles. There remains the question of how this operation is to be controlled in all its details. It is clear that this will have to be done by the logical section of the primary (parent) automaton, which was considered in Section 1.2.1. It is also natural that this logical section of the primary automaton must supervise and sequence the multiplicity of acts of single-cell creation, which are necessary to produce the complete secondary (constructed) automaton. This "sequencing" of the single cell creations has to be controlled by a logical pattern which is already laid out in the logical section of the primary automaton. Such a "logical pattern" is obviously neither more nor less than the complete "plan" of the secondary automaton — functionally laid out within the primary automaton in "terms" that the primary automaton can "understand" and act on. Thus the plan of the secondary automaton must be "built into" the primary automaton, presumably in terms of logical connections in the sense of Section 1.2.1.

1.4.1.2 The three schemes for building in multiple plans — the parametric form. The conclusion of Section 1.4.1.1 is that a primary automaton, constructed for this purpose, is prima facie (i.e., assuming the simplest type of design, which presents itself most immediately) suited to construct one and only one secondary automaton. Generalizations beyond this level are, however, immediate. First, it is, of course, possible to have the plans of several (different) secondary automata built into the primary. Second, it is possible to incorporate the logical facilities that will make the primary automaton construct a specific secondary several times, e.g., a certain, preassigned number of times.
Third, the plan of the secondary may contain a number of numerical parameters; and this plan can be built into the primary in this (variable) parametric form, together with facilities that make it possible to substitute any desired numerical values for these parameters. The third scheme — or, rather, the combination of the second and the third schemes — is the most general of these.

In their immediate form, however, these still contain a limitation, that is, a limitation of the numbers that can be used for the parameter values (third scheme) and for the number of repetitions (second scheme). Indeed, these numbers must be present in some form in the interior of the primary automaton, say in a digital representation. Assume that p such numbers are involved, and that they are all integers ≥ 0, say, v1, ···, vp. Let each one of those cells, which are to be used for their representation, have k states available for this purpose. It is best to interpret these states as the base k digits 0, 1, ···, k − 1. Let ni such cells be available for vi, where i = 1, ···, p; this requires a total of n = n1 + ··· + np cells. vi is thus expressed by ni digits in a base k digital notation; hence it is limited to the k^ni values 0, 1, ···, k^ni − 1.

1.4.2.1 The descriptive statement L for numerical parameters. The limitation just described can be circumvented by a simple trick: let these cells lie "outside" (i.e., not within the area of the primary automaton, but next to it), in the external, otherwise quiescent, region of the crystal. They might, for example, form a linear array L extending in the right hand (i.e., positive x) direction away from the area of the primary automaton. The k states used for these "notational" purposes must, of course, also be of a quasi-quiescent character, i.e., such that they will normally not disturb (stimulate or otherwise transform) each other or the surrounding quiescent cells.
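The counting limit of Section 1.4.1.2 (ni cells of k states each hold exactly the k^ni values 0, ···, k^ni − 1) can be made concrete with a short encoding sketch. This is an editorial illustration; the helper names are ours:

```python
def encode(v, k, n):
    """Base-k digits of v, least significant first, padded to n cells."""
    assert 0 <= v < k ** n, "v exceeds what n base-k cells can hold"
    digits = []
    for _ in range(n):
        digits.append(v % k)
        v //= k
    return digits

def decode(digits, k):
    """Recover the number held by a list of base-k digit cells."""
    return sum(d * k ** i for i, d in enumerate(digits))

# Two parameters with n_1 = 3 and n_2 = 2 cells in base k = 10:
assert decode(encode(427, 10, 3), 10) == 427
assert decode(encode(99, 10, 2), 10) == 99
# The bound is strict: 3 base-10 cells can hold 0..999, never 1000.
```

The strictness of the bound is the whole point of Section 1.4.2.1: any fixed allotment of interior cells caps the representable parameters, which is why the array L is moved outside the primary automaton.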
This, however, is a desideratum that is easy to meet (cf. later). The primary automaton must then be endowed with the ability to establish contact with all parts of this linear array L, and to have its operations controlled, in the desired sense, by the "notational" (i.e., base k digital) states of the cells of L. One might, at first sight, expect difficulties in trying to do this for all possible L (for all possible sizes of L, i.e., of n) with a fixed, limited primary automaton. All these difficulties can, however, be overcome by fairly straightforward methods, as will appear when this matter is considered in detail [see Sec. 1.4.2.5 below]. We will mention here only one of them.

One might think that the "exploration" of L is not possible without specifying — i.e., expressing within the primary automaton — the size of L, i.e., n. In fact, it would seem natural that p and all the n1, ···, np must be so specified. This would again limit p and n1, ···, np, since the primary automaton is a fixed entity, and hence it would limit L and through it the numbers that it represents. This difficulty can be removed as follows. Let each cell in L have k states for notational purposes as before, i.e., the states corresponding to the digits 0, 1, ···, k − 1, and two additional states to be called comma and period. (All of these states must be "quasi-quiescent" in the sense indicated above.) Within L, the numbers p and v1, ···, vp consist only of cells in digital states. Now let L be lined up as follows (proceeding from left to right, i.e., in the positive x-direction): the digits of p, a comma, the digits of v1, a comma, ···, the digits of vp, a period. The primary automaton, in "exploring" L, can sense the comma and the period and thereby ascertain the sizes of p and of the v1, ···, vp no matter what these are.

1.4.2.2 Applications of L.
The linear array L of Section 1.4.2.1 is the variable, but essentially standard attachment mentioned in question (B). It is the simple addendum to the primary automaton which, although essentially quiescent and possessed only of the most rudimentary structure, expands the active potentialities of that automaton substantially, as appeared in Section 1.4.2.1. The possibilities that are inherent in this device will, however, become really clear only after this. The main application in this sense will be described later [Secs. 1.5 and 1.6]. We will consider first a lesser application.

1.4.2.3 Use of L as an unlimited memory for (A). The last mentioned application relates to the attachments to a purely logical automaton, referred to in question (A). The setup for purely logical functions (in the sense of (A), as discussed in Section 1.2.1) fails to be universal because of the absence of one constituent: an arbitrarily large memory which is finite but of adjustable size (cf. the end of Sec. 1.2.1). The linear array L, as described in Section 1.4.2.1, is just that. Hence L, with its ancillary observation, exploration, and construction facilities, provides the (variable, but in the essential respects standard) attachment to the logical automaton which bridges the gap to logical universality, as indicated in question (A). It should be noted that the need for the facilities ancillary to L, referred to above, means that componentry which is normally called for in the primary (construction) automata of (B) must also be introduced into the (logical) automata of (A), if logical universality is an aim. For the details of all this, cf. later [Chapters 4 and 5].

1.4.2.4 Use of base two for L. One more remark about the cells of L is in order: we can choose k = 2, i.e., let all representations of numbers be base 2. Then each cell in L must have k + 2 = 4 states for the purposes now under consideration (cf. the discussion in Sec.
1.4.2.1). If it is now desired to keep the number of states for notational purposes at 2, this can still be achieved. It suffices to replace each cell of L by 2 cells, since a pair of 2-valued states allows 2² = 4 combinations. The digitalization and punctuation scheme for L meets all the requirements of Section 1.4.2.1. It is, however, not the only possible one. The following is an obvious variant. Keep the punctuation states (the comma and the period), as in Section 1.4.2.1. Instead of the two base 2 digital states, designated 0 and 1, use only one, designated 1. Designate a number v (an integer ≥ 0), not by a sequence of 0's and 1's, which express its base 2 digital expansion, but simply by a sequence of v 1's. This representation is a good deal longer than that of Section 1.4.2.1 (v symbols instead of n, where n is the smallest integer with 2^n > v, i.e., with n > log₂ v), but it is more simply defined, and more simply exploitable (in the sense of the ancillary functions referred to in Sec. 1.4.2.3). For a detailed consideration, cf. later.

[1.4.2.5 The linear array L. It may be helpful at this point to anticipate von Neumann's design of the mechanism for reading and writing on an arbitrary cell of the unlimited linear array L. Let us begin with a Turing machine, which is a finite automaton connected to an indefinitely extendible or infinite tape. The tape is divided into squares, each of which contains any one of a finite number of characters (i.e., is in any one of a finite number of states). Let the basic alphabet consist of two characters: zero and one, represented by a blank and a mark, respectively. At any given time the finite automaton senses one square of the tape. It can change the contents of this square (make a mark or erase a mark already there) and move the tape one square to the left or right, so that at the next moment of time it senses an adjacent square.
Thus the finite automaton can, in a finite amount of time, gain access to, read, and modify any square of the tape. It is clear that accessibility of an arbitrary tape square is the important thing, and having the tape move is only a means to this end. Alternatively, we can have the tape stand still and the finite automaton move back and forth along it. Or, we can have both the finite automaton and the tape stand still, and let the finite automaton communicate with an arbitrary square x_n by means of a contractable and indefinitely extendible "wire." The finite automaton can sense and modify the state of square x_n through this wire. Then the finite automaton can extend the wire to square x_{n+1}, or contract it to square x_{n−1}. This last procedure is the one von Neumann used in his cellular system. The details are given in Chapter 4 below. We will explain the basic idea in connection with Figure 37. The memory control MC is a finite cellular automaton occupying the area indicated in that figure. L is an infinite array of cells extending to the right. "Zero" is represented in cell x_n by the unexcitable state U, and "one" is represented by the quiescent but excitable state T₀₃₀, which is an ordinary transmission state directed downward. To read cell x_n, the memory control MC sends a sequence of stimuli around the connecting loop C₁ in the direction of the arrows. This sequence passes through x_n without affecting its neighbors x_{n−1} and x_{n+1}, is modified according to the state of x_n, and returns to the memory control MC with a representation of the contents of x_n. The memory control MC then writes on x_n and either extends the loop C₁ so that it passes through cell x_{n+1}, or contracts the loop C₁ so that it passes through cell x_{n−1}. The timing loop C₂ is used in this extension-contraction process and is extended (or contracted) along with loop C₁.
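The read/extend/contract cycle just described can be sketched in miniature (an illustrative model of our own, not von Neumann's 29-state construction: the loop C₁ is reduced to an index, and only the cell states U and T₀₃₀ are kept):

```python
# Toy model (ours) of the memory control MC and the array L: "zero" is the
# unexcitable state 'U', "one" is the transmission state 'T' (for T030).
# The connecting loop C1 is abstracted to the index n of the cell it reaches.

class MemoryControl:
    def __init__(self):
        self.L = ['U']          # the array L, grown on demand
        self.n = 0              # the cell x_n that loop C1 currently reaches

    def read(self) -> int:
        """Sense cell x_n through the loop."""
        return 1 if self.L[self.n] == 'T' else 0

    def write(self, bit: int) -> None:
        """Set cell x_n to 'one' (T030) or 'zero' (U)."""
        self.L[self.n] = 'T' if bit else 'U'

    def extend(self) -> None:
        """Lengthen C1 so it passes through x_{n+1}."""
        self.n += 1
        if self.n == len(self.L):
            self.L.append('U')  # newly reached cells start unexcitable

    def contract(self) -> None:
        """Shorten C1 so it passes through x_{n-1}."""
        self.n -= 1
```

The essential point survives the simplification: a fixed, finite control reaches an arbitrary cell x_n by repeatedly lengthening or shortening its connection, never by holding n internally in advance.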
There are finitely many basic characters to be represented on L, including the period and comma. These are represented by binary sequences of some length k, and each character is stored in k cells of L. Initially, we will place a finite sequence of characters on L, of which the last one, and only the last one, is a period. Now, the memory control MC can sense the period and move it to the right or left as it expands or contracts the information on L. Hence, though MC is of finite, fixed capacity, there is no bound to the amount of information on L with which it can interact.]

1.5 Universal Construction Schemes — Question (C)

1.5.1 Use of L for non-numerical (universal) parametrization. The schemes of Section 1.4.2.1 (together with Sec. 1.4.1.2) introduced an important broadening of the class of secondary automata that are constructible by one, suitably given, (fixed) primary automaton, in the sense of the second question of (B). They do not, however, achieve immediately the construction universality that is the aim of question (C). We will now get this, too, by introducing one further variation into the methods of Section 1.4.2.1. The class of secondary automata which can be constructed according to Section 1.4.2.1 by a single primary automaton is limited in this sense. (We disregard for a moment the influence of Sec. 1.4.1.2.) These (secondary) automata may represent a broad class, but they must nevertheless all be particular specimens from a common species; that is, their individual (construction) plans all derive from a common master plan in which certain available parameters are specifically numerically substituted. In other words, even though the specific plan of the secondary automaton must no longer be built into the primary automaton, nevertheless the underlying, generic plan — the plan that controls all the subordinate plans — must be built in there.

1.5.2 The universal type of plan.
Consider an arbitrary (but specifically given) secondary automaton and the possible ways to describe it. The following is certainly an adequate one: (a) Specify the (two) x and the (two) y coordinates of the four sides of a rectangular area in which the entire secondary automaton is to be contained. Let these coordinates be x_1, y_1, x_2, and y_2. These coordinates should be counted from an origin which is at a suitably designated point within the area of the primary automaton. It is actually better to introduce the side lengths α = x_2 − x_1 + 1, β = y_2 − y_1 + 1 (assuming x_1 ≤ x_2, y_1 ≤ y_2) of the rectangular area containing the secondary, and to use the numbers x_1, y_1, α, β. (b) According to (a) above, each cell within the rectangle covering the secondary can be characterized by two coordinates i (= 0, 1, ..., α − 1), j (= 0, 1, ..., β − 1). (To be precise, with respect to the system of coordinates used in (a) above, the coordinates of the cell i, j are x_1 + i, y_1 + j.) This gives, as it should, αβ cells in the rectangle covering the secondary. Let ζ be the number of states that each one of these cells can assume, using accordingly an index λ = 0, 1, ..., ζ − 1 to enumerate these states. Designate by λ_ij the state of cell i, j which is desired (on the basis of the plan of the secondary automaton in question) for the moment when the construction of this automaton is just completed. It is clear from (a) and (b) that the secondary automaton is completely characterized by the specification of the numbers x_1, y_1, α, β and λ_ij for all pairs i = 0, 1, ..., α − 1; j = 0, 1, ..., β − 1. Note that these numbers have the following ranges: x_1, y_1 = 0, ±1, ±2, ...; α, β = 1, 2, ...; λ_ij = 0, 1, ..., ζ − 1 for i = 0, 1, ..., α − 1; j = 0, 1, ..., β − 1.
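The description (a)-(b) amounts to flattening a rectangle of cell-states into a single sequence of integers. A sketch (our own encoding, prior to the sign convention introduced next):

```python
# Sketch (ours): describe a secondary automaton by its corner x1, y1, its
# side lengths alpha, beta, and the desired state lambda_ij of every cell
# in the covering rectangle, listed lexicographically by (i, j).

def describe(x1: int, y1: int, states: list[list[int]]) -> list[int]:
    """Flatten a rectangle of cell-states into the description sequence."""
    alpha = len(states)        # rows, indexed by i = 0 .. alpha - 1
    beta = len(states[0])      # columns, indexed by j = 0 .. beta - 1
    desc = [x1, y1, alpha, beta]
    for i in range(alpha):
        for j in range(beta):
            desc.append(states[i][j])  # lambda_ij in lexicographic order
    return desc
```

For a 2 × 2 secondary at corner (2, 3), `describe(2, 3, [[0, 1], [1, 0]])` yields `[2, 3, 2, 2, 0, 1, 1, 0]`: four position-and-size numbers followed by the αβ cell-states.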
To conclude, x_1, y_1 are better represented by specifying their absolute values |x_1|, |y_1| and two numbers ε, η: ε = 0 for x_1 ≥ 0, ε = 1 for x_1 < 0; η = 0 for y_1 ≥ 0, η = 1 for y_1 < 0. Thus the sequence (of integers ≥ 0)

(*)  (ε, η, |x_1|, |y_1|, α, β, λ_ij for i = 0, 1, ..., α − 1; j = 0, 1, ..., β − 1)

(the λ_ij are to be thought of as lexicographically ordered by i, j) contains a complete description of the desired secondary automaton, in the condition — i.e., with the cell-states — actually desired for its initial moment, immediately after completion. This sequence of numbers may now be treated with the method described in Section 1.4.2.1 for the simpler sequence that occurred there (the sequence p, v_1, ..., v_p). That is, we can form a linear array of cells L, extending in the right hand (i.e., positive x) direction, and made up as follows: the numbers enumerated in formula (*) in the order in which they appear there, each represented by its base k digital expansion, any two consecutive ones separated by a comma, and the last one followed by a period. The general description above plays now precisely the role of the general plan of a class of secondary automata, which contains parameters, as described in connection with the third scheme in Section 1.4.1.2. In addition to this, the linear array L introduced above is the exact equivalent of the linear array L introduced in Section 1.4.2.1: it specifies the numerical values that have to be substituted for the parameters of the general description. Thus the present description of an arbitrary secondary automaton has been made to fit entirely into the parametrization pattern of the third scheme of Section 1.4.1.2. Since it is entirely unrestricted, this means that the universality referred to in question (C) can be achieved in this way.

1.6 Self-Reproduction — Question (D)

1.6.1.1 The apparent difficulty of using L in the case of self-reproduction.
Let us now consider question (D), that is, the problem of self-reproduction. The a priori argument against the possibility of self-reproduction is that it is natural to expect the constructing automaton to be more complex than the constructed one — i.e., the primary will be more complex than the secondary.²⁵ This is confirmed by the results outlined in Sections 1.2.2–1.4.1.1, i.e., those dealing with the first question in (B): the primary must contain a complete plan of the secondary (cf. Sec. 1.4.1.1), and in this sense the primary is more complex than the secondary. This limitation is somewhat transformed, but not removed, by the subsequent developments of Sections 1.4.1.1–1.5.2; even the strongest one among the results that are discussed there (the answer to question (C) in Sec. 1.5.2, securing universality) is subject to one form of it. Indeed, this result calls for a complete description of the secondary, expressed by the linear array of cells L, to be attached to the primary. If one tried to pass from here directly to self-reproduction, it would be necessary to have an automaton which can contain its own plan, for example, in the form L. If the second question of (D) is included, it would also have to contain the plan (i.e., the L) of another, prescribed automaton. With the scheme of Section 1.5.2, even the first is impossible: the (secondary) automaton considered there has no more than αβ cells, while L (according to formula (*) in Sec. 1.5.2) consists of αβ + 6 digitalized numbers, αβ + 5 commas, and a period (i.e., 2αβ + 12 or more cells). Many variants on this theme are possible, but none has yet appeared which, when directly used, overcomes this difficulty. However, there is an indirect method that circumvents it.

1.6.1.2 Circumvention of the difficulty — the types E and E_F. This method is as follows.²⁶ Designate the universal (primary) automaton of Section 1.5.2 by A.
A constructs any secondary whose description L is attached to A, as described in Section 1.5.2.

²⁵ [See also pp. 79–80 above.]
²⁶ Von Neumann, "The General and Logical Theory of Automata." [See also pp. 84–87 above.]

It is possible to design and to position at a definite place adjacent to A another automaton B with the following function. B explores L and produces an exact copy L′ of it, placing L′ in exactly the same position with respect to the secondary that L is in with respect to the primary A. The information necessary for this positioning can be obtained by the investigation of L, since the latter contains the numbers x_1, y_1, α, β, which describe the position of the secondary in question with respect to the primary A. Consider finally an automaton C which controls the two previously mentioned automata A and B as follows: C first causes A, as primary, to build the secondary S described by L. C then causes B to make a copy L′ of L and to attach it to the secondary S as described above. Now designate the total aggregate of all three automata A, B, C by D. Designate the description L of this automaton D by L_D. Note that L_D must contain the numbers x_1, y_1 (indirectly, by way of ε, η, |x_1|, |y_1|; cf. formula (*) in Sec. 1.5.2), α, β, which describe the positioning of the desired secondary with respect to the primary. There need be no doubt about the values of α, β that are to be used here, since one can ascertain how large a rectangle is needed to cover D. With respect to x_1, y_1, however, there is a real choice; these two coordinates define the relative position of the desired secondary with respect to the primary. Let us assume first that this choice will be made in some definite manner; it need only guarantee that the secondary and its attachment L′ will lie wholly outside the primary and its attachment L. Later on we will have somewhat more to say about this.
Now consider the complex E which results from attaching L_D to D. By going over the description given above, it is easily verified that E will construct precisely D with L_D, displaced as above. Thus E is self-reproducing. This answers the first question of (D). The second question of (D) can now also be answered along the same lines. Indeed, assume that in addition to self-reproduction, the construction of a further automaton F is also wanted. In this case, form the L which describes D followed by F: L_{D+F}. Now consider the complex E_F which results from attaching L_{D+F} to D. It is clear that this will construct D, attach L_{D+F} to it, and also construct F. In other words, it self-reproduces and constructs F in addition. The following remarks serve to clarify somewhat further the nature of the procedure outlined for (D).

1.6.2.1 First remark: shape of L. The construction of an automaton was based on generating separately every correct state of every cell in a suitable covering area {cf. Sec. 1.5.2 (b), where this modus procedendi is indicated}. The covering area is conceived in a simplified, and therefore presumably often overextended, form as a rectangle {cf. Sec. 1.5.2 (a)}. The external attachment L is a linear array (cf. the last part of Sec. 1.5.2). These two geometrical shapes will not always fit together perfectly: covering them simultaneously by a rectangle may force an inelegant overextension of the latter. It should be pointed out that there is nothing immutable about the linear shape of L, and that one might well decide to change it (cf. later). On the other hand, the linear shape has the virtue of easy overall accessibility (cf. later [Ch. 4]).

1.6.2.2 Second remark: avoidance of collision in a single reproduction.
As pointed out toward the end of Section 1.6.1.2, x_1, y_1 must be so large that the secondary (whose position relative to the primary is defined by the coordinates x_1, y_1) and its attachment L′ will lie wholly outside the primary and its attachment L. Hence they are affected by the size of L. (L′ is congruent to L.) This creates the danger of a vicious circle, since L contains |y_1|. However, this danger is not serious, and any one of the following procedures will obviate it. L (both for the primary L itself and for the secondary L′) extends in one direction only (the positive x-direction; cf. the end of Sec. 1.5.2), which implies that it is quite thin in the y-directions (especially if it is linear; cf. above and also later). Therefore, a fixed minimum value for |y_1| can be assigned, which guarantees that neither D nor L of the primary and of the secondary collide, by virtue of their separation in the y-direction. Alternatively, a base k notation for |x_1|, |y_1| (cf. Secs. 1.4.1.2 and 1.4.2.1) guarantees that the area used for their designation, and therefore L also, increases only as the log₂ of these numbers (cf. Sec. 1.4.2.4), whereas the separation that they provide is essentially that of their own size. Clearly for sufficiently large numbers these will overtake their own log₂ to any desired degree. Finally, if each number is to be designated, as alternatively suggested in Section 1.4.2.4, by a sequence of as many ones as it expresses, we can still avoid any difficulty by, for example, agreeing that |x_1|, |y_1| are to be squares of integers and that the numbers to be designated by the means indicated above will be their square roots. Thus the required size of L will go with the square root of the separation provided, which is, like the log₂, a slowly increasing function, sure to be adequately overtaken when the numbers get sufficiently large.
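The growth rates invoked in the second and third devices can be checked numerically (a sketch of our own; y is taken to be a perfect square so that the unary-of-square-root convention applies):

```python
# Sketch (ours): the cells needed to designate a separation y grow like
# log2(y) in base-2 notation and like sqrt(y) in the unary-of-square-root
# convention, while the separation itself grows like y.

import math

def base2_cells(y: int) -> int:
    """Digits of y in base 2 (second device of Sec. 1.6.2.2)."""
    return max(1, y.bit_length())

def unary_sqrt_cells(y: int) -> int:
    """Ones needed to write sqrt(y) in unary; y assumed a perfect square."""
    return math.isqrt(y)

# For sufficiently large y both designations are overtaken by y itself.
for y in (256, 65536, 1 << 20):
    assert base2_cells(y) < unary_sqrt_cells(y) < y
```

The check deliberately starts at y = 256: for small y the base-2 length can exceed the square root, which is why the text hedges with "when the numbers get sufficiently large."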
As mentioned above, any one of these three devices is workable, and they are not the only ones. The actual procedure will be developed later.

1.6.2.3 Third remark: analysis of the method for overcoming the difficulty of Section 1.6.1.1 — the role of L. It is worth recapitulating how the a priori argument against the possibility of self-reproduction, as stated in Section 1.6.1.1, was overcome in Section 1.6.1.2. The essential step was that D contained a sub-assembly B which is able to copy (and re-position) any linear array L. B is a fixed entity, of fixed, finite size, and it is yet able to copy an L of any size. It is essentially this step of "copying" which transcends the otherwise seemingly valid rule of the primary's necessary superiority (in size, also in organization) over the secondary. Now L = L_G is the description of the secondary G that is to be constructed, as discussed in Section 1.5.2. (In our actual applications in Sec. 1.6.1.2, D and D + F played the role of G.) One might ask why the description L_G is preferable to the original G in controlling the copying device B. In other words, why can B not copy G itself directly, i.e., why must the intermediary L_G be introduced? This question is clearly of considerable semantic importance for the area in which we are now working, i.e., for a theory of automata. Indeed, it touches at the base of the entire question of notations and representations, i.e., of the significance and advantages of introducing "descriptions" in addition to the original objects. The reason is this. In order to copy a group of cells according to the ideas of Section 1.6.1.2 concerning B, it is necessary to "explore" that group to ascertain the state of each one of its cells and to induce the same state in the corresponding cell in the area where the copy is to be placed.
This exploration implies, of course, affecting each cell of this group successively with suitable stimuli and observing the reactions. This is clearly the way in which the copying automaton B can be expected to operate, i.e., to take the appropriate actions on the basis of what is found in each case. If the object under observation consists of "quasi-quiescent" cells (cf. the remarks made on this subject in Sec. 1.4.2.1), then these stimulations can be so arranged as to produce the reactions that B needs for its diagnostic purposes, but no reactions that will affect other parts of the area which has to be explored. If an assembly G, which may itself be an active automaton, were to be investigated by such methods, one would have to expect trouble. The stimulations conveyed to it, as discussed above, for "diagnostic" purposes, might actually stimulate various parts of G in such a manner that other regions could also get involved, i.e., have the states of their cells altered. Thus G would be disturbed; it could change in ways that are difficult to foresee, and, in any case, likely to be incompatible with the purpose of observation; indeed, observing and copying presuppose an unchanging original. The virtue of L_G (as compared to G) is that, since it consists of quasi-quiescent cells, no such complications (i.e., no spreading of the diagnostic stimulations) need be expected. (For the details of all this, cf. later [Ch. 4].) The above requires one more qualification. Our choice actually did not lie between the copying of G and the copying of L_G. It was rather between the copying of G on the one hand, and the copying of L_G, combined with the construction of G from its description L_G, on the other hand. The last step in the second procedure, however, is feasible, since this is precisely what the universal constructing automaton in the sense of question (C) will do, according to Section 1.5.2.
Note also that the quasi-quiescent character of L = L_G is important in this construction step too; in fact, the observations of Section 1.4.2.1 concerning quasi-quiescence in L were aimed directly at this application.

1.6.3.1 Copying: use of descriptions vs. originals. It is worthwhile to observe at this point, too, why a third step, namely the construction of L_G, based on a direct exploration of the original G, cannot be carried out with these methods. Note that if this could be done, then a suitable primary automaton could copy a given automaton G without ever having been furnished with its description L_G. Indeed, one would begin with the step mentioned above, the construction of L_G from G, and then proceed with the two steps mentioned previously, the copying of L_G and the construction of G from L_G. The difficulty is that the two last mentioned steps require only the observation of the quasi-quiescent L_G, while the first mentioned step would also call for the observation of the uncontrollably reactive G. If one considers the existing studies concerning the relationship of automata and logics, it appears very likely that any procedure for the direct copying of a given automaton G, without the possession of a description L_G, will fail; otherwise one would probably get involved in logical antinomies of the Richard type.²⁷ To sum up, the reason to operate with "descriptions" L_G instead of the "originals" G is that the former are quasi-quiescent (i.e., unchanging, not in an absolute sense, but for the purposes of the exploration that has to be undertaken), while the latter are live and reactive. In the situation in which we are finding ourselves here, the

²⁷ [Von Neumann indicated that he was going to make a footnote reference to Turing at this point. See Sec. 1.6.3.2 below.]
importance of descriptions is that they replace the varying and reactive originals by quiescent and (temporarily) unchanging semantic equivalents and thus permit copying. Copying, as we have seen above, is the decisive step which renders self-reproduction (or, more generally, reproduction without degeneration in size or level of organization) possible.

[1.6.3.2 The Richard paradox and Turing machines. As indicated above, von Neumann was going to make a footnote reference to Turing in connection with the Richard paradox. I do not know what he had in mind, but I think it likely that he was going to mention the parallelism between Richard's paradox²⁸ and Turing's proof of the undecidability of the halting problem. In any case, this parallelism is illuminating in the present context. Richard's paradox may be generated in a suitable language ℒ as follows. Let e_0, e_1, e_2, ... be an enumeration of all the expressions of ℒ which define two-valued number-theoretic functions of one variable, that is, functions from the natural numbers to the two values zero and one. The expression "x is odd" is such an expression; it defines a function which is true (has the value 1) for odd numbers and is false (has the value 0) for even numbers. Let f_i(n) be the number-theoretic function defined by e_i, and define −f_i(n) by: −f_i(n) = 0 if f_i(n) = 1; −f_i(n) = 1 if f_i(n) = 0. Finally, let e be the expression "the function −f_n(n)." We assume that e is expressible in ℒ, and derive a contradiction. (1) The enumeration e_0, e_1, e_2, ... contains all the expressions of ℒ which define two-valued number-theoretic functions of one variable. Expression e clearly defines a two-valued number-theoretic function of one variable. Therefore expression e is in the enumeration e_0, e_1, e_2, .... (2) But e is an explicit definition of the function −f_n(n), which differs from every function in the enumeration f_0(n), f_1(n), f_2(n), ....
Therefore e does not define any of the functions f_0(n), f_1(n), .... For each i, f_i(n) is defined by e_i. Consequently, e is not in the enumeration e_0, e_1, e_2, .... Thus we have shown both that the expression e is in the enumeration e_0, e_1, e_2, ... and that it is not in this enumeration. The appearance of this contradiction is surprising, because it would seem that expression e is a legitimate expression in a consistent language, namely, the English language enriched with some mathematical symbols. Actually, the contradiction shows that if a language ℒ is consistent then e cannot be expressed in it.

²⁸ [Richard, "Les principes des mathématiques et le problème des ensembles." See also Kleene, Introduction to Metamathematics, pp. 38, 341.]

Let us turn next to the halting problem for Turing machines. This problem was explained at the end of the Second Lecture of Part I above. A Turing machine is a finite automaton with an indefinitely extendible tape. A "concrete Turing machine" is a Turing machine which has a finite "program" or problem statement on its tape initially. A concrete Turing machine is said to be "circular" if it prints a finite sequence of binary digits and halts, while it is said to be "circle-free" if it continues to print binary digits in alternate squares forever. Turing proved that there is no decision machine for halting, that is, no abstract Turing machine which can decide whether an arbitrary concrete Turing machine is circular (will halt sometime) or circle-free. Turing's proof that there is no decision machine for halting may be put in a form which closely parallels the preceding proof concerning Richard's paradox. Let t_0, t_1, t_2, ..., t_i, ... be an enumeration of all the circle-free concrete Turing machines. Let s_i(0), s_i(1), s_i(2), ..., s_i(n), ... be the sequence computed by machine t_i.
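The diagonal step that powers both arguments can be illustrated on a finite table (the sample functions are our own; a genuine enumeration is infinite, but the mechanism is identical): whatever row f_n the enumeration offers, −f_n(n) disagrees with it at the argument n.

```python
# Finite illustration (ours) of Cantor's diagonal procedure: given any
# listing f_0, f_1, ... of two-valued functions, the function 1 - f_n(n)
# differs from every f_i at the argument i, so it cannot be in the listing.

fs = [
    lambda n: n % 2,               # f_0: "n is odd"
    lambda n: 1,                   # f_1: constantly one
    lambda n: 0 if n < 5 else 1,   # f_2: a step function
]

def diag(n: int) -> int:
    """The diagonal function -f_n(n) on the indices of the table."""
    return 1 - fs[n](n)

# diag disagrees with every row on the diagonal, hence equals no row.
for i in range(len(fs)):
    assert diag(i) != fs[i](i)
```

The same construction reappears twice in the text: with expressions e_i and functions f_i in Richard's paradox, and with machines t_i and sequences s_i in Turing's proof.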
Each s_i(n) is either zero or one, so machine t_i computes the two-valued function s_i(n) in the sense of enumerating its values in their natural order. Now consider the function −s_n(n). This function is analogous to the function −f_n(n) defined by expression e in the Richard paradox. To continue the parallelism, we assume that there is a circle-free concrete Turing machine t′ which computes the function −s_n(n) and derive a contradiction. (1) The enumeration t_0, t_1, t_2, ... contains all circle-free concrete Turing machines. Machine t′ is by hypothesis a circle-free concrete Turing machine. Consequently, machine t′ is in the enumeration t_0, t_1, t_2, .... (2) By definition, t′ computes the function −s_n(n), which clearly differs from every function in the enumeration s_0(n), s_1(n), s_2(n), .... For each i, the function s_i(n) is computed by the machine t_i. Consequently, machine t′ is not in the enumeration t_0, t_1, t_2, .... Thus we have shown both that machine t′ is in the enumeration t_0, t_1, t_2, ... and that it is not. The appearance of this contradiction is not surprising, however, for we had no reason to believe that machine t′ exists. In other words, the contradiction shows that machine t′ does not exist, and hence that the function −s_n(n) is not computed by any circle-free concrete Turing machine. We assume next that there is a decision machine t_h for halting, and derive a contradiction. There is a concrete Turing machine which can enumerate all the concrete Turing machines; call it t_e. The output of t_e can be fed into t_h to produce a machine t_e + t_h which enumerates all the circle-free concrete Turing machines. There is an abstract Turing machine t_u which can simulate each circle-free concrete machine in turn, find s_n(n) for each machine n, and print −s_n(n). Thus the machine t_e + t_h + t_u computes the function −s_n(n), and is the machine t′.
But we know from the preceding paragraph that machine t′ does not exist. Machines t_e and t_u do exist. Therefore, machine t_h does not exist. That is, there is no decision machine for halting. It follows also that machine t_e + t_h does not exist, i.e., there is no machine which enumerates all the circle-free concrete Turing machines. The first part of the preceding proof that there is no decision machine for halting establishes both that machine t′ is in the enumeration t_0, t_1, t_2, ... and that it is not. This closely parallels the earlier proof, given in connection with Richard's paradox, that the expression e is in the enumeration e_0, e_1, e_2, ... and that it is not. Both use Cantor's diagonal procedure to define a function which is not in a given enumeration. I suspect that it was because of this parallelism that von Neumann was going to refer to Turing at this point. It should be noted that the Richard paradox can be barred from a language by imposing a "theory of types" on that language.²⁹ For example, we can design the language so that every expression of the language has a type number, and so that an expression of given type can refer only to expressions of lower type. Suppose now that the expressions e_0, e_1, e_2, ... are of type m. Since expression e refers to all these expressions, it must be of higher type, and therefore cannot be in the list e_0, e_1, e_2, .... This being so, our earlier derivation of Richard's paradox fails. See in this connection the letter from Kurt Gödel quoted at the end of the Second Lecture of Part I above. These considerations about self-reference are relevant to the problem of designing a self-reproducing automaton, since such an automaton must be able to obtain a description of itself. In Section 1.6.3.1 (entitled "Copying: use of descriptions vs. originals") von Neumann considers two methods for accomplishing this, which I will call the "passive" and "active" methods.
In the passive method the self-reproducing automaton contains within itself a passive description of itself and reads this description in such a way that the description cannot interfere with the automaton's operations. In the active method the self-reproducing automaton examines itself and thereby constructs a description of itself. Von Neumann suggests that this second method would probably lead to paradoxes of the Richard type, and for this reason he adopts the first method. See also Sections 1.7.2.1, 2.3.3, 2.6.1, and 2.8.2 below. We will see by the end of Chapter 5 below that a self-reproducing machine can indeed be constructed by means of the first method. This shows that it is possible for an automaton to contain a description of itself.]^30

^29 [Russell, "Mathematical Logic as Based on the Theory of Types." See also Kleene, Introduction to Metamathematics, pp. 44-46.]

1.7 Various Problems of External Construction Intermediate Between Questions (D) and (E)

1.7.1 Positioning of primary, secondary, ternary, etc. We pass now to an extension of question (D) which points the way towards question (E). This deals with the question of positioning the secondary that the self-reproducing primary E or E_F constructs, and the initiation, timing, and repetitions of the act of self-reproduction. Note that the positioning of F for E_F need not create any new problems: E_F is D with L_(D+F) attached (cf. the end of Sec. 1.6.1.2) and L_(D+F) is a description of D followed by a description of F. In this joint description of D with F the latter must be unambiguously positioned with respect to the former, and this takes care of what is needed in this respect. Returning to the main question of positioning the secondary by E or E_F, we can argue as follows.
Assume that this positioning is done by the first method of Section 1.6.2.2, i.e., by choosing a γ such that |y_1| ≥ γ guarantees the separateness of the primary and the secondary E or E_F. (In the case of E_F we think of both primary and secondary as provided with an F positioned according to L_(D+F) relatively to D; cf. above, although at the beginning of the process only the secondary need be accompanied by such an F.) Let the origin of the x, y-coordinate system referred to in Section 1.5.2 lie in the extreme lower left corner of the rectangle covering the primary, i.e., at the point that corresponds to the one designated by x_1, y_1 in the secondary. Thus the secondary is translated by x_1, y_1 against the primary. Since the secondary is otherwise identical with the primary (except for the addition of F in the second case), it will again reproduce (and produce another F in the second case), constructing a ternary. This will then produce a quaternary, followed by a quinary (each with its concomitant F in the second case; cf. above), etc. The shifts involved will be 2x_1, 2y_1, then 3x_1, 3y_1, then 4x_1, 4y_1, etc. Thus the shift between the p-ary and the q-ary is (q − p)x_1, (q − p)y_1. Since p, q = 1, 2, ..., p ≠ q implies |q − p| = 1, 2, ..., and hence |(q − p)y_1| = |q − p| · |y_1| ≥ γ. Hence, in view of our above observation relating to Section 1.5.2, these two will not intersect. That is, all the successively constructed descendants of the primary will be distinct and non-interfering entities in space.

^30 [There is an interesting parallel between Gödel's undecidable formula (see p. 55 above), which refers to itself, and von Neumann's self-reproducing automaton, which contains a description of itself. See Burks, "Computation, Behavior, and Structure in Fixed and Growing Automata," pp. 19-21.]
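[The non-intersection argument above can be checked numerically. In the following Python sketch all sizes and names are invented for illustration; choosing |y_1| ≥ h plays the role of |y_1| ≥ γ in the text.]

```python
def descendant_origins(x1, y1, count):
    """Lower-left corners of the primary and its descendants: the
    q-ary (q = 1 for the primary) sits at ((q-1)*x1, (q-1)*y1), so
    the shift between the p-ary and the q-ary is
    ((q-p)*x1, (q-p)*y1), as in the text."""
    return [((q - 1) * x1, (q - 1) * y1) for q in range(1, count + 1)]

def disjoint(a, b, w, h):
    """True if the w-by-h rectangles with lower-left corners a and b
    do not overlap."""
    (ax, ay), (bx, by) = a, b
    return ax + w <= bx or bx + w <= ax or ay + h <= by or by + h <= ay

# Rectangle covering each automaton: width 8, height 5 (made-up sizes).
w, h, x1, y1 = 8, 5, 3, 5          # |y1| >= h guarantees separateness
origins = descendant_origins(x1, y1, 10)
assert all(disjoint(origins[p], origins[q], w, h)
           for p in range(10) for q in range(p + 1, 10))
```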
(To be more precise, these descendants will be distinct and non-interfering entities in the underlying crystal [crystalline structure].) Actually, this program of mutual avoidance among the primary and its descendants must be extended to the paths within the crystal through which each one of these entities, acting as a primary for its own reproduction, connects with the site and operates on the construction of its immediate successor in the line of descent, i.e., its secondary. This, however, presents no difficulties, and will be gone into in the course of the detailed discussion.

1.7.2.1 Constructed automata: initial state and starting stimulus. The next point to be gone into is that of initiation and timing. Consider the state of the secondary automaton which the construction is designed to achieve, i.e., its so-called initial state {cf. Sec. 1.5.2 immediately after formula (*)}. In all states that lead up to this, and therefore conveniently in this state too, the automaton must be quasi-quiescent. This is clearly necessary for an orderly process of construction, since the already-constructed parts of the not yet completed secondary must not be reactive and changing, while the construction — in adjacent as well as in other areas — is still in progress.

The problem that is encountered here is not unlike the one discussed in Section 1.6.2.3 relative to the quasi-quiescence of L. However, it is less severe here. The stimuli that have to be used in exploring L must be able to induce responses in the primary, if they are to perform their function of inducing there appropriate actions that depend on the information acquired in inspecting L. (This is quite essential to the proper functioning of the primary, as was seen in the discussion of the operation of constructing under the control of instructions in Sec. 1.4.2.1, and again in the discussion of the operation of copying in Sec. 1.6.2.3 and in Sec. 1.6.3.)
On the other hand, the stimuli that create the desired cell states during the construction of the secondary need not have such effects on the class of automata that is involved here, secondary or primary. This will be verified in detail later. Thus it was necessary to keep the "descriptions" L of automata sharply apart from the "originals" (cf. Sec. 1.6.3.1), while it will be possible to construct the automata which are relevant here so that they have quasi-quiescent initial states. For the details and precise definitions, cf. later.

The crux of this matter is, of course, that once such a (secondary) automaton is completed, and hence present in its quasi-quiescent initial state, it can then be transferred by some appropriate process of stimulation into its normal (i.e., intended) mode of activity. This process of stimulation is most conveniently thought of as a single stimulus, delivered to the appropriate point of the secondary, at the appropriate time after completion, by the primary. This is the secondary's starting stimulus. This is, then, the concluding step of the primary in its construction of the secondary. For the details, cf. later.

In the case of self-reproduction, i.e., for the E or E_F discussed in Sections 1.6.1.2 and 1.7.1, the secondary (or one of the secondaries) is a shifted copy of the primary. The starting stimulus activates this secondary and makes it self-reproducing (as the primary had been originally). This, then, maintains the iterative process of self-reproduction with which Section 1.7.1 dealt.

1.7.2.2 Single-action vs. sequential self-reproduction. For the self-reproducing primary (E or E_F; cf. above) the next question is this: What does the primary do after it has completed the secondary and given it the starting stimulus? The simplest arrangement would be to let it return to a quasi-quiescent state which is identical with its original state.
Implicitly this is the assumption which fits discussions like the one of continued reproduction in Section 1.7.1.

An alternative possibility is to finish with the quasi-quiescent state, plus activity in a suitable terminal organ which imparts the starting stimulus again. This scheme leads to repeated self-reproductions by the original primary, and of course similarly by all its descendants in the first, second, third, etc. degree. However, it is not operable without one more elaboration. Indeed, as it stands it would cause the primary to try to form all successive secondaries with the same x_1, y_1, i.e., at the same location in the crystal. This is obviously conflicting; at best the second construction of a secondary would override and destroy the first specimen. However, it is even more likely that the first secondary, which by then is reactive, will interfere with the second (attempted) construction, causing an unforeseeable class of malfunctions and corrupting all reproduction. It is therefore necessary to change x_1, y_1 between the first and the second attempt to construct a secondary, and similarly between the second and the third one, the third and the fourth one, etc. This changing of x_1, y_1 must therefore take place during (i.e., as a part of) the activity of the terminal organ referred to above. The arithmetical rules that control these successive modifications of x_1, y_1 must be such that the whole sequence of secondaries of the original primary do not conflict with each other, nor with the possible F which accompany them (cf. the first part of Sec. 1.7.1), nor with the paths which are required for their construction (cf. the end of Sec. 1.7.1). In addition, every secondary of the original primary, since it is a shifted copy of the latter, will behave in the same way.
Thus a double sequence of ternaries will be constructed from these, then by the same mechanisms a triple sequence of quaternaries, then a quadruple sequence of quinaries, etc. The rules for the successive modifications of the x_1, y_1 must hence be such that no two in all the orders of this hierarchy ever interfere with each other, or with each other's possible F, or with each other's construction paths. This requirement sounds complicated, but it is not particularly difficult to implement by suitable arithmetical rules concerning the formation of the successive x_1, y_1. This will be discussed later.

The above discussion thus distinguishes between two types of self-reproduction: first, when each primary constructs only one secondary, and second, when each primary keeps constructing secondaries sequentially without ever stopping. We will designate these as the single-action and the sequential type of self-reproduction, respectively.

1.7.3 Construction, position, conflict. Some remarks about physiological analogs of the above constructions are now in order.

Comparing these processes of construction and reproduction of automata with those of actual growth and reproduction in nature, this difference is conspicuous: in our case the site plays a more critical role than it does in reality. The reason is that by passing from continuous, Euclidean space to a discrete crystal, we have purposely bypassed as much as possible of kinematics. Hence the moving around of a structure which remains congruent to itself, but changes its position with respect to the crystal lattice, is no longer the simple and elementary operation it is in nature. In our case, it would be about as complex as genuine reproduction. This means that all of our structures are rather rigidly tied to their original location, and all conflicts and collisions between them are primarily conflicts in location.
It is true in the natural setting, too, that conflicts and collisions are due in the same way to location, but there the scheme has more elasticity because of the possibility of motion. The limitations of the pattern due to this circumstance are obviously the price one has to pay for the simplicity obtained by our elimination of kinematics (cf. the discussions of Secs. 1.3.1.1-1.3.3.1).

An essential preliminary condition for the mechanisms of reproduction that we have considered is the quiescence of the area in which they are to function (cf., e.g., the remarks in the first part of Sec. 1.7.2.1 and in the first part of Sec. 1.7.2.2). That is, the region of the crystal surrounding the primary must be free of all reactive organisms, and this must be true as far as the process of reproduction is expected to progress unhindered. It is quite clear that where the reproductive expansion of the area under the influence of the primary collides with other reactive organisms, the "unforeseeable malfunctions" referred to in Section 1.7.2.2 can set in. This is, of course, just another way to refer to the conflict situations involving several independent organisms that have come into contact and interaction.

1.7.4.1 E_F and the gene-function. Another physiological analog worth pointing out is the similarity of the behavior of automata of the E_F type with the typical gene function.^31 Indeed, E_F reproduces itself and also produces a prescribed F. The gene reproduces itself and also produces — or stimulates the production of — certain specific enzymes.

1.7.4.2 E_F and the mutation — types of mutation. A further property of E_F that may be commented on is this. Assume that a cell of E_F is arbitrarily changed. If this cell lies in the D-region of E_F, it may inhibit or completely misdirect the process of reproduction.
If, on the other hand, it lies in the L_(D+F) region of E_F, then E_F will construct a secondary, but this may not be related to it (and to F) in the desired manner. If, finally, the altered cell lies in the L_(D+F) region, and more particularly in the description of F within it, modifying F into, say, F', then the production of E_F' will take place, and in addition to it an F' will be produced.

Such a change of a cell within E_F is rather reminiscent of a mutation in nature. The first case would seem to have the essential traits of a lethal or sterilizing mutation. The second corresponds to one without these traits, but producing an essentially modified, presumably sterile, successor. The third one produces a successor which is viable and self-reproducing like the original but has a different by-product (F' instead of F). This means a change of the hereditary strain. Thus the main classification of mutations turns out to be quite close to the one occurring in nature.

^31 Von Neumann, "The General and Logical Theory of Automata." Collected Works 5.317-318.

1.8 Evolution — Question (E)

The observations of Section 1.7 tend towards the transition from question (D) to question (E). On (E) itself, the question of evolution, we will only make a few remarks at this point.

There is no difficulty in incorporating logical devices into automata of the types E or E_F which will modify the D, F areas in their L_D, L_(D+F), respectively, depending on outside stimuli which they may have received previously. This would amount to a modification of the mass of heredity that they represent by the occurrences (experiences) of their active existence. It is clear that this is a step in the right direction, but it is also clear that it requires very considerable additional analyses and elaborations to become really relevant. We will make a few remarks on this subject later.
In addition to this it must be remembered that conflicts between independent organisms lead to consequences which, according to the theory of "natural selection," are believed to furnish an important mechanism of evolution. As was seen at the end of Section 1.7.3, our models lead to such conflict situations. Hence this motive for evolution might also be considered within the framework of these models. The conditions under which it can be effective here may be quite complicated ones, but they deserve study.

Chapter 2

A SYSTEM OF 29 STATES WITH A GENERAL TRANSITION RULE

2.1 Introduction

2.1.1 The model: states and the transition rule. In this chapter we will develop the first model that possesses the potentialities of logical and constructive universality and of self-reproduction (cf. questions (A)-(E) in Sec. 1.1.2.1), as well as the other attributes evolved in the course of the discussion of these in Chapter 1. This model is based on a crystalline medium (cf. Secs. 1.3.3.1-1.3.3.3); we will be able to construct it in two dimensions and to use there the quadratic^1 (regular) lattice {cf. the end of Sec. 1.3.3.3, in particular questions (P) and (R)}. Each lattice point of this crystal will be able to assume a finite number of different states (say N states) and its behavior will be described (or controlled) by an unambiguous transition rule, covering all transitions between these states, as affected by the states of the immediate neighbors.

We will, then, perform the major constructions called for by questions (A)-(E) in Section 1.1.2.1 (and the relevant subsequent discussions of Ch. 1) for a specific model defined along these lines.

2.1.2 Formalization of the spatial and the temporal relations. At this point we introduce some rigorous concepts and notations. The lattice points of the quadratic crystal (cf. Sec. 2.1.1) are designated by two integer-valued coordinates, i, j.
It is natural to treat the crystal as unlimited in all directions, at least as long as there does not emerge some definite reason for proceeding differently. This determines the ranges of i, j:

(1) i, j = 0, ±1, ±2, ... .

[It does not matter which lattice point is selected as the origin (0, 0).]

^1 [The lattice points of a quadratic crystal lie at the corners of squares.]

The pair i, j thus represents a point in the plane, but it is also convenient to view it as a vector, i.e., to treat it as an additive quantity. We write

(2) ϑ = (i, j).

The nearest neighbors of (i, j) are the four points (i ± 1, j), (i, j ± 1). The next-nearest neighbors are the four points (i ± 1, j ± 1). In Figures 4a and 4c the nearest neighbors of X are marked with small circles (O), and the next-nearest neighbors are marked with heavy dots (•). Put

(3) v^0 = (1, 0), v^1 = (0, 1), v^2 = −v^0 = (−1, 0), v^3 = −v^1 = (0, −1),

and

(4) v^4 = (1, 1), v^5 = (−1, 1), v^6 = −v^4 = (−1, −1), v^7 = −v^5 = (1, −1).

See Figure 4b. The nearest neighbors of ϑ are the ϑ + v^α (α = 0, ..., 3), and the next-nearest neighbors of ϑ are the ϑ + v^α (α = 4, ..., 7). One might hesitate as to whether the immediate neighbors of ϑ referred to in Section 2.1.1 should be, among the ϑ + v^α, the four with α = 0, ..., 3 or the eight with α = 0, ..., 7. We will choose the former, since it leads to a simpler set of tools.

In Figures 4a and 4b the crystal lattice was shown in the usual manner, the lattice points being the intersections of the lines. In future figures we will use a different scheme: the lattice points will be shown as squares, and immediate neighbors (which in Figs. 4a and 4b were connected by single edges) will now be squares in contact (i.e., with a common edge). Furthermore, we will always show only those squares (i.e., only those lattice points) which are needed to illustrate the point immediately at hand. Thus Figure 4a assumes the appearance of Figure 4c.
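[The neighbor vectors of expressions (3) and (4) can be written out mechanically; the following Python sketch uses invented names and is illustrative only.]

```python
# The vectors v^0..v^7 of expressions (3) and (4).
V = [(1, 0), (0, 1), (-1, 0), (0, -1),    # v^0..v^3: nearest neighbors
     (1, 1), (-1, 1), (-1, -1), (1, -1)]  # v^4..v^7: next-nearest neighbors

def nearest_neighbors(i, j):
    """The four immediate neighbors (i, j) + v^a, a = 0..3."""
    return [(i + di, j + dj) for di, dj in V[:4]]

def next_nearest_neighbors(i, j):
    """The four diagonal neighbors (i, j) + v^a, a = 4..7."""
    return [(i + di, j + dj) for di, dj in V[4:]]

# The antipodal relations v^2 = -v^0, v^3 = -v^1, v^6 = -v^4, v^7 = -v^5:
assert all(V[a + 2] == (-V[a][0], -V[a][1]) for a in (0, 1, 4, 5))
```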
As discussed in Section 1.2.1, the range of time t is

(5) t = 0, ±1, ±2, ... .

Each lattice point is a cell in the sense of Sections 1.3.3.1 and 1.4.1.1. It is able to assume N states (cf. Sec. 2.1.1); let these be designated by an index

(6) n = 0, 1, ..., N − 1.

The state of cell ϑ = (i, j) at time t will therefore be written

(7) n_ϑ^t.

Also, the N numerals 0, 1, ..., N − 1 used in expression (6) to designate n may be replaced by any other N symbols, according to convenience.

The system is to be intrinsically homogeneous in the sense of Section 1.3.3.2; i.e., the same rule will govern its behavior at each lattice point. This rule is the transition rule referred to in Section 2.1.1, which defines the state of the cell ϑ at time t in terms of its own state and of the states of its immediate neighbors at suitable previous times. We will limit and simplify our system by restricting these "suitable previous times" to precisely the immediate predecessor of t, i.e., t − 1. Thus n_ϑ^t will be a function of n_ϑ^(t−1) and of the n_(ϑ+v^α)^(t−1) (α = 0, ..., 3). That is,

(8) n_ϑ^t = F(n_ϑ^(t−1); n_(ϑ+v^α)^(t−1) | α = 0, ..., 3).

Let m take the place of n_ϑ^(t−1) and let m^α take the place of n_(ϑ+v^α)^(t−1). The function F then becomes F(m; m^α | α = 0, 1, 2, 3). This N-valued function F of five N-valued variables represents, therefore, the transition rule. It is the sole and complete rule that governs the behavior of this (intrinsically homogeneous) system. Note that the range of F has N elements, while the domain of F (the set of all quintuplets) has N^5 elements. Hence there are

(9) N^(N^5)

possible functions F, i.e., this many possible transition rules, or models of the class under consideration.

2.1.3 Need for a pre-formalistic discussion of the states. Let us now discuss in a more heuristic way what the N states of a cell should be. The nature of these states is, of course, described not by their enumeration (6), but by the transition rule (8).
The only relevant information contained in (6) is the number of states, N. In accord with this, the rigorous summation of these considerations will consist of a specification of the transition rule (8), i.e., of the function F. In the present, heuristic stage, however, it will be better to proceed with an enumeration (6), attaching to each n of (6) a name and a verbal description of the role that it is intended to play. In this connection we will also make use of the possibility of notational changes, referred to in the remark after (6) and (7) in Section 2.1.2.

2.2 Logical Functions — Ordinary Transmission States

2.2.1 Logical-neuronal functions. To begin with, states are needed to express the properly logical or neuronal functions, as discussed in Section 1.2.1. This calls for the equivalents of the neurons of Figure 3 and of their connecting lines.

2.2.2.1 Transmission states — connecting lines. We consider first the connecting lines. These must now be rows of cells, i.e., of lattice points. Since a line must be able to pass a (neural) stimulus, each one of its cells must possess, for this purpose alone, a quiescent and an excited state. The purpose that we consider here is to transmit a (neural) stimulus. We call these therefore the transmission states of the cell and designate them by the symbol T. We use an index ε = 0, 1; i.e., we write T_ε to indicate quiescence and excitation. Let ε = 0 designate the former and ε = 1 the latter.

This transmission must be a directed process, since the lines (that the cells in transmission states replace) were directed to connect definite points. Hence we must set up certain limitations. We may stipulate that a cell in a transmission state accepts a stimulus only from one, definite direction, its input direction.
That is, an excited transmission cell brings an immediate neighbor (which is a quiescent transmission cell) into the excited transmission state (or, if the latter is found in that state, it keeps it there) only if the former lies in the latter's input direction. Alternatively, we may also stipulate that a cell in a transmission state emits a stimulus only in one, definite direction, its output direction. That is, an excited transmission cell brings an immediate neighbor (which is a quiescent transmission cell) into the excited transmission state (or, if the latter is found in that state, it keeps it there) only if the latter lies in the former's output direction. Finally, we may make both stipulations together.

After trying various models along these lines, it appeared most convenient to stipulate a definite output direction. In order to avoid certain uncontrolled, and hence undesirable, return-stimulation phenomena, it seems desirable, while not prescribing any particular input direction, to specify that the output direction is insensitive to inputs.

The v^α (α = 0, ..., 3) of Figure 4b enumerate all possible directions for an immediate neighbor (cf. the remarks after expressions (3) and (4) in Sec. 2.1.2). Hence the T_ε will be given a further index α = 0, ..., 3: T_αε, so that T_αε has the output direction v^α. The above stipulations now assume this form: T_α'1 at ϑ' induces T_α1 at ϑ (from T_α0 or T_α1) if and only if ϑ = ϑ' + v^α', but ϑ' ≠ ϑ + v^α, i.e., if and only if ϑ − ϑ' = v^α' ≠ −v^α.

Let us now use the symbols T_αε (α = 0, ..., 3; ε = 0, 1) in place of some eight number values in expression (6) (cf. the remark after expressions (6) and (7) in Sec. 2.1.2). Let us also recognize the unit time delay of the stimulus-response process, as discussed in Section 2.1.2. Then the above rule becomes:

(10) Assume n_ϑ^(t−1) = T_αε. Then n_ϑ^t = T_α1 if n_ϑ'^(t−1) = T_α'1 for some ϑ' with ϑ − ϑ' = v^α' ≠ −v^α. Otherwise n_ϑ^t = T_α0.
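[Rule (10) can be simulated directly. The following Python sketch is illustrative only; the encoding of a transmission state as a pair (a, e) is my own, not the text's.]

```python
# Directions v^0..v^3 (east, north, west, south).
V = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def step(cells):
    """One synchronous application of rule (10).

    `cells` maps lattice points (i, j) to ordinary transmission
    states, encoded as (a, e): output direction v^a (a = 0..3),
    excitation e in {0, 1}. Points absent from the dict hold no
    transmission cell.
    """
    nxt = {}
    for (i, j), (a, _) in cells.items():
        e = 0
        for ap in range(4):
            if ap == (a + 2) % 4:
                # v^a' = -v^a is excluded: the output side is
                # insensitive to inputs.
                continue
            src = cells.get((i - V[ap][0], j - V[ap][1]))
            # A source excites this cell if its output direction v^a'
            # points here and it was excited at time t - 1.
            if src == (ap, 1):
                e = 1
        nxt[(i, j)] = (a, e)
    return nxt

# A line of three east-pointing cells; the leftmost starts excited.
line = {(0, 0): (0, 1), (1, 0): (0, 0), (2, 0): (0, 0)}
line = step(line)          # the stimulus advances one cell per unit time
assert line == {(0, 0): (0, 0), (1, 0): (0, 1), (2, 0): (0, 0)}
```

[Note how the excluded case ϑ − ϑ' = −v^α appears as the skipped direction (a + 2) mod 4: a cell pointing back into another cell's output side cannot excite it, which is exactly the return-stimulation phenomenon the stipulation avoids.]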
2.2.2.2 Delays, corners, and turns in connecting lines. Note that this model for the connecting lines differs from the one considered in Section 1.2.1 in that it introduces finite propagation delays. We have now a unit time delay between immediate neighbors. However, this deviation from the pattern of Section 1.2.1 will have no relevant undesirable consequences.

Note also that this model serves equally to synthesize straight connecting lines from transmission cells, and connecting lines with corners or turns in them. Straight lines are shown in Figures 5a-5d; these represent the four possible "straight" directions in our lattice. "Corners" and "turns" are shown in Figures 5e and 5f. Figures 5a-5f are drawn according to the rules stated in Figure 4c. Figures 5a'-5f' are simplified (and more easily readable) versions of Figures 5a-5f, respectively, in which each T_αε is replaced by the arrow of its v^α (cf. Fig. 4b).

2.3 Neurons — Confluent States

We consider next the specific neurons of Figure 3.

2.3.1 The + neuron. The + neuron merely calls for an excitable cell which has an output and two possible inputs. There is, of course, no harm done if it can accommodate more than two such inputs. We defined a transmission cell in Section 2.2.2.1 so that it has three possible inputs. Every one of its four sides, excepting the output side, is an input. Thus our transmission cells not only fulfill the function of (elements of) connecting lines between neurons (this being the function for which they were originally intended), but also fill the role of + neurons. The use of an ordinary transmission cell as a + neuron, i.e., as a connecting line junction, is shown in Figure 5g. This figure is drawn according to the scheme of Figures 5a'-5f'.

2.3.2 Confluent states: the • neuron. The • neuron calls for an excitable cell that has an output and two inputs, which must be stimulated together in order to produce excitation. It would be quite practical to introduce a class of such states.
However, a free choice of an output direction and two input directions (from the totality of four possible directions, as represented by the v^α, α = 0, ..., 3) would require (4 × 3 × 2)/2 = 12 kinds, and, since there must be a quiescent and an excited state for each kind, a total of 24 states. It is possible to achieve equally satisfactory results with more economy, namely with only one kind, and hence with two states. This can be done by prescribing no particular input or output directions at all, i.e., by stipulating that every direction is a possible input, as well as a possible output. In addition to this, one can then prescribe as the prerequisite of excitation a minimum of two stimulations, i.e., a minimum of two excited transmission cells that are immediate neighbors and in whose output direction our cell lies. However, it is still more convenient to frame this condition more elastically and to stipulate that the cell under consideration gets excited if every immediately neighboring transmission cell whose output direction points at this cell is itself excited. (This is to be taken with the exclusion of the subcase — which is strictly logically admissible, but obviously conflicting with the intention — that none of the immediate neighbors qualifies, i.e., is a transmission cell and has its output direction pointing at this cell.) This formulation of the rule has the effect that the cell under consideration can act as a neuron of threshold one (i.e., from the point of view of inputs, like an ordinary transmission cell), or two (i.e., like the desired • neuron), or three (i.e., like a combination of two • neurons), depending on whether one, two, or three, respectively, of its immediate neighbors are transmission cells with their output directions pointing at it.
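[This threshold behavior can be sketched as follows; the encoding of transmission states as pairs (a, e) and all names are my own, illustrative only.]

```python
# Directions v^0..v^3 (east, north, west, south).
V = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def confluent_next(cells, p):
    """Next excitation of a confluent cell at point p, following the
    stipulation above: it becomes excited exactly when at least one
    immediately neighboring transmission cell points at it and every
    such pointing neighbor is itself excited.

    `cells` maps lattice points to transmission states (a, e); the
    confluent cell at p is not in the dict.
    """
    pointing = []
    for ap in range(4):
        src = cells.get((p[0] - V[ap][0], p[1] - V[ap][1]))
        if src is not None and src[0] == ap:   # output direction hits p
            pointing.append(src[1])
    return 1 if pointing and all(pointing) else 0

# Two transmission cells pointing into the confluent cell at (0, 0):
inputs = {(-1, 0): (0, 1), (0, -1): (1, 1)}    # both excited
assert confluent_next(inputs, (0, 0)) == 1     # acts as threshold two
inputs[(0, -1)] = (1, 0)                       # one input quiescent
assert confluent_next(inputs, (0, 0)) == 0     # does not fire
```

[The clause `pointing and ...` implements the excluded subcase: with no qualifying neighbor at all, the cell stays quiescent.]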
(Since this cell should not be able to stimulate any transmission cell whose output direction is pointing at it — cf. rule (10) and its adaptation to the present situation in rule (12) below — it would be pointless to have all of its four immediate neighbors in such a state. This situation would preclude any results of an excitation of our cell.)

We will call these states of a cell confluent states, and designate them by the symbol C. We again use the index ε = 0, 1; that is, we write C_ε to indicate quiescence (ε = 0) and excitation (ε = 1). We proceed now similarly as we did at the end of Section 2.2.2.1. We use the symbols C_ε (ε = 0, 1) in place of two number values in expression (6) (cf. the remark after expressions (6) and (7) in Sec. 2.1.2). The rule that we formulated above now becomes, inasmuch as it affects the inputs to C:

(11) Assume n_ϑ^(t−1) = C_ε. Then n_ϑ^t = C_1 if both (a), (b) hold: (a) n_ϑ'^(t−1) = T_α'1 for some ϑ' with ϑ − ϑ' = v^α'. (b) Never n_ϑ'^(t−1) = T_α'0 for a ϑ' with ϑ − ϑ' = v^α'. Otherwise n_ϑ^t = C_0.

The portion of the rule that affects the outputs of C must be stated as a modification of rule (10), since it provides for a new way to excite a transmission cell, i.e., to produce a T_α1 from a T_αε. This is expressed by the following insertion between the second and third sentences of rule (10):

(12) Also n_ϑ^t = T_α1 if n_ϑ'^(t−1) = C_1 for some ϑ' with ϑ − ϑ' = v^β ≠ −v^α (β = 0, ..., 3).

Note that the system of rules (10), (11), and (12) provides for excitation of T by T, of C by T, and of T by C, but not for an excitation of C by C. This arrangement will have no relevant undesirable consequences.

A • neuron with its close surroundings is shown in Figure 6a. This figure is drawn according to the scheme of Figures 5a'-5f' and 5g. The confluent state C here makes its first appearance.

2.3.3 The − neuron.
The − neuron calls for an excitable cell in which the roles of quiescence and excitation are interchanged in comparison with the transmission states. It must be ordinarily excited (i.e., able to excite an immediately neighboring cell in a transmission state, at which its output direction points), but it must be made quiescent by an input stimulation (reverting to excitation when the stimulation ceases). We could introduce a class of such states — e.g., with a given output direction, all other directions being input directions, just as in the transmission states. Since there are four possible directions, this would require four kinds, and with quiescence and excitation for each kind, eight states would be needed. However, we are reluctant to introduce a class of states whose ordinary, unperturbed condition is not quiescence. This objection could be circumvented in various ways, with higher or lower degrees of economy.^2 We shall find that a class of states which we will introduce later for other reasons can be used to synthesize the function of the − neuron. We can therefore forego altogether taking care of it at this stage.

2.3.4 The split. Although all the neuron species of Figure 3, as

^2 [Von Neumann here referred to the double line trick of his "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components," Collected Works 5.337. Using only + neurons and • neurons, he synthesized a complete set of truth-functional primitives by using a pair of lines with the codings 01 (for "zero") and 10 (for "one"). In other words, each line of the pair is in the opposite state from the other line of the pair, so that negation may be realized by interchanging (crossing) the two lines of a pair. But in the present manuscript von Neumann synthesized negation from the destructive (reverse) and constructive (direct) processes of Secs. 2.5 a