A Letter from Dijkstra on APL

Acknowledgments. I would like to thank Bob Bernecky, Nicolas Delcros, Jay Foad, and Eric Iverson for comments on the manuscript.
Nick Nickolov brought to my attention comments by Dijkstra on APL [0] that I had not seen before.
I contacted the author of the website and obtained a copy
of Dijkstra’s letter, transcribed below:
I find Dijkstra’s comments deeply ironic, because Ken Iverson invented his notation as a means of communication among people [1], and it was only years later that the notation was implemented on a computer, at which time it became APL. Moreover, Dijkstra encountered “Iverson notation” no later than August 1963, before there was an implementation [2].

Even with APL, perhaps especially with APL, one can reasonably do nontrivial things without ever executing it on a computer. I have read at least one of Dijkstra’s EWDs in which he wrote programs using formal methods, at the end of which is derived a provably correct program. As I read it, I thought to myself, “APL should have been natural for Dijkstra”.

One can argue about what “provably correct program” means. To me, it means what a typical mathematician means when he or she says a theorem has been proven. I know it is far from saying that the program will produce a correct result in all circumstances (the compiler or interpreter has a bug, somebody pulls the plug, a cosmic ray strikes a transistor, and so on), but I believe I am using “prove” in the same sense that Dijkstra did.

Like Dijkstra’s “visiting professor at an American university”, I would be distressed if I had to teach a course on APL without an APL machine. Were it a course on formal methods, one could get by without a machine; but even in a course on formal methods executability would be an asset, because executability keeps you honest — a faithful servant that can be used to check the steps of a proof. Were it a general programming course, it seems extreme to eschew the use of a machine in showing, teaching, and discussing. It would be like trying to learn a natural language without ever conversing with a speaker of that language.

Herewith, two examples of using APL in formal manipulations. Further such examples can be found in Iverson’s Turing Award Lecture [3].
A proof is here presented as in [3]: a sequence of expressions, each equivalent to its predecessor, annotated with the reasoning. A Summary of Notation is provided at the end.
Example 0: Ackermann’s Function

The derivation first appeared in 1992 [4] in J and is transcribed here in Dyalog APL. Ackermann’s function is defined on nonnegative integers as follows:

   ack←{
     0=⍺: 1+⍵
     0=⍵: (⍺-1) ∇ 1
     (⍺-1) ∇ ⍺ ∇ ⍵-1
   }

      2 ack 3
9
      3 ack 2
29

Lemma: If ⍺ ack ⍵ ←→ f⍢(3∘+) ⍵ , then (⍺+1) ack ⍵ ←→ f⍣(1+⍵)⍢(3∘+) 1 .

Proof: By induction on ⍵ .
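As a cross-check (not part of the original derivation), the dfn transcribes directly into Python, and the closed forms listed below for 0∘ack through 3∘ack can be verified mechanically: m∘ack = f⍢(3∘+) means ack(m, n) = f(n + 3) - 3.

```python
import sys
sys.setrecursionlimit(10000)  # the double recursion nests deeply

def ack(m, n):
    # Transcription of the dfn:
    #   0=⍺: 1+⍵  ⋄  0=⍵: (⍺-1) ∇ 1  ⋄  (⍺-1) ∇ ⍺ ∇ ⍵-1
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

assert ack(2, 3) == 9
assert ack(3, 2) == 29

# m∘ack = f⍢(3∘+) means ack(m, n) = f(n + 3) - 3; check f for m = 0..3
forms = [lambda k: 1 + k, lambda k: 2 + k, lambda k: 2 * k, lambda k: 2 ** k]
assert all(ack(m, n) == forms[m](n + 3) - 3
           for m in range(4) for n in range(6))
```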
Using the lemma (or otherwise), it can be shown that:

   0∘ack = 1∘+⍢(3∘+)
   1∘ack = 2∘+⍢(3∘+)
   2∘ack = 2∘×⍢(3∘+)
   3∘ack = 2∘*⍢(3∘+)
   4∘ack = */∘(⍴∘2)⍢(3∘+)
   5∘ack = {*/∘(⍴∘2)⍣(1+⍵)⍢(3∘+) 1}

Example 1: Inverted Table IndexOf

Presented at the 2013 Dyalog Conference [5]. A table is a set of values organized into rows and columns. The rows are records. Values in a column have the same type and shape. A table has a specified number of columns but can have any number of rows. The extended indexof on tables finds record indices.

      tx
┌──────┬─┬───┬──┐
│John  │M│USA│26│
├──────┼─┼───┼──┤
│Mary  │F│UK │24│
├──────┼─┼───┼──┤
│Monika│F│DE │31│
├──────┼─┼───┼──┤
│Min   │F│CN │17│
├──────┼─┼───┼──┤
│Max   │M│IT │29│
└──────┴─┴───┴──┘
      ty
┌──────┬─┬───┬──┐
│Min   │F│CN │17│
├──────┼─┼───┼──┤
│Mary  │F│UK │24│
├──────┼─┼───┼──┤
│John  │M│UK │26│
├──────┼─┼───┼──┤
│Monika│F│DE │31│
├──────┼─┼───┼──┤
│Mesut │M│DE │24│
├──────┼─┼───┼──┤
│Mesut │M│DE │24│
└──────┴─┴───┴──┘
      tx ⍳ ty
3 1 5 2 5 5
      tx ⍳ tx
0 1 2 3 4
      ty ⍳ ty
0 1 2 3 4 4

An inverted table is a table with the values of a column collected together. Commabar each (⍪¨) applied to an inverted table makes it “look” more like a table. And of course the columns have the same tally (≢). A table can be readily inverted and vice versa.

      x
┌──────┬─────┬───┬──────────────┐
│John  │MFFFM│USA│26 24 31 17 29│
│Mary  │     │UK │              │
│Monika│     │DE │              │
│Min   │     │CN │              │
│Max   │     │IT │              │
└──────┴─────┴───┴──────────────┘
      ⍪¨x
┌──────┬─┬───┬──┐
│John  │M│USA│26│
│Mary  │F│UK │24│
│Monika│F│DE │31│
│Min   │F│CN │17│
│Max   │M│IT │29│
└──────┴─┴───┴──┘
      ≢¨x
5 5 5 5

   invert ← {↑¨↓⍉⍵}
   vert   ← {⍉↑⊂⍤¯1¨⍵}

      x ≡ invert tx
1
      tx ≡ vert x
1

A table has array overhead per element. An inverted table has array overhead per column. The difference that this makes becomes apparent when there is a sufficiently large number of rows. The other advantage of an inverted table is that column access is much faster. An important computation is x indexof y, where x and y are compatible inverted tables.
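The invert/vert round trip can be sketched in Python (an assumed translation, not the original code): if a table is a list of rows and an inverted table is a list of columns, both directions are transpositions. Python lists flatten the distinction that in APL the columns are arrays (a character matrix, numeric vectors, and so on).

```python
def invert(table):
    # rows -> columns, analogous to  invert ← {↑¨↓⍉⍵}
    return [list(col) for col in zip(*table)]

def vert(inv):
    # columns -> rows, analogous to  vert ← {⍉↑⊂⍤¯1¨⍵}
    return [list(row) for row in zip(*inv)]

# The table from the paper, as a list of records
tx = [["John",   "M", "USA", 26],
      ["Mary",   "F", "UK",  24],
      ["Monika", "F", "DE",  31],
      ["Min",    "F", "CN",  17],
      ["Max",    "M", "IT",  29]]

x = invert(tx)
assert x[0] == ["John", "Mary", "Monika", "Min", "Max"]
assert vert(x) == tx  # round trip, mirroring  tx ≡ vert x  →  1
```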
Obviously, it cannot be just x⍳y . The computation obtains by first “verting” the arguments (uninverting the tables) and then applying ⍳ , but often there is not enough space for that.

      ⍪¨x
┌──────┬─┬───┬──┐
│John  │M│USA│26│
│Mary  │F│UK │24│
│Monika│F│DE │31│
│Min   │F│CN │17│
│Max   │M│IT │29│
└──────┴─┴───┴──┘
      ⍪¨y
┌──────┬─┬───┬──┐
│Min   │F│CN │17│
│Mary  │F│UK │24│
│John  │M│UK │26│
│Monika│F│DE │31│
│Mesut │M│DE │24│
│Mesut │M│DE │24│
└──────┴─┴───┴──┘
      x ⍳ y
4 4 4 4
      (vert x) ⍳ (vert y)
3 1 5 2 5 5

We derive a more efficient computation of indexof on inverted tables:
Point (d) illustrated on column 0:

      ⊂⍤¯1⊢x0←0⊃x
┌──────┬──────┬──────┬──────┬──────┐
│John  │Mary  │Monika│Min   │Max   │
└──────┴──────┴──────┴──────┴──────┘
      x0 ⍳ x0
0 1 2 3 4
      ⊂⍤¯1⊢y0←0⊃y
┌──────┬──────┬──────┬──────┬──────┬──────┐
│Min   │Mary  │John  │Monika│Mesut │Mesut │
└──────┴──────┴──────┴──────┴──────┴──────┘
      x0 ⍳ y0
3 1 0 2 5 5

That is, the function {(⍉↑⍺⍳¨⍺)⍳(⍉↑⍺⍳¨⍵)} computes indexof on inverted tables.

I believe that in another language a derivation such as the one above would be very long (in part because the program would be very long), possibly impractically long.
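The derived function can be mirrored in Python (a sketch under the assumption that lists of columns stand in for APL inverted tables): classify each column against the corresponding column of x, then look up rows of y’s code matrix among rows of x’s code matrix.

```python
def col_index(xcol, ycol):
    # xcol ⍳ ycol : index of the first occurrence of each item of ycol
    # in xcol, or len(xcol) for items not found
    first = {}
    for i, v in enumerate(xcol):
        first.setdefault(v, i)
    return [first.get(v, len(xcol)) for v in ycol]

def indexof_inverted(x, y):
    # {(⍉↑⍺⍳¨⍺)⍳(⍉↑⍺⍳¨⍵)} : per-column codes, then row lookup
    xrows = list(zip(*(col_index(c, c) for c in x)))
    yrows = list(zip(*(col_index(xc, yc) for xc, yc in zip(x, y))))
    first = {}
    for i, r in enumerate(xrows):
        first.setdefault(r, i)
    return [first.get(r, len(xrows)) for r in yrows]

# The inverted tables from the paper
x = [["John", "Mary", "Monika", "Min", "Max"],
     ["M", "F", "F", "F", "M"],
     ["USA", "UK", "DE", "CN", "IT"],
     [26, 24, 31, 17, 29]]
y = [["Min", "Mary", "John", "Monika", "Mesut", "Mesut"],
     ["F", "F", "M", "F", "M", "M"],
     ["CN", "UK", "UK", "DE", "DE", "DE"],
     [17, 24, 26, 31, 24, 24]]

assert indexof_inverted(x, y) == [3, 1, 5, 2, 5, 5]
```

Note that each column is scanned once and each row comparison is on small integer tuples, which is the point of the derivation: the full tables are never un-inverted.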
Summary of Notation

The following table lists the APL notation used in the paper. A complete language reference can be found in [6]. Dfns are described in [6, pp. 112–127] and [7].
References
Written in honor of Ken Iverson’s 93rd birthday.
