A Comparison of the IPSA and STSC
Implementations of Operators and General Arrays

D.L. Orth



This paper contains the details of a comparison of the I.P. Sharp Associates (IPSA) and STSC, Inc. operators and general array systems presented by the author at the APL81 conference. The presentation consists of three phases: first, three elementary applications will be presented, together with the ways in which a programmer might expect to use general arrays in these applications; second, the general features of the systems will be presented in terms of these applications; and third, the details of the systems will be presented in a series of tables at the end of the paper.

The IPSA and STSC operators and general arrays systems have general features in common, but almost always differ in details; consequently APL names will usually be used here for new primitive functions and operators. Moreover, the systems differ in fundamental concepts that underlie detailed design choices. By viewing the extensions in terms of applications we are able to illustrate these concepts in practical ways; however, not all extensions receive equal attention because not all extensions are equally relevant to the applications presented here.

The STSC operators and general arrays extensions reside on an experimental system known as the NARS system (for Nested ARrays System) which also contains many APL extensions that have nothing directly to do with general arrays. Operators and general arrays extensions to the IPSA implementation are being introduced in a step-by-step manner on the floor system, which also contains non-general arrays extensions. The only extensions discussed here are those directly related to general arrays. References to IPSA’s future plans are taken from [1].

The first application is a simplified version of a materials data base that maintains information on the physical materials used in IBM products. This data base is implemented in terms of a general arrays simulation and has been running in that mode for about six years. The structure of the data base will be described here in terms of one general array, although in practice the structure must be segmented on files.

The data base is a vector mdb with one element for each of the several thousand materials on which information is maintained. The i th element of this vector holds all the information on the i th material.

The information on each material is maintained in a vector of length 125 with one element for each of the 125 possible physical properties, such as color and tensile strength; the j th element of the i th vector holds all the information on the j th property of the i th material. (It is the variation in the types and ranks of the arrays containing physical property information, such as character vectors for color and numerical matrices for tensile strength, that first suggested the use of general arrays.) For our purposes here I will restrict the properties to 2 of the 125, known as codename and tradename.

The codename of a material is a unique 9-character technical name that indicates the chemical classification of the material. A tradename is a name by which a material is commercially known, and often is not unique. Thus the 2-element vector associated with a material holds a 9-element vector as its first element containing the codename of the material, while its second element holds a vector of varying length whose elements in turn hold character vectors containing the various tradenames of the material.

In terms of general arrays and what we might expect to do conveniently with this application, we would first of all expect to be able to represent the data base much in the same way as it has been described, i.e. as a vector whose elements all hold 2-element vectors, each of whose first element holds a character vector and whose second element holds a vector whose elements hold character vectors. We should also expect to conveniently append new information to the data base, extract old information, and update old information. We might also expect to be able to manipulate the data base, such as sorting by codename or tradename.
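
As an illustration only, the nested structure just described might be modelled as follows in Python rather than APL; the material names and tradenames are invented for the sketch.

   # Illustrative model of mdb: a vector of materials, each holding a 9-character
   # codename and a vector of tradenames.  Indexing is 0-origin.
   mdb = [
       ["POLYCARB1", ["Lexan", "Makrolon"]],   # material 1
       ["EPOXYR450", ["Araldite"]],            # material 2
   ]

   mdb.append(["SILICONE7", ["Sylgard"]])      # append a new material
   mdb[1][1].append("Epon")                    # add a tradename to material 2
   codename = mdb[0][0]                        # 'POLYCARB1'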

The second application is the representation of rational numbers and rational arithmetic, and is an example of the use of general arrays for so-called data abstraction. A rational number is a fraction consisting of a numerator and a denominator, and can therefore be represented as a 2-element vector whose first element is, say, the numerator and whose second element is the denominator. Thus arrays of rational numbers can be represented as general arrays whose elements all hold 2-element vectors. (Rational arithmetic, which can be used to empirically test the numerical stability of algorithms, is only one of many alternate arithmetics which could be useful in numerical computations. Others are, for example, complex arithmetic, high-precision arithmetic, and polynomial arithmetic. Polynomial arithmetic, phrased in terms of general arrays and new and extended operators, has been used in the formal description and implementation of a derivative algorithm for computer programs, which is the most significant use of the new language features I know.)

What might we expect to do conveniently with an alternate arithmetic? Ideally, we would expect to be able to convert an arithmetic algorithm (i.e., an algorithm whose numerical computations use only the arithmetic primitives +-×÷) to an equivalent algorithm in the alternate arithmetic, simply by substituting defined functions that carry out the alternate arithmetic for the four arithmetic primitives. For example, if plus is the scalar function that applies to arrays of 2-element vectors and carries out addition of rational numbers, then the expression

   a←x++/y

would be replaced by

   a←x plus plus/y
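
To make the substitution concrete, here is a small sketch in Python rather than APL; plus below is a stand-in for the defined scalar function described above, and the values are invented.

   from functools import reduce

   def plus(a, b):
       # rational addition on (numerator, denominator) pairs:
       # a/b + c/d = (a*d + c*b) / (b*d)
       return (a[0]*b[1] + b[0]*a[1], a[1]*b[1])

   x = (1, 2)                      # 1/2
   y = [(1, 3), (1, 6)]            # 1/3 and 1/6
   a = plus(x, reduce(plus, y))    # corresponds to  a←x plus plus/y
   # a is (36, 36), i.e. 1 as an unreduced fraction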

The third and last application illustrates the use of general arrays for emphasizing particular axes in ordinary or flat arrays. A collection of names can be represented as a character matrix whose rows contain the names, one to each row. The same collection could be represented as a vector with each element holding a character vector holding one name. (Note that in the latter representation there is no need to pad the vectors to be of equal length.)

What we might expect to do conveniently with general arrays in this application depends, I think, on our view of the relationship of general arrays to present APL. If that view is of general arrays as the foundations of a language that significantly transcends present APL, then we should expect to be able to do with the vector of vectors representation all that can be done conveniently in APL with the matrix representation, and more. If, however, that view is of general arrays as simply another extension that enhances the power of APL, then we might only expect to do some things—or even, perhaps, nothing—more conveniently than before.

This completes the description of the three applications. I will now use these applications to illustrate more specifically what we might expect to do conveniently with general arrays, and to tie these expectations in a general way to the IPSA and STSC systems.

It is evident, of course, that all applications of general arrays ultimately require primitives to form “compound scalars”, i.e. to encode arrays as elements of other arrays, and to reveal the contents of these compound scalars. These primitives are commonly called enclose (enclose an array in a scalar) and disclose (disclose the contents of a scalar), for which I will use the names enc and dis . Both the STSC and IPSA systems have such primitives.

The materials data base illustrates the utility of conveniently retrieving part of the contents of a compound scalar and replacing that part. For example, the j th tradename of the i th material in the materials data base can be obtained by a sequence of disclosures:

   dis(dis(dis mdb[i])[2])[j]

and that tradename can be replaced by the following sequence:

   i1←dis mdb[i]
   i2←dis i1[2]
   i2[j]←enc b
   i1[2]←enc i2
   mdb[i]←enc i1

We might expect more convenient ways to do this, particularly when there are many encloses and discloses, such as a primitive named pick with which the above retrieval and replacement could be accomplished as follows:

   (i,j,2) pick mdb

and

   ((i,j,2) pick mdb)←b

Only STSC has such a primitive. This retrieval and replacement can also be accomplished in the STSC system by the extensions to indexing and index-specification called reach and reach-specification:

   dis mdb[enc i,2,j]

and

   mdb[enc i,2,j]←enc b
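
The following Python sketch (not the STSC primitive itself) models this kind of path-style retrieval and replacement on the nested structure; the data, the path order, and the function names are illustrative only, and indexing is 0-origin.

   def pick(path, array):
       for k in path:                          # descend one level per index
           array = array[k]
       return array

   def pick_assign(path, array, value):
       for k in path[:-1]:
           array = array[k]
       array[path[-1]] = value

   mdb = [["POLYCARB1", ["Lexan", "Makrolon"]]]
   name = pick((0, 1, 1), mdb)                 # 'Makrolon'
   pick_assign((0, 1, 1), mdb, "Calibre")      # replace that tradename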

In many applications, data representations change during various phases of computation, and we should expect this to be true when using general arrays. Usually these changes of representations occur when it is more convenient to compute in one representation than another. For example, in the representation of names application, it may be more convenient to apply certain sorting algorithms to the matrix of names or to a vector holding the column vectors of the matrix as elements, while (as we will see below) it may be more convenient to look up a name in a list when the list is in the vector of vectors representation. We might expect, then, to be able to move conveniently back and forth between representations such as a character matrix and a vector of character vectors by enclosing an array along specified axes (denoted enc[k]⍵) and disclosing all elements of an array at once and placing the contents along, say, the last axes of the result (denoted disall). Both IPSA and STSC provide such primitives.
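
As a rough Python illustration of enclosing a small matrix along one axis or the other (the axis numbers in the comments follow the APL convention above; the matrix is invented):

   m = [[1, 2, 3],
        [4, 5, 6]]                                   # a 2-by-3 matrix
   rows = [row[:] for row in m]                      # like enc[2]m: vector of rows
   cols = [[row[j] for row in m] for j in range(3)]  # like enc[1]m: vector of columns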

Continuing this line of reasoning, it is not difficult to imagine other interesting ways of segmenting an array into subarrays and enclosing each resulting subarray. For example, for a character vector representing a fragment of text we may wish to form a vector whose elements hold the words in the text, one word per element. That is, we may wish to partition the original vector at the blanks. Only STSC has a partitioned enclose primitive that carries out this type of transformation, although IPSA has a partitioning operator planned for a later release.
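
One plausible reading of such a partitioning, sketched in Python; the function and the partition mask below are illustrative, not the STSC definition.

   def part_enclose(l, a):
       # a nonzero element of l starts a new group at that position;
       # every element of a is placed in the current group
       groups = []
       for flag, item in zip(l, a):
           if flag:
               groups.append([])
           if groups:
               groups[-1].append(item)
       return ["".join(g) for g in groups]

   a = "one two three"
   l = [1 if (i == 0 or a[i-1] == " ") and c != " " else 0
        for i, c in enumerate(a)]
   words = [w.strip() for w in part_enclose(l, a)]   # ['one', 'two', 'three']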

It is evident from the representation of a list of names as a vector of vectors that it would be very useful to be able to conveniently determine when two arrays are identical, and thereby when two enclosed arrays (or two elements of general arrays) are identical. Both IPSA and STSC provide an “identical” primitive, and both have extended the index-of and membership primitives in the obvious ways with this new primitive in place of equals. For example:

   v←(enc 'Tom'),(enc 'Dick'),(enc 'Harry')
   v⍳enc 'Dick'
2
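
In Python terms the idea is simply that whole (possibly nested) values are compared, and index-of searches with that comparison; the sketch below is 0-origin, so it yields 1 where the APL example yields 2.

   v = ["Tom", "Dick", "Harry"]
   def index_of(vec, x):
       return next(i for i, e in enumerate(vec) if e == x)   # whole-value match
   index_of(v, "Dick")        # 1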

It is also very useful to be able to apply the present selection and structural primitives to general arrays in the obvious ways. Both STSC and IPSA have made these extensions, but there are some temporary restrictions in the IPSA system.

We will now turn to expectations for new operators and extensions of old ones.

In the representation of rational numbers and rational arithmetic, if pl:(+/⍺×⌽⍵),(¯1↑⍺)×(¯1↑⍵) and if 2 3 and 4 7 represent rational numbers, then 2 3 pl 4 7 represents their sum as a rational number. Thus if a and b are scalars holding 2-element vectors representing rational numbers, then a pls b is a scalar holding a 2-element vector representing their sum, where

   pls:enc(dis ⍺)pl dis ⍵

It would be very useful to have a scalar extension operator which applies pls element-by-element to arrays whose elements all hold 2-element vectors representing rational numbers. Both STSC and IPSA provide such an operator. In the case of STSC the operator (called each) achieves this effect by applying to pl instead of pls , while in the case of IPSA, scalar extension operators that apply to pls or pl are obtained as special cases of other operators. Let plus denote the scalar extension of pls .

The STSC each operator is denoted by ¨ and I will use this notation for convenience. For a monadic function f , a dyadic function g , and vectors a and b we have

   (f¨b)[i] equals enc f dis b[i]

and

   (a g¨b)[i] equals enc (dis a[i])g(dis b[i])

In the dyadic case, if one of a and b is a scalar then it is reshaped to a vector of the same size as the other. (The enclose function in these definitions is assumed to behave like the enclose primitive on the STSC system, which differs from the one on the IPSA system in that the enclose of a flat scalar equals that scalar. For example, enc 3 equals 3 on the STSC system, but enc 3 does not equal 3 on the IPSA system.)
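
A small Python model of these definitions; lists stand in for vectors, and a non-list argument stands in for a scalar that is extended.

   def each1(f, b):
       return [f(x) for x in b]

   def each2(g, a, b):
       if not isinstance(a, list):
           a = [a] * len(b)      # scalar extension of the left argument
       if not isinstance(b, list):
           b = [b] * len(a)      # scalar extension of the right argument
       return [g(x, y) for x, y in zip(a, b)]

   each2(lambda x, y: x + y, 10, [1, 2, 3])   # [11, 12, 13]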

As I mentioned before, it would also be very useful if the present operators reduction, scan, inner, and outer product applied to all scalar functions (such as plus), not just the primitive ones. Only STSC has extended these operators at this time, although in the case of plus the extended operators apply to pl instead. That is, uses of the each operator are built into the operator extensions. For example, in converting an arithmetic algorithm to a rational arithmetic algorithm, the expression

   a←x++/y

would be replaced by

   a←x plus pl/y

and not

   a←x plus plus/y

One more thing is needed to conveniently produce equivalent rational arithmetic algorithms from ones defined for primitive arithmetic, and that is the correct application of a function derived from reduction to an empty vector. For example, plus/⍳0 (or pl/⍳0 in the case of STSC) should yield the identity element enc 0 1 . The STSC system provides no mechanism for the user to specify the identity element of a non-primitive scalar function, and reports domain error for pl/⍳0 .
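
The kind of mechanism being asked for can be pictured in Python as a reduction that accepts a user-supplied identity element, so that reducing an empty vector of rationals yields the rational zero rather than an error; the function names here are illustrative.

   from functools import reduce

   def pl(a, b):                                   # rational addition on pairs
       return (a[0]*b[1] + b[0]*a[1], a[1]*b[1])

   def reduce_with_identity(f, vec, identity):
       return reduce(f, vec, identity)

   reduce_with_identity(pl, [], (0, 1))            # (0, 1), the rational zero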

Combinations of a scalar extension operator and the primitives for enclosing along specified axes (enc[k]⍵) and disclosing all elements (disall) provide potentially powerful programming tools. For example, if the name contained in a character vector is written last name first, followed by a comma and then the first name and initial, then f:(¯1+⍵⍳',')↑⍵ applied to the vector produces the last name. f can be applied to every row of a character matrix by applying the following function to the matrix:

   f¨enc[2]⍵

The result will be a vector whose elements hold character vectors containing the last names in the rows of the matrix argument. One way to have the last names arranged as rows in a matrix is to define g:(⍴⍵)↑f⍵ and apply the function

   disall g¨enc[2]⍵

to the matrix argument. Evidently the effect of combinations like the latter could just as well be accomplished by a more powerful axis operator, as in g[2]⍵ . In fact many applications that emphasize particular axes of flat arrays can be handled with appropriate axis operators, and do not require general arrays. Neither the IPSA nor the STSC system has such axis operators, although IPSA is planning them for a future release.
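
A Python sketch of the combination disall g¨enc[2]⍵ applied to a small matrix of names; the rows are invented, and g is modelled by padding each result back to the row length.

   def last_name(row):                     # f: the text before the comma
       return row[:row.index(",")]

   def per_row(f, matrix):
       width = len(matrix[0])
       results = [f(row) for row in matrix]                 # g¨enc[2]⍵
       return [r + " "*(width - len(r)) for r in results]   # disall

   m = ["Orth, D.L.    ",
        "Iverson, K.E. "]
   per_row(last_name, m)        # ['Orth          ', 'Iverson       ']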

There is an interesting question concerning the application, to empty arrays, of a function derived from an extended axis operator and a defined function. For example, how is the shape of the result of g[2]⍵ (or better yet f[2]⍵) determined when applied to a matrix of shape 0,n ? Evidently the same mechanism that would permit user specification of results of reductions on empties (e.g. plus/⍳0) could be applied here. Analogous questions for general arrays have led to a fairly complex construction that I will present in terms of the collection of names application, taking the viewpoint that we should expect to do with the vector of vectors representation all that can be done conveniently in APL with the matrix representation.

Consider the problem of expanding or overtaking the matrix representation along the first axis. Each new row introduced in this manner is a vector of blanks, which suggests that when expand or overtake is applied to the vector of vectors representation, the resulting new elements should hold vectors of blanks. In the case of the matrix, all the new rows are of the same length as the old ones. But how are the lengths of the vectors within the new elements of the vector of vectors to be determined, since the vectors held by the old elements are not necessarily all of the same length? The answer provided by the STSC system and suggested by array theory is that the length of the vectors held by the new elements equals the length of the vector held by the first element of the general array. IPSA has not yet implemented expand and overtake for general arrays, so the following comments refer only to the STSC system.

More generally, the fill element for a non-empty array (general or flat) that is used by expand and overtake is called the prototype of the array and is obtained from the first element of the array by descending all the way to the “bottom” of that element, replacing each character with a blank and each number with a zero. As we might suspect, problems can occur with this definition in applications where the structure of the first element is dissimilar from the structures of the other elements. (I have found in mathematical applications that the best choice of the fill element is usually the scalar 0.) It would be very helpful if users could specify the fill element, but such a feature has not been provided. (The fill element for expand can be provided in an indirect way by using a new primitive called mesh in place of expand; there is an analogous replacement for take.)
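
A sketch, in Python, of the prototype rule just described: descend the first element, replacing characters with blanks and numbers with zeros. The sample value is invented.

   def prototype(x):
       if isinstance(x, list):
           return [prototype(e) for e in x]
       if isinstance(x, str):
           return " " * len(x)
       return 0

   prototype(["POLYCARB1", ["Lexan", "Makrolon"]])
   # ['         ', ['     ', '        ']]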

The fill elements for empty arrays are also called prototypes, but the prototype of an empty array is not so easy to define. (The prototype of an empty array a can always be produced by dis a on the STSC system; the question is how it obtains its particular value.) If a is produced directly from a non-empty array b as in 0/b then the prototype of a is the same as the prototype of b . However, if a is produced directly from another empty array b by way of a function f , i.e. a←f b , then the prototype of b must be propagated through f to produce the prototype of a , which means f may actually be applied to the prototype of b .

For example, consider the ranking function rank:(⍴alph)⊥⍉alph⍳⍵ (the vector alph is an alphabet) that can be used to sort the collection of names. The names in the matrix representation are sorted by the function ⍵[⍋rank⍵;] , while the names in the vector of vectors representation are sorted by ⍵[⍋rank¨⍵] (if all the vectors are of the same length). The expression ⍋rank⍵ applies to a matrix with no rows and produces the empty numerical vector, and therefore the expression ⍋rank¨⍵ should apply to an empty vector whose prototype is an enclosed vector of blanks to produce an empty vector whose prototype is 0 (i.e., the empty numerical vector). The way this is done on the STSC system is to actually apply rank to the vector of blanks held by the prototype of the empty vector argument of ⍋rank¨⍵ , thereby producing a numeric value, and to then take that numeric value’s prototype, which is 0 , as the prototype of the empty vector result of ⍋rank¨⍵ .

Thus the user must be prepared for the eventuality of his functions being applied to prototypes. No difficulties arise in this example, but in the rational number representation the prototype of an empty array of 2-element vectors is enc 0 0 , and 0 0 does not represent a valid rational number. The defined functions that carry out rational arithmetic on 2-element vectors would ordinarily produce a domain error report if an argument did not represent a valid rational number, thus warning the user of an apparent difficulty. The argument 0 0 , however, may simply be the contents of a prototype that should be passed along as a function value. But how are the functions to know when the argument 0 0 is a prototype and when it is the result of a user error? There is no way to determine from within a defined function whether or not the function is being executed as part of a derived function.

The point is that there are functions f for which the propagation of prototypes cannot produce the correct result of f¨m when 0=(⍴m)[a] for some axis a , and therefore the same mechanism employed to specify results for f/[a]m and f[a]m must also be permitted for f¨m .

Speaking of sorting, it was mentioned earlier that we might expect to be able to conveniently sort the general array representation of the materials data base in terms of codenames and tradenames. In discussing the algorithms that do the sorting I will use the above ranking function rank and will let sc (for super catenate) denote the monadic function which, when applied to a vector of vectors v , produces a vector whose elements are the elements held by the elements of v . For example,

   sc(enc 'Tom'),(enc 'Dick'),(enc 'Harry')

equals 'TomDickHarry' .

sc can be defined in the STSC system by sc:dis,/⍵ .
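
In Python the effect of sc might be sketched as:

   def sc(v):
       out = []
       for e in v:
           out.extend(e)          # catenate the contents of each element
       return out

   sc(["Tom", "Dick", "Harry"])   # the twelve characters of 'TomDickHarry'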

Sorting by codename is the easier of the two because there is a one-one correspondence between materials and codenames. For the function

   f:rank dis ⍵[1]

the expression

   ⍋f¨mdb

produces the desired order of the materials. To sort by tradename, assume that all the tradenames are of the same length and define the function

   g:rank¨dis ⍵[2]

Note that g applies “one level deeper” than f . The analog of the above expression for codenames is

   ⍋sc g¨mdb

However, since there is not a one-one correspondence between materials and tradenames, in order to sort by tradename we must determine, for every index k , the material and tradename that correspond to the k th element of this vector. In either the IPSA or the STSC system we must explicitly compute all “paths” to tradenames to establish this correspondence. That is, we must compute

   r:⍺,[0.5]⍳⍴dis ⍵[2]
   paths←sc(⍳⍴mdb)r¨mdb

where paths is a two-row matrix for which paths[1;k] is an index of mdb and paths[2;k] is an index of dis(dis mdb[paths[1;k]])[2] for every index k . Therefore

   paths[;⍋sc g¨mdb]

produces the desired order for sorting by tradename.
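
The same path computation and sort can be pictured in Python; the data are invented and the indices are 0-origin.

   mdb = [["POLYCARB1", ["Lexan", "Makrolon"]],
          ["EPOXYR450", ["Araldite"]]]

   # one (material index, tradename index) pair per tradename
   paths = [(i, j) for i, (code, trades) in enumerate(mdb)
                   for j in range(len(trades))]
   order = sorted(paths, key=lambda p: mdb[p[0]][1][p[1]])
   # order is [(1, 0), (0, 0), (0, 1)]: Araldite, Lexan, Makrolon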

In practice the same problem was encountered, and it proved to be more convenient to maintain the path information with the data instead of computing it every time it was needed. It then turned out that the organizational information provided by the enclosures in mdb was redundant. That is, it was more convenient to work with the materials data base organized as a 3-column matrix mat , where the first column mat[;1] contains material numbers (i.e., the indices of the materials in mdb), the second column mat[;2] contains property numbers (1 for codename, 2 for tradename), and the elements of the third column mat[;3] hold the character vectors containing codenames and tradenames. For each row index k the element mat[k;3] holds data for the mat[k;2]th property of the mat[k;1]th material. The above sorts can be accomplished as follows:

   (j/mat[;1])[⍋rank¨(j←i=mat[;2])/mat[;3]]

where i is 1 for sorting by codename and 2 for sorting by tradenames.
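
A Python sketch of the flattened representation and the corresponding sort; the data are invented and the properties are numbered as in the text, 1 for codename and 2 for tradename.

   mat = [(1, 1, "POLYCARB1"), (1, 2, "Lexan"), (1, 2, "Makrolon"),
          (2, 1, "EPOXYR450"), (2, 2, "Araldite")]

   def materials_sorted_by(prop):
       rows = [r for r in mat if r[1] == prop]
       return [r[0] for r in sorted(rows, key=lambda r: r[2])]

   materials_sorted_by(2)     # [2, 1, 1]: materials ordered by tradename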

Returning to the axis operators, the planned IPSA axis operators will not be denoted by brackets but will conform to general rules for operators. Both IPSA and STSC have formalized the operator syntax of APL, but only the STSC implementation of that syntax is complete. All the monadic operators (e.g. reduction) have their function arguments on the left (in contrast to monadic functions) and all dyadic operators (e.g. inner product) have their function arguments arranged one on the left and one on the right (like dyadic functions). In particular, outer product is syntactically a special case of inner product, obtained by using the null (∘) in place of the left function argument. Function arguments of operators can be primitive functions, defined functions, or functions derived from operators (e.g. +.×). Finally, operators have long scope to the left and short scope to the right (in contrast to functions). For example, a+.×.×b is equivalent to a(+.×).×b .

One point of practical interest is the ease of expressing function arguments of operators. For example, if the numbers in a vector v represent the terms in a continued fraction, then the value of that fraction is

   v[1]+÷v[2]+÷ … +÷v[⍴v]

which we might expect to be able to phrase in terms of reduction. According to the above rules, +÷/v is the same as +(÷/)v and therefore does not produce the desired result. A simple solution to this problem is provided by the STSC function composition operator, denoted here by C . (IPSA also has function composition operators, but they do not provide a solution to this particular problem.) That is, ⍺+÷⍵ is functionally equivalent to ⍺+C÷⍵ and +C÷ is a derived function, so that +C÷/v has the desired effect.
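
The effect of the reduction can be modelled in Python as a right-to-left fold in which each step computes alpha plus the reciprocal of the accumulated value; the terms below are invented.

   from functools import reduce

   def plus_recip(alpha, omega):          # models the composed function +C÷
       return alpha + 1.0 / omega

   v = [3, 7, 15, 1]                      # terms of a continued fraction
   value = reduce(lambda acc, a: plus_recip(a, acc),
                  reversed(v[:-1]), float(v[-1]))
   # value is about 3.14159292, the continued fraction [3; 7, 15, 1]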

This use of a composition operator is fairly restrictive because derived function expressions, like primitive functions, can only take arguments on their right (monadic case) or on their left and right (dyadic case). For example, it is awkward at best to describe (⍺≠⍵)/⍵ as a derived function of a composition operator. Of course we can always fix function definitions (as we did with the arguments of the each operator in the sorting algorithms for the materials data base mdb). Only STSC provides a convenient way to fix function definitions for arguments to operators, which is by way of a function definition operator. I will denote this operator here by D and use it monadically, although the STSC operator is dyadic and more general. For example, the continued fraction evaluation could be expressed as '⍺+÷⍵'D∘/v . As a second example, the previous application of the axis operator to a defined function g , i.e. g[2]⍵ , is equivalent to

   ('(⍴⍵)↑(¯1+⍵⍳'','')↑⍵'D∘)[2]⍵

Up to now I have spoken of the two systems in general terms and have for the most part avoided the detailed differences, which are many. In fact, there are several important differences in the function composition operators alone. I will conclude the second phase of this summary, and introduce the third as well, with a brief description of what I believe is the most important difference in function composition.

The IPSA function composition operators, which differ for dyadic function arguments but agree for monadic ones, have a feature based on the concept of function rank that unifies the concepts of ordinary function composition and the application of functions to subarrays along certain axes. One effect of this feature is that for monadic functions f and g , f g ⍵ is not necessarily identical to f F g ⍵ , where F denotes an operator which agrees with the IPSA function composition operators for monadic function arguments.

The monadic function g is said to have argument rank r if r is the smallest non-negative integer such that for every argument a of g which has rank at least r , g a is identical to g[(-r)↑⍳⍴⍴a]a . For example, i/⍵ has argument rank 1 for every vector i . The composition operators in the IPSA system have the property that the argument rank of the composite function f F g is the same as the argument rank of g . Therefore, f F g , like g , applies to subarrays along the last r axes of any argument of rank at least r . For example, if f is ⍴ and g is disall , which has argument rank 0, then f F g also has argument rank 0 and

   f g (enc 'Tom'),(enc 'Dick'),(enc 'Harry')

equals 3 5 , while

   f F g (enc 'Tom'),(enc 'Dick'),(enc 'Harry')

equals 3 1⍴3 4 5 . In effect, f F g ⍵ is the same as (f F g)[⍴⍴⍵]⍵ in this example.
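
The difference can be pictured in Python: composing shape with a disclose-all that pads to a matrix gives one overall shape, while a rank-0 composition applies shape to the contents of each element separately (here simply the length of each name). The helper below is an illustrative model, not either system's primitive.

   names = ["Tom", "Dick", "Harry"]

   def disall(v, fill=" "):
       w = max(len(x) for x in v)
       return [x + fill*(w - len(x)) for x in v]        # pad to a 3-by-5 'matrix'

   whole = (len(disall(names)), len(disall(names)[0]))  # (3, 5), like f g ⍵
   per_element = [[len(x)] for x in names]              # [[3], [4], [5]], like f F g ⍵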

The function rank feature, which is not present in the STSC implementation, permits the IPSA system to define the each operator in terms of composition with disall (which has argument rank 0) instead of as a separate operator.

And now for the tables showing the details of the two systems and their differences. An asterisk indicates a temporary restriction in the present implementation.


IPSA

<a Enclose a

Note: <3 is not the same as 3

STSC

⊂a Enclose a
↓a Enclose along the last axis (called split)
↓[k]a Enclose along axis k
l⊂a Partitioned enclose along the last axis
l⊂[k]a Partitioned enclose along axis k
a b c (⊂a),(⊂b),(⊂c)
(called vector notation or strand notation)

Note: ⊂3 is identical to 3

Table 1: Enclose


IPSA

>a Disclose if 0=⍴⍴a
Disclose-all otherwise (new axes become the last axes)

STSC

⊃a Disclose if 0=⍴⍴a
⊃(⍳0)⍴a otherwise (also called first)
v⊃a Pick (v is a path into a)
↑a Disclose-all (called mix)
↑[k]a Disclose-all and insert between axes ⌊k and ⌈k

Table 2: Disclose


IPSA

a≡b 1 when a and b are identical (called match)
 
''≡(⍳0) 1 
 
a⍳b  a∊b Extended using ≡ in place of =

STSC

a≡b 1 when a and b are identical (called equivalent)
''≡(⍳0) 0 (prototypes must be identical)
a⍳b  a∊b Extended using ≡ in place of =
a≢b ~a≡b (called inequivalent)
≡b 1 when b is flat (called simple)
≢b ~≡b (called not-simple)

Table 3: Identical


IPSA

a,b Defined for a and b both numeric, or both character, or both general
 
a⍴b ⍴b a⍉b ⍉b a/b a\b a⌽b ⌽b a↑b a↓b ,b
 
Extended for general b , except where fill is required
a⊥b a⊤b a⌹b ⌹b ⍋b ⍒b a⍕b ⍕b ⍎b a?b
 
Not extended
Scalar functions Not extended

STSC

a,b Defined for all a and b
 
 
 
a⍴b ⍴b a⍉b ⍉b a/b a\b a⌽b ⌽b a↑b a↓b ,b
 
Extended for general b
a⊥b a⊤b a⌹b ⌹b ⍋b ⍒b a⍕b ⍕b ⍎b a?b
 
Not extended
Scalar functions Extended (now called pervasive functions)
 
a[b] 
a[b]←c 
b is flat, or the elements of b hold “index vectors” of length ⍴⍴a , or the elements of b hold paths into a (see Pick, Table 2)
 
⍳b a≡a[⍳⍴a] for all a
 
a⍴b Extended for b empty and ^/0≠a , e.g. (⍳0)⍴0⍴b holds the prototype of b (⊃b is the prototype)
 
⊤b ⊃0⍴⊂b (called type)

Table 4: Primitive Functions


IPSA

* f⍤g
  f⍥g 
Composition operators (called on and over)
 
f⍤g⍵  f g ⍵
f⍥g⍵  f g ⍵
⍺f⍤g⍵  (g⍺)f(g⍵)
⍺f⍥g⍵  f⍺g⍵
 
* f¨g  h⍥f⍤g , where h is the inverse of g (called with)



f¨> Each operator
f⍤> Scalar extension operator
 
f/⍵ f\⍵ ⍺∘.f⍵ ⍺f.g⍵ Not extended

STSC

f∘g Composition operator (called composition)
 
Axis operator  f[i]⍵ extended to some new primitives and ⍺f[i]⍵ is defined for primitive scalar functions f
 
f⍢g Dual operator
f⍢g⍵ h f g ⍵ , where h is the inverse of g
⍺f⍢g⍵  h(g⍺)f(g⍵)
 
f¨ Each operator


f/⍵  ⍵[1]f¨f/1↓⍵
 
f\⍵ (f\⍵)[i] equals f/⍵[⍳i]
 
⍺∘.f⍵ (⍺∘.f⍵)[i;j] equals ⍺[i]f¨⍵[j]
 
⍺f.g⍵  f/⍺g¨⍵
 
⍺∇⍵ ∘∇⍵ ⍺∇∘  Function definition operator (ambivalent, monadic, dyadic)

Table 5: Operators
 

In conclusion, I believe that the IPSA and STSC systems represent two very different approaches to general arrays and APL, even though they have many of the same general features.

The STSC system appears to be based on the approach of extending existing primitive functions and operators, wherever possible, to the domain of general arrays. For example:

1.  the extensions of reduction, scan, inner product, and outer product use the each operator implicitly;
2.  strand notation extends the representation of vector numeric constants;
3.  the extensions of indexing and monadic ⍳ use general arrays;
4.  extensions for the application of functions to collections of subarrays are by way of the each operator and enclosing along specified axes only.

(Note that this approach requires the enclose of a flat scalar to be the same flat scalar. Otherwise, for example, extensions 1 and 2 fail.) Since many of the primitive functions and operators so extended could also be effectively extended without using general arrays, one must conclude that this approach is based on the belief that general arrays and APL are potentially more effective for most applications than flat arrays and APL.

On the other hand, the IPSA system appears to be based on the approach that there are alternatives to general arrays extensions that are potentially powerful without general arrays (i.e., the axis and function composition operators), but which can also be applied to general arrays through the explicit use of enclose and disclose. Thus an underlying belief of the IPSA approach appears to be that generality is still one of the fundamental design principles of APL (see [2]).

     

D.L. Orth
IBM Thomas J. Watson Research Center
PO Box 218
Yorktown Heights, NY
USA   10598
 

References

1.  Bernecky, R. and Iverson, K.E., Operators and Enclosed Arrays, Proceedings of the IPSA 1980 APL Users Meeting, p. 319.
2.  Falkoff, A.D. and Iverson, K.E., The Design of APL, IBM Journal of Research and Development, 17 4 (July 1973) p.324.



Originally appeared in the APL Quote Quad, Volume 12, Number 2, 1981-12.
