The Role of Errors in the History of
Loglans
Disclaimer: This is my personal
account, from my point of view. The events involved are described as
I remember and interpret them. The science involved is as I
understand it and extrapolate from it. I have tried not to assign
blame here; I think most of the errors were inevitable in the
situations involved and were probably seen as errors only in
corrected hindsight (if at all). Others may have different memories
or interpretations, read the science differently, and disagree about
what was an error, but this is my story. Bring your own salt.
I am
dividing this essay into sections headed by various claims that were
made for Loglan (and Lojban). Most of the errors discussed here
attach more or less well to one of these claims; a few others can be
sandwiched in. These claims have played a major role in the spread
of interest in Loglans and have, in various ways, guided developments
over the decades, so they may be an informative and useful guide for
presenting the problems.
Maxim One: Loglan is spoken Formal Logic (or Symbolic Logic or First Order Predicate Logic)
In
many ways, this is the root error, from which the others derive.
Most of the features later claimed for Loglans or sought for them
derive from similar features had by or claimed for FOPL (and its
predecessors into the 19th century and successors into the 21st).
The formulae of FOPL are syntactically unambiguous; there is only
one way to analyze one. Translating an argument into such formulae
provides a definitive way to demonstrate the validity of the argument
(or its invalidity and where it goes wrong). Such translations also
reveal misleading features of ordinary language, which give rise to
many needless confusions and disagreements (and much metaphysics,
some would say). Thus, FOPL is a valuable tool for rational
discussion and for promoting understanding among people of different
views, since it can be used to reveal the structures of any language.
Of course, the claim that some set of formulae was a translation of a
given argument is open to some disagreement; there is no automatic
procedure for such translations as there is for judging validity of
the translated set. Thus, the validity of many historic arguments
(the Ontological, as a prime example) is still undecided. Of
course, if the argument was given in FOPL – or an appropriately
fleshed-out version of it – to begin with, this problem would
disappear. So, the construction and use of such a language
(partially realized for present purposes in careful ordinary German
or English) became the goal of some logicians/philosophers from the
'20s on. James Cooke Brown, the creator of Loglan, studied with
Broadbeck at Minnesota and was at least thoroughly exposed to this
Logical Positivist tradition. So, whether consciously or not, the
“logically perfect language” played a role in his choices when he
came to create an experimental language.
Another
major factor was simplicity. FOPL does away with the many parts of
speech and with the variety of tenses, moods and modes, and cases of
familiar languages. Among content words there are only two parts of
speech, terms and predicates, and, while there are a variety of
subtypes (more as logic developed beyond the '50s) they all behave in
the same way. Terms are divided into names, which stand for
individuals (however that may be defined), and variables, which play
a role in forming compound formulae, together, eventually, with
compounded terms. Eventually there came to be terms of various sorts,
depending upon what was being counted as an individual, but this did
not change the basic grammar. Predicates were divided according to
the number of terms they required to make a formula (and, eventually,
what types of terms). A(n atomic) formula, then, was just a
predicate with the appropriate number of terms (of the right sorts)
in order: Faxb, for example. No cases or sentential roles, no
prepositions, no tenses, etc. Beyond this were the recursive steps
involving makers: a maker took a specified number of variables and
formulae and returned a term or a formula, depending on its type
(this gets somewhat more complicated later, but the basic pattern
remains the same). Thus, &, a typical formula maker, takes two
formulae and returns a new formula, their conjunction: from Faxb and
Gxc to (Faxb & Gxc). A, a variable-binding formula maker
(quantifier), takes a variable and a formula to give a new formula,
the universal generalization of the original formula: so from x and
the previous formula we get Ax(Faxb & Gxc). The occurrences of x
in this formula are now said to be bound by this quantifier, whereas
before they were free. Similarly, @, an illustrative term maker,
takes one variable and one formula and produces a new term, in which
the variable is now bound: from x and Fx to @xFx, the salient F, say.
The formulae used by a maker may be of any degree of complexity, and
so may be the terms used in formulae. But the history of their
construction, and so their ultimate structure, is always apparent:
there is never any doubt about what formulae and variables (and
whatever else) are involved at any level. At any stage in the
development of logic to and beyond FOPL, the set of makers is closed and
introducing new ones (beyond mere abbreviations) takes a rather
dramatic effort, even though the pattern for defining them is clear
throughout.
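The closed set of makers described above lends itself to a recursive data type. The following Python sketch is my own illustration (the class names are invented, not anything from the essay or from Loglan); it shows how the construction history of Ax(Faxb & Gxc) simply is its structure, so there is only one way to analyze it:

```python
from dataclasses import dataclass

# Terms: names stand for individuals, variables are for binding.
@dataclass(frozen=True)
class Name:
    label: str

@dataclass(frozen=True)
class Var:
    label: str

# An atomic formula is a predicate with its terms in order: Faxb.
@dataclass(frozen=True)
class Atomic:
    predicate: str
    terms: tuple

# Formula maker &: takes two formulae, returns their conjunction.
@dataclass(frozen=True)
class And:
    left: object
    right: object

# Quantifier A: takes a variable and a formula, returns a formula.
@dataclass(frozen=True)
class Forall:
    var: Var
    body: object

# Term maker @: takes a variable and a formula, returns a term.
@dataclass(frozen=True)
class Salient:
    var: Var
    body: object

# Ax(Faxb & Gxc): the nesting records the construction history exactly.
x = Var("x")
f = Forall(x, And(Atomic("F", (Name("a"), x, Name("b"))),
                  Atomic("G", (x, Name("c")))))
```

Because every maker returns a single node wrapping its inputs, the "never any doubt at any level" property holds by construction.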
Given
this, spoken FOPL would seem to be an easy thing to achieve. We need
an open class of expressions for names and another for variables and
another for predicates, perhaps with some special markers for
different types of each sort. Then we get some special expressions
for the various makers in use. Then we just rattle the formulae off
as they are written, using the recommended expressions for the
various symbols. Every logic teacher does this every day to talk
about the formulae on the board: “For all ex both eff ay ex be and
gee ex see”, for the sentence above. Clearly, this ad hoc
technique is not quite good enough for our purposes, even leaving
aside the fact that we don't have any meaningful expressions here
yet. The three classes of expressions are not separated; they are
all just letters and the capital/lower case distinction does not come
across in speech. Then there are the parentheses, the left one here
pronounced “both”, looking ahead to the connective to follow, and
the right one omitted altogether. With a different connective, the
left parenthesis would have been differently pronounced, as “if”
or “either” or “as”, say, so we need either to deal with them
all the same (as “paren”, say) or make the nature of the enclosed
compound sentence clearer at the beginning. (This is the way this
problem arises in parenthesized infix – or Principia – notation;
in other versions the problem arises in different ways, either by
complexities on the connective to show how deeply it is buried in the
compound, in labeled infix, or, in prefix – Polish – notation,
by the need to mark the division between component sentences.) The
right parenthesis can, in fact, always be dropped in sentence
compounding (though it is often a kindness not to), but needs to be
reintroduced (and, indeed, extended) in the case of compounded terms
within a simple sentence: is Fa@xGxcb,
composed of the predicate F and the terms a and @xGxcb or of that
predicate and the terms a, @xGxc and b, or, indeed, of the terms a,
@xGx, c, and b with predicate F? We must either enclose the formula
in the composition in parentheses, if they are not already there, or
else enclose the terms which follow a predicate in some sort of
parentheses as well (F<a, @xGxc, b>, say) and in either case, take care
to pronounce both of these parentheses. (There are other, even more
tiresome ways to deal with this problem, by always marking the number
of places of each predicate, for example.) But these can all be done
rather cheaply: a few more words for constant characters, like right
parentheses of various sorts (or, actually, for right parentheses,
one sort is enough, if we put all of them in – but would we want
to?).
The
thought of a string of “end”s (say) at the end of every sentence
is enough to show that spoken FOPL needs to be different from the
written form, where adding a few right parentheses is a minor matter.
So you need rules about when you can drop parentheses and when you
can't and (probably) when you can but shouldn't, for clarity's sake.
Or find another way around the problem. This is the first stage of
the Loglans' adoption of FOPL.
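To see the terminator problem concretely, here is a small Python sketch of my own (not any Loglan proposal) that speaks a formula tree with an explicit “end” word for every right parenthesis and then drops the ones that pile up at the close of the sentence, which is exactly what written notation hides by letting us just stop writing:

```python
def speak(node):
    """Linearize a formula tree into spoken words, one "end" per
    right parenthesis."""
    kind = node[0]
    if kind == "atom":                 # ("atom", "F", ["a", "x", "b"])
        return [node[1]] + list(node[2])
    if kind == "and":                  # ("and", left, right)
        return ["both"] + speak(node[1]) + ["and"] + speak(node[2]) + ["end"]
    if kind == "forall":               # ("forall", "x", body)
        return ["for-all", node[1]] + speak(node[2]) + ["end"]
    raise ValueError(kind)

def drop_trailing_ends(words):
    # The closers at the very end of the sentence carry no information
    # in speech, so they can always be dropped.
    while words and words[-1] == "end":
        words = words[:-1]
    return words

# Ax(Faxb & Gxc) from earlier in the essay:
sentence = ("forall", "x",
            ("and", ("atom", "F", ["a", "x", "b"]),
                    ("atom", "G", ["x", "c"])))
print(" ".join(drop_trailing_ends(speak(sentence))))
# -> for-all x both F a x b and G x c
```

Note that only the trailing closers vanish; an “end” buried in the middle of a sentence still has to be spoken, which is where the real design work begins.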
Step A. Atomic sentences
Loglan
took as its basic sentence type, before any frills, a predicate with
a fixed number of places (given in the glossary but not marked in the
word anywhere, despite regular suggestions to do so). Predicates had
a definite (though increasingly complex as the years went by)
phonemic structure, so were distinctive. Names also were distinctive
in a variety of ways, while term variables were given by a finite
list and rules for extending by subscripts. Composite terms were
formed by replacing the first term (which Loglan had moved in front
of the predicate – an insignificant change, to aFxb and xGc) by an
operator which did the work of a 1-variable, 1-formula term maker and
by attaching the remaining arguments of the predicate to the formula
by explicit connectives, @G+c, for instance. This solved the first level of
possible term misalliance, but for deeper ones, a right hand end
marker for these term makers was used as needed (i.e., when more
terms for a higher component followed). The problem about only using
the first term of a predicate was solved by a device creating new
predicates in which the original first term and another term were
swapped, from aFxb to xF'ab, for example, giving rise then to a term
@F'+a-b, say. The situation calling for RHE markers is then
something like H@xF'x@yGycb,
which would first appear as H@F'+@G+c-b,
where the predicate to which b is attached is unclear, hence
H@F'+@G+c]-b, however pronounced. These and
other changes created a new problem, when the predicate of a term
might come directly before the predicate of a sentence, creating a
potential ambiguity (predicate strings having been made legal – see
later). One could make this a case where the RHE parenthesis of the
term was used, but, since that could trigger a string of such
parentheses, a separate divider was introduced. H@xFx
becomes @F/H (or, still legal but riskier, @F]H). Since this
automatically closes all the terms that went before, it suggests
similar RHEs to close several terms, but not all without stringing
out the term closers. This turns out to not help a lot, since
learning different words for closing two, three, … terms is less
efficient than just using two or three or … closers (having that
many open terms at any point is probably bad style, but grammar has
to apply to bad style as well as good).
The
complexity of speaking even a relatively simple sentence makes one
wonder if there is not some other way to organize terms without loss of
crucial information (what term occupies what place with which
predicate). The answer so far is “No”. There are ways of
reducing the reliance on order and devices for tagging terms
according to what predicate they go with, but these all introduce yet
more essentially empty and repetitive items, which the present
complexities sought to reduce. A case system, meant both to relieve
the requirements for a fixed order and to give some meaning to the
various positions – which now have meaning only if you remember the
definition of the predicate correctly – does not simplify the need
to shift order to make a term nor does it help enough with the
problem of dropped places (upcoming) to be worth the cost. It is not
clear that the Loglans' solution to making this structure speakable
is the simplest or shortest or clearest one, but alternate proposals
so far have offered no obvious advantages and have often had clear
downsides. So let us call this a success: it keeps all the essential
information but gets rid of as much superfluous verbiage as possible.
That it is often notoriously easy to get wrong, whether by
(unstylishly) leaving in unnecessary RHEs or by (disastrously)
leaving out needed ones, is a problem for eventual textbooks. And
one that gets worse as we get deeper into the language.
In
making a language based in this way on FOPL, another inelegance
arises. When using FOPL to transcribe arguments from another
language, we naturally pick predicates that exactly fit the situation
we are dealing with. But, in a Loglan, we have fixed predicates with
fixed places. So, to deal with a particular situation, we may not
need all places that the predicate supplies (or we may need one not
supplied, but that is a later problem). For instance, the predicate
briefly rendered “go” is actually a five-place predicate “1
goes to 2 from 3 along route 4 using mode of travel 5”, so to say
just “Sam goes to San Francisco” leaves three places unfilled.
Since we don't at the moment care about what goes in there in fact
(from here on Southwest by airplane, say), we don't want to say
anything more (as we don't in English). The stock logical move in
this case would be to bind each of these unused places with a
particular quantifier, “some” (and, so, a number of at least
implicit parentheses) Sx*Sy*Sz*sGfxyz***. The Loglans can do that,
of course, but that seems to be defeating the purpose of making this
as much like other spoken languages as possible while keeping it as
rigorous as FOPL. So, the Loglans have three responses, each
ultimately going back to the official form. One is to introduce a
new predicate based on the original but having only the interesting
places (it holds of the mentioned things just in case there are
things for the other places so that the original predicate holds for
all of them together). The second is to insert a dummy term in the
unfilled slots. And finally, and most pleasingly, the slots are just
left empty. This last is the standard when the empty slots are all
at the end, with no intervening filled slots. The dummies (there are
several, for some reason) are used when a filled slot comes after an
unfilled slot, though other devices can also be used with just the
blanks. So “Sam is going by Southwest”, officially SxSySzsGxywz,
might be sG- -w- (- for a dummy insert) or sG- -w or sG4w (where 4 is
a marker that the next term is, in fact, the fourth one for the
predicate) or, modifying the predicate, sG[1,4]w (dropping the other
terms) or sG<2>w (rearranging the terms by exchanging the 2nd
and 4th),
dropping the unused final terms in each case. Of course, restoring
the FOPL original requires knowing the places of the original
predicate so that the quantifiers can be properly placed, as close to
the predicate as possible. These unmentioned quantifiers will come
to raise questions in going beyond atomic sentences.
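Restoring the official form from a sentence with dropped places is mechanical once the predicate's place structure is known. This Python fragment is my own sketch, writing S for the particular quantifier as the essay does; it rebuilds SxSySzsGxywz from just the filled places 1 and 4:

```python
def restore(predicate, arity, filled):
    """filled maps place numbers (1-based) to terms; each empty place
    gets a fresh variable bound by the particular quantifier S."""
    fresh = iter("xyzuv")
    args, bound = [], []
    for place in range(1, arity + 1):
        if place in filled:
            args.append(filled[place])
        else:
            v = next(fresh)
            bound.append(v)
            args.append(v)
    prefix = "".join("S" + v for v in bound)
    # Loglan order: the first term precedes the predicate.
    return prefix + args[0] + predicate + "".join(args[1:])

# "Sam is going by Southwest": only places 1 and 4 of five-place G filled.
print(restore("G", 5, {1: "s", 4: "w"}))   # -> SxSySzsGxywz
```

The same function yields SxSySzsGfxyz for “Sam goes to San Francisco” (places 1 and 2 filled), matching the quantified form given earlier; the point is that nothing beyond the place structure of G is needed to put the quantifiers back.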
In the opposite
direction, predicates may be extended by adding terms. Although the
general idea of prepositions (or cases) to give overt meaning to
predicate places was rejected, it is retained for situations where a
predicate needs to be extended beyond its usual sense. So a new term
is introduced by a marker that says what its role is to be. It can
be inserted into the physical string of arguments at almost any
place, though it traditionally goes at the end. Similarly, the
possibility, briefly mentioned above, of a “preposition” to
indicate which place of a predicate an argument fills is fully
realized in the Loglans, though rarely used. The prefixes to the
predicate that exchange the first place and another can be combined
to create any order of arguments we want. However, these
combinations are often long and not transparent for some
rearrangements, so the prepositions are a better choice in
speakability terms. Both these prepositional structures behave just
like the regular arguments. In particular, they attach to the
predicate of a term with the same ties as other arguments. They can
even, with some adjustments, be made the term replaced by a
term-maker. And they are closed off just like other terms. At this
point it is fairly clear, if not rigorously demonstrated, that the
Loglans' reading of atomic sentences does represent the structure
completely and accurately.
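The remark that combined swap prefixes are long and not transparent can be seen with a few lines of Python (my own illustration): each prefix exchanges the first place with place k, and chains of such swaps can reach any order, though reading off which order a given chain produces takes real work.

```python
def swap(order, k):
    """Exchange the first place with place k (1-based), as a single
    swap prefix does to a predicate's place structure."""
    order = list(order)
    order[0], order[k - 1] = order[k - 1], order[0]
    return order

# A single swap is easy to read off:
print(swap([1, 2, 3], 3))                  # -> [3, 2, 1]
# Chaining swap-with-3 then swap-with-2 already takes tracing to decode:
print(swap(swap([1, 2, 3], 3), 2))         # -> [2, 3, 1]
```

Since transpositions with the first place generate every permutation, any argument order is reachable; the opacity of the chained prefixes is exactly why the essay judges the tagging “prepositions” the better choice for speech.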