In reading the recent threads about the direction of scheme
as a programming language, it occurs to me that there is a
way to use design principles to reconcile some conflicting
desires about what the language should be.
In particular, the issue of dynamism: people would like
to experiment with dynamic features, and people would like
to be able to guarantee their absence from programs as a
way of making programs easier to reason about.
I think that one sign of having truly found the essentials
of something is that you can define a single feature
or very small set of features which is sufficient to
implement that idea. And that's been one of the roles
of design in scheme; seeking after essences.
But when you have a single feature or very small set of
features, you can usually add and remove them orthogonally
without making anything else stop working. And I think
there's a niche for a language (or a set of closely related
dialects) to take advantage of that property as well.
Consider the following as a blueprint for some future
lispy language (I do not here say scheme in particular -
somebody's already doing the "R6RS counterproposal" bit).
Under the following proposal, a particular implementation
would be required to support both a "core language" of
heartbreaking simplicity and elegance, and a set of
additional "semantics" packages implementing other required
semantics-altering additions. But users would *always*
be permitted to disable or lock out those other "semantics" packages.
"Feature" packages are not really a part of this semantics
discussion, but I want to mention them and make clear that
they are separate things. Adding a gob of handy functions
for manipulating lists or running CNC mills or something
does not alter the semantics of the language. The standard
might require a number of "feature" packages, but their use
or non-use wouldn't affect the semantics of the rest of the language.
Other packages, "extension" packages, would implement
further semantic extensions. These would be described
by the standard, but not required. Programs requiring
those extensions would have a known and well-specified
set of functions (and occasional macros) to use, allowing
their code to be portable among implementations that
support them, but implementations could support or refuse
to support them at will (and in fact, in many cases might
decide that refusing to support them is the Right Thing
To Do because they are Bad Ideas).
I've ordered these semantics packages according to how
much I think they obfuscate or frustrate reasoning about
programs expressed in the language. Other people may
like to order them differently. R4RS scheme was at
(1,2 (recommended only), 4,5b), R5RS scheme is at
(1,2,4,5a,7). The R6RS proposal, I think, is at ...
(1) Purely Functional Core Language: This would be the core of
the language, a Lisp dialect with no mutation and no macrology.
It would require space-conserving tail calls, as does scheme,
and would likely use monads or streams for I/O. A program
would be considered to be an expression that calculates an
ordered list of output from an ordered list of input. Numerous
objects like vectors and strings would require some serious
re-thinking and new APIs in order to be usable (and USEFUL)
without mutation, but the research into these APIs has
already been done, mostly, in existing functional languages.
Any program written in the PFCL could be subject to very
complete analysis, memoization, and other performance-
enhancing tricks that compilers could do.
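To make that concrete, here is a sketch of what a PFCL program might look like. Every name in it (the main entry point, stream-map) is an assumption of mine, not part of any existing report:

```scheme
;; Hypothetical PFCL program: a pure function from the stream of
;; input lines to a stream of output lines.  This one upcases
;; each line.  `main` and `stream-map` are assumed names.
(define (main input-lines)
  (stream-map string-upcase input-lines))
```

Because main is pure, applying it twice to the same input stream denotes the same result, which is exactly what licenses the memoization and other tricks mentioned above.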
(2) Rigidly Specified Macrology: an add-on to the PFCL, adding
referentially transparent macros (roughly, hygiene-preserving
macros equivalent to define-syntax) to the language.
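For concreteness, this is roughly the facility R5RS already provides with define-syntax and syntax-rules, where hygiene keeps a macro's temporaries from colliding with the user's identifiers:

```scheme
;; Hygienic macro in the define-syntax/syntax-rules style:
;; `my-or` evaluates its first argument only once, via a
;; temporary binding `t`.
(define-syntax my-or
  (syntax-rules ()
    ((_) #f)
    ((_ e) e)
    ((_ e1 e2 ...) (let ((t e1)) (if t t (my-or e2 ...))))))

;; Hygiene at work: the user's `t` is not captured by the macro's.
(let ((t 5)) (my-or #f t))   ; => 5
```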
(3) Dynamic-environment package: This would add a dynamic environment
in addition to the static scoping of the PFCL - but rather than
allowing direct references to variable names (and the endless
hair that arrives when a static variable and dynamic variable
have the same names) it would require an explicit reference to
the dynamic environment to dereference a dynamic variable by name.
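A sketch of how that explicit dereference might read; the names dynamic-let and dynamic-ref here are my invention, not part of any standard:

```scheme
;; Hypothetical dynamic-environment package: dynamic bindings live
;; in a separate namespace and must be fetched explicitly, so a
;; lexical `radix` and a dynamic `radix` can never collide.
(dynamic-let ((radix 16))
  (let ((radix 10))                            ; lexical, unrelated
    (number->string 255 (dynamic-ref radix)))) ; "ff", not "255"
```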
(4) Mutability Package: another add-on, this time adding set-car!,
set-cdr!, vector-set!, string-set!, set!, etc. to the language,
as well as nonfunctional I/O. It would also have to settle argument
evaluation order: either define procedures created with the extant
lambda as evaluating their arguments in a particular order and
introduce a variant that explicitly leaves the order unspecified,
or the reverse, leaving the extant lambda's order unspecified and
introducing a variant that fixes it. The mutability package, in a
given implementation,
might need to provide a replacement for much of (1).
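The operators themselves are the familiar R5RS ones; the evaluation-order point is that once mutation exists, argument order becomes observable:

```scheme
;; With set! in the language, the order in which a combination's
;; arguments are evaluated becomes observable.  R5RS leaves it
;; unspecified, so this expression may yield (1 2) or (2 1):
(define n 0)
(define (bump!) (set! n (+ n 1)) n)
(list (bump!) (bump!))
```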
(5a) Continuations with unlimited scope. This would introduce
continuations with unlimited scope, with the semantics of call/wc
and dynamic-wind. You cannot have 5a and 5b in the same dialect.
(5b) Continuations with unlimited scope. This would introduce
continuations with unlimited scope, with the semantics of call/cc.
You cannot have 5b and 5a in the same dialect.
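The workhorse in both variants is the familiar call-with-current-continuation; dynamic-wind is what distinguishes the 5a-style semantics, since its before and after thunks must fire on every entry to and exit from the protected extent, even via a continuation jump:

```scheme
;; call/cc captures "the rest of the computation" as a procedure;
;; here the continuation `return` serves as an early exit.
(define (first-negative lst)
  (call-with-current-continuation
    (lambda (return)
      (for-each (lambda (x) (if (< x 0) (return x))) lst)
      #f)))

(first-negative '(3 1 -4 1 -5))   ; => -4

;; dynamic-wind runs its before/after thunks whenever control
;; enters or leaves the middle thunk's extent, including jumps
;; made through saved continuations.
(dynamic-wind
  (lambda () (display "enter "))
  (lambda () 'body)
  (lambda () (display "leave ")))
```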
(6) Referentially-Nontransparent Macrology: this add-on would add
referentially nontransparent macros to the language, explicitly
allowing variable capture (roughly equivalent to defmacro macros).
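The classic illustration is an anaphoric macro; defmacro here stands in for whichever nonhygienic definer an implementation provides (the backquote form below is Common Lisp's, also used by many pre-hygiene Schemes):

```scheme
;; Nonhygienic macro: `aif` deliberately captures the identifier
;; `it`, binding it to the test's value inside the consequent.
(defmacro aif (test then else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; `it` in the consequent refers to the result of the assq:
;; (aif (assq 'b '((a 1) (b 2))) (cadr it) 'missing)   ; => 2
```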
(7) Limited Evaluation: adds eval to the language. The first
argument of eval is an expression; the second, if present, is a
lexical-environment specifier. If (3) is installed, a possible
third argument is a dynamic-environment specifier.
The environments that are allowed/required to implement this package
are a limited set of immutable environments.
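R5RS itself took exactly this line, restricting eval's environment argument to a handful of immutable specifiers:

```scheme
;; R5RS eval: the environment argument comes only from these
;; specifier procedures, none of which yield an environment that
;; user code can mutate.
(eval '(* 6 7) (scheme-report-environment 5))   ; => 42
;; (null-environment 5) holds only syntactic keywords, so code
;; evaluated there cannot even reference procedures like *.
```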
(8) Mutable environments: Adds mutable environments to the language
as first-class values. This dramatically increases the power (and
hair) of (3) and (7), and may require code to replace those packages
completely if installed in addition to them. In practice, many schemes
have provided this.
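A sketch of the shape this usually takes. The names make-environment and environment-define!, and eval accepting such a value, are my assumptions, loosely modeled on what some implementations have shipped:

```scheme
;; Hypothetical first-class mutable environments (all names are
;; assumptions, not from any report):
(define env (make-environment))
(environment-define! env 'x 10)
(eval '(* x x) env)               ; => 100
(environment-define! env 'x 11)   ; the later definition is visible
(eval '(* x x) env)               ; => 121
```

Note that this makes the result of eval depend on mutable state, which is precisely why it compounds the hair of both (3) and (7).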
(9) conditions and exceptions ...
(10) runtime macroexpansion ...
(11) pre-emptive multitasking ...
(12) reader macros ...
(13) identifier syntax ...
(14) Fexprs ...
Anyway, my idea is that a lot of these features are orthogonal
in principle, and where they're not, it's likely a sign that we
haven't boiled them down to fundamentals yet. So I would advance
a design schema that encourages people to *think* of them as
orthogonal, and that is likely to spur future research to find
the vital essences of these features.