Scheme Programming Language

Controlled Dynamism


In reading the recent threads about the direction of scheme
as a programming language, it occurs to me that there is a
way to use design principles to reconcile some conflicting
desires about what the language should be.

In particular, the issue of dynamism: people would like
to experiment with dynamic features, and people would like
to be able to guarantee their absence from programs as a
way of making programs easier to reason about.

I think that a sign of having truly found the essentials
of something is that you can define a single feature, or a
very small set of features, sufficient to implement that
idea.  And that's been one of the roles of design in
scheme: seeking after essences.

But when you have a single feature or very small set of
features, you can usually add and remove them orthogonally
without making anything else stop working.  And I think
there's a niche for a language (or a set of closely related
dialects) to take advantage of that property as well.

Consider the following as a blueprint for some future
lispy language (I do not here say scheme in particular -
somebody's already doing the "R6RS counterproposal" bit).

Under the following proposal, a particular implementation
would be required to support both a "core language" of
heartbreaking simplicity and elegance, and a set of
additional "semantics" packages implementing other required
semantics-altering additions.  But users would *always*
be permitted to disable or lock out those other "semantics"
packages.

"Feature" packages are not really a part of this semantics
discussion, but I want to mention them and make clear that
they are separate things.  Adding a gob of handy functions
for manipulating lists or running CNC mills or something
does not alter the semantics of the language.  The standard
might require a number of "feature" packages, but their use
or non-use wouldn't affect the semantics of the rest of
the language.

Other packages, "extension" packages, would implement
further semantic extensions.  These would be described
by the standard, but not required.  Programs requiring
those extensions would have a known and well-specified
set of functions (and occasional macros) to use, allowing
their code to be portable among implementations that
support them, but implementations could support or refuse
to support them at will (and in fact, in many cases might
decide that refusing to support them is the Right Thing
To Do, because they are Bad Ideas).

I've ordered these semantics packages according to how
much I think they obfuscate or frustrate reasoning about
programs expressed in the language.  Other people may
like to order them differently.  R4RS scheme was at
(1, 2 (recommended only), 4, 5b); R5RS scheme is at
(1, 2, 4, 5a, 7).  The R6RS proposal, I think, is at
(1, 2, 4, 5a, 6, 7, 9).

(1) Purely Functional Core Language: This would be the core of
the language, a Lisp dialect with no mutation and no macrology.
It would require space-conserving tail calls, as does scheme,
and would likely use monads or streams for I/O.  A program
would be considered to be an expression that calculates an
ordered list of output from an ordered list of input.  Numerous
objects like vectors and strings would require some serious
re-thinking and new APIs in order to be usable (and USEFUL)
without mutation, but the research into these APIs has
already been done, mostly, in existing functional languages.
Any program written in the PFCL could be subject to very
complete analysis, memoization, and other performance-
enhancing tricks that compilers could do.
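To make the PFCL's program model concrete, here is a sketch
in Python (used only because it's widely runnable; the names
are mine, not part of any proposal): a program is a pure
function from an ordered list of inputs to an ordered list
of outputs, and only the runtime performs effects.

```python
def echo_upper(inputs):
    # Pure: builds a new output list from the input list.
    # Never mutates its argument, never performs I/O itself.
    return [line.upper() for line in inputs]

def run(program, inputs):
    # The "runtime" is the one place effects happen: it feeds
    # the input stream in and prints the output stream.
    for line in program(inputs):
        print(line)
```

Because `echo_upper` is a pure function of its input list, a
compiler is free to memoize it or analyze it exhaustively, which
is exactly the point of keeping the core purely functional.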

(2) Rigidly Specified Macrology: an add-on to the PFCL, adding
referentially-transparent macros (roughly equivalent to scheme's
hygiene-preserving define-syntax macros) to the language.
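The essence of hygiene can be sketched with a toy expander
(again in Python for runnability; a real define-syntax does
much more, and `gensym` here is my stand-in for the renaming a
hygienic expander performs): macro-introduced bindings get
fresh names, so they can never capture the user's variables.

```python
import itertools

_fresh = itertools.count()

def gensym(base="tmp"):
    # A name no user program can collide with: the mechanism
    # behind referential transparency of macro expansions.
    return f"{base}%{next(_fresh)}"

def expand_or2(a, b):
    # Expand (or2 a b) into (let ((t a)) (if t t b)),
    # represented here as nested Python lists, with the
    # binding for t made hygienic via gensym.
    t = gensym("t")
    return ["let", [[t, a]], ["if", t, t, b]]
```

Even if the user's second operand is literally the name `t`,
the expansion's internal binding uses a fresh name, so the
user's `t` still refers to the user's binding.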

(3) Dynamic-environment package: This would add a dynamic environment
in addition to the static scoping of the PFCL - but rather than
allowing direct references to variable names (and the endless
hair that arrives when a static variable and dynamic variable
have the same names) it would require an explicit reference to
the dynamic environment to dereference a dynamic variable by
name.
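A sketch of that explicit-dereference discipline (in Python;
class and method names are mine): the dynamic environment is a
first-class chain of frames, looked up *by name through the
environment object*, so a lexical variable `x` and a dynamic
variable "x" live in entirely separate namespaces.

```python
class DynEnv:
    # A chain of dynamic frames, separate from lexical scope.
    def __init__(self, bindings=None, parent=None):
        self._bindings = dict(bindings or {})
        self._parent = parent

    def lookup(self, name):
        # Explicit dereference by name -- the only way to read
        # a dynamic variable under this package.
        env = self
        while env is not None:
            if name in env._bindings:
                return env._bindings[name]
            env = env._parent
        raise KeyError(f"unbound dynamic variable: {name}")

    def extend(self, bindings):
        # fluid-let style: a new frame shadowing the old one
        # for some dynamic extent; the parent is untouched.
        return DynEnv(bindings, parent=self)
```

Shadowing is purely additive: `extend` builds a new frame, and
code holding the outer environment still sees the outer values.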

(4) Mutability Package: another add-on, this time adding set-car!,
set-cdr!, vector-set!, string-set!, set!, etc. to the language,
as well as non-functional (side-effecting) I/O.  This would either
define procedures created with the extant lambda as having a
particular order of evaluation and introduce a variant explicitly
leaving the order of evaluation unspecified, or define procedures
created with the extant lambda as having no particular order of
evaluation and introduce a variant with a specified order.  The
mutability package, in a given implementation, might need to
provide a replacement for much of (1).
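The reason the order-of-evaluation question only arises here is
easy to demonstrate: once mutation exists, the order in which
operands are evaluated becomes observable.  A sketch in Python,
which happens to fix left-to-right operand order (the closure
over a list cell is my stand-in for set!):

```python
def make_counter():
    n = [0]                  # a mutable cell, standing in for set!
    def bump():
        n[0] += 1
        return n[0]
    return bump

bump = make_counter()
# With mutation, the operand order is visible in the result:
# left-to-right evaluation gives (1, 2); right-to-left would
# give (2, 1).  In the PFCL alone, no program could tell.
pair = (bump(), bump())
```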

(5a) Continuations with unlimited scope: This would introduce
continuations with unlimited scope, with the semantics of call/cc
plus dynamic-wind.  You cannot have 5a and 5b in the same dialect.

(5b) Continuations with unlimited scope: This would introduce
continuations with unlimited scope, with the semantics of call/cc
alone (no dynamic-wind).  You cannot have 5b and 5a in the same dialect.
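For readers who haven't met call/cc: the *escaping* half of it
can be sketched with exceptions (in Python; `call_ec` is my name
for it).  Full 5a/5b continuations are re-enterable and have
unlimited extent, which exceptions cannot model, but the sketch
shows the basic control transfer.

```python
class _Escape(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def call_ec(f):
    # One-shot, upward-only continuations: f receives an escape
    # procedure k; calling k(v) aborts f and returns v instead.
    tag = object()                    # unique to this activation
    def k(value):
        raise _Escape(tag, value)     # "invoke the continuation"
    try:
        return f(k)
    except _Escape as e:
        if e.tag is tag:
            return e.value
        raise

# (call/ec (lambda (k) (+ 1 (k 42))))  =>  42
result = call_ec(lambda k: 1 + k(42))
```

The `1 +` is discarded because invoking k abandons the rest of
the computation -- the part that exceptions can express.  What
they cannot express is storing k and re-entering it later, which
is precisely where the obfuscation cost of (5a)/(5b) comes from.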

(6) Referentially-Nontransparent Macrology: this add-on would add
referentially nontransparent macros to the language, explicitly
allowing variable capture (roughly equivalent to defmacro macros
and/or syntax-case).
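The classic use of deliberate capture is the "anaphoric if",
sketched here with the same toy list-expander style as before
(Python, my own names): the expansion binds the literal name
`it` on purpose, so the user's branch can refer to the value of
the test -- exactly what a hygienic expander would rename away.

```python
def expand_aif(test, then):
    # Expand (aif test then) into (let ((it test)) (if it then #f)),
    # *intentionally* binding the user-visible name "it".
    return ["let", [["it", test]], ["if", "it", then, "#f"]]
```

Compare with the hygienic `expand_or2` sketch under (2): there
the binding was a gensym precisely so this could never happen.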

(7) Limited Evaluation: adds eval to the language.  The first
argument of eval is an expression; the second argument, if
present, is a lexical-environment specifier.  If (3) is installed,
a possible third argument is a dynamic-environment specifier.
The environments that are allowed/required to implement this package
are a limited set of immutable environments.
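Python's own eval happens to take an explicit environment, so it
can sketch the "limited" discipline directly (the empty
`__builtins__` and the defensive copy are my way of modeling
"immutable environment specifier"; this is an illustration, not
a claim about how a scheme implementation would do it):

```python
def limited_eval(expr, lexical_env):
    # The environment specifier is copied, so the caller's
    # mapping is never mutated, and builtins are emptied, so
    # eval sees *only* what it was explicitly handed.
    return eval(expr, {"__builtins__": {}}, dict(lexical_env))
```

An unbound name is simply an error -- there is no ambient global
environment to fall back on, which is what keeps this package
relatively tame compared to (8).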

(8) Mutable environments: Adds mutable environments to the language
as first-class values. This dramatically increases the power (and
hair) of (3) and (7), and may require code to replace those packages
completely if installed in addition to them. In practice, many schemes
have provided this.
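The jump in power from (7) to (8) is visible in one line of
sketch code: make the environment a first-class *mutable* value,
and evaluated code can install bindings that persist after it
returns (Python again; `exec` stands in for eval-with-definitions,
and the names are mine):

```python
def exec_in(stmt, env):
    # Evaluate code against a first-class mutable environment.
    # Unlike limited_eval above, bindings created by the code
    # survive in env after the call returns.
    exec(stmt, {"__builtins__": {}}, env)

env = {"x": 1}
exec_in("y = x + 1", env)   # defines y *in the environment itself*
```

After the call, `env` contains a binding for y that no source
text outside the evaluated string ever mentioned -- which is the
"hair" the proposal is warning about.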

(9)  conditions and exceptions ...
(10) runtime macroexpansion ...
(11) pre-emptive multitasking ...
(12) reader macros ...
(13) identifier syntax ...
(14) Fexprs ...

Anyway, my idea is that a lot of these features are orthogonal
in principle, and where they're not, it's likely a sign that we
haven't boiled them down to fundamentals yet.  So I would advance
a design schema where people are encouraged to *think* of them
as orthogonal, and likely to encourage future research to find
the vital essences of these features.

                                Bear

On May 31, 10:28 pm, Ray Dillinger <b@sonic.net> wrote:

> Anyway, my idea is that a lot of these features are orthogonal
> in principle, and where they're not, it's likely a sign that we
> haven't boiled them down to fundamentals yet.  So I would advance
> a design schema where people are encouraged to *think* of them
> as orthogonal, and likely to encourage future research to find
> the vital essences of these features.

You may want to look into the work of Moggi, Steele, and Espinosa.
Start here:
@misc{espinosa94building,
  author = "D. Espinosa",
  title  = "Building interpreters by transforming stratified monads",
  note   = "Unpublished manuscript",
  year   = "1994",
  url    = "citeseer.ist.psu.edu/espinosa94building.html" }

The idea is to build up these various levels through composition
of monads.  I'm not sure what ever happened to these ideas, though.

In article <1180676734.948318.127@x35g2000prf.googlegroups.com>,
Joe Marshall <eval.ap@gmail.com> wrote:

> The idea is to build up these various levels through composition of
> monads.  I'm not sure what ever happened to these ideas, though.

Many of those ideas got sucked into Haskell's monad transformer
libraries. They're well-worth ripping off^H^H^H porting to your
own language of choice.
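For a flavor of the transformer idea (my own toy rendering in
Python, not Espinosa's construction or Haskell's mtl): each
semantic layer is a monad, and stacking a State layer on a Maybe
layer yields a combined semantics of mutation-plus-failure,
mirroring how the "packages" above would compose.

```python
# Maybe layer: failure as data.
def unit(value):
    return ("just", value)

def bind(m, f):
    # Short-circuit: failure propagates past f untouched.
    return f(m[1]) if m[0] == "just" else m

# StateT over Maybe: a computation is  state -> Maybe (value, state).
def st_unit(value):
    return lambda s: unit((value, s))

def st_bind(m, f):
    def run(s):
        return bind(m(s), lambda pair: f(pair[0])(pair[1]))
    return run

def get():
    return lambda s: unit((s, s))

def put(s1):
    return lambda s: unit((None, s1))

def fail():
    return lambda s: ("nothing",)

# Increment the state, then read it back.
prog = st_bind(get(), lambda n: st_bind(put(n + 1), lambda _: get()))
```

Running `prog` from state 0 threads the state through both layers,
and a `fail()` anywhere in the chain short-circuits the whole
computation -- the two effects compose without either one knowing
about the other, which is the property being advertised.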

I don't think we've hit the end of the line for this research,
though. I really like the stuff that Gordon Plotkin and John Power
(later joined by Martin Hyland and Paul Blain Levy) have
been doing -- they try to start with algebraic characterizations
of the side effects they want, and then give methods for combining
groups of effects and deriving the appropriate monads from them.

That seems like fundamentally the right approach to me, since
the monad doesn't pop up as a magic trick. I find their math quite
formidable, though, and will probably need to try implementing their
theorems as a library before I can honestly say I understand it.

--
Neel R. Krishnaswami
n@cs.cmu.edu
