scheme monads and bind vs compose


How do Haskell's notions of >>=, run and return map onto widespread
Scheme constructs like compose, apply and values?

I guess I'm looking for a clean, Schemish way to deal with monadic
side-effects using existing Scheme procedures like delay, force, and
compose. (Am I right in assuming that compose always calls its
arguments right-to-left, since call-with-values needs to extract the
values from the rightmost function application before passing them on,
and that compose is implicitly delayed, since a call to compose yields
a procedure rather than the result of a procedure application?)

Can Haskell's bind be approximated using let* or begin for >> and
function composition (either explicitly or via compose) for >>=? What
are the risks in doing so?

I'm looking at, and grokking (I think), http://okmij.org/ftp/Scheme/monad-in-Scheme.html
and Message-ID: <slrnc3781u.oor.ne@gs3106.sp.cs.cmu.edu> over in
CLF, both of which strike me as too alien to Scheme; I'd like to use
the general compose instead of >>=, IO->>- and so on.

I've spent the day looking at monads and mostly it seems to me to be
kind of similar to a program I wrote last fall --
http://munnen.handgranat.org/Sunnan/slt.scm (notably the last line).
extract and rename have side-effects. Is this a style of programming I
should move away from, or can it work?

Am I missing something?

Sunnan

On May 21, 9:15 am, sunnanpup@gmail.com wrote:

> How do Haskell's notions of >>=, run and return map onto widespread
> Scheme constructs like compose, apply and values?

Unfortunately, there isn't a simple mapping.

> I guess I'm looking for a clean, Schemish way to deal with monadic
> side-effects using existing Scheme procedures like delay, force, and
> compose.
> (Am I right in assuming that compose always calls its
> arguments right-to-left, since call-with-values needs to extract the
> values from the rightmost function application before passing them on,
> and that compose is implicitly delayed, since a call to compose yields
> a procedure rather than the result of a procedure application?)

What I think you are asking is correct, but the way you are phrasing
it is not technically accurate.  Compose isn't special in any way.  It
is a normal procedure and it takes its arguments in the normal way.
You can see this if you try the following:

(compose (begin
           (display "Evaluating first arg")
           sqrt)
         (begin
           (display "Evaluating second arg")
           sin))

The two calls to display will happen when this form is evaluated.
SQRT and SIN are evaluated to yield the square-root and sine
functions, but these functions are not applied to anything at this
point.  The result of the call to compose is a function that *will*
apply square-root to the result of applying sine to whatever argument
it is eventually given.
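Since compose is not part of standard Scheme, here is one minimal
definition it could have (a sketch; many implementations ship their
own, often variadic, version):

(;; ((compose f g) x) = (f (g x)); the rightmost procedure runs first.
 define (compose f g)
  (lambda args
    (f (apply g args))))

;; The call to compose only *builds* a procedure; neither sqrt nor
;; sin is applied here.
(define sqrt-of-sin (compose sqrt sin))

(sqrt-of-sin 1.0)  ; applies sin first, then sqrt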

> Can Haskell's bind be approximated using let* or begin for >> and
> function composition (either explicitly or via compose) for >>=?

No.
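One way to see why (a sketch, using the counting monad that appears
later in this thread): a monadic computation is a function of the
*state*, and its result bundles the new state together with the value,
so plain composition feeds that bundle where the next step expects a
bare counter.

;; incr maps a counter to a (new-count . value) pair.
(define (incr n) (cons (+ n 1) n))

(incr 0)  ; => (1 . 0)

;; Composing directly would hand that whole pair to the next step:
;; (incr (incr 0)) = (incr '(1 . 0)), which attempts (+ '(1 . 0) 1)
;; and is a type error.  Haskell's >>= instead unpacks the pair,
;; sending the value one way and the new count the other.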

> I'm looking at, and grokking (I think), http://okmij.org/ftp/Scheme/monad-in-Scheme.html
> and Message-ID: <slrnc3781u.oor.ne@gs3106.sp.cs.cmu.edu> over in
> CLF, both of which strike me as too alien to Scheme; I'd like to use
> the general compose instead of >>=, IO->>- and so on.

What Oleg is trying to show is how to take a program like this:

(define (build-btree depth)
  (if (zero? depth)
      (make-node depth '())
      (let* ((left-branch (build-btree (- depth 1)))
             (right-branch (build-btree (- depth 1))))
        (make-node depth (list left-branch right-branch)))))

(define counter 0)

(define (make-node depth children)
  (set! counter (+ counter 1))
  (cons counter (cons depth children)))

;;; note the explicit side effect in make-node

and transform it into a side-effect free program like this:

(define (build-btree depth counter)
  (if (zero? depth)
      (values (make-node depth '() counter)
              (+ counter 1))
      (call-with-values
        (lambda () (build-btree (- depth 1) counter))
        (lambda (left-branch counter1)
          (call-with-values
            (lambda () (build-btree (- depth 1) counter1))
            (lambda (right-branch counter2)
              (values (make-node depth
                                (list left-branch right-branch)
                                counter2)
                      (+ counter2 1))))))))

(define (make-node depth children counter)
  (cons counter (cons depth children)))

;; Which is *ugly*, but state free!

and transform it further by `factoring' the program into two parts:
one that builds the tree, the other that tracks the counter:

(define (build-btree depth)
  (if (zero? depth)
      (make-node depth '())
      (letM* ((left-branch (build-btree (- depth 1)))
              (right-branch (build-btree (- depth 1))))
        (make-node depth (list left-branch right-branch)))))

(define incr
  (lambda (n)
    (make-numbered-value (+ 1 n) n)))

(define (make-node val kids)
  (>>=
    incr
    (lambda (counter)
      (return (cons (make-numbered-value counter val) kids)))))
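For this version to actually run, >>=, return, letM* and
make-numbered-value need definitions.  Here is a sketch of them; the
names are taken from Oleg's article, but this particular
representation (a computation is a function from the counter to a
(count . value) pair) is an assumption, not standard Scheme:

;; A numbered value pairs the counter state with a result.
(define (make-numbered-value count value)
  (cons count value))

;; return lifts a plain value into the monad, leaving the counter alone.
(define (return value)
  (lambda (count) (make-numbered-value count value)))

;; >>= runs computation m, feeds its value to f, and runs the
;; computation f produces with the updated counter.
(define (>>= m f)
  (lambda (count)
    (let ((numbered (m count)))
      ((f (cdr numbered)) (car numbered)))))

;; letM* is let* for monadic bindings, expanding into nested >>=.
(define-syntax letM*
  (syntax-rules ()
    ((_ () body) body)
    ((_ ((name m) rest ...) body)
     (>>= m (lambda (name) (letM* (rest ...) body))))))

;; Running a computation means supplying the initial counter.
(define (runM m initial-count)
  (m initial-count))

With these in place, something like (runM (build-btree 3) 100) returns
a (final-count . tree) pair, with every node numbered in creation
order.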

The final version of build-btree doesn't even *mention* the counter,
yet the counter is correctly tracked and incremented as if it were
global state.  And there are no side-effects either.  However, there
is a cost to doing this:  the build-btree program, although it
*appears* to be a `normal' recursive program, is actually a higher
order procedure.  We have two `levels' we have to think on:  the upper
level where we are creating a tree, and the `lower' level where we are
operating on the counter.  The counter is hiding in the lower level
where we cannot see it.  The >>= operator `plucks' the counter out of
hiding so we can use it.  The `return' operator takes a lower level
result and lifts it back up into the upper level.

Monads are a funny name for this process:
  1.  Rewrite your program in `state-passing-style'.
  2.  Refactor your program so that the state is passed in as a
curried argument.
  3.  Remove the explicit currying where possible, and use >>= and
return to cross levels.
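Step 2 can be made concrete on the state-passing build-btree above (a
sketch; it reuses the three-argument make-node from the values
version, but swaps multiple values for an assumed (count . value) pair
so that the counter can be curried out):

;; Step 2: curry the state out.  build-btree no longer receives the
;; counter directly; it returns a procedure that is waiting for it.
(define (build-btree depth)
  (lambda (counter)
    (if (zero? depth)
        (cons (+ counter 1) (make-node depth '() counter))
        (let* ((left-result  ((build-btree (- depth 1)) counter))
               (right-result ((build-btree (- depth 1)) (car left-result)))
               (counter2     (car right-result)))
          (cons (+ counter2 1)
                (make-node depth
                           (list (cdr left-result) (cdr right-result))
                           counter2))))))

Step 3 is then pure bookkeeping: the cons/car/cdr plumbing above is
exactly what >>= and return abstract away.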

> I've spent the day looking at monads and mostly it seems to me to be
> kind of similar to a program I wrote last fall --
> http://munnen.handgranat.org/Sunnan/slt.scm (notably the last line).
> extract and rename have side-effects. Is this a style of programming I
> should move away from, or can it work?

> Am I missing something?

Only that with monadic style, there wouldn't even be side effects in
your extract and rename procedures.