The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Functional Notations

Xah Lee, 2006-03-15

[This article explains away the confusion around common terms for
notation systems used in computer languages: prefix, infix, postfix,
algebraic, functional, and these notations' relation to the concept of
operators. These are explained using examples from LISP, Mathematica,
and imperative languages. Then it discusses some problems of purely
nested notation.]

LISP languages use a notation like (+ 1 2) to mean 1+2.
Likewise, they write (if test this that) to mean if (test) {this}
else {that}. LISP code is entirely of the form (a b c ...), where
a, b, c themselves may also be of that form. There is a widespread
misunderstanding that this notation is prefix notation. In this
article, I'll give a general overview of the meanings of Algebraic
Notation and of prefix, infix, and postfix notations, and explain how LISP
notation is a Functional Notation and is not a so-called prefix
notation or algebraic notation.

The math notation we encounter in school, such as 1+2, is called
Infix Algebraic Notation. Algebraic notations have the concept of
operators, meaning symbols placed around arguments. In algebraic
infix notation, different symbols have different stickiness levels
defined for them; e.g. 3+2*5>7 means (3+(2*5))>7. The stickiness
of operator symbols is normally called Operator Precedence. It is
specified by giving an ordering for the symbols, or equivalently,
giving each symbol an integer index, so that, for example, given an
expression like a+b*c we can unambiguously understand it to mean
either (a+b)*c or a+(b*c).
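
As a small illustration (a sketch using Python, whose grammar encodes the
same conventional precedence), the parser already groups 3+2*5>7 as
(3+(2*5))>7 before anything is evaluated:

import ast

# '*' binds tighter than '+', which binds tighter than '>',
# so the parse tree nests the multiplication deepest.
tree = ast.parse("3 + 2 * 5 > 7", mode="eval")
print(ast.dump(tree.body))
# Compare(left=BinOp(left=Constant(value=3), op=Add(),
#                    right=BinOp(left=Constant(value=2), op=Mult(),
#                                right=Constant(value=5))),
#         ops=[Gt()], comparators=[Constant(value=7)])
print((3 + 2 * 5 > 7) == ((3 + (2 * 5)) > 7))  # True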

In the algebraic postfix notation known as Reverse Polish Notation,
there is no need for the concept of Operator Precedence. For example,
the infix notation (3+(2*5))>7 is written as 3 2 5 * + 7 >, where the
operations are simply evaluated from left to right. Similarly, in a
prefix notation syntax (Polish Notation), the evaluation goes from
right to left, as in > + 3 * 2 5 7.
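
Here is a minimal sketch of such a left-to-right postfix evaluator (written
in Python; the operator table is limited to the three symbols used in the
example):

import operator

OPS = {"+": operator.add, "*": operator.mul, ">": operator.gt}

def eval_postfix(tokens):
    # Strictly left to right: push operands, and on an operator pop the
    # two most recent values. No precedence table is needed; the token
    # order alone determines the grouping.
    stack = []
    for tok in tokens:
        if tok in OPS:
            b = stack.pop()
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_postfix("3 2 5 * + 7 >".split()))  # True, i.e. (3+(2*5))>7

The prefix form can be run by the same machine by scanning the tokens from
right to left and taking the two popped operands in the opposite order.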

Functional notations, on the other hand, do not employ the concept of
operators, because there are no operators. Everything is syntactically
a function, written as f(a,b,c...). For example, the same expression
above is written as >( +(3, *(2,5)), 7) or greaterThan( plus(3,
times(2,5)), 7).
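
Spelled out with ordinary functions (a Python sketch; the names greaterThan,
plus and times are just the illustrative names from the sentence above):

def greaterThan(a, b): return a > b
def plus(a, b):        return a + b
def times(a, b):       return a * b

# No operators and no precedence: the nesting alone expresses the grouping.
print(greaterThan(plus(3, times(2, 5)), 7))  # True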

For lisps in particular, their fully functional notation is
historically termed sexp (short for S-Expression, where S stands for
Symbolic). It is sometimes known as Fully Parenthesized Notation. For
example, in lisp the form is (f a b c ...). The above example becomes:
(> (+ 3 (* 2 5)) 7).

The common concepts of prefix, postfix, and infix are notions of
algebraic notations only. In Full Functional Notation there are no
operators, therefore there is no positioning to talk about. A function's
arguments are simply written out explicitly inside a pair of enclosing
delimiters.

Another way to see that lisp notation is not pre-anything is to
realize that the head f in (f a b c) could be defined to be placed
anywhere, e.g. (a b c f) or even (a f b c), and the syntactical
analysis would remain the same. In the language Mathematica, lisp's
(f a b c) would be written as f[a,b,c], where the argument enclosure
symbols are square brackets instead of parentheses, the argument
separator is a comma instead of a space, and the function symbol (aka
head) is placed outside and in front of the argument enclosure symbols.

The reason for the misconception that lisp notations are prefix is
that the head appears as the first element inside the enclosing
parentheses. Such use of the term prefix engenders confusion, because
the significance of the term lies in algebraic notation systems that
involve the concept of operators.

A side note: the terminology Algebraic Notation is a misnomer. It
seems to imply that such notations have something to do with the
branch of math called algebra while other notation systems do not. The
name Algebraic Notation is used because when the science of algebra
was young, around the 1700s, mathematicians were dealing with
equations using symbols like + and =, written out much as we use them
today. This was before the systematic investigation of notation
systems necessitated by the studies of logic in the 1800s and of
computer languages in the 1900s. So, when notation systems were
actually invented, the conventional way of infixing + and = became
known as algebraic because that is what people think of when seeing
them.

--------
This post is part of a 3-part exposition:
The Concepts and Confusions of Prefix, Infix, Postfix and Fully
Functional Notations,
Prefix, Infix, Postfix notations in Mathematica,
How Purely Nested Notation Limits The Language's Utility,
available at:
http://xahlee.org/UnixResource_dir/writ/notations.html

  Xah
  x@xahlee.org
http://xahlee.org/

["Followup-To:" header set to comp.lang.lisp.]

On 2007-05-23, Xah Lee <x@xahlee.org> wrote:

> The Concepts and Confusions of Prefix, Infix, Postfix and Fully
> Functional Notations

> Xah Lee, 2006-03-15

Xah, why do you post year-old essays to newsgroups that couldn't care
less about them?

> On 2007-05-23, Xah Lee <x@xahlee.org> wrote:
>> The Concepts and Confusions of Prefix, Infix, Postfix and Fully
>> Functional Notations

>> Xah Lee, 2006-03-15

> Xah, why do you post year-old essays to newsgroups that couldn't care
> less about them?

And even more to the point -- why does he now post again the same
drivel he already posted on the 9th of May 2007? And will we now be
treated to repeats of his garbage every 2 weeks?

The answer to your question is very simple: Xah Lee is a troll.

Regards -- Markus

Markus E Leypold wrote:
> The answer to your question is very simple: Xah Lee is a troll.

In this context, I believe he is marketing/advertising himself as a
consultant and some kind of vampiric man-whore according to this page:

  http://xahlee.org/PageTwo_dir/Personal_dir/xah.html

"... I'm technically American. Love me and I can make you American."

Xah is perhaps the world's first person to claim to be both a Lisp
programmer and "strong at siring". :-)

Anyway, are there any libraries to do hardware accelerated vector graphics
in Perl, Python, Lisp, Java or any functional language (except OCaml and F#
and excluding WPF and Silverlight)?

--
Dr Jon D Harrop, Flying Frog Consultancy
The F#.NET Journal
http://www.ffconsultancy.com/products/fsharp_journal/?u7

On Tue, 29 May 2007, Jon Harrop wrote:
> Anyway, are there any libraries to do hardware accelerated vector graphics
> in Perl, Python, Lisp, Java or any functional language (except OCaml and F#
> and excluding WPF and Silverlight)?

I believe there are OpenGL bindings for quite a few languages; here are two
for Java:

https://jogl.dev.java.net/
http://www.lwjgl.org/

- Ville Oikarinen

>>>>> "Jon" == Jon Harrop <j@ffconsultancy.com> writes:

  Jon> Anyway, are there any libraries to do hardware accelerated
  Jon> vector graphics in Perl, Python, Lisp, Java or any functional
  Jon> language (except OCaml and F# and excluding WPF and
  Jon> Silverlight)?

I guess the OpenGL binding for Erlang qualifies. The best
exhibit of this would be Wings3D, an Open Source 3D
graphics modeller, written in Erlang, and with quite a
large user base.

http://www.wings3d.com

BR,
Ulf W

--
Ulf Wiger, Senior Specialist,
   / / /   Architecture & Design of Carrier-Class Software
  / / /    Team Leader, Software Characteristics
 / / /     Ericsson AB, IMS Gateways

Jon Harrop <j@ffconsultancy.com> wrote:

>Anyway, are there any libraries to do hardware accelerated vector graphics
>in Perl, Python, Lisp, Java or any functional language (except OCaml and F#
>and excluding WPF and Silverlight)?

http://www.cairographics.org/bindings/
That covers all the languages you named, plus O'Caml and Haskell.

Tony.
--
f.a.n.finch  <d@dotat.at>  http://dotat.at/
GERMAN BIGHT: NORTH BECOMING CYCLONIC 4 OR 5, THEN WEST 5 OR 6. MODERATE OR
ROUGH. RAIN OR DRIZZLE. MODERATE OR GOOD, OCCASIONALLY POOR.

Prefix, Infix, Postfix notations in Mathematica

2000-02-21, 2007-05

[In the following essay, I discuss prefix, infix, postfix notations
and Mathematica's syntax for them. The full HTML formatted article is
available at:
http://xahlee.org/UnixResource_dir/writ/notations.html
]

THE HEAD OF EXPRESSIONS

Lisp's nested parenthesis syntax is a Functional Notation. It has the
general form of (f a b ...), where any of the symbols inside the
matching parentheses may again be of that form. For example, here is a
typical piece of code from Emacs Lisp.

; Recursively apply (f x i), where i is the ith element in the list li.
; For example, (fold f x '(1 2)) computes (f (f x 1) 2)
(defun fold (f x li)
  (let ((li2 li) (ele) (x2 x))
    (while (setq ele (pop li2))
      (setq x2 (funcall f x2 ele))
    )
    x2
  )
)
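
For comparison (a sketch of mine, not part of the original post), the same
left fold in a linear, head-outside notation, using Python's standard
functools.reduce:

from functools import reduce
import operator

# reduce(f, [e1, e2, ...], x) computes f(f(x, e1), e2) and so on -- the
# same left fold as (fold f x '(e1 e2 ...)) above.
print(reduce(operator.add, [1, 2], 10))                    # (+ (+ 10 1) 2) => 13
print(reduce(lambda acc, e: 10 * acc + e, [1, 2, 3], 0))   # builds 123 digit by digit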

The vast majority of computer languages interpret source code in a one-
dimensional, linear manner, namely from left to right, line by line,
as in written text. (Examples of computer language source code that is
not linear in nature are spreadsheets, cellular automata, and graphical
programming languages.) For languages that interpret source code
linearly, the logic of their syntax necessarily has a hierarchical
(i.e. tree) structure. Lisp's notation is the most effective at
visually showing the logic of the syntax. This is because a function
and its arguments are simply laid out inside a parenthesis pair, and
the level of nesting corresponds to the precedence in evaluating the
expression.

The first element inside the matching parentheses is called the
head of the expression. For example, in (f a b), f is the
head. The head is a function, and the rest of the symbols inside the
matching parentheses are its arguments.

The head in lisp's notation need not be defined as the first
element inside the parentheses. For example, we could define the head
to be the last element inside the parentheses, so that we write (arg1
arg2 ... f) instead of the usual (f arg1 arg2 ...), and the
syntactical analysis remains unchanged. Likewise, we could move the
head outside of the parentheses.
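
To make this concrete, here is a toy evaluator (a Python sketch of my own,
not taken from any lisp implementation) that treats the last element of each
list as the head; the syntactic analysis is exactly the same as in the
head-first form:

import operator

ENV = {"+": operator.add, "*": operator.mul, ">": operator.gt}

def eval_head_last(expr):
    # Evaluate nested lists of the form (arg1 arg2 ... f), where the
    # head f is the *last* element instead of the first.
    if not isinstance(expr, list):
        return expr
    *args, head = expr
    return ENV[head](*[eval_head_last(a) for a in args])

# (> (+ 3 (* 2 5)) 7) written head-last: ((3 (2 5 *) +) 7 >)
print(eval_head_last([[3, [2, 5, "*"], "+"], 7, ">"]))  # True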

In Mathematica, the head is placed in front of the brackets, and
square brackets are used instead of parentheses as the enclosing
delimiters. For example, lisp's (f a b c) is syntactically equivalent
to Mathematica's f[a,b,c]. Other examples: (sin θ) vs Sin[θ],
(map f list) vs Map[f,list]. Placing the head in front of the
matching brackets makes the notation more familiar, because it is the
conventional math notation.

However, there is a disadvantage in moving the head of an expression
from inside the matching brackets to outside: the nesting of
the matching delimiters no longer corresponds to the logic of the
syntax when the head is itself a compound expression.

For example, suppose Reflection(vectorV,pointP) is a function that
returns a function f, such that f(graphicsData) will reflect the
graphicsData along a line passing through pointP and parallel to
vectorV. In lisp, we would write ((Reflection vectorV pointP)
graphicsData). In Mathematica, we would write
Reflection[vectorV,pointP][graphicsData]. In lisp's version, the
nesting corresponds to the logic of the evaluation. In the Mathematica
form, that is no longer so.

For another example, suppose Deriv is a function that takes a function
f and returns a function g (the derivative of f), and we want to apply
g to a variable x. In lisp, we would write ((Deriv f) x). In
Mathematica, we would write Deriv[f][x]. In lisp's version, the
nesting corresponds to the logic of the evaluation. In the Mathematica
form, the logic of the evaluation no longer corresponds to the nesting
level, because now the head is outside of the enclosing delimiters, so
the heads of expressions no longer nest.
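
The same head-outside pattern appears in any language with first-class
functions. A Python sketch (deriv here is a crude central-difference
approximation, used only to show the shape of the calls):

def deriv(f, h=1e-6):
    # Return a function g approximating the derivative of f.
    def g(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return g

def square(x):
    return x * x

# The head of the outer call, deriv(square), is itself a compound
# expression sitting outside the argument brackets, so the bracket
# nesting no longer mirrors the order of evaluation.
print(deriv(square)(5.0))  # approximately 10.0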

PREFIX, POSTFIX, INFIX

A prefix notation in Mathematica is written as f@arg.
Essentially, prefix notation in this context is limited to
functions of only one argument. For example: f@a@b@c is equivalent
to f[a[b[c]]], or in lisp (f (a (b c))). Mathematica also offers a
postfix notation using the operator //. For example, c//b//a//f is
syntactically equivalent to f[a[b[c]]]. (Unix's pipe syntax | is
a form of postfix notation, e.g. c | b | a | f.)

For example, Sin[List[1,2,3]] can be written in postfix as
List[1,2,3]//Sin, or in prefix as Sin@List[1,2,3]. (By the way, these
are semantically equivalent to Map[Sin, List[1,2,3]] in Mathematica.)
For infix notation, the function symbol is placed between its arguments.
In Mathematica, the generic form for infix notation sandwiches the
function name between tilde symbols, e.g.
Join[List[1,2],List[3,4]] is syntactically equivalent to List[1,2]
~Join~ List[3,4].

Beyond the systematic constructs mentioned above, Mathematica has
quite a lot of syntax variations. For example, Plus[a,b,c] can be
written as a+b+c, Plus[a+b,c], Plus[Plus[a,b],c], or
(a+b)~Plus~c. List[a,b,c] can be written as {a,b,c}, and
Map[f,List[a,b,c]] can be written as f /@ {a,b,c}.

The gist is that certain functions are given a special syntactical
construct to emulate the irregular and inefficient but nevertheless
well-understood conventional notations. This also reduces the deep
nesting that is difficult to type and manage. For example, the
Plus function is given an operator +, so that Plus[3,4] can be
written with the more familiar 3+4. The List function is given the
syntax construct {}, so that List[3,4] can be more easily
written as {3,4}. The boolean And function is given the operator
&&, so that And[a,b] can be written with the more familiar and
convenient a && b. Combined, all these types of syntax variations
can make the source code easier to read than a purely nested
structure. For example, a common math expression such as 3+2*5>7
does not have to be written as Greater[Plus[3,Times[2,5]],7] or the
lispy (> (+ 3 (* 2 5)) 7).
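
The same trade-off shows up in most languages. In Python, for instance, the
familiar operators are sugar over plain function calls (a small illustration
using the standard operator module):

import operator

# The conventional infix form, with the usual precedence rules:
print(3 + 2 * 5 > 7)                                        # True
# The same expression with no operator sugar at all:
print(operator.gt(operator.add(3, operator.mul(2, 5)), 7))  # True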

C AND PERL

When we say that C is an infix notation language, the term infix
notation is used loosely for convenience of description. C and the
languages whose syntax derives from it (e.g. C++, Java, Perl,
Javascript...) are not based on a notation system, but take the
approach of an ad hoc syntax soup. Things like i++, ++i, for(;;){},
while(){}, 0x123, sprintf(...%s...,...) are syntax whimsies.

As a side note, the Perl mongers are proud of their slogan There's
More Than One Way To Do It and their gazillion ad hoc syntax sugars,
but unaware that in functional languages (such as Mathematica,
Haskell, Lisp) there are consistent and generalized constructs that
can generate far more syntax variations than the ad hoc, inflexible
Perl, both in theory and in practice. (In lisp, its power of syntax
variation comes in the guise of macros.) And, more importantly,
Perlers clamor about Perl's expressiveness more or less on the
syntax level, but don't realize that semantic expressibility is far
more important.

  Xah
  x@xahlee.org
http://xahlee.org/

How Purely Nested Notation Limits The Language's Utility

[The full HTML formatted article is available at:
http://xahlee.org/UnixResource_dir/writ/notations.html
]

2007-05-03

There is a common complaint by programmers about lisp's notation of
nested parentheses being unnatural or difficult to read. Long-time
lisp programmers often counter that it is a matter of conditioning,
or blame the use of inferior text editors that are not designed to
display nested notations. In the following, I describe how lisp
notation is actually a problem, on several levels.

(1) Some 99% of programmers are not used to the nested parenthesis
syntax. This is a practical problem. On this aspect alone, lisp's
syntax can be considered a problem.

(2) Arguably, the purely nested syntax is not natural for humans to
read. Long-time lispers may disagree on this point.

(3) Most importantly, a purely nested syntax discourages frequent or
advanced use of function sequencing or composition. This aspect is
the most devastating.

The first issue, that most programmers are not comfortable with nested
notation, is well known. It is not a technical issue. Whether it is
considered a problem of the lisp language is a matter of philosophical
disposition.

The second issue, that nested parentheses are not natural for humans
to read, may be debatable. I do think that deep nesting is a problem
for the programmer. Here is an example of two blocks of code that are
syntactically equivalent in the Mathematica language:

vectorAngle[{a1_, a2_}] := Module[{x, y},
    {x, y} = {a1, a2}/Sqrt[a1^2 + a2^2] // N;
    If[x == 0, If[Sign@y === 1, π/2, -π/2],
      If[y == 0, If[Sign@x === 1, 0, π],
        If[Sign@y === 1, ArcCos@x, 2 π - ArcCos@x]
        ]
      ]
    ]

SetDelayed[vectorAngle[List[Pattern[a1,Blank[]],Pattern[a2,Blank[]]]],
    Module[List[x,y],
      CompoundExpression[
        Set[List[x,y],
          N[Times[List[a1,a2],
              Power[Sqrt[Plus[Power[a1,2],Power[a2,2]]],-1]]]],
        If[Equal[x,0],
          If[SameQ[Sign[y],1],Times[π,Power[2,-1]],
            Times[Times[-1,π],Power[2,-1]]],
          If[Equal[y,0],If[SameQ[Sign[x],1],0,π],
            If[SameQ[Sign[y],1],ArcCos[x],
              Plus[Times[2,π],Times[-1,ArcCos[x]]]]]]]]]

The latter uses the fully nested form (called FullForm in
Mathematica). This form is isomorphic to lisp's nested parenthesis
syntax, token for token (i.e. lisp's (f a b) is Mathematica's
f[a,b]). As you can see, this form, by the sheer number of nested
brackets, is in practice problematic to read and type. In Mathematica,
nobody really programs in this syntax. (The FullForm syntax is
there for the same language design principles shared with lisp, of
consistency and simplicity, or the commonly touted lisp advantage of
"data is program; program is data".)

The third issue, about how nested syntax seriously discourages
frequent or advanced use of inline function sequencing on the fly, is
the most important, and I'll give further explanation below.

One practical way to see how this is so is by considering unix's
shell syntax. You all know how convenient and powerful unix pipes
are. Here are some practical examples: ls -al | grep xyz, or cat a
b c | grep xyz | sort | uniq.

Now suppose we get rid of unix's pipe notation and instead replace
it with a pure functional notation: e.g. (uniq (sort (grep xyz (cat a
b c)))), or enrich it with a composition function and a pure function
construct (λ), so this example can be written as: ((composition uniq
sort (lambda (x) (grep xyz x))) (cat a b c)).
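
Here is what that looks like when spelled out (a Python sketch; compose and
the cat/grep/sort/uniq stand-ins are toy functions over lists of strings,
used only to mirror the shell example):

from functools import reduce

def compose(*fs):
    # compose(f, g, h)(x) == f(g(h(x)))
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fs), x)

def cat(*chunks):   return [line for chunk in chunks for line in chunk]
def grep(pattern):  return lambda lines: [l for l in lines if pattern in l]
def sort(lines):    return sorted(lines)
def uniq(lines):    return [l for i, l in enumerate(lines) if i == 0 or l != lines[i - 1]]

a, b, c = ["xyz 2", "foo"], ["xyz 1"], ["xyz 2"]

# Pipe style, read left to right:  cat a b c | grep xyz | sort | uniq
# Nested functional style, read inside out:
print(uniq(sort(grep("xyz")(cat(a, b, c)))))            # ['xyz 1', 'xyz 2']
# Composition style:
print(compose(uniq, sort, grep("xyz"))(cat(a, b, c)))   # ['xyz 1', 'xyz 2']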

You see how this change, although syntactically equivalent to the
pipe | (or semantically equivalent in the example using function
composition), will, due to the cumbersome nested syntax, force a
change in the nature of the code programmers produce. Namely, the
frequency of inline sequencing of functions on the fly will probably
be reduced; instead, there will be more code that defines functions
with temp variables and applies them just once, as in traditional
languages.

A language's syntax or notation system has a major impact on the kind
of code, style, and thinking patterns of the language's users. This is
a well-known fact for those acquainted with the history of math
notations.

The sequential notation f@g@h@x, or x//h//g//f, or the unixy
x|h|g|f, is far more convenient and easier to decipher than (f (g (h
x))) or ((composition f g h) x). In actual code, any of f, g, h
might be a complex pure function (aka lambda construct, full of
parentheses itself).
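
A rough sketch of how a postfix pipe can be grafted onto a language with
operator overloading (Python; the Pipe wrapper is purely illustrative):

import math

class Pipe:
    # Wrap a value so that `value | f` means `f(value)`.
    def __init__(self, value):
        self.value = value
    def __or__(self, f):
        return Pipe(f(self.value))

# The analogue of x//h//g//f or the unix-style x | h | g | f:
result = Pipe(0.5) | math.asin | math.degrees | round
print(result.value)  # 30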

Lisp, by sticking with an almost uniform nested parenthesis notation,
immediately reduces the pattern of sequencing functions, simply
because the syntax does not lend itself to it as readily as the
unixy x|h|g|f. Programmers who are aware of the coding pattern
of sequencing functions now either need to think in terms of a
separate composition construct, or subject themselves to the
problematic typing and deciphering of nested parentheses.

(Note: Lisp's sexp is actually not that pure. It has ad hoc syntax
equivalents such as the quote construct '(a b c), and also the `,
#, ,@ constructs, precisely for the purpose of reducing
parentheses and increasing readability. Scheme's upcoming standard,
R6RS, even proposes the introduction of [] and {} and a few other
syntax sugars to break the uniformity of nested parentheses for
legibility. Mathematica's FullForm is about as unadulterated a nested
notation as can be.)

  Xah
  x@xahlee.org
http://xahlee.org/

xah@gmail.com wrote:

[nothing relevant to Perl]

Oh no, it is back.
Did your ISP finally cancel your old account or why are you switching to a
new address?
Don't try to disguise yourself. Your 'contributions' can easily be
identified no matter what pseudonym you are using.

***PLONK AGAIN***

jue

On Jun 8, 7:30 pm, "Jürgen Exner" <jurge@hotmail.com> wrote:

> xah@gmail.com wrote:

> [nothing relevant to Perl]

Perl?? Perl is even less relevant to Java than the original post,
which admittedly has some connection to pretty much all programming
languages. (Perl, on the other hand, has no connection to any known
programming language. ;) In particular, Perl code looks more like line
noise than like code from any known programming language. ;))

Twisted wrote:
> On Jun 8, 7:30 pm, "Jürgen Exner" <jurge@hotmail.com> wrote:
>> xah@gmail.com wrote:

>> [nothing relevant to Perl]

> Perl?? Perl is even less relevant to Java than the original post,
> which admittedly has some connection to pretty much all programming
> languages. (Perl, on the other hand, has no connection to any known
> programming language. ;) In particular, Perl code looks more like line
> noise than like code from any known programming language. ;))

Hmm - I know of APL and SNOBOL.

--
Lew

On Jun 9, 6:49 am, Lew <l@noemail.lewscanon.com> wrote:

> > In particular, Perl code looks more like line
> > noise than like code from any known programming language. ;))

> Hmm - I know of APL and SNOBOL.

> --
> Lew

TECO editor commands.  I don't have direct experience with TECO, but
I've heard that a common diversion was to type random characters on
the command line, and see what the editor would do.

-- Paul

"Paul McGuire" <p@austin.rr.com> wrote in message

news:1181414625.121073.203940@g4g2000hsf.googlegroups.com...

> On Jun 9, 6:49 am, Lew <l@noemail.lewscanon.com> wrote:
>> > In particular, Perl code looks more like line
>> > noise than like code from any known programming language. ;))

>> Hmm - I know of APL and SNOBOL.

>> --
>> Lew

> TECO editor commands.  I don't have direct experience with TECO, but
> I've heard that a common diversion was to type random characters on
> the command line, and see what the editor would do.

> -- Paul

J

http://www.jsoftware.com/

On Jun 9, 8:21 pm, "BCB" <b@undisclosedlocation.net> wrote:

Oh come on! Toy languages (such as any set of editor commands) and
joke languages (ala Intercal) don't count, even if they are
technically Turing-complete. ;)

Nor does anything that was designed for the every-character-at-a-
premium punch-card era, particularly if it is, or rhymes with,
"COBOL".

Those have excuses, like it's a joke or it's a constrained
environment. Perl, unfortunately, has no such excuses. If there were
such a thing as "embedded Perl", I'd have to hesitate here, but since
there isn't...

Neither APL nor Snobol nor J are toy or joke languages.

Twisted wrote:
>> Oh come on! Toy languages (such as any set of editor commands) and
>> joke languages (ala Intercal) don't count, even if they are
>> technically Turing-complete. ;)

>> Nor does anything that was designed for the every-character-at-a-
>> premium punch-card era, particularly if it is, or rhymes with,
>> "COBOL".

>> Those have excuses, like it's a joke or it's a constrained
>> environment. Perl, unfortunately, has no such excuses. If there were
>> such a thing as "embedded Perl", I'd have to hesitate here, but since
>> there isn't...
Larry Elmore wrote:
> Neither APL nor Snobol nor J are toy or joke languages.

Indeed.  One wonders where Perl would be if Snobol hadn't preceded it.

--
Lew

On Jun 10, 3:11 pm, Larry Elmore <ljelm@verizon.spammenot.net>
wrote:

I'd like to register my agreement.  SNOBOL was a very sophisticated
language and way ahead of its time in many ways.  While it's not
really used anymore, SNOBOL's legacy does live on in languages that
are in wide use.

APL and its successors (including J & K) are neither toys nor extinct
relics.  APL is still used in a variety of applications.  The price of
the last airline ticket you bought was probably determined by a yield
management application written in APL.  K was created in 1993 and Kx
Systems has built an incredibly valuable company on top of it.

APL's terseness has more to do with Iverson's notational goals
than with economy of characters related to punchcards.  In fact, the
dominant languages of the punchcard era (COBOL & FORTRAN) are both
pretty verbose.

Lastly, ITS TECO wasn't a joke or toy language either.  It was
psychotically terse and virtually impenetrable to later review.  But
it wasn't a toy.  When I learned to use EMACS, it was still
implemented in ITS TECO.

<snip>

> Neither APL nor Snobol nor J are toy or joke languages

I wholeheartedly agree, and did not mean to imply as much in my original
post, in which my intent was to emphasize the fact that, until you learn the
language, a J program /does/ resemble line noise! :-)

On Jun 10, 8:50 pm, "BCB" <b@undisclosedlocation.net> wrote:

> I wholeheartedly agree, and did not mean to imply as much in my original
> post, in which my intent was to emphasize the fact that, until you learn the
> language, a J program /does/ resemble line noise! :-)

Eh. This isn't right. The whole discussion was supposed to have died
after the original Perl joke, certainly after the subsequent exclusion
of joke and toy languages. I think I made it clear also that an
editor's command set, Turing-complete though it may be, constitutes a
toy language. Anyway I amend the original claim to cover joke
languages, toy languages, and any write-only languages that
mysteriously aren't considered to fall into either of the former two
categories. After all, you can't really take a language seriously if
it's either impossible to write unmaintainable code in it OR
impossible to write maintainable code in it. The one is necessarily
trivial, and the other unsuitable for anything serious, except as a
machine-compiled intermediate format or a mechanism for assuring job
security.
Twisted wrote:

> After all, you can't really take a language seriously if
> it's either impossible to write unmaintainable code in it

That's true for any language.
Substitute "not straightforward" for "impossible", and you have a
condition that actually distinguishes languages.

 > OR impossible to write maintainable code in it.

It is possible to write maintainable Perl.
It's just too easy to write unmaintainable Perl.
Also, a Perl "golfer" and an intermediate Perl programmer will have
quite different ideas about what idioms should be considered
maintainable. (I consider Perl's mantra of "many ways to express it" to
be a weakness, not a strength, since it lengthens the learning curve
considerably and doesn't buy much. YMMV.)

Regards,
Jo

On Jun 11, 2:42 am, Joachim Durchholz <j@durchholz.org> wrote:

> It is possible to write maintainable Perl.

Interesting (spoken in the tone of someone hearing about a purported
sighting of Bigfoot, or maybe a UFO).

Still, extraordinary claims require extraordinary evidence. (And no, a
fuzzy picture of something that might be a giant serpent-like thing in
the loch, or equivalent, does not constitute "extraordinary
evidence".)

Twisted wrote:

> On Jun 11, 2:42 am, Joachim Durchholz <j@durchholz.org> wrote:
>> It is possible to write maintainable Perl.

> Interesting (spoken in the tone of someone hearing about a purported
> sighting of Bigfoot, or maybe a UFO).

> Still, extraordinary claims require extraordinary evidence. (And no, a
> fuzzy picture of something that might be a giant serpent-like thing in
> the loch, or equivalent, does not constitute "extraordinary
> evidence".)

There's enough Perl code around. Including some that's been reported as
maintainable and well-maintained.
I haven't looked too deeply into it, but what I have seen from e.g.
Webmin looked quite clear and straightforward to me. (Real Programmers
can write Fortran code in any language - and they can write Pascal code
in any language...)

Perl code *can* resemble line noise. I don't like the language. I think
Larry and the Perl community have been getting some priorities very
wrong over time (and other things very right as well: take a look at the
regex redesign for Perl 6, for example - it's all shades of grey, not
black-and-white).

Regards,
Jo
