ex 1.7 SICP


The book mentions that arithmetic operations
are almost always performed with limited
precision (this I know). It then says that
this makes the test inadequate for
very large numbers (this I don't follow).

(define (good-enough? guess x)
 (< (abs (- (square guess) x)) 0.001))

I don't understand why the above
test is insufficient for very large numbers.
That would seem to imply that precision
somehow affects large numbers but not small
ones. Are they considering 'large numbers'
to be those numbers unrepresentable by the
machine? For example, in C, the type int
has the range [INT_MIN, INT_MAX].
Using this comparison, would 'large numbers'
be referring to those that exceed INT_MAX?

The other problem I have is that it goes on
to explain an alternative way of handling this
problem. It says to 'watch' how guess changes from
one iteration to the next, and to stop when the change
is a 'very small' fraction of the guess.

Well, what do they consider to be a 'very small'
fraction? Is guess/10000 ideal? guess/1000000, etc.?

I'm just looking for feedback to clear this up
so that I can proceed with reading SICP.

Thanks.

--
conrad

I assume the values in the square root computation are floating-point
numbers: I assume < compares a floating-point number to 0.001. I assume the
divide operator yields a floating-point number. Integer arithmetic wouldn't
be very helpful for the square root of 2.

For large numbers, there are holes in the list of numbers that can be
represented in base 2. Here is a simple way to think about the holes. Assume
the mantissa can represent the numbers 0 to 15 (4 bits) and the exponent can
represent the numbers 0 to 7 (3 bits) (ignoring negative mantissas and
exponents and fancy IEEE 754 exponent encoding).

When the exponent is 6 and the mantissa is 0 to 15, we can represent the
numbers 0, 64, 128, ..., 64*15 (i.e., multiples of 2^6).
When the exponent is 7 and the mantissa is 0 to 15, we can represent the
numbers 0, 128, 256, ..., 128*15 (i.e., multiples of 2^7).

The largest number is 15 * 2^7 = 15*128 = 1920. When the exponent is 7, the
next largest number is 14*2^7 = 14*128 = 1792. When the exponent is 6, the
largest number is 15*2^6 = 15*64 = 960. The numbers between 1792 and 1920
cannot be represented. That's what I call the holes.

If you want to see what the real holes are, Hennessy and Patterson have a
nice section on floating-point representation (32-bit and 64-bit).
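
If your Scheme uses IEEE 754 doubles (an assumption about the
implementation), you can poke at a real hole directly. Near 10^16 the gap
between adjacent representable numbers is 2, so a difference of 0.001 can
never show up:

(= 1e16 (+ 1e16 0.001))  ; => #t -- 0.001 falls into a hole
(= 1e16 (+ 1e16 1.0))    ; => #t -- still rounds back to 1e16
(= 1e16 (+ 1e16 2.0))    ; => #f -- 2 is the gap width here

At that scale, (abs (- (square guess) x)) jumps in steps far bigger than
0.001, so the good-enough? test can only succeed by landing on 0 exactly,
if it succeeds at all.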

> The other problem I have is that it goes on
> to explain an alternative way of handling this
> problem. It says to 'watch' how guess changes from
> one iteration to the next, and to stop when the change
> is a 'very small' fraction of the guess.

> Well, what do they consider to be a 'very small'
> fraction? Is guess/10000 ideal? guess/1000000, etc.?

It is your choice. How precise do you want to be?
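
For what it's worth, here is a minimal sketch of that test in Scheme (my
names; the 1/1000000 ratio is an arbitrary choice, and improve/average are
the helpers from SICP section 1.1.7):

(define (average a b)
  (/ (+ a b) 2))

(define (improve guess x)      ; one Newton step, as in SICP 1.1.7
  (average guess (/ x guess)))

;; Relative test: stop when the guess changes by less than a
;; small fraction of itself, instead of comparing to a fixed 0.001.
(define (good-enough? old-guess new-guess)
  (< (abs (- new-guess old-guess))
     (* 0.000001 (abs new-guess))))

(define (sqrt-iter guess x)
  (let ((next (improve guess x)))
    (if (good-enough? guess next)
        next
        (sqrt-iter next x))))

(define (my-sqrt x)            ; my-sqrt, to avoid shadowing built-in sqrt
  (sqrt-iter 1.0 x))

Because the tolerance scales with the guess, the same definition behaves
sensibly for both very large and very small radicands.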

conrad wrote:

> The book mentions that arithmetic operations
> are almost always performed with limited
> precision (this I know). It then says that
> this makes the test inadequate for
> very large numbers (this I don't follow).

> (define (good-enough? guess x)
>  (< (abs (- (square guess) x)) 0.001))

> I don't understand why the above
> test is insufficient for very large numbers.
> That would seem to imply that precision
> somehow affects large numbers but not small
> ones.

Marlene has given a precise answer.
Here is an "intuitive" one.

Suppose the floating-point numbers work with
a precision of 4 decimal digits.

Then 12340000 and 12343333 are considered the
same number.

Now if we subtract these numbers we get 0.
And 0 is smaller than 0.001.

The moral is that it is dangerous to
subtract two numbers of nearly equal size.
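
You can reproduce this with doubles too (again assuming IEEE 754
arithmetic): both sums round to the same representable number, so the
subtraction yields exactly zero:

(- (+ 1e16 0.4) (+ 1e16 0.2))  ; => 0.0, though the true answer is 0.2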

> Are they considering 'large numbers'
> to be those numbers unrepresentable by the
> machine? For example, in C, the type int
> has the range [INT_MIN, INT_MAX].
> Using this comparison, would 'large numbers'
> be referring to those that exceed INT_MAX?

No, they mean floating-point numbers much
larger than 0.001.

--
Jens Axel Søgaard
