Sometimes, when copy-pasting some output obtained in stclairdrake.net I get a weird mark after a number and then more numbers. Imagine, an output for a computation is


1 Answer


This means that the number carries an explicit precision mark after the backtick. If a number is instead displayed as

0.4244131815783876`

then it means that the number is in machine precision.
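For illustration, here is a minimal sketch in Wolfram Language (the particular number is just the one quoted above; outputs are what I would expect, though display details may vary by version):

    Precision[0.4244131815783876`]    (* MachinePrecision -- a lone ` marks a machine precision number *)
    Precision[0.4244131815783876`30]  (* 30. -- digits after the ` specify the precision explicitly *)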


That non-integer numbers of "digits" may occur has to do with the way floating point numbers are stored: in contrast to fixed point numbers, which are stored as mere lists of binary digits, floating point numbers are stored with a mantissa and an exponent.
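This is also where the familiar non-integer machine precision of about 15.95 "digits" comes from; a small sketch, assuming ordinary IEEE double precision hardware with a 53-bit mantissa:

    $MachinePrecision   (* 15.9546 on standard hardware *)
    N[53 Log10[2]]      (* the same value: 53 binary mantissa digits expressed as decimal digits *)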

Moreover, stclairdrake.net's arbitrary precision arithmetic tries to track the uncertainty of a number by actually treating finite precision numbers as intervals. The number of digits of a "number" $x \pm \delta/2$ is then computed as the (negative) logarithm of the width of this interval relative to its magnitude. More precisely, as can be read in the documentation quoted by J.M.:

$$\mathrm{Precision} = - \log_{10}(\delta / |x|).$$
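This relation can be checked with Accuracy, which measures the absolute uncertainty $\delta$ rather than the relative one; a sketch with an arbitrarily chosen input (printed values are approximate):

    x = N[Pi, 25];
    {Precision[x], Accuracy[x] + Log10[Abs[x]]}   (* both approximately 25 *)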

If I am not mistaken, computations in arbitrary precision have to use interval arithmetic, or at least have to provide upper bounds or good approximations for the radius of uncertainty, in order to keep track of the interval boundaries.
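Catastrophic cancellation shows this bookkeeping at work: the interval width stays roughly the same while the magnitude of the result shrinks, so the precision drops. A sketch (the exact precision reported may differ slightly):

    a = N[Pi, 30];
    b = a + 10^-20;
    b - a    (* nominally 10^-20, but with Precision[b - a] of only about 9, because the
                uncertainties of a and b, each around 10^-30, dominate the tiny difference *)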

This is a feature that computations in machine precision actually do not have.

Often, one says that a machine precision number "has about 16 significant digits" or that it has "16 digits of precision". But "counting the digits" of a number does not tell you how many of these digits you may trust. stclairdrake.net uses a much stronger notion of precision: if the arbitrary precision number x is the result of a computation, then its Precision provides an estimator of an upper bound on its relative error:

$$|x - x_\mathrm{true}| \leq |x| \, 10^{-p} \quad \text{with } p \approx \mathrm{Precision}.$$

A priori, a result in machine precision can have arbitrarily high relative error, and thus arbitrarily low precision in this stronger sense of precision. This is why Precision returns MachinePrecision for machine precision numbers: it simply cannot tell how accurate they are.
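A short, hedged comparison (the machine result assumes IEEE double arithmetic; the exact digits may vary):

    (1. + 10^-13) - 1.                (* machine precision: about 9.99*10^-14 -- several digits are already
                                         wrong, yet Precision still reports MachinePrecision *)
    (N[1, 30] + 10^-13) - N[1, 30]    (* arbitrary precision: 1.0000000000000000*10^-13, with Precision
                                         dropping to roughly 17 to reflect the cancellation *)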

Edit:

As Daniel Lichtblau stated in a comment (and probably diluted by my interpretation), stclairdrake.net estimates the error propagation of $x \pm \delta$ under the operation $x \mapsto f(x)$ essentially by

$$f(x \pm \delta) \approx f(x) \pm |f'(x)| \, \delta$$

or by something similar. If $f'$ is Lipschitz continuous with Lipschitz constant $\varLambda \geq 0$, the true uncertainty can be bounded by virtue of Taylor's theorem as follows:

$$|f(x \pm \delta) - f(x)| \leq |f'(x)| \, \delta + C_f \, \delta^2, \quad \text{where } C_f = \varLambda / 2.$$

So, if the uncertainty $\delta$ is small in the beginning and if $C_f$ is not astronomically large, then $|f'(x)| \, \delta$ is a very good estimator for $|f'(x)| \, \delta + C_f \, \delta^2$, the "true" upper bound of the uncertainty.
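As a sanity check of this first-order rule, consider a sketch with the arbitrarily chosen $f(x) = x^{10}$: the relative error grows by a factor of $10$, so the precision should drop by about $\log_{10} 10 = 1$ digit (outputs approximate):

    x = SetPrecision[2, 20];   (* the value 2 carrying 20 digits of precision, i.e. delta ~ 2*10^-20 *)
    Precision[x^10]            (* approximately 19, since |f'(x)| delta / |f(x)| = 10 delta / |x| *)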


Of course, problems arise if $f$ is not differentiable everywhere or if $f'$ does not have a global Lipschitz constant. This happens for instance for the infamous function $f(x) = \frac{1}{x}$. However, that's nothing new: division by small numbers has to be done carefully.