Arbitrary precision numbers are represented with the built-in BigInt type. The precision of a calculation is determined here by the number of decimal places in the value, not by significant digits. This allows leveraging the routines already available for addition, subtraction, multiplication and division. The first two are straightforward in this representation, but naive multiplication acquires an extra factor of the precision scale, while naive division loses that same factor. The functions mul() and div() remove or restore these factors as appropriate and should always be used for these two operations on arbitrary precision numbers.
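The scaling described above can be sketched as follows. The bodies of mul and div here are illustrative stand-ins for the library's functions, assuming a precision scale of 10n**20n:

```javascript
// Fixed-point BigInt arithmetic sketch, assuming 20 decimal places.
// These bodies are illustrative, not the library's actual implementation.
const precisionScale = 10n ** 20n;

// a * b carries an extra factor of the scale, so divide it back out
function mul( a, b ) { return a * b / precisionScale; }

// a / b loses a factor of the scale, so multiply it back in first
function div( a, b ) { return a * precisionScale / b; }

const x = 15n * 10n ** 19n;    // 1.5 at 20 decimal places
const y = 2n * precisionScale; // 2.0 at 20 decimal places

console.log( mul( x, y ) === 3n * precisionScale ); // true: 1.5 × 2 = 3
console.log( div( x, y ) === 75n * 10n ** 18n );    // true: 1.5 ÷ 2 = 0.75
```

Without the correction, x * y would be 3n * 10n**40n, carrying the scale twice; mul() divides one factor back out.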
Arbitrary precision is tracked by two global variables: decimals, with a default value of 20, and precisionScale, with a default value of 10n**20n. The precision can be changed at will:
setPrecisionScale( n ) — set precision of future calculations to n decimals
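A minimal sketch of how such a setter could behave, assuming it simply updates the two globals described above (the library's actual implementation may differ):

```javascript
// Hypothetical globals and setter; mirrors the description above but is
// not guaranteed to match the library's internals.
let decimals = 20;
let precisionScale = 10n ** BigInt( decimals );

function setPrecisionScale( n ) {
  decimals = n;                        // future results carry n decimals
  precisionScale = 10n ** BigInt( n ); // scale factor used by mul()/div()
}

setPrecisionScale( 50 );
console.log( precisionScale === 10n ** 50n ); // true
```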
Since JavaScript is single threaded, tracking precision globally rather than at each calculational step should pose no problems; if you do encounter one, please open an issue on GitHub.
Additional functions available:
arbitrary( x ) — convert a real or complex float to arbitrary precision or the reverse
A( x ) — shorthand alias for arbitrary( x )
getConstant( name ) — retrieve a mathematical constant at the current precision scale
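One way the float conversion can work for the real case, assuming the default 10n**20n scale; the library's arbitrary()/A() also handle complex values, and accuracy is in any case limited to the roughly 16 significant digits a float carries:

```javascript
// Sketch of float ↔ arbitrary precision conversion for real values only,
// assuming 20 decimal places. Illustrative, not the library's actual code.
const decimals = 20;
const precisionScale = 10n ** BigInt( decimals );

function arbitrary( x ) {
  if ( typeof x === 'bigint' )
    return Number( x ) / Number( precisionScale ); // reverse: BigInt → float
  return BigInt( Math.round( x * 10 ** decimals ) ); // float → scaled BigInt
}

console.log( arbitrary( 1.5 ) === 15n * 10n ** 19n ); // true
console.log( arbitrary( 15n * 10n ** 19n ) === 1.5 ); // true
```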
The two functions sqrt() and ln() already support arbitrary precision evaluation, and additional functions will follow. The arbitrary precision apparatus is also used internally to improve the evaluation of certain functions, such as expIntegralEi().
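As one illustration of how a function like sqrt() can be evaluated at arbitrary precision, Newton's iteration works directly on scaled BigInts. This sketch assumes the 10n**20n scale and is not the library's actual code:

```javascript
// Integer Newton iteration for the square root of a scaled BigInt,
// assuming 20 decimal places. Illustrative only.
const precisionScale = 10n ** 20n;

function sqrt( x ) {
  const n = x * precisionScale; // so the result stays at the same scale
  if ( n < 2n ) return n;
  let r = n, next = ( r + 1n ) / 2n;
  while ( next < r ) {          // monotone descent to the integer root
    r = next;
    next = ( r + n / r ) / 2n;
  }
  return r; // floor of the true square root at this scale
}

console.log( sqrt( 4n * precisionScale ) === 2n * precisionScale ); // true
```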