C: How does C compute sin() and other math functions?

Disclaimer: this page is a Chinese/English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA terms and attribute it to the original authors (not me): StackOverflow. Original: http://stackoverflow.com/questions/2284860/

Date: 2020-09-02 04:32:02. Source: igfitidea.

How does C compute sin() and other math functions?

Tags: c, math, trigonometry

Asked by Hank

I've been poring through .NET disassemblies and the GCC source code, but can't seem to find anywhere the actual implementation of sin() and other math functions... they always seem to be referencing something else.


Can anyone help me find them? I feel like it's unlikely that ALL hardware that C will run on supports trig functions in hardware, so there must be a software algorithm somewhere, right?




I'm aware of several ways that functions can be calculated, and have written my own routines to compute functions using Taylor series for fun. I'm curious about how real, production languages do it, since all of my implementations are always several orders of magnitude slower, even though I think my algorithms are pretty clever (obviously they're not).


Accepted answer by Jason Orendorff

In GNU libm, the implementation of sin is system-dependent. Therefore you can find the implementation, for each platform, somewhere in the appropriate subdirectory of sysdeps.


One directory includes an implementation in C, contributed by IBM. Since October 2011, this is the code that actually runs when you call sin() on a typical x86-64 Linux system. It is apparently faster than the fsin assembly instruction. Source code: sysdeps/ieee754/dbl-64/s_sin.c; look for __sin (double x).


This code is very complex. No one software algorithm is as fast as possible and also accurate over the whole range of x values, so the library implements several different algorithms, and its first job is to look at x and decide which algorithm to use.


  • When x is very, very close to 0, sin(x) == x is the right answer.

  • A bit further out, sin(x) uses the familiar Taylor series. However, this is only accurate near 0, so...

  • When the angle is more than about 7°, a different algorithm is used, computing Taylor-series approximations for both sin(x) and cos(x), then using values from a precomputed table to refine the approximation.

  • When |x| > 2, none of the above algorithms would work, so the code starts by computing some value closer to 0 that can be fed to sin or cos instead.

  • There's yet another branch to deal with x being a NaN or infinity.


This code uses some numerical hacks I've never seen before, though for all I know they might be well-known among floating-point experts. Sometimes a few lines of code would take several paragraphs to explain. For example, these two lines


double t = (x * hpinv + toint);
double xn = t - toint;

are used (sometimes) in reducing x to a value close to 0 that differs from x by a multiple of π/2, specifically xn × π/2. The way this is done without division or branching is rather clever. But there's no comment at all!

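The trick is the classic add-a-big-constant rounding shifter. A hedged sketch, assuming hpinv is 2/π and toint is 1.5 × 2^52 (the actual constants in the source may differ):

```c
/* Round x * hpinv to the nearest integer without a branch or a division.
   Adding toint = 1.5 * 2^52 pushes the integer part into the lowest
   mantissa bits, so the FPU's round-to-nearest mode does the rounding;
   subtracting toint recovers the rounded integer. */
static const double hpinv = 0.63661977236758134308;  /* 2/pi (assumed) */
static const double toint = 6755399441055744.0;      /* 1.5 * 2^52 */

static double nearest_multiple(double x)
{
    double t  = x * hpinv + toint;
    double xn = t - toint;   /* nearest integer to x * 2/pi */
    return xn;               /* x - xn * pi/2 is then close to 0 */
}
```

For x = 3.0, x * hpinv is about 1.91, so xn comes out as 2: three radians is closest to two quarter-turns.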



Older 32-bit versions of GCC/glibc used the fsin instruction, which is surprisingly inaccurate for some inputs. There's a fascinating blog post illustrating this with just 2 lines of code.


fdlibm's implementation of sin in pure C is much simpler than glibc's and is nicely commented. Source code: fdlibm/s_sin.c and fdlibm/k_sin.c.

fdlibmsin在纯 C 中的实现比 glibc 简单得多,并且有很好的注释。源代码:fdlibm/s_sin.cfdlibm/k_sin.c

Answered by John D. Cook

Functions like sine and cosine are implemented in microcode inside microprocessors. Intel chips, for example, have assembly instructions for these. A C compiler will generate code that calls these assembly instructions. (By contrast, a Java compiler will not. Java evaluates trig functions in software rather than hardware, and so it runs much slower.)


Chips do not use Taylor series to compute trig functions, at least not entirely. First of all they use CORDIC, but they may also use a short Taylor series to polish up the result of CORDIC, or for special cases such as computing sine with high relative accuracy for very small angles. For more explanation, see this StackOverflow answer.


回答by Donald Murray

OK kiddies, time for the pros.... This is one of my biggest complaints with inexperienced software engineers. They come in calculating transcendental functions from scratch (using Taylor series) as if nobody had ever done these calculations before in their lives. Not true. This is a well-defined problem, has been approached thousands of times by very clever software and hardware engineers, and has a well-defined solution. Basically, most of the transcendental functions use Chebyshev polynomials to calculate them. Which polynomials are used depends on the circumstances.

First, the bible on this matter is a book called "Computer Approximations" by Hart and Cheney. In that book, you can decide if you have a hardware adder, multiplier, divider, etc., and decide which operations are fastest. E.g. if you had a really fast divider, the fastest way to calculate sine might be P1(x)/P2(x), where P1 and P2 are Chebyshev polynomials. Without the fast divider, it might be just P(x), where P has many more terms than P1 or P2... so it'd be slower. So, the first step is to determine your hardware and what it can do. Then you choose the appropriate combination of Chebyshev polynomials (usually of the form cos(ax) = aP(x) for cosine, for example, again where P is a Chebyshev polynomial).

Then you decide what decimal precision you want. E.g. if you want 7 digits of precision, you look that up in the appropriate table in the book I mentioned, and it will give you (for precision = 7.33) a number N = 4 and a polynomial number 3502. N is the order of the polynomial (so it's p4.x^4 + p3.x^3 + p2.x^2 + p1.x + p0), because N = 4. Then you look up the actual values of p4, p3, p2, p1, p0 in the back of the book under 3502 (they'll be in floating point). Then you implement your algorithm in software in the form (((p4.x + p3).x + p2).x + p1).x + p0 ... and this is how you'd calculate cosine to 7 decimal places on that hardware.

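That nested form is Horner's method. A sketch of the evaluation scheme using plain Taylor coefficients for cosine (these are NOT the minimax values from the Hart & Cheney tables, which I don't have at hand; they only illustrate the structure):

```c
#include <math.h>

/* Degree-4 polynomial in z = x^2 approximating cos(x), evaluated in
   Horner form: one multiply and one add per coefficient. */
static double cos_poly(double x)
{
    const double p0 =  1.0;          /* truncated Taylor coefficients, */
    const double p1 = -1.0 / 2;      /* stand-ins for the book's table */
    const double p2 =  1.0 / 24;
    const double p3 = -1.0 / 720;
    const double p4 =  1.0 / 40320;
    double z = x * x;
    return (((p4 * z + p3) * z + p2) * z + p1) * z + p0;
}
```

With proper minimax coefficients the same five-coefficient structure buys noticeably more accuracy over the target interval than the Taylor values used here.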

Note that most hardware implementations of transcendental operations in an FPU usually involve some microcode and operations like this (depends on the hardware). Chebyshev polynomials are used for most transcendentals, but not all. E.g. square root is faster to compute with a double iteration of the Newton-Raphson method using a lookup table first. Again, that book "Computer Approximations" will tell you that.


If you plan on implementing these functions, I'd recommend to anyone that they get a copy of that book. It really is the bible for these kinds of algorithms. Note that there are bunches of alternative means for calculating these values, like CORDIC, etc., but these tend to be best for specific algorithms where you only need low precision. To guarantee the precision every time, the Chebyshev polynomials are the way to go. Like I said, well-defined problem. Has been solved for 50 years now... and that's how it's done.


Now, that being said, there are techniques whereby the Chebyshev polynomials can be used to get a single precision result with a low degree polynomial (like the example for cosine above). Then, there are other techniques to interpolate between values to increase the accuracy without having to go to a much larger polynomial, such as "Gal's Accurate Tables Method". This latter technique is what the post referring to the ACM literature is referring to. But ultimately, the Chebyshev Polynomials are what are used to get 90% of the way there.


Enjoy.


Answered by Blindy

For sin specifically, using a Taylor expansion would give you:


sin(x) := x - x^3/3! + x^5/5! - x^7/7! + ... (1)

you would keep adding terms until either the difference between successive sums is lower than an accepted tolerance level, or just for a finite number of steps (faster, but less precise). An example would be something like:


// Sums the first 5 terms of the Taylor series x - x^3/3! + x^5/5! - ...
// Note: defining your own `sin` collides with the declaration in <math.h>;
// rename it (e.g. taylor_sin) if you include that header.
float sin(float x)
{
  float res = 0, pow = x, fact = 1;
  for (int i = 0; i < 5; ++i)
  {
    res += pow / fact;                 // add the current term pow/fact
    pow *= -1 * x * x;                 // next odd power, with alternating sign
    fact *= (2*(i+1)) * (2*(i+1)+1);   // next odd factorial: *(2i+2)(2i+3)
  }

  return res;
}

Note: (1) works because of the approximation sin(x) ≈ x for small angles. For bigger angles you need to calculate more and more terms to get acceptable results. You can use a while loop and continue until a certain accuracy is reached:


#include <math.h>   /* for fabs(); note this definition then shadows libm's sin */

double sin(double x){
    int i = 1;
    double cur  = x;   /* running sum, starts with the first term x */
    double acc  = 1;   /* most recently added term */
    double fact = 1;   /* (2i+1)!, built up incrementally */
    double pow  = x;   /* x^(2i+1), with alternating sign */
    while (fabs(acc) > .00000001 && i < 100){
        fact *= ((2*i) * (2*i+1));
        pow  *= -1 * x * x;
        acc   = pow / fact;
        cur  += acc;
        i++;
    }
    return cur;
}

Answered by Mehrdad Afshari

Yes, there are software algorithms for calculating sin too. Basically, calculating these kinds of things with a digital computer is usually done using numerical methods, like approximating the Taylor series representing the function.


Numerical methods can approximate functions to an arbitrary amount of accuracy, and since the amount of accuracy you have in a floating-point number is finite, they suit these tasks pretty well.


Answered by Hannoun Yassir

Use the Taylor series and try to find a relation between the terms of the series so you don't calculate things again and again.


Here is an example for cosine:


#include <math.h>   /* for fabs() */

double cosinus(double x, double prec)
{
    double t, s;
    int p;
    p = 0;
    s = 1.0;    /* running sum, starts with the first term 1 */
    t = 1.0;    /* current term of the series */
    while(fabs(t/s) > prec)
    {
        p++;
        /* each term is the previous one times -x^2 / ((2p-1)(2p)) */
        t = (-t * x * x) / ((2 * p - 1) * (2 * p));
        s += t;
    }
    return s;
}

Using this we can get each new term of the sum from the one already computed (we avoid recomputing the factorial and the x^(2p) power).



Answered by Thomas Pornin

It is a complex question. Intel-like CPUs of the x86 family have a hardware implementation of the sin() function, but it is part of the x87 FPU and not used anymore in 64-bit mode (where SSE2 registers are used instead). In that mode, a software implementation is used.


There are several such implementations out there. One is in fdlibm and is used in Java. As far as I know, the glibc implementation contains parts of fdlibm, and other parts contributed by IBM.


Software implementations of transcendental functions such as sin() typically use approximations by polynomials, often obtained from Taylor series.


Answered by gnasher729

Chebyshev polynomials, as mentioned in another answer, are the polynomials where the largest difference between the function and the polynomial is as small as possible. That is an excellent start.


In some cases, the maximum error is not what you are interested in, but the maximum relative error. For example for the sine function, the error near x = 0 should be much smaller than for larger values; you want a small relative error. So you would calculate the Chebyshev polynomial for sin x / x, and multiply that polynomial by x.


Next you have to figure out how to evaluate the polynomial. You want to evaluate it in such a way that the intermediate values are small and therefore rounding errors are small. Otherwise the rounding errors might become a lot larger than errors in the polynomial. And with functions like the sine function, if you are careless then it may be possible that the result that you calculate for sin x is greater than the result for sin y even when x < y. So careful choice of the calculation order and calculation of upper bounds for the rounding error are needed.


For example, sin x = x - x^3/6 + x^5/120 - x^7/5040... If you calculate naively sin x = x * (1 - x^2/6 + x^4/120 - x^6/5040...), then the function in parentheses is decreasing, and it will happen that if y is the next larger number after x, then sometimes sin y will be smaller than sin x. Instead, calculate sin x = x - x^3 * (1/6 - x^2/120 + x^4/5040...), where this cannot happen.

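The two evaluation orders, side by side, with the series truncated after the x^7 term (a sketch of the point, not a production kernel):

```c
#include <math.h>

/* Naive order: the whole series as x times a decreasing polynomial. */
static double sin_naive(double x)
{
    double z = x * x;
    return x * (1 - z/6 + z*z/120 - z*z*z/5040);
}

/* Recommended order: the leading x is kept exact and only the small
   correction term goes through the polynomial. */
static double sin_better(double x)
{
    double z = x * x;
    return x - x * z * (1.0/6 - z/120 + z*z/5040);
}
```

Both are algebraically the same truncated series; they differ only in which rounding errors can perturb the dominant x term.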

When calculating Chebyshev polynomials, you usually need to round the coefficients to double precision, for example. But while a Chebyshev polynomial is optimal, the Chebyshev polynomial with coefficients rounded to double precision is not the optimal polynomial with double precision coefficients!


For example for sin(x), where you need coefficients for x, x^3, x^5, x^7 etc., you do the following: calculate the best approximation of sin x with a polynomial (ax + bx^3 + cx^5 + dx^7) at higher than double precision, then round a to double precision, giving A. The difference between a and A would be quite large. Now calculate the best approximation of (sin x - Ax) with a polynomial (bx^3 + cx^5 + dx^7). You get different coefficients, because they adapt to the difference between a and A. Round b to double precision, giving B. Then approximate (sin x - Ax - Bx^3) with a polynomial cx^5 + dx^7, and so on. You will get a polynomial that is almost as good as the original Chebyshev polynomial, but much better than Chebyshev rounded to double precision.


Next you should take into account the rounding errors in the choice of polynomial. You found a polynomial with minimum error ignoring rounding error, but you want to optimise polynomial plus rounding error. Once you have the Chebyshev polynomial, you can calculate bounds for the rounding error. Say f(x) is your function, P(x) is the polynomial, and E(x) is the rounding error. You don't want to optimise |f(x) - P(x)|; you want to optimise |f(x) - P(x) +/- E(x)|. You will get a slightly different polynomial that tries to keep the polynomial errors down where the rounding error is large, and relaxes the polynomial errors a bit where the rounding error is small.


All this will easily get you rounding errors of at most 0.55 times the last bit, where +, -, *, / have rounding errors of at most 0.50 times the last bit.


Answered by chux - Reinstate Monica

Concerning trigonometric functions like sin(), cos(), tan(): there has been no mention, after 5 years, of an important aspect of high-quality trig functions: range reduction.


An early step in any of these functions is to reduce the angle, in radians, to the range of a 2*π interval. But π is irrational, so simple reductions like x = remainder(x, 2*M_PI) introduce error, as M_PI, or machine pi, is only an approximation of π. So, how to do x = remainder(x, 2*π)?


Early libraries used extended precision or crafted programming to give quality results, but still only over a limited range of double. When a large value was requested, like sin(pow(2,30)), the results were meaningless or 0.0, maybe with an error flag set to something like TLOSS (total loss of precision) or PLOSS (partial loss of precision).


Good range reduction of large values to an interval like -π to π is a challenging problem that rivals the challenges of the basic trig function, like sin(), itself.


A good report is Argument reduction for huge arguments: Good to the last bit (1992). It covers the issue well: it discusses the need and how things were on various platforms (SPARC, PC, HP, 30+ others), and provides a solution algorithm that gives quality results for all double values from -DBL_MAX to DBL_MAX.




If the original arguments are in degrees, yet may be of a large value, use fmod() first for improved precision. A good fmod() will introduce no error and so provide excellent range reduction.


// sin(degrees2radians(x))
sin(degrees2radians(fmod(x, 360.0))); // -360.0 < fmod(x,360) < +360.0

Various trig identities and remquo() offer even more improvement. Sample: sind().


Answered by John Bode

The actual implementation of library functions is up to the specific compiler and/or library provider. Whether it's done in hardware or software, whether it's a Taylor expansion or not, etc., will vary.


I realize that's absolutely no help.
