Using HTML5 WebGL Shaders for Computation

Warning: this page reproduces a popular StackOverflow question under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow.

Original question: http://stackoverflow.com/questions/7410957/
Asked by skeggse
It seems to me like one could theoretically use WebGL for computation--such as computing primes or π or something along those lines. However, from what little I've seen, the shader itself isn't written in Javascript, so I have a few questions:
- What language are the shaders written in?
- Would it even be worthwhile to attempt such a thing, taking into account how shaders work?
- How does one pass variables back and forth during runtime? Or, if that isn't possible, how does one pass information back after the shader finishes executing?
- Since it isn't Javascript, how would one handle very large integers (BigInteger in Java, or a ported version in Javascript)?
- I would assume this automatically compiles the script so that it runs across all the cores in the graphics card; can I get a confirmation?
If relevant, in this specific case, I'm trying to factor fairly large numbers as part of a [very] extended compsci project.
EDIT:
- WebGL shaders are written in GLSL.
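For a sense of what GLSL looks like, here is a minimal WebGL fragment shader; the uniform name is just an illustrative choice, not anything from the question:

```glsl
// GLSL ES 1.00 fragment shader: runs once per output pixel.
precision mediump float;

uniform vec2 u_resolution; // set from JavaScript with gl.uniform2f

void main() {
  // gl_FragCoord identifies the pixel this invocation is computing.
  vec2 uv = gl_FragCoord.xy / u_resolution;

  // Each invocation writes exactly one RGBA value; this is also the
  // only "result" a fragment shader can hand back.
  gl_FragColor = vec4(uv, 0.0, 1.0);
}
```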
Accepted answer by duskwuff -inactive-
There's a project currently being worked on to do pretty much exactly what you're doing - WebCL. I don't believe it's live in any browsers yet, though.
To answer your questions:
- Already answered I guess!
- Probably not worth doing in WebGL. If you want to play around with GPU computation, you'll probably have better luck doing it outside the browser for now, as the toolchains are much more mature there.
- If you're stuck with WebGL, one approach might be to write your results into a texture and read that back (see the sketch after this list).
- With difficulty. Much like CPUs, GPUs can only work natively with values of certain sizes, and everything else has to be emulated.
- Yep.
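A minimal sketch of that texture round trip, assuming `gl` is an existing WebGLRenderingContext and that a suitable program and full-screen quad have already been set up (all names here are illustrative):

```javascript
const width = 256, height = 256;

// Create a texture to receive the computation's output.
const outputTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, outputTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

// Attach it to a framebuffer so the fragment shader renders into it
// instead of the screen.
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, outputTex, 0);

// "Compute": draw a full-screen quad so the fragment shader runs once
// per output pixel (quad and program setup omitted).
gl.viewport(0, 0, width, height);
gl.drawArrays(gl.TRIANGLES, 0, 6);

// Read the results back into JavaScript as raw bytes.
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
```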
Answered by Adrian Seeley
I've used compute shaders from JavaScript in Chrome using WebGL to solve the travelling salesman problem as a distributed set of smaller optimization problems solved in the fragment shader, and in a few other genetic optimization problems.
Problems:
You can put floats in (r: 1.00, g: 234.24234, b: -22.0), but you can only get integers out (r: 255, g: 255, b: 0). This can be overcome by encoding a single float into four output integers per fragment. That is such a heavy operation, though, that it almost defeats the purpose for 99% of problems; you're better off sticking to problems with simple integer or boolean sub-solutions.
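The encoding trick usually looks something like the following pair. This is a standard packing scheme rather than the answer's exact code, the helper names are mine, and it only handles floats in [0, 1):

```glsl
// Fragment shader side: spread one float across the four RGBA8 output bytes.
vec4 packFloat(float v) {
  vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * v);
  enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
  return enc;
}
```

```javascript
// JavaScript side: rebuild the float from the four bytes (each 0..255)
// that gl.readPixels returned for one fragment.
function unpackFloat(b0, b1, b2, b3) {
  return b0 / 255 + b1 / 65025 + b2 / 16581375 + b3 / 4228250625;
}
```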
Debugging is a nightmare of epic proportions, and community support is thin at the time of writing.
Injecting data into the shader as pixel data is VERY slow, and reading it out is even slower. To give you an example, reading and writing the data to solve a TSP problem takes 200 and 400 ms respectively, while the actual 'draw' or 'compute' time for that data is 14 ms. For the round trip to pay off, your data set has to be large in the right way.
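For reference, the injection step amounts to packing your inputs into the bytes of a texture. A rough sketch, again assuming an existing WebGLRenderingContext `gl`:

```javascript
const w = 64, h = 64;
const input = new Uint8Array(w * h * 4); // 4 bytes per texel
// ... encode your problem data into `input` here ...

const inputTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, inputTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// This upload (and the matching gl.readPixels on the way out) is where
// the 200/400 ms transfer cost mentioned above comes from.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, input);
```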
JavaScript is weakly typed (on the surface...), whereas OpenGL ES is strongly typed. To interoperate we have to use things like Int32Array or Float32Array in JavaScript, which feels awkward and constraining in a language normally touted for its freedom.
Big number support comes down to using 5 or 6 textures of input data, combining all that pixel data into a single number structure (somehow...), then operating on that big number in a meaningful way. Very hacky, not at all recommended.
Answered by byteface
I mucked around with this kind of stuff once. In answer to your 3rd question, I passed variables back and forth with 'uniforms'.
*edit - looking back now, I also used vector 'attributes' to pass data in from outside.
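A minimal sketch of both channels, assuming `gl` and a compiled and linked `program` already exist; the uniform and attribute names are invented for illustration, not taken from the repo below:

```javascript
gl.useProgram(program);

// 1. Uniforms: small values passed from JavaScript, constant per draw call.
const uSeed = gl.getUniformLocation(program, 'u_seed');
gl.uniform1f(uSeed, 42.0);

// 2. Attributes: per-vertex data streamed in from a typed-array buffer.
const data = new Float32Array([0.1, 0.2, 0.3, 0.4]);
const vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);

const aValue = gl.getAttribLocation(program, 'a_value');
gl.enableVertexAttribArray(aValue);
gl.vertexAttribPointer(aValue, 1, gl.FLOAT, false, 0, 0);
```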
You'll need to run MAMP or a similar local server for this to work locally... https://github.com/byteface/GTP/tree/master/play/simplified
I used pixels to represent letters of the alphabet and did string searching with shaders. It was amazingly fast, faster than CPU-based native search programs: for example, searching an entire book for instances of a single word was faster in the browser on the GPU than in a lightweight program like TextEdit. And I was only using a single texture.