JavaScript: Web Audio API for live streaming?

Disclaimer: this page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same CC BY-SA terms and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/28440262/

Web Audio API for live streaming?

javascript, html, html5-audio, audio-streaming, web-audio-api

Asked by Tony

We need to stream live audio (from a medical device) to web browsers with no more than 3-5 s of end-to-end delay (assume 200 ms or less of network latency). Today we use a browser plugin (NPAPI) for decoding, filtering (high-pass, low-pass, band-pass), and playback of the audio stream (delivered via WebSockets).

We want to replace the plugin.

I have been looking at various Web Audio API demos, and most of our required functionality (playback, gain control, filtering) appears to be available in the Web Audio API. However, it is not clear to me whether the Web Audio API can be used for streamed sources, as most Web Audio API examples make use of short sounds and/or audio clips.

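For reference, the playback, gain and filtering pieces map onto standard Web Audio nodes. A minimal sketch of the kind of node graph involved (the cutoff frequency, Q and gain values below are just placeholders):

// Minimal Web Audio graph: source -> band-pass filter -> gain -> speakers
const context = new (window.AudioContext || window.webkitAudioContext)();

const filter = context.createBiquadFilter();
filter.type = 'bandpass';        // 'lowpass' and 'highpass' are also available
filter.frequency.value = 1000;   // placeholder centre frequency in Hz
filter.Q.value = 1;              // placeholder bandwidth

const gain = context.createGain();
gain.gain.value = 0.8;           // placeholder volume

// Any source node (buffer, media element, media stream) can feed this chain
filter.connect(gain);
gain.connect(context.destination);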

Can Web Audio API be used to play live streamed audio?

Update (11-Feb-2015):

After a bit more research and local prototyping, I am not sure live audio streaming with the Web Audio API is possible, as the Web Audio API's decodeAudioData isn't really designed to handle random chunks of audio data (in our case delivered via WebSockets). It appears to need the whole 'file' in order to process it correctly.

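For context, this is the whole-file pattern that decodeAudioData is designed for (the URL is just a placeholder); feeding it arbitrary WebSocket chunks cut from the middle of an encoded stream generally fails or glitches:

// Whole-file decode: fetch a complete encoded clip, then decode and play it
const context = new (window.AudioContext || window.webkitAudioContext)();

fetch('/audio/complete-clip.wav')                 // placeholder URL
  .then(response => response.arrayBuffer())
  .then(data => context.decodeAudioData(data))
  .then(audioBuffer => {
    const source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(context.destination);
    source.start();
  });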

See stackoverflow:

Now it is possible with createMediaElementSource to connect an <audio> element to the Web Audio API, but in my experience the <audio> element introduces a huge amount of end-to-end delay (15-30 s), and there doesn't appear to be any way to reduce the delay to below 3-5 seconds.

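For reference, the <audio>-element route looks roughly like this (a sketch; the stream URL is a placeholder). The buffering that causes the 15-30 s delay happens inside the <audio> element itself, before the Web Audio graph ever sees the samples:

const context = new (window.AudioContext || window.webkitAudioContext)();

// The element does the network buffering and decoding
const audioEl = new Audio('http://example.com/live-stream');   // placeholder URL
audioEl.play();

// Route the element's output through Web Audio for gain/filtering
const source = context.createMediaElementSource(audioEl);
source.connect(context.destination);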

I think the only solution is to use WebRTC with the Web Audio API. I was hoping to avoid WebRTC, as it will require significant changes to our server-side implementation.

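The browser side of that approach would be comparatively simple, since a remote WebRTC MediaStream can be fed straight into a Web Audio graph. A sketch using the modern ontrack event (signalling and peer-connection setup omitted; pc is assumed to be an already-negotiated RTCPeerConnection):

// `pc` is an already-negotiated RTCPeerConnection (setup not shown)
pc.ontrack = (event) => {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const source = context.createMediaStreamSource(event.streams[0]);
  // The same gain/filter chain as before could sit between source and output
  source.connect(context.destination);
};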

Update (12-Feb-2015) Part I:

I haven't completely eliminated the <audio> tag yet (I need to finish my prototype). Once I have ruled it out, I suspect createScriptProcessor (deprecated but still supported) will be a good choice for our environment, as I could 'stream' our ADPCM data to the browser (via WebSockets) and then convert it to PCM in JavaScript, similar to what Scott's library (see below) does with createScriptProcessor. This method doesn't require the data to arrive in properly sized 'chunks' with critical timing, as the decodeAudioData approach does.

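A rough sketch of that idea, assuming a decodeAdpcmToFloat32() helper of our own (hypothetical) that turns one ADPCM WebSocket message into Float32 PCM samples; the URL, buffer size and mono output are placeholders:

const context = new (window.AudioContext || window.webkitAudioContext)();
const sampleQueue = [];   // decoded PCM samples waiting to be played

const ws = new WebSocket('ws://example.com/adpcm');   // placeholder URL
ws.binaryType = 'arraybuffer';
ws.onmessage = (msg) => {
  // decodeAdpcmToFloat32 is our own (hypothetical) ADPCM -> PCM decoder
  const pcm = decodeAdpcmToFloat32(msg.data);
  for (let i = 0; i < pcm.length; i++) { sampleQueue.push(pcm[i]); }
};

// 4096-frame mono processor; createScriptProcessor is deprecated but still supported
const processor = context.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (e) => {
  const out = e.outputBuffer.getChannelData(0);
  for (let i = 0; i < out.length; i++) {
    // Emit silence if the network hasn't kept up (a ring buffer would be
    // more efficient than shift(), but this keeps the sketch short)
    out[i] = sampleQueue.length ? sampleQueue.shift() : 0;
  }
};
processor.connect(context.destination);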

Update (12-Feb-2015) Part II:

After more testing, I eliminated the <audio>-element-to-Web-Audio-API approach because, depending on the source type, compression and browser, the end-to-end delay can be 3-30 s. That leaves the createScriptProcessor method (see Scott's answer below) or WebRTC. After discussing it with our decision makers, we have decided to take the WebRTC approach. I assume it will work, but it will require changes to our server-side code.

I'm going to mark the first answer, just so the 'question' is closed.

Thanks for listening. Feel free to add comments as needed.

Accepted answer by Kevin Ennis

Yes, the Web Audio API (along with AJAX or Websockets) can be used for streaming.

Basically, you pull down (or send, in the case of WebSockets) chunks of length n. Then you decode them with the Web Audio API and queue them up to be played, one after the other.

Because the Web Audio API has high-precision timing, you won't hear any "seams" between the playback of each buffer if you do the scheduling correctly.

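The key to seamless playback is to schedule each chunk at an explicit time on the context's clock rather than waiting for the previous one to finish. A sketch, assuming each decoded chunk arrives as an AudioBuffer:

const context = new (window.AudioContext || window.webkitAudioContext)();
let playhead = 0;   // absolute context time at which the next chunk should start

function enqueue(audioBuffer) {
  const source = context.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(context.destination);

  // Start slightly in the future the first time (and after an underrun),
  // otherwise exactly where the previous chunk ends
  playhead = Math.max(playhead, context.currentTime + 0.05);
  source.start(playhead);
  playhead += audioBuffer.duration;
}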

Answer by Scott Stensland

I wrote a streaming Web Audio API system in which web workers handle all of the WebSocket management and communicate with node.js, so the browser's main thread simply renders the audio. It works just fine on laptops; since mobile browsers are behind on their implementation of WebSockets inside web workers, you need at least Android Lollipop for it to run as coded. I posted the full source code here.

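The general shape of that split (not the posted code itself, just a sketch of the idea; the worker file name, URL and handleAudioChunk() are placeholders) is a worker that owns the WebSocket and hands raw audio to the page, which does nothing but render it:

// socket-worker.js (hypothetical file) -- runs off the main thread
const ws = new WebSocket('ws://example.com/audio');   // placeholder URL
ws.binaryType = 'arraybuffer';
ws.onmessage = (msg) => {
  // Hand the raw chunk to the page; transferring it avoids a copy
  postMessage(msg.data, [msg.data]);
};

// main page -- only renders audio
const worker = new Worker('socket-worker.js');
worker.onmessage = (event) => {
  handleAudioChunk(event.data);   // hypothetical: decode/queue for playback
};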

Answer by Jan Swart

To elaborate on the comments about how to play a series of separate buffers stored in an array by shifting the next one off the front each time:

If you create a buffer source through createBufferSource(), it has an onended event to which you can attach a callback, which will fire when the buffer has reached its end. You can do something like this to play the various chunks in the array one after the other:

function play() {
  // `context` is an AudioContext and `audiobuffer` is an array of decoded AudioBuffers
  // end of stream has been reached
  if (audiobuffer.length === 0) { return; }
  let source = context.createBufferSource();

  // get the next buffer that should play
  source.buffer = audiobuffer.shift();
  source.connect(context.destination);

  // add this function as a callback to play the next buffer
  // when the current buffer has reached its end
  source.onended = play;
  source.start();
}

Hope that helps. I'm still experimenting with how to get this all smooth and ironed out, but this is a good start, and it's something that is missing from a lot of the online posts.

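For completeness, one way the audiobuffer queue used above might be filled (a sketch, assuming each WebSocket message is an independently decodable chunk; the URL is a placeholder):

const context = new (window.AudioContext || window.webkitAudioContext)();
const audiobuffer = [];

const ws = new WebSocket('ws://example.com/audio');   // placeholder URL
ws.binaryType = 'arraybuffer';
ws.onmessage = (msg) => {
  context.decodeAudioData(msg.data, (decoded) => {
    const wasEmpty = audiobuffer.length === 0;
    audiobuffer.push(decoded);
    if (wasEmpty) { play(); }   // kick off playback once data is available
  });
};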