Web Audio API: How to play a stream of MP3 chunks

Notice: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not this site). Original question: http://stackoverflow.com/questions/20134384/


Web Audio API: How to play a stream of MP3 chunks

Tags: javascript, html, audio, streaming, web-audio-api

Asked by Jonathan Byrne

So I'm trying to use the Web Audio API to decode & play MP3 file chunks streamed to the browser using Node.js & Socket.IO.


Is my only option, in this context, to create a new AudioBufferSourceNode for each audio data chunk received, or is it possible to create a single AudioBufferSourceNode for all chunks and simply append the new audio data to the end of the source node's buffer attribute?


Currently, this is how I'm receiving my MP3 chunks, decoding them, and scheduling them for playback. I have already verified that each chunk received is a valid MP3 chunk and is being successfully decoded by the Web Audio API.


var audioContext = new AudioContext();
var startTime = 0;

socket.on('chunk_received', function(data) {
    // toArrayBuffer() converts the socket payload into an ArrayBuffer for decoding
    audioContext.decodeAudioData(toArrayBuffer(data.audio), function(buffer) {
        var source = audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(audioContext.destination);

        // Queue each decoded chunk immediately after the previous one
        source.start(startTime);
        startTime += buffer.duration;
    });
});

Any advice or insight into how best to 'update' Web Audio API playback with new audio data would be greatly appreciated.


Accepted answer by Kevin Ennis

No, you can't reuse an AudioBufferSourceNode, and you can't push onto an AudioBuffer. Their lengths are immutable.


This article (http://www.html5rocks.com/en/tutorials/audio/scheduling/) has some good information about scheduling with the Web Audio API. But you're on the right track.

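Building on the look-ahead scheduling idea from that article, here is a minimal sketch (an illustration, not the answerer's code) of queueing one-shot source nodes against the context clock. It assumes the question's toArrayBuffer() helper and socket events:

var audioContext = new AudioContext();
var nextStartTime = 0;

socket.on('chunk_received', function(data) {
    audioContext.decodeAudioData(toArrayBuffer(data.audio), function(buffer) {
        // A fresh one-shot node per chunk: source nodes can't be restarted
        // and their buffers can't be extended.
        var source = audioContext.createBufferSource();
        source.buffer = buffer;
        source.connect(audioContext.destination);

        // Schedule on the context clock. If we're ahead, queue the chunk
        // back-to-back; if decoding fell behind, start slightly in the
        // future (50 ms here) instead of scheduling in the past.
        nextStartTime = Math.max(nextStartTime, audioContext.currentTime + 0.05);
        source.start(nextStartTime);
        nextStartTime += buffer.duration;
    });
});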

Answer by AnthumChris

Currently, decodeAudioData() requires complete files and cannot provide chunk-based decoding on incomplete files. The next version of the Web Audio API should provide this feature: https://github.com/WebAudio/web-audio-api/issues/337


Meanwhile, I've begun writing examples that decode audio in chunks until the new API version is available.


https://github.com/AnthumChris/fetch-stream-audio

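Those examples build on reading the response body in chunks as it arrives. A minimal sketch of that transport pattern using the Fetch API's stream reader (the URL and handleChunk() are placeholders, not part of the repository's API):

fetch('/stream.mp3').then(function(response) {
    var reader = response.body.getReader();

    function pump() {
        return reader.read().then(function(result) {
            if (result.done) return;       // stream finished
            handleChunk(result.value);     // result.value is a Uint8Array chunk
            return pump();                 // keep pulling chunks as they arrive
        });
    }
    return pump();
});

function handleChunk(bytes) {
    // Hand the bytes to a decoder once complete MP3 frames are assembled.
    console.log('received', bytes.byteLength, 'bytes');
}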

Answer by dy_

I see at least two possible approaches.


  1. Setting up a ScriptProcessorNode, which feeds a queue of received & decoded data into the real-time Web Audio flow.

  2. Exploiting the loop property of an AudioBufferSourceNode: updating the AudioBuffer's contents on the fly depending on the current audio time.


Both approaches are implemented in https://github.com/audio-lab/web-audio-stream. You can technically use that to feed received data to Web Audio. A rough sketch of the first approach is shown below.

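A minimal sketch of the first approach, assuming mono audio; sampleQueue and enqueueSamples() are illustrative names here, not that library's API:

var audioContext = new AudioContext();
var sampleQueue = [];  // Float32Array blocks of decoded PCM, pushed as chunks arrive

// 4096-frame blocks, one input channel and one output channel.
var processor = audioContext.createScriptProcessor(4096, 1, 1);

processor.onaudioprocess = function(event) {
    var output = event.outputBuffer.getChannelData(0);
    var written = 0;

    // Drain queued samples into this output block.
    while (written < output.length && sampleQueue.length > 0) {
        var block = sampleQueue[0];
        var toCopy = Math.min(block.length, output.length - written);
        output.set(block.subarray(0, toCopy), written);
        written += toCopy;
        if (toCopy === block.length) {
            sampleQueue.shift();                      // block fully consumed
        } else {
            sampleQueue[0] = block.subarray(toCopy);  // keep the remainder
        }
    }

    // Pad with silence if the queue ran dry (underrun).
    for (var i = written; i < output.length; i++) output[i] = 0;
};

processor.connect(audioContext.destination);

// Feed decoded chunks in, e.g. buffer.getChannelData(0) from decodeAudioData
function enqueueSamples(float32Samples) {
    sampleQueue.push(float32Samples);
}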