
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/11979528/

Date: 2020-10-26 14:57:30  Source: igfitidea

Record Audio Stream from getUserMedia

Tags: javascript, html, audio, webrtc, getusermedia

Asked by Shih-En Chou

In recent days, I have been trying to record an audio stream with JavaScript, but I could not find any example code that works.

Is there any browser that supports this?

Here is my code:

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
                         navigator.mozGetUserMedia || navigator.msGetUserMedia;

navigator.getUserMedia({ audio: true }, gotStream, null);

function gotStream(stream) {
    msgStream = stream;
    msgStreamRecorder = stream.record(); // no method record :(
}

Accepted answer by Mikael Holmgren

You could check this site: https://webaudiodemos.appspot.com/AudioRecorder/index.html


It stores the audio into a file (.wav) on the client side.


Answered by jrullmann

getUserMedia gives you access to the device, but it is up to you to record the audio. To do that, you'll want to 'listen' to the device, building up a buffer of the data. When you stop listening to the device, you can format that data as a WAV file (or any other format). Once formatted, you can upload it to your server or S3, or play it directly in the browser.

To listen to the data in a way that is useful for building your buffer, you will need a ScriptProcessorNode. A ScriptProcessorNode basically sits between the input (microphone) and the output (speakers), and gives you a chance to manipulate the audio data as it streams. Unfortunately the implementation is not straightforward.


You'll need:


  • getUserMedia to access the device
  • AudioContext to create a MediaStreamAudioSourceNode and a ScriptProcessorNode
  • MediaStreamAudioSourceNode to represent the audio stream
  • ScriptProcessorNode to get access to the streaming audio data via an onaudioprocess event. The event exposes the channel data that you'll build your buffer with.

Putting it all together:


navigator.getUserMedia({ audio: true },
  function(stream) {
    // create the MediaStreamAudioSourceNode
    var context = new AudioContext();
    var source = context.createMediaStreamSource(stream);
    var node;
    var recLength = 0,
        recBuffersL = [],
        recBuffersR = [];

    // create a ScriptProcessorNode (createJavaScriptNode is the older name)
    if (!context.createScriptProcessor) {
      node = context.createJavaScriptNode(4096, 2, 2);
    } else {
      node = context.createScriptProcessor(4096, 2, 2);
    }

    // listen to the audio data, and record into the buffers;
    // copy each chunk, because the underlying channel-data buffers
    // are reused between onaudioprocess events
    node.onaudioprocess = function(e) {
      recBuffersL.push(new Float32Array(e.inputBuffer.getChannelData(0)));
      recBuffersR.push(new Float32Array(e.inputBuffer.getChannelData(1)));
      recLength += e.inputBuffer.getChannelData(0).length;
    };

    // connect the ScriptProcessorNode to the input audio
    source.connect(node);
    // if the ScriptProcessorNode is not connected to an output,
    // the "onaudioprocess" event is not triggered in Chrome
    node.connect(context.destination);
  },
  function(e) {
    // handle errors (e.g. permission denied)
  });
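To turn the recorded left/right chunks into a .wav file, the buffers are flattened, interleaved, converted to 16-bit PCM, and prefixed with a 44-byte RIFF header. A sketch of that step (the helper names mergeBuffers, interleave, and encodeWAV are illustrative, not from any library):

```javascript
// Flatten a list of Float32Array chunks into one array of totalLength samples.
function mergeBuffers(chunks, totalLength) {
  const result = new Float32Array(totalLength);
  let offset = 0;
  for (const chunk of chunks) {
    result.set(chunk, offset);
    offset += chunk.length;
  }
  return result;
}

// Interleave left/right samples (assumes both channels are the same length).
function interleave(left, right) {
  const out = new Float32Array(left.length + right.length);
  for (let i = 0, j = 0; i < left.length; i++) {
    out[j++] = left[i];
    out[j++] = right[i];
  }
  return out;
}

// Wrap interleaved float samples in a standard 44-byte WAV (RIFF) header
// and convert them to little-endian 16-bit PCM.
function encodeWAV(samples, sampleRate, numChannels) {
  const buffer = new ArrayBuffer(44 + samples.length * 2);
  const view = new DataView(buffer);
  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true);       // RIFF chunk size
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                           // fmt chunk size
  view.setUint16(20, 1, true);                            // audio format: PCM
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * 2, true); // byte rate
  view.setUint16(32, numChannels * 2, true);              // block align
  view.setUint16(34, 16, true);                           // bits per sample
  writeString(36, 'data');
  view.setUint32(40, samples.length * 2, true);           // data chunk size
  // clamp floats to [-1, 1] and scale to signed 16-bit
  let offset = 44;
  for (let i = 0; i < samples.length; i++, offset += 2) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
  }
  return view;
}
```

The resulting DataView can then be wrapped in a Blob of type audio/wav and played, downloaded, or uploaded.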

Rather than building all of this yourself, I suggest you use the AudioRecorder code, which is awesome. It also handles writing the buffer to a WAV file. Here is a demo.

Here's another great resource.


Answered by mido

For browsers that support the MediaRecorder API, use it.

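A minimal sketch of the MediaRecorder path (browser-only; the five-second cutoff and immediate playback are illustrative choices, not part of the API):

```javascript
// Minimal MediaRecorder sketch -- assumes a browser that provides
// MediaRecorder and navigator.mediaDevices.getUserMedia.
async function recordClip() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // assemble the recorded chunks into a single playable Blob
    const blob = new Blob(chunks, { type: recorder.mimeType });
    const audio = new Audio(URL.createObjectURL(blob));
    audio.play();
  };

  recorder.start();
  setTimeout(() => recorder.stop(), 5000); // stop after 5 seconds
}
```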

For older browsers that do not support the MediaRecorder API, there are three ways to do it:

  1. as wav
  2. as mp3
  3. as Opus packets (output can be wav, mp3, or ogg)
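A simple way to choose between these paths at runtime is plain feature detection (the function name and returned labels here are illustrative):

```javascript
// Illustrative feature detection: prefer the MediaRecorder API when the
// browser has it, fall back to a Web Audio (ScriptProcessorNode) recorder,
// and report "unsupported" otherwise. The global object is passed in
// explicitly; in a browser you would call pickRecorderStrategy(window).
function pickRecorderStrategy(global) {
  if (typeof global.MediaRecorder !== 'undefined') return 'mediarecorder';
  if (typeof global.AudioContext !== 'undefined' ||
      typeof global.webkitAudioContext !== 'undefined') return 'webaudio';
  return 'unsupported';
}
```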

Answered by Todd

There is a bug in Chrome that currently does not allow audio-only capture. Please see http://code.google.com/p/chromium/issues/detail?id=112367

Answered by Vidhuran

Currently, this is not possible without sending the data over to the server side. However, it will soon become possible in the browser once vendors start supporting the MediaRecorder working draft.