Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must share it under the same CC BY-SA terms, link the original question, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/30957587/
Can I stream microphone audio from client to client using nodejs?
Asked by udidu
I'm trying to create a realtime voice chat. Once a client holds a button and talks, I want the sound to be sent over the socket to the Node.js backend, which should then stream that data to another client.
Here is the sender client code:
socket.on('connect', function() {
    var session = {
        audio: true,
        video: false
    };
    navigator.getUserMedia(session, function(stream) {
        var audioInput = context.createMediaStreamSource(stream);
        var bufferSize = 2048;
        recorder = context.createScriptProcessor(bufferSize, 1, 1);
        recorder.onaudioprocess = onAudio;
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }, function(e) {
    });

    function onAudio(e) {
        if (!broadcast) return;
        var mic = e.inputBuffer.getChannelData(0);
        var converted = convertFloat32ToInt16(mic);
        socket.emit('broadcast', converted);
    }
});
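The `convertFloat32ToInt16` helper the sender calls is not shown in the question. A common implementation, assuming standard 16-bit PCM scaling, clamps each Float32 sample to [-1, 1] and scales it into the Int16 range:

```javascript
// Hypothetical implementation of the helper the question calls but does not show.
// Clamps each Float32 sample to [-1, 1] and scales it into the Int16 range.
function convertFloat32ToInt16(float32Array) {
    const int16 = new Int16Array(float32Array.length);
    for (let i = 0; i < float32Array.length; i++) {
        const s = Math.max(-1, Math.min(1, float32Array[i]));
        // Negative values scale by 32768, positive by 32767,
        // so both endpoints map onto the full Int16 range.
        int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
    }
    return int16;
}
```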
The server then receives this buffer and streams it to another client (in this example, the same client).
Server Code
socket.on('broadcast', function(buffer) {
    socket.emit('broadcast', new Int16Array(buffer));
});
Then, to play the sound on the other side (the receiver), the client code looks like:
socket.on('broadcast', function(raw) {
    var buffer = convertInt16ToFloat32(raw);
    var src = context.createBufferSource();
    var audioBuffer = context.createBuffer(1, buffer.byteLength, context.sampleRate);
    audioBuffer.getChannelData(0).set(buffer);
    src.buffer = audioBuffer;
    src.connect(context.destination);
    src.start(0);
});
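The `convertInt16ToFloat32` helper used by the receiver is also not shown. A plausible counterpart to the sender-side conversion maps each Int16 sample back into the Float32 range [-1, 1]; note that socket.io may deliver the binary payload as a raw ArrayBuffer, so this sketch accepts either form:

```javascript
// Hypothetical counterpart to convertFloat32ToInt16 (not shown in the question).
// Maps each Int16 sample back into the Float32 range [-1, 1].
function convertInt16ToFloat32(raw) {
    // socket.io may deliver the payload as an ArrayBuffer; view it as Int16.
    const samples = raw instanceof Int16Array ? raw : new Int16Array(raw);
    const float32 = new Float32Array(samples.length);
    for (let i = 0; i < samples.length; i++) {
        const s = samples[i];
        // Mirror the sender-side scaling: divide negatives by 32768,
        // positives by 32767.
        float32[i] = s < 0 ? s / 0x8000 : s / 0x7fff;
    }
    return float32;
}
```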
My expected result is that the sound from client A will be heard in client B. I can see the buffer on the server, and I can see it arrive back in the client, but I hear nothing.
I know socket.io 1.x supports binary data, but I can't find any example of building a voice chat with it. I also tried BinaryJS, with the same results. I know this would be a simple task with WebRTC, but I don't want to use WebRTC. Can anyone point me to a good resource, or tell me what I'm missing?
Answered by Cracker0dks
I built something like this myself a few weeks ago. Problems I ran into (you will too, at some point):
- Too much data without reducing the bitrate and sample rate (over the internet)
- Bad audio quality without interpolation or better audio compression
- Even if it's not shown to you, you will get different sample rates from different computers' sound cards (my PC = 48 kHz, my laptop = 32 kHz), which means you have to write a resampler
- WebRTC reduces audio quality if a bad internet connection is detected. You can't do this here, because this is low-level stuff!
- You have to implement this efficiently, because otherwise JS will block your frontend > use web workers
- Audio codecs translated to JS are very slow and you will get unexpected results (see one audio-codec question from me: here). I have tried Opus as well, but no good results yet.
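The resampler mentioned above can be sketched with simple linear interpolation. This is an illustrative helper under assumed requirements, not code from the answer's project, and linear interpolation is the crudest approach (a production resampler would also low-pass filter when downsampling):

```javascript
// Naive linear-interpolation resampler sketch (illustrative only).
// Converts a Float32 sample block from one sample rate to another,
// e.g. a 48 kHz capture down to a 32 kHz playback context.
function resampleLinear(samples, fromRate, toRate) {
    const ratio = fromRate / toRate;
    const outLength = Math.round(samples.length / ratio);
    const out = new Float32Array(outLength);
    for (let i = 0; i < outLength; i++) {
        const pos = i * ratio;            // fractional position in the input
        const left = Math.floor(pos);
        const right = Math.min(left + 1, samples.length - 1);
        const frac = pos - left;
        // Interpolate between the two nearest input samples.
        out[i] = samples[left] * (1 - frac) + samples[right] * frac;
    }
    return out;
}
```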
I'm not working on this project at the moment, but you can get the code at: https://github.com/cracker0dks/nodeJsVoip
And the working example: (link removed) for multi-user VoIP audio. (Not working anymore! The websocket server is down!) If you go into Settings > Audio on the page, you can choose a higher bit depth and sample rate for better audio quality.
EDIT: Can you tell me why you don't want to use WebRTC?