Java - Broadcast voice over Java sockets

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/7728850/
Asked by redoc01
I have created a server app that receives sound from a client. The server then broadcasts this sound, which is stored as bytes, back to the clients connected to it. At the moment I am testing with only one client, and although the client does receive the audio back, the sound stutters the whole time. Could someone please tell me what I am doing wrong?
I think I understand part of why the sound isn't playing smoothly, but I don't understand how to fix the problem.

The code is below.
The Client:

The part that sends the voice to the server:
public void captureAudio()
{
    Runnable runnable = new Runnable() {
        public void run()
        {
            first = true;
            try {
                final AudioFileFormat.Type fileType = AudioFileFormat.Type.AU;
                final AudioFormat format = getFormat();
                DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
                line = (TargetDataLine) AudioSystem.getLine(info);
                line.open(format);
                line.start();
                int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                byte buffer[] = new byte[bufferSize];
                out = new ByteArrayOutputStream();
                objectOutputStream = new BufferedOutputStream(socket.getOutputStream());
                running = true;
                try {
                    while (running) {
                        int count = line.read(buffer, 0, buffer.length);
                        if (count > 0) {
                            objectOutputStream.write(buffer, 0, count);
                            out.write(buffer, 0, count);
                            InputStream input = new ByteArrayInputStream(buffer);
                            final AudioInputStream ais = new AudioInputStream(input, format, buffer.length / format.getFrameSize());
                        }
                    }
                    out.close();
                    objectOutputStream.close();
                }
                catch (IOException e) {
                    System.out.println("exit");
                    System.exit(-1);
                }
            }
            catch (LineUnavailableException e) {
                System.err.println("Line Unavailable: " + e);
                e.printStackTrace();
                System.exit(-2);
            }
            catch (Exception e) {
                System.out.println("Direct Upload Error");
                e.printStackTrace();
            }
        }
    };
    Thread t = new Thread(runnable);
    t.start();
}
The part that receives the bytes of data from the server:
private void playAudio() {
    Runnable runner = new Runnable() {
        public void run() {
            try {
                InputStream in = socket.getInputStream();
                Thread playTread = new Thread();
                int count;
                byte[] buffer = new byte[100000];
                while ((count = in.read(buffer, 0, buffer.length)) != -1) {
                    PlaySentSound(buffer, playTread);
                }
            }
            catch (IOException e) {
                System.err.println("I/O problems: " + e);
                System.exit(-3);
            }
        }
    };
    Thread playThread = new Thread(runner);
    playThread.start();
} // End of playAudio method
private void PlaySentSound(final byte buffer[], Thread playThread)
{
    synchronized (playThread)
    {
        Runnable runnable = new Runnable() {
            public void run() {
                try
                {
                    InputStream input = new ByteArrayInputStream(buffer);
                    final AudioFormat format = getFormat();
                    final AudioInputStream ais = new AudioInputStream(input, format, buffer.length / format.getFrameSize());
                    DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                    sline = (SourceDataLine) AudioSystem.getLine(info);
                    sline.open(format);
                    sline.start();
                    Float audioLen = (buffer.length / format.getFrameSize()) * format.getFrameRate();
                    int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                    byte buffer2[] = new byte[bufferSize];
                    int count2;
                    ais.read(buffer2, 0, buffer2.length);
                    sline.write(buffer2, 0, buffer2.length);
                    sline.flush();
                    sline.drain();
                    sline.stop();
                    sline.close();
                    buffer2 = null;
                }
                catch (IOException e)
                {
                }
                catch (LineUnavailableException e)
                {
                }
            }
        };
        playThread = new Thread(runnable);
        playThread.start();
    }
}
Accepted answer by HefferWolf
You split the sound packets into chunks of 100,000 bytes quite arbitrarily, and on the client side you play these back without taking into account the sample rate and frame size that you calculated on the sending side, so you end up cutting pieces of sound that belong together in two.
You need to decode the same chunks on the receiving side as the ones you send from the sending side. It may be easier to send them using HTTP multipart (where splitting up data is quite easy) than to do it the low-level way via sockets. The easiest way to do this is to use the Apache Commons HTTP client; have a look here: http://hc.apache.org/httpclient-3.x/methods/multipartpost.html
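One simple way to keep chunks intact over a plain socket (my own sketch, not part of the original answer) is to frame each audio chunk with a 4-byte length prefix, so the receiver reconstructs exactly the chunks that were captured instead of whatever `read()` happens to return:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class AudioFraming {
    // Write one audio chunk preceded by its length, so the receiver
    // knows exactly where each chunk ends.
    static void writeChunk(DataOutputStream out, byte[] buf, int count) throws IOException {
        out.writeInt(count);
        out.write(buf, 0, count);
        out.flush();
    }

    // Read back exactly one chunk; returns null at end of stream.
    static byte[] readChunk(DataInputStream in) throws IOException {
        int len;
        try {
            len = in.readInt();
        } catch (EOFException e) {
            return null;
        }
        byte[] chunk = new byte[len];
        in.readFully(chunk); // blocks until the whole chunk has arrived
        return chunk;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip two chunks through an in-memory buffer to show
        // that chunk boundaries survive the transport.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        writeChunk(out, new byte[]{1, 2, 3, 4}, 4);
        writeChunk(out, new byte[]{5, 6}, 2);

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(readChunk(in).length); // 4
        System.out.println(readChunk(in).length); // 2
        System.out.println(readChunk(in));        // null
    }
}
```

In the question's code, writeChunk would wrap the socket's output stream on the capture side and readChunk would replace the bare in.read(...) loop in playAudio.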
Answered by edoloughlin
In addition to HefferWolf's answer, I'd add that you're wasting a lot of bandwidth by sending the raw audio samples that you read from the microphone. You don't say whether your app is restricted to a local network, but if you're going over the Internet, it's common to compress/decompress the audio when sending/receiving.
A commonly used compression scheme is the SPEEX codec (a Java implementation is available here), which is relatively easy to use even though the documentation looks a bit scary if you're not familiar with audio sampling/compression.
On the client side, you can use org.xiph.speex.SpeexEncoder to do the encoding:
- Use SpeexEncoder.init() to initialise an encoder (this will have to match the sample rate, number of channels, and endianness of your AudioFormat), then SpeexEncoder.processData() to encode a frame, and SpeexEncoder.getProcessedDataByteSize() and SpeexEncoder.getProcessedData() to get the encoded data.
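Putting those encoder calls together, a minimal sketch might look like this (my own illustration: the narrowband mode, quality, 8 kHz sample rate, and mono channel count are assumptions, and this presumes the JSpeex API works as the answer describes):

```java
import org.xiph.speex.SpeexEncoder;

public class EncodeSketch {
    // Encode one frame of 16-bit PCM; returns the Speex frame, or null
    // if the encoder produced no output for this input.
    static byte[] encodeFrame(SpeexEncoder encoder, byte[] pcmFrame) {
        if (!encoder.processData(pcmFrame, 0, pcmFrame.length)) {
            return null;
        }
        byte[] encoded = new byte[encoder.getProcessedDataByteSize()];
        encoder.getProcessedData(encoded, 0);
        return encoded; // send this over the socket instead of raw PCM
    }

    public static void main(String[] args) {
        SpeexEncoder encoder = new SpeexEncoder();
        // mode 0 = narrowband, quality 8, 8 kHz, mono -- these must match
        // the AudioFormat used when capturing from the TargetDataLine
        encoder.init(0, 8, 8000, 1);

        // One assumed narrowband frame: 160 samples of 16-bit mono PCM
        byte[] pcmFrame = new byte[320]; // would come from line.read(...)
        byte[] speexFrame = encodeFrame(encoder, pcmFrame);
    }
}
```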
On the client side, use org.xiph.speex.SpeexDecoder to decode the frames you receive:

- Use SpeexDecoder.init() to initialise the decoder using the same parameters as the encoder, then SpeexDecoder.processData() to decode a frame, and SpeexDecoder.getProcessedDataByteSize() and SpeexDecoder.getProcessedData() to get the decoded data.
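The decoding side mirrors the encoder. Again a hedged sketch of my own, assuming the same narrowband/8 kHz/mono parameters and the JSpeex API as described above:

```java
import java.io.StreamCorruptedException;
import org.xiph.speex.SpeexDecoder;

public class DecodeSketch {
    // Decode one received Speex frame back to 16-bit PCM, suitable for
    // writing to the playback SourceDataLine.
    static byte[] decodeFrame(SpeexDecoder decoder, byte[] encoded)
            throws StreamCorruptedException {
        decoder.processData(encoded, 0, encoded.length);
        byte[] pcm = new byte[decoder.getProcessedDataByteSize()];
        decoder.getProcessedData(pcm, 0);
        return pcm;
    }

    public static void main(String[] args) {
        SpeexDecoder decoder = new SpeexDecoder();
        // Must mirror the encoder's parameters: narrowband, 8 kHz, mono;
        // the final flag enables Speex's perceptual enhancement
        decoder.init(0, 8000, 1, true);
        // Each frame read off the socket would then go through decodeFrame()
        // before being written to the SourceDataLine.
    }
}
```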
There's a bit more involved than I've outlined. E.g., you'll have to split the data into the correct size for encoding, which depends on the sample rate, channels, and bits per sample, but you'll see a dramatic drop in the number of bytes you're sending over the network.
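As a worked example of that sizing step (assuming 8 kHz, mono, 16-bit PCM and Speex's narrowband mode, which processes 160 samples, i.e. 20 ms, per frame), the chunk size to feed the encoder works out as:

```java
public class FrameSize {
    public static void main(String[] args) {
        // Assumed capture parameters: 8 kHz, mono, 16-bit PCM
        int sampleRate = 8000;
        int channels = 1;
        int bytesPerSample = 2;    // 16-bit samples
        int samplesPerFrame = 160; // one narrowband Speex frame

        // Bytes of raw PCM the encoder expects per call
        int bytesPerFrame = samplesPerFrame * channels * bytesPerSample;
        // Duration of audio covered by one frame
        double frameMillis = 1000.0 * samplesPerFrame / sampleRate;

        System.out.println(bytesPerFrame); // 320
        System.out.println(frameMillis);   // 20.0
    }
}
```

So the capture loop would accumulate 320-byte slices rather than whole second-sized buffers before handing them to the encoder.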