As mentioned last time, AudioTrack supports two playback modes, MODE_STATIC and MODE_STREAM. The differences, also covered last time, are as follows:
MODE_STREAM
In this mode you normally call play() first and then push the audio data into the AudioTrack with repeated write() calls (in my tests writing before play() also happened to work, probably because the data was small). Each call copies the data from the user-supplied buffer into AudioTrack's internal buffer, which introduces some latency.
It fits most scenarios: the audio buffers are passed from the Java layer to the native layer and the call returns.
If the audio buffers take up a lot of memory, MODE_STREAM should be used, for example:
playing sound files with a long duration,
audio files that use a high sample rate,
or audio buffers that are processed dynamically.
MODE_STATIC
In this mode you call write() first and then play(). All the data is passed into AudioTrack's internal buffer with a single write() call, and no further data has to be transferred afterwards. The drawback is that a single write() cannot be too large, otherwise the system cannot allocate enough memory to hold all of the data.
The whole audio resource is transferred from Java to the native layer in one go; this keeps latency low but also has limitations:
the audio file must be short and use little memory.
It is suited to short game sound effects that really do have strict playback-latency requirements.
Below are the demos I wrote based on examples found online.
The audio files are WAV files downloaded from the web; you can search for one yourself. Pay attention to the sample rate: most files are 44100 Hz, but some are not. You can check it with Cool Edit Pro 2.1.
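If you would rather check the sample rate in code than with Cool Edit Pro, the small sketch below works for canonical WAV files, where the sample rate is stored as a little-endian 32-bit integer at byte offset 24 of the RIFF header. The helper name readWavSampleRate is my own and not part of the demos that follow:

// A minimal sketch, assuming a canonical RIFF/WAV header.
// The sample rate sits at bytes 24..27 as a little-endian int.
private static int readWavSampleRate(Context context, int resId) throws IOException {
    InputStream in = context.getResources().openRawResource(resId);
    try {
        byte[] header = new byte[28];
        if (in.read(header) < 28) {
            return -1; // too short to contain a full WAV header
        }
        return (header[24] & 0xFF)
                | ((header[25] & 0xFF) << 8)
                | ((header[26] & 0xFF) << 16)
                | ((header[27] & 0xFF) << 24);
    } finally {
        in.close();
    }
}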
MODE_STREAM example 1
private static int[] dms_alarm_sounds = new int[]{R.raw.mall, R.raw.mall, R.raw.mall, R.raw.mall};
private static byte[] audioData;
private static AudioTrack audioTrack;
private static boolean isPlaying = false;
private static final int SAMPLERATEINHZ = 44100; // or 16000, depending on the source file
/**
* init sound
*
* @param soundID
*/
private static void initPlaySoundS(int soundID) {
    // Read the whole raw resource into audioData in one go.
    InputStream inputStream = mContext.getResources().openRawResource(dms_alarm_sounds[soundID]);
    try {
        audioData = new byte[inputStream.available()];
        inputStream.read(audioData);
        inputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return;
}
/**
* start play
*
* @param soundID
*/
public void play(final int soundID) {
    if (soundID < 0 || soundID > MAX_SOUMD_NUM - 1) {
        Log.d(TAG, "-----Error Sound ID---:" + soundID);
        return;
    }
    initPlaySoundS(soundID);
    int bufSize = android.media.AudioTrack.getMinBufferSize(SAMPLERATEINHZ,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    Log.d(TAG, "--startAudioTrack---audioData:" + audioData.length + "---bufSize:" + bufSize);
    audioTrack = new AudioTrack(AudioManager.STREAM_RING,
            SAMPLERATEINHZ, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, bufSize,
            AudioTrack.MODE_STREAM);
    new Thread(new Runnable() {
        @Override
        public void run() {
            if (audioTrack != null) {
                try {
                    // play() first, then write the whole buffer on a worker thread
                    audioTrack.play();
                    audioTrack.write(audioData, 0, audioData.length);
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }).start();
    Log.d(TAG, "Playing");
    return;
}
/**
* stop play
*/
public void stop() {
    isPlaying = false;
    if (audioTrack != null) {
        audioTrack.pause();
        audioTrack.flush(); // discard data that has been written but not yet played
        audioTrack.stop();
        audioTrack.release();
        audioTrack = null;
        Log.d(TAG, "-----stop--------");
    }
    return;
}
PS: The example above calls play() first and then write().
MODE_STREAM example 2
Only part of the code is shown here.
/**
* start play
*
* @param soundID
*/
public void play(final int soundID) {
    if (soundID < 0 || soundID > MAX_SOUMD_NUM - 1) {
        Log.d(TAG, "-----Error Sound ID---:" + soundID);
        return;
    }
    isPlaying = true;
    final int bufSize = android.media.AudioTrack.getMinBufferSize(SAMPLERATEINHZ,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    Log.d(TAG, "-----play---444-----soundID:" + soundID + "----bufSize:" + bufSize);
    audioTrack = new AudioTrack(AudioManager.STREAM_NOTIFICATION,
            SAMPLERATEINHZ, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, bufSize,
            AudioTrack.MODE_STREAM);
    new Thread(new Runnable() {
        @Override
        public void run() {
            // loop the sound until stop() clears isPlaying
            while (isPlaying) {
                try {
                    InputStream inputStream = mContext.getResources().openRawResource(dms_alarm_sounds[soundID]);
                    try {
                        audioData = new byte[bufSize];
                        int length;
                        while ((length = inputStream.read(audioData)) > 0) {
                            // write first, then play (works here, but see the note below)
                            int tag = audioTrack.write(audioData, 0, length);
                            if (tag == AudioTrack.ERROR_INVALID_OPERATION || tag == AudioTrack.ERROR_BAD_VALUE) {
                                continue;
                            }
                            audioTrack.play();
                        }
                        inputStream.close();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    Thread.sleep(300);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }).start();
    return;
}
PS: Note that this version interleaves write() and play() inside the loop (I have only tried it with small files, i.e. files that can be written in one go).
Judging from the documentation, this ordering is not recommended; it is better to call play() first and then write(), as in the sketch below.
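For reference, a minimal sketch of the recommended order (play() once before the loop, then write() chunk by chunk) could look like the following; it assumes audioTrack and bufSize were created in MODE_STREAM exactly as in the example above:

// A minimal sketch, assuming audioTrack (MODE_STREAM) and bufSize from the demo above.
audioTrack.play(); // start the track once, before any write()
InputStream in = mContext.getResources().openRawResource(dms_alarm_sounds[soundID]);
try {
    byte[] chunk = new byte[bufSize];
    int length;
    while ((length = in.read(chunk)) > 0) {
        // write() blocks until the chunk has been queued to the track
        int written = audioTrack.write(chunk, 0, length);
        if (written == AudioTrack.ERROR_INVALID_OPERATION || written == AudioTrack.ERROR_BAD_VALUE) {
            break; // stop on unrecoverable errors
        }
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        in.close();
    } catch (IOException ignored) {
    }
}
audioTrack.stop(); // plays out what has already been written, then stops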
MODE_STATIC example
/**
* start play MODE_STATIC
* @param soundID
*/
public void play(final int soundID) {
    if (soundID < 0 || soundID > MAX_SOUMD_NUM - 1) {
        Log.d(TAG, "-----Error Sound ID---:" + soundID);
        return;
    }
    try {
        // read the whole raw resource into memory
        InputStream inputStream = mContext.getResources().openRawResource(dms_alarm_sounds[soundID]);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = inputStream.read()) != -1) {
            out.write(b);
        }
        inputStream.close();
        audioData = out.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    new Thread(new Runnable() {
        @Override
        public void run() {
            audioTrack = new AudioTrack(
                    new AudioAttributes.Builder()
                            .setUsage(AudioAttributes.USAGE_MEDIA)
                            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                            .build(),
                    new AudioFormat.Builder()
                            .setSampleRate(SAMPLERATEINHZ)
                            .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                            .setChannelMask(AudioFormat.CHANNEL_OUT_MONO)
                            .build(),
                    audioData.length, // in MODE_STATIC the buffer must hold the entire sound
                    AudioTrack.MODE_STATIC, AudioManager.AUDIO_SESSION_ID_GENERATE
            );
            // write() the whole buffer once, then play()
            audioTrack.write(audioData, 0, audioData.length);
            audioTrack.play();
        }
    }).start();
    return;
}
PS: Here write() comes first and play() second.
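A nice property of MODE_STATIC is that the buffer only has to be written once: to play the same sound again you can rewind the static buffer instead of recreating the track. A minimal sketch, assuming audioTrack was created in MODE_STATIC and filled as above (the method name replay is my own):

/**
 * replay the static buffer without writing it again
 */
public void replay() {
    if (audioTrack == null) {
        return;
    }
    audioTrack.stop();                          // stop any playback in progress
    int result = audioTrack.reloadStaticData(); // rewind the static buffer to position 0
    if (result == AudioTrack.SUCCESS) {
        audioTrack.play();                      // same data plays again, no new write() needed
    } else {
        Log.d(TAG, "reloadStaticData failed: " + result);
    }
}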
All three demos above have been verified to work; feel free to leave a comment if you run into any problems. Thanks.
This post was put together from 《Android-音视频(3):用AudioTrack播放音频PCM》, 《音频播放AudioTrack之入门篇》, and 《AudioTrack中MODE_STATIC和MODE_STREAM的差异》; see those articles if anything is unclear.