AudioGroup

public class AudioGroup extends Object

An AudioGroup is an audio hub for the speaker, the microphone, and
{@link AudioStream}s. Each of these components can be logically turned on
or off by calling {@link #setMode(int)} or {@link RtpStream#setMode(int)}.
The AudioGroup will go through these components and process them one by one
within its execution loop. The loop consists of four steps. First, for each
AudioStream not in {@link RtpStream#MODE_SEND_ONLY}, it decodes the incoming
packets and stores them in the stream's buffer. Second, if the microphone is
enabled, it processes the recorded audio and stores it in its own buffer.
Third, if the speaker is enabled, it mixes all AudioStream buffers and plays
the result back. Finally, for each AudioStream not in
{@link RtpStream#MODE_RECEIVE_ONLY}, it mixes all the other buffers and sends
out the encoded packets. An AudioGroup does nothing if it contains no
AudioStream.
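For example, a minimal sketch of wiring one AudioStream into an AudioGroup
might look like the following; the helper method startCall and its address
arguments are illustrative assumptions (normally negotiated out of band, for
example via SIP/SDP), not part of this API.

    import android.net.rtp.AudioCodec;
    import android.net.rtp.AudioGroup;
    import android.net.rtp.AudioStream;
    import android.net.rtp.RtpStream;
    import java.net.InetAddress;

    // Hypothetical helper: the addresses and port are placeholders for values
    // obtained from signaling.
    AudioGroup startCall(InetAddress local, InetAddress remote, int remotePort) throws Exception {
        AudioStream stream = new AudioStream(local);   // bind an RTP socket on the local address
        stream.setCodec(AudioCodec.PCMU);              // a codec must be set before joining a group
        stream.associate(remote, remotePort);          // set the remote endpoint
        stream.setMode(RtpStream.MODE_NORMAL);         // both send and receive
        AudioGroup group = new AudioGroup();
        stream.join(group);                            // the group now processes this stream
        group.setMode(AudioGroup.MODE_NORMAL);         // enable speaker and microphone
        return group;
    }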
A few things must be noted before using these classes. Performance depends
heavily on the system load and the network bandwidth. Usually a simpler
{@link AudioCodec} costs fewer CPU cycles but requires more network
bandwidth, and vice versa. Using two AudioStreams at the same time doubles
not only the load but also the bandwidth. The conditions vary from one
device to another, and developers should choose the right combination in
order to get the best result.
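As an illustration of that trade-off, an application might fall back to a more
compressed codec when bandwidth is scarce. The helper below and its 64 kbit/s
threshold are assumptions made for the sketch, not a recommendation from this
documentation.

    import android.net.rtp.AudioCodec;

    // Illustrative only: the threshold is an arbitrary assumption.
    AudioCodec chooseCodec(int availableKbps) {
        // PCMU (G.711 mu-law) is cheap on the CPU but carries roughly 64 kbit/s of
        // audio payload, while AMR compresses much harder at the cost of extra CPU.
        return (availableKbps >= 64) ? AudioCodec.PCMU : AudioCodec.AMR;
    }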
It is sometimes useful to keep multiple AudioGroups at the same time. For
example, a Voice over IP (VoIP) application might want to put a conference
call on hold in order to make a new call but still allow people in the
conference call to keep talking to each other. This can be done easily using two
AudioGroups, but there are some limitations. Since the speaker and the
microphone are globally shared resources, only one AudioGroup at a time is
allowed to run in a mode other than {@link #MODE_ON_HOLD}. The others will
be unable to acquire these resources and fail silently.
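A sketch of that hold scenario, assuming both groups already contain their
streams; the method name switchToNewCall is made up for illustration.

    import android.net.rtp.AudioGroup;

    // Only one group may run outside MODE_ON_HOLD at any time.
    void switchToNewCall(AudioGroup conference, AudioGroup newCall) {
        conference.setMode(AudioGroup.MODE_ON_HOLD);  // release the speaker and microphone;
                                                      // conference members still hear each other
        newCall.setMode(AudioGroup.MODE_NORMAL);      // the new call can now use both resources
    }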
Using this class requires the
{@link android.Manifest.permission#RECORD_AUDIO} permission. Developers
should set the audio mode to {@link AudioManager#MODE_IN_COMMUNICATION}
using {@link AudioManager#setMode(int)} and change it back when none of
the AudioGroups is in use.
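A sketch of that bookkeeping, assuming a Context is available and the
RECORD_AUDIO permission has already been granted; the helper names are
hypothetical.

    import android.content.Context;
    import android.media.AudioManager;

    // Set the audio mode before the first AudioGroup starts running.
    void enterCallAudioMode(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_IN_COMMUNICATION);
    }

    // Restore the audio mode once no AudioGroup is in use.
    void leaveCallAudioMode(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        am.setMode(AudioManager.MODE_NORMAL);
    }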
Fields Summary
---
public static final int | MODE_ON_HOLD | This mode is similar to {@link #MODE_NORMAL} except the speaker and
the microphone are both disabled.
public static final int | MODE_MUTED | This mode is similar to {@link #MODE_NORMAL} except the microphone is
disabled.
public static final int | MODE_NORMAL | This mode indicates that the speaker, the microphone, and all
{@link AudioStream}s in the group are enabled. First, the packets
received from the streams are decoded and mixed with the audio recorded
from the microphone. Then, the results are played back to the speaker,
encoded and sent back to each stream.
public static final int | MODE_ECHO_SUPPRESSION | This mode is similar to {@link #MODE_NORMAL} except that echo
suppression is enabled. It should only be used when the speakerphone is on
(see the sketch after this summary).
private static final int | MODE_LAST
private final Map | mStreams
private int | mMode
private long | mNative
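For example, a caller might tie MODE_ECHO_SUPPRESSION to the speakerphone
state; pairing it with {@link AudioManager#isSpeakerphoneOn()} as below is an
assumption about typical usage, not a requirement of this class.

    import android.media.AudioManager;
    import android.net.rtp.AudioGroup;

    // Sketch: enable echo suppression only while the speakerphone is on.
    void updateMode(AudioGroup group, AudioManager am) {
        group.setMode(am.isSpeakerphoneOn()
                ? AudioGroup.MODE_ECHO_SUPPRESSION
                : AudioGroup.MODE_NORMAL);
    }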
Constructors Summary
---
public AudioGroup() | Creates an empty AudioGroup.
System.loadLibrary("rtp_jni");
mStreams = new HashMap<AudioStream, Long>();
Methods Summary
---
synchronized void | add(AudioStream stream)
if (!mStreams.containsKey(stream)) {
try {
AudioCodec codec = stream.getCodec();
String codecSpec = String.format(Locale.US, "%d %s %s", codec.type,
codec.rtpmap, codec.fmtp);
long id = nativeAdd(stream.getMode(), stream.getSocket(),
stream.getRemoteAddress().getHostAddress(),
stream.getRemotePort(), codecSpec, stream.getDtmfType());
mStreams.put(stream, id);
} catch (NullPointerException e) {
throw new IllegalStateException(e);
}
}
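Note that add is package-private; application code adds and removes streams
indirectly through {@link AudioStream#join(AudioGroup)}, roughly as sketched
below (stream and group as in the earlier example).

    // Streams are not added to a group directly; AudioStream.join() calls add() internally.
    stream.join(group);   // adds the stream to the group
    stream.join(null);    // removes it again, as clear() does for every stream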

public void | clear() | Removes every {@link AudioStream} in this group.
for (AudioStream stream : getStreams()) {
stream.join(null);
}

protected void | finalize()
nativeRemove(0L);
super.finalize();

public int | getMode() | Returns the current mode.
return mMode;

public AudioStream[] | getStreams() | Returns the {@link AudioStream}s in this group.
synchronized (this) {
return mStreams.keySet().toArray(new AudioStream[mStreams.size()]);
}

private native long | nativeAdd(int mode, int socket, java.lang.String remoteAddress, int remotePort, java.lang.String codecSpec, int dtmfType)

private native void | nativeRemove(long id)

private native void | nativeSendDtmf(int event)

private native void | nativeSetMode(int mode)

synchronized void | remove(AudioStream stream)
Long id = mStreams.remove(stream);
if (id != null) {
nativeRemove(id);
}

public void | sendDtmf(int event) | Sends a DTMF digit to every {@link AudioStream} in this group. Currently
only events {@code 0} to {@code 15} are supported.
if (event < 0 || event > 15) {
throw new IllegalArgumentException("Invalid event");
}
synchronized (this) {
nativeSendDtmf(event);
}
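For instance, a hypothetical helper could dial a digit string one event at a
time; mapping the characters '0' to '9' to events 0 to 9 follows RFC 2833.

    import android.net.rtp.AudioGroup;

    // Hypothetical helper: send each decimal digit as a DTMF event.
    void dialDigits(AudioGroup group, String digits) {
        for (char c : digits.toCharArray()) {
            if (c >= '0' && c <= '9') {
                group.sendDtmf(c - '0');
            }
        }
    }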

public void | setMode(int mode) | Changes the current mode. It must be one of {@link #MODE_ON_HOLD},
{@link #MODE_MUTED}, {@link #MODE_NORMAL}, and
{@link #MODE_ECHO_SUPPRESSION}.
if (mode < 0 || mode > MODE_LAST) {
throw new IllegalArgumentException("Invalid mode");
}
synchronized (this) {
nativeSetMode(mode);
mMode = mode;
}
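A common use of setMode is a mute toggle, sketched below with a hypothetical
helper.

    import android.net.rtp.AudioGroup;

    // Sketch: switch between MODE_MUTED and MODE_NORMAL to mute or unmute the microphone.
    void setMicrophoneMuted(AudioGroup group, boolean muted) {
        group.setMode(muted ? AudioGroup.MODE_MUTED : AudioGroup.MODE_NORMAL);
    }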