File: AudioGroup.java
Doc Category: Android 5.1 API
Size: 7373
Date: Thu Mar 12 22:22:52 GMT 2015
Package: android.net.rtp

AudioGroup

public class AudioGroup extends Object
An AudioGroup is an audio hub for the speaker, the microphone, and {@link AudioStream}s. Each of these components can be logically turned on or off by calling {@link #setMode(int)} or {@link RtpStream#setMode(int)}. The AudioGroup goes through these components and processes them one by one within its execution loop. The loop consists of four steps. First, for each AudioStream not in {@link RtpStream#MODE_SEND_ONLY}, it decodes the incoming packets and stores them in the stream's buffer. Second, if the microphone is enabled, it processes the recorded audio and stores it in its buffer. Third, if the speaker is enabled, it mixes all AudioStream buffers and plays the result back. Finally, for each AudioStream not in {@link RtpStream#MODE_RECEIVE_ONLY}, it mixes all the other buffers, encodes the result, and sends the packets out. An AudioGroup does nothing if it contains no AudioStream.
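The third and fourth steps above amount to mixing every buffer except the receiver's own. A toy standalone sketch of that mixing rule (not the native implementation; the method name and the 16-bit PCM clamping are assumptions for illustration):

```java
public class MixDemo {
    // For stream `self`, return the mix of all buffers except its own,
    // clamping the sum to the 16-bit PCM range.
    static short[] mixForStream(short[][] buffers, int self) {
        int len = buffers[0].length;
        short[] out = new short[len];
        for (int j = 0; j < len; j++) {
            int sum = 0;
            for (int i = 0; i < buffers.length; i++) {
                if (i != self) sum += buffers[i][j];
            }
            out[j] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }

    public static void main(String[] args) {
        // Three streams; stream 0 receives the mix of streams 1 and 2 only.
        short[][] bufs = { {100, 200}, {10, 20}, {1, 2} };
        short[] mixed = mixForStream(bufs, 0);
        System.out.println(mixed[0] + " " + mixed[1]); // 11 22
    }
}
```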

A few things should be noted before using these classes. Performance is highly dependent on the system load and the network bandwidth. Usually a simpler {@link AudioCodec} costs fewer CPU cycles but requires more network bandwidth, and vice versa. Using two AudioStreams at the same time doubles not only the load but also the bandwidth. Conditions vary from one device to another, and developers should choose the right combination in order to get the best result.

It is sometimes useful to keep multiple AudioGroups at the same time. For example, a Voice over IP (VoIP) application might want to put a conference call on hold in order to make a new call while still allowing the people in the conference call to talk to each other. This can be done easily with two AudioGroups, but there are some limitations. Since the speaker and the microphone are globally shared resources, only one AudioGroup at a time is allowed to run in a mode other than {@link #MODE_ON_HOLD}. Any other AudioGroup will be unable to acquire these resources and will fail silently.

Using this class requires the {@link android.Manifest.permission#RECORD_AUDIO} permission. Developers should set the audio mode to {@link AudioManager#MODE_IN_COMMUNICATION} using {@link AudioManager#setMode(int)} and change it back when no AudioGroup is in use.
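A hedged sketch of the setup described above; it only runs on an Android device, and the local address, remote address, port, and codec choice are hypothetical placeholders:

```java
// Sketch only: assumes an Android Context; localAddress, remoteAddress,
// and remotePort are hypothetical placeholders.
AudioManager audioManager =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);

AudioStream stream = new AudioStream(localAddress);
stream.setCodec(AudioCodec.PCMU);
stream.setMode(RtpStream.MODE_NORMAL);
stream.associate(remoteAddress, remotePort);

AudioGroup group = new AudioGroup();
group.setMode(AudioGroup.MODE_NORMAL);
stream.join(group);

// ... when the call ends, detach the stream and restore the audio mode ...
stream.join(null);
audioManager.setMode(AudioManager.MODE_NORMAL);
```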

see
AudioStream

Fields Summary
public static final int
MODE_ON_HOLD
This mode is similar to {@link #MODE_NORMAL} except the speaker and the microphone are both disabled.
public static final int
MODE_MUTED
This mode is similar to {@link #MODE_NORMAL} except the microphone is disabled.
public static final int
MODE_NORMAL
This mode indicates that the speaker, the microphone, and all {@link AudioStream}s in the group are enabled. First, the packets received from the streams are decoded and mixed with the audio recorded from the microphone. Then, the result is played back through the speaker, and also encoded and sent back to each stream.
public static final int
MODE_ECHO_SUPPRESSION
This mode is similar to {@link #MODE_NORMAL} except that echo suppression is enabled. It should only be used when the speakerphone is on.
private static final int
MODE_LAST
private final Map
mStreams
private int
mMode
private long
mNative
Constructors Summary
public AudioGroup()
Creates an empty AudioGroup.

        // The native library is loaded once in a static initializer.
        static {
            System.loadLibrary("rtp_jni");
        }

        mStreams = new HashMap<AudioStream, Long>();

Methods Summary
synchronized void add(AudioStream stream)

        if (!mStreams.containsKey(stream)) {
            try {
                AudioCodec codec = stream.getCodec();
                String codecSpec = String.format(Locale.US, "%d %s %s", codec.type,
                        codec.rtpmap, codec.fmtp);
                long id = nativeAdd(stream.getMode(), stream.getSocket(),
                        stream.getRemoteAddress().getHostAddress(),
                        stream.getRemotePort(), codecSpec, stream.getDtmfType());
                mStreams.put(stream, id);
            } catch (NullPointerException e) {
                throw new IllegalStateException(e);
            }
        }
    
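The codec description handed to nativeAdd() above is a space-separated string of the payload type, the rtpmap attribute, and the fmtp attribute. A standalone sketch of that formatting, using hypothetical AMR values (the helper and class names are not part of the API):

```java
import java.util.Locale;

public class CodecSpecDemo {
    // Mirrors the formatting used in AudioGroup.add():
    // "<payload type> <rtpmap> <fmtp>", always in the US locale.
    static String codecSpec(int type, String rtpmap, String fmtp) {
        return String.format(Locale.US, "%d %s %s", type, rtpmap, fmtp);
    }

    public static void main(String[] args) {
        // Hypothetical values resembling an AMR codec entry.
        System.out.println(codecSpec(97, "AMR/8000", "octet-align=1"));
        // prints "97 AMR/8000 octet-align=1"
    }
}
```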
public void clear()
Removes every {@link AudioStream} in this group.

        for (AudioStream stream : getStreams()) {
            stream.join(null);
        }
    
protected void finalize()

        nativeRemove(0L);
        super.finalize();
    
public int getMode()
Returns the current mode.

        return mMode;
    
public AudioStream[] getStreams()
Returns the {@link AudioStream}s in this group.

        synchronized (this) {
            return mStreams.keySet().toArray(new AudioStream[mStreams.size()]);
        }
    
private native long nativeAdd(int mode, int socket, java.lang.String remoteAddress, int remotePort, java.lang.String codecSpec, int dtmfType)

private native void nativeRemove(long id)

private native void nativeSendDtmf(int event)

private native void nativeSetMode(int mode)

synchronized void remove(AudioStream stream)

        Long id = mStreams.remove(stream);
        if (id != null) {
            nativeRemove(id);
        }
    
public void sendDtmf(int event)
Sends a DTMF digit to every {@link AudioStream} in this group. Currently only events {@code 0} to {@code 15} are supported.

throws
IllegalArgumentException if the event is invalid.

        if (event < 0 || event > 15) {
            throw new IllegalArgumentException("Invalid event");
        }
        synchronized (this) {
            nativeSendDtmf(event);
        }
    
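Events {@code 0} to {@code 15} correspond to the DTMF event codes of RFC 4733: digits 0-9 map to events 0-9, '*' to 10, '#' to 11, and 'A' through 'D' to 12-15. A standalone sketch of that mapping, using the same range convention as sendDtmf() (the helper and class names are not part of the API):

```java
public class DtmfDemo {
    // RFC 4733 event codes: '0'-'9' -> 0-9, '*' -> 10, '#' -> 11, 'A'-'D' -> 12-15.
    static int dtmfEvent(char c) {
        if (c >= '0' && c <= '9') return c - '0';
        if (c == '*') return 10;
        if (c == '#') return 11;
        if (c >= 'A' && c <= 'D') return 12 + (c - 'A');
        throw new IllegalArgumentException("Invalid DTMF character: " + c);
    }

    public static void main(String[] args) {
        System.out.println(dtmfEvent('5')); // 5
        System.out.println(dtmfEvent('#')); // 11
        System.out.println(dtmfEvent('D')); // 15
    }
}
```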
public void setMode(int mode)
Changes the current mode. It must be one of {@link #MODE_ON_HOLD}, {@link #MODE_MUTED}, {@link #MODE_NORMAL}, and {@link #MODE_ECHO_SUPPRESSION}.

param
mode The mode to change to.
throws
IllegalArgumentException if the mode is invalid.

        if (mode < 0 || mode > MODE_LAST) {
            throw new IllegalArgumentException("Invalid mode");
        }
        synchronized (this) {
            nativeSetMode(mode);
            mMode = mode;
        }