File: AudioTrack.java | Package: android.media | Doc Category: API Doc, Android 5.1 API | Size: 72771 bytes | Date: Thu Mar 12 22:22:30 GMT 2015

AudioTrack

public class AudioTrack extends Object
The AudioTrack class manages and plays a single audio resource for Java applications. It allows streaming of PCM audio buffers to the audio sink for playback. This is achieved by "pushing" the data to the AudioTrack object using one of the {@link #write(byte[], int, int)}, {@link #write(short[], int, int)}, and {@link #write(float[], int, int, int)} methods.

An AudioTrack instance can operate under two modes: static or streaming.
In streaming mode, the application writes a continuous stream of data to the AudioTrack, using one of the {@code write()} methods. These are blocking and return when the data has been transferred from the Java layer to the native layer and queued for playback. The streaming mode is most useful when playing blocks of audio data that for instance are:

  • too big to fit in memory because of the duration of the sound to play,
  • too big to fit in memory because of the characteristics of the audio data (high sampling rate, bits per sample...),
  • received or generated while previously queued audio is playing.
The static mode should be chosen when dealing with short sounds that fit in memory and that need to be played with the smallest latency possible. The static mode will therefore be preferred for UI and game sounds that are played often, and with the smallest overhead possible.

Upon creation, an AudioTrack object initializes its associated audio buffer. The size of this buffer, specified during the construction, determines how long an AudioTrack can play before running out of data.
For an AudioTrack using the static mode, this size is the maximum size of the sound that can be played from it.
For the streaming mode, data will be written to the audio sink in chunks of sizes less than or equal to the total buffer size. AudioTrack is not final and thus permits subclasses, but such use is not recommended.
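As an illustration, here is a minimal streaming-mode sketch. The 44.1 kHz stereo 16-bit format, the doubling of the minimum buffer size, and the silent payload are arbitrary choices for the example, not requirements of the API.

        // Size the buffer from getMinBufferSize(), create a streaming track,
        // start it, then push 16-bit PCM in chunks; each write() blocks until
        // its chunk has been queued for playback.
        int sampleRate = 44100;
        int minSize = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minSize * 2 /* headroom above the minimum */, AudioTrack.MODE_STREAM);
        track.play();
        short[] chunk = new short[minSize];   // silence, for illustration
        for (int i = 0; i < 10; i++) {
            track.write(chunk, 0, chunk.length);
        }
        track.stop();
        track.release();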

Fields Summary
private static final float
GAIN_MIN
Minimum value for a linear gain or auxiliary effect level. This value must be exactly equal to 0.0f; do not change it.
private static final float
GAIN_MAX
Maximum value for a linear gain or auxiliary effect level. This value must be greater than or equal to 1.0f.
private static final int
SAMPLE_RATE_HZ_MIN
Minimum value for sample rate
private static final int
SAMPLE_RATE_HZ_MAX
Maximum value for sample rate
private static final int
CHANNEL_COUNT_MAX
Maximum value for AudioTrack channel count
public static final int
PLAYSTATE_STOPPED
indicates AudioTrack state is stopped
public static final int
PLAYSTATE_PAUSED
indicates AudioTrack state is paused
public static final int
PLAYSTATE_PLAYING
indicates AudioTrack state is playing
public static final int
MODE_STATIC
Creation mode where audio data is transferred from Java to the native layer only once before the audio starts playing.
public static final int
MODE_STREAM
Creation mode where audio data is streamed from Java to the native layer as the audio is playing.
public static final int
STATE_UNINITIALIZED
State of an AudioTrack that was not successfully initialized upon creation.
public static final int
STATE_INITIALIZED
State of an AudioTrack that is ready to be used.
public static final int
STATE_NO_STATIC_DATA
State of a successfully initialized AudioTrack that uses static data, but that hasn't received that data yet.
public static final int
SUCCESS
Denotes a successful operation.
public static final int
ERROR
Denotes a generic operation failure.
public static final int
ERROR_BAD_VALUE
Denotes a failure due to the use of an invalid value.
public static final int
ERROR_INVALID_OPERATION
Denotes a failure due to the improper use of a method.
private static final int
ERROR_NATIVESETUP_AUDIOSYSTEM
private static final int
ERROR_NATIVESETUP_INVALIDCHANNELMASK
private static final int
ERROR_NATIVESETUP_INVALIDFORMAT
private static final int
ERROR_NATIVESETUP_INVALIDSTREAMTYPE
private static final int
ERROR_NATIVESETUP_NATIVEINITFAILED
private static final int
NATIVE_EVENT_MARKER
Event id denotes when playback head has reached a previously set marker.
private static final int
NATIVE_EVENT_NEW_POS
Event id denotes when previously set update period has elapsed during playback.
private static final String
TAG
public static final int
WRITE_BLOCKING
The write mode indicating the write operation will block until all data has been written, to be used in {@link #write(ByteBuffer, int, int)}.
public static final int
WRITE_NON_BLOCKING
The write mode indicating the write operation will return immediately after queuing as much audio data for playback as possible without blocking, to be used in {@link #write(ByteBuffer, int, int)}.
private int
mState
Indicates the state of the AudioTrack instance.
private int
mPlayState
Indicates the play state of the AudioTrack instance.
private final Object
mPlayStateLock
Lock to make sure mPlayState updates are reflecting the actual state of the object.
private int
mNativeBufferSizeInBytes
Sizes of the native audio buffer.
private int
mNativeBufferSizeInFrames
private NativeEventHandlerDelegate
mEventHandlerDelegate
Handler for events coming from the native code.
private final android.os.Looper
mInitializationLooper
Looper associated with the thread that creates the AudioTrack instance.
private int
mSampleRate
The audio data source sampling rate in Hz.
private int
mChannelCount
The number of audio output channels (1 is mono, 2 is stereo).
private int
mChannels
The audio channel mask.
private int
mStreamType
The type of the audio stream to play. See {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM}, {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC}, {@link AudioManager#STREAM_ALARM}, {@link AudioManager#STREAM_NOTIFICATION}, and {@link AudioManager#STREAM_DTMF}.
private final AudioAttributes
mAttributes
private int
mDataLoadMode
The way audio is consumed by the audio sink, streaming or static.
private int
mChannelConfiguration
The current audio channel configuration.
private int
mAudioFormat
The encoding of the audio samples.
private int
mSessionId
Audio session ID
private final com.android.internal.app.IAppOpsService
mAppOps
Reference to the app-ops service.
private long
mNativeTrackInJavaObj
Accessed by native methods: provides access to C++ AudioTrack object.
private long
mJniData
Accessed by native methods: provides access to the JNI data (i.e. resources used by the native AudioTrack object, but not stored in it).
private static final int
SUPPORTED_OUT_CHANNELS
Constructors Summary
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode)
Class constructor.

param
streamType the type of the audio stream. See {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM}, {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC}, {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
param
sampleRateInHz the initial source sample rate expressed in Hz.
param
channelConfig describes the configuration of the audio channels. See {@link AudioFormat#CHANNEL_OUT_MONO} and {@link AudioFormat#CHANNEL_OUT_STEREO}
param
audioFormat the format in which the audio data is represented. See {@link AudioFormat#ENCODING_PCM_16BIT}, {@link AudioFormat#ENCODING_PCM_8BIT}, and {@link AudioFormat#ENCODING_PCM_FLOAT}.
param
bufferSizeInBytes the total size (in bytes) of the internal buffer where audio data is read from for playback. If track's creation mode is {@link #MODE_STREAM}, you can write data into this buffer in chunks less than or equal to this size, and it is typical to use chunks of 1/2 of the total size to permit double-buffering. If the track's creation mode is {@link #MODE_STATIC}, this is the maximum length sample, or audio clip, that can be played by this instance. See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.
param
mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
throws
java.lang.IllegalArgumentException

        this(streamType, sampleRateInHz, channelConfig, audioFormat,
                bufferSizeInBytes, mode, AudioSystem.AUDIO_SESSION_ALLOCATE);
    
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat, int bufferSizeInBytes, int mode, int sessionId)
Class constructor with audio session. Use this constructor when the AudioTrack must be attached to a particular audio session. The primary use of the audio session ID is to associate audio effects with a particular instance of AudioTrack: if an audio session ID is provided when creating an AudioEffect, this effect will be applied only to audio tracks and media players in the same session, and not to the output mix. When an AudioTrack is created without specifying a session, it creates its own session, which can be retrieved by calling the {@link #getAudioSessionId()} method. If a non-zero session ID is provided, this AudioTrack shares effects attached to that session with all other media players or audio tracks in the same session; otherwise a new session is created for this track.

param
streamType the type of the audio stream. See {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM}, {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC}, {@link AudioManager#STREAM_ALARM}, and {@link AudioManager#STREAM_NOTIFICATION}.
param
sampleRateInHz the initial source sample rate expressed in Hz.
param
channelConfig describes the configuration of the audio channels. See {@link AudioFormat#CHANNEL_OUT_MONO} and {@link AudioFormat#CHANNEL_OUT_STEREO}
param
audioFormat the format in which the audio data is represented. See {@link AudioFormat#ENCODING_PCM_16BIT}, {@link AudioFormat#ENCODING_PCM_8BIT}, and {@link AudioFormat#ENCODING_PCM_FLOAT}.
param
bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.
param
mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}
param
sessionId Id of audio session the AudioTrack must be attached to
throws
java.lang.IllegalArgumentException

        // mState already == STATE_UNINITIALIZED
        this((new AudioAttributes.Builder())
                    .setLegacyStreamType(streamType)
                    .build(),
                (new AudioFormat.Builder())
                    .setChannelMask(channelConfig)
                    .setEncoding(audioFormat)
                    .setSampleRate(sampleRateInHz)
                    .build(),
                bufferSizeInBytes,
                mode, sessionId);
    
public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes, int mode, int sessionId)
Class constructor with {@link AudioAttributes} and {@link AudioFormat}.

param
attributes a non-null {@link AudioAttributes} instance.
param
format a non-null {@link AudioFormat} instance describing the format of the data that will be played through this AudioTrack. See {@link AudioFormat.Builder} for configuring the audio format parameters such as encoding, channel mask and sample rate.
param
bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.
param
mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}.
param
sessionId ID of audio session the AudioTrack must be attached to, or {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before construction.
throws
IllegalArgumentException

        // mState already == STATE_UNINITIALIZED

        if (attributes == null) {
            throw new IllegalArgumentException("Illegal null AudioAttributes");
        }
        if (format == null) {
            throw new IllegalArgumentException("Illegal null AudioFormat");
        }

        // remember which looper is associated with the AudioTrack instantiation
        Looper looper;
        if ((looper = Looper.myLooper()) == null) {
            looper = Looper.getMainLooper();
        }

        int rate = 0;
        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE) != 0)
        {
            rate = format.getSampleRate();
        } else {
            rate = AudioSystem.getPrimaryOutputSamplingRate();
            if (rate <= 0) {
                rate = 44100;
            }
        }
        int channelMask = AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;
        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0)
        {
            channelMask = format.getChannelMask();
        }
        int encoding = AudioFormat.ENCODING_DEFAULT;
        if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0) {
            encoding = format.getEncoding();
        }
        audioParamCheck(rate, channelMask, encoding, mode);
        mStreamType = AudioSystem.STREAM_DEFAULT;

        audioBuffSizeCheck(bufferSizeInBytes);

        mInitializationLooper = looper;
        IBinder b = ServiceManager.getService(Context.APP_OPS_SERVICE);
        mAppOps = IAppOpsService.Stub.asInterface(b);

        mAttributes = (new AudioAttributes.Builder(attributes).build());

        if (sessionId < 0) {
            throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
        }

        int[] session = new int[1];
        session[0] = sessionId;
        // native initialization
        int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
                mSampleRate, mChannels, mAudioFormat,
                mNativeBufferSizeInBytes, mDataLoadMode, session);
        if (initResult != SUCCESS) {
            loge("Error code "+initResult+" when initializing AudioTrack.");
            return; // with mState == STATE_UNINITIALIZED
        }

        mSessionId = session[0];

        if (mDataLoadMode == MODE_STATIC) {
            mState = STATE_NO_STATIC_DATA;
        } else {
            mState = STATE_INITIALIZED;
        }
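A sketch of this attributes-based construction; the media usage, the 48 kHz stereo 16-bit format, and the generated session ID are illustrative choices.

        int bufferSize = AudioTrack.getMinBufferSize(48000,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(
                new AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA)
                        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                        .build(),
                new AudioFormat.Builder()
                        .setSampleRate(48000)
                        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                        .build(),
                bufferSize, AudioTrack.MODE_STREAM,
                AudioManager.AUDIO_SESSION_ID_GENERATE);
        if (track.getState() != AudioTrack.STATE_INITIALIZED) {
            track.release();   // construction failed; see getState()
        }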
    
Methods Summary
public int attachAuxEffect(int effectId)
Attaches an auxiliary effect to the audio track. A typical auxiliary effect is a reverberation effect, which can be applied to any sound source that directs a certain amount of its energy to this effect. This amount is defined by {@link #setAuxEffectSendLevel(float)}.

After creating an auxiliary effect (e.g. {@link android.media.audiofx.EnvironmentalReverb}), retrieve its ID with {@link android.media.audiofx.AudioEffect#getId()} and use it when calling this method to attach the audio track to the effect.

To detach the effect from the audio track, call this method with an effect id of 0.

param
effectId system wide unique id of the effect to attach
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_INVALID_OPERATION}, {@link #ERROR_BAD_VALUE}

        if (mState == STATE_UNINITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }
        return native_attachAuxEffect(effectId);
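For example, a hedged sketch attaching an {@link android.media.audiofx.EnvironmentalReverb} created on audio session 0 (the output mix, as required for auxiliary effects), assuming an already initialized AudioTrack named track:

        EnvironmentalReverb reverb = new EnvironmentalReverb(0 /* priority */, 0 /* output mix */);
        reverb.setEnabled(true);
        if (track.attachAuxEffect(reverb.getId()) == AudioTrack.SUCCESS) {
            // The effect stays silent until a non-zero send level is set.
            track.setAuxEffectSendLevel(1.0f);
        }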
    
private void audioBuffSizeCheck(int audioBufferSize)

        // NB: this section is only valid with PCM data.
        //     To update when supporting compressed formats
        int frameSizeInBytes;
        if (AudioFormat.isEncodingLinearPcm(mAudioFormat)) {
            frameSizeInBytes = mChannelCount
                    * (AudioFormat.getBytesPerSample(mAudioFormat));
        } else {
            frameSizeInBytes = 1;
        }
        if ((audioBufferSize % frameSizeInBytes != 0) || (audioBufferSize < 1)) {
            throw new IllegalArgumentException("Invalid audio buffer size.");
        }

        mNativeBufferSizeInBytes = audioBufferSize;
        mNativeBufferSizeInFrames = audioBufferSize / frameSizeInBytes;
    
private void audioParamCheck(int sampleRateInHz, int channelConfig, int audioFormat, int mode)


    // Convenience method for the constructor's parameter checks.
    // This is where constructor IllegalArgumentException-s are thrown
    // postconditions:
    //    mChannelCount is valid
    //    mChannels is valid
    //    mAudioFormat is valid
    //    mSampleRate is valid
    //    mDataLoadMode is valid
       
                                       
        //--------------
        // sample rate, note these values are subject to change
        if (sampleRateInHz < SAMPLE_RATE_HZ_MIN || sampleRateInHz > SAMPLE_RATE_HZ_MAX) {
            throw new IllegalArgumentException(sampleRateInHz
                    + " Hz is not a supported sample rate.");
        }
        mSampleRate = sampleRateInHz;

        //--------------
        // channel config
        mChannelConfiguration = channelConfig;

        switch (channelConfig) {
        case AudioFormat.CHANNEL_OUT_DEFAULT: //AudioFormat.CHANNEL_CONFIGURATION_DEFAULT
        case AudioFormat.CHANNEL_OUT_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            mChannelCount = 1;
            mChannels = AudioFormat.CHANNEL_OUT_MONO;
            break;
        case AudioFormat.CHANNEL_OUT_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
            mChannelCount = 2;
            mChannels = AudioFormat.CHANNEL_OUT_STEREO;
            break;
        default:
            if (!isMultichannelConfigSupported(channelConfig)) {
                // input channel configuration features unsupported channels
                throw new IllegalArgumentException("Unsupported channel configuration.");
            }
            mChannels = channelConfig;
            mChannelCount = Integer.bitCount(channelConfig);
        }

        //--------------
        // audio format
        if (audioFormat == AudioFormat.ENCODING_DEFAULT) {
            audioFormat = AudioFormat.ENCODING_PCM_16BIT;
        }

        if (!AudioFormat.isValidEncoding(audioFormat)) {
            throw new IllegalArgumentException("Unsupported audio encoding.");
        }
        mAudioFormat = audioFormat;

        //--------------
        // audio load mode
        if (((mode != MODE_STREAM) && (mode != MODE_STATIC)) ||
                ((mode != MODE_STREAM) && !AudioFormat.isEncodingLinearPcm(mAudioFormat))) {
            throw new IllegalArgumentException("Invalid mode.");
        }
        mDataLoadMode = mode;
    
private static float clampGainOrLevel(float gainOrLevel)

        if (Float.isNaN(gainOrLevel)) {
            throw new IllegalArgumentException();
        }
        if (gainOrLevel < GAIN_MIN) {
            gainOrLevel = GAIN_MIN;
        } else if (gainOrLevel > GAIN_MAX) {
            gainOrLevel = GAIN_MAX;
        }
        return gainOrLevel;
    
protected void finalize()

        native_finalize();
    
public void flush()
Flushes the audio data currently queued for playback. Any data that has not been played back will be discarded. No-op if not stopped or paused, or if the track's creation mode is not {@link #MODE_STREAM}.

        if (mState == STATE_INITIALIZED) {
            // flush the data in native layer
            native_flush();
        }

    
public int getAudioFormat()
Returns the configured audio data format. See {@link AudioFormat#ENCODING_PCM_16BIT} and {@link AudioFormat#ENCODING_PCM_8BIT}.

        return mAudioFormat;
    
public int getAudioSessionId()
Returns the audio session ID.

return
the ID of the audio session this AudioTrack belongs to.

        return mSessionId;
    
public int getChannelConfiguration()
Returns the configured channel configuration. See {@link AudioFormat#CHANNEL_OUT_MONO} and {@link AudioFormat#CHANNEL_OUT_STEREO}.

        return mChannelConfiguration;
    
public int getChannelCount()
Returns the configured number of channels.

        return mChannelCount;
    
public int getLatency()
Returns this track's estimated latency in milliseconds. This includes the latency due to AudioTrack buffer size, AudioMixer (if any) and audio hardware driver. DO NOT UNHIDE. The existing approach for doing A/V sync has too many problems. We need a better solution.

hide

        return native_get_latency();
    
public static float getMaxVolume()
Returns the maximum gain value, which is greater than or equal to 1.0. Gain values greater than the maximum will be clamped to the maximum.

The word "volume" in the API name is historical; this is actually a gain, expressed as a linear multiplier on sample values, where a maximum value of 1.0 corresponds to a gain of 0 dB (sample values left unmodified).

return
the maximum value, which is greater than or equal to 1.0.

        return GAIN_MAX;
    
public static int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat)
Returns the minimum buffer size required for the successful creation of an AudioTrack object in {@link #MODE_STREAM} mode. Note that this size doesn't guarantee smooth playback under load, and higher values should be chosen according to the expected frequency at which the buffer will be refilled with additional data to play. For example, if you intend to dynamically set the source sample rate of an AudioTrack to a higher value than the initial source sample rate, be sure to configure the buffer size based on the highest planned sample rate.

param
sampleRateInHz the source sample rate expressed in Hz.
param
channelConfig describes the configuration of the audio channels. See {@link AudioFormat#CHANNEL_OUT_MONO} and {@link AudioFormat#CHANNEL_OUT_STEREO}
param
audioFormat the format in which the audio data is represented. See {@link AudioFormat#ENCODING_PCM_16BIT}, {@link AudioFormat#ENCODING_PCM_8BIT}, and {@link AudioFormat#ENCODING_PCM_FLOAT}.
return
{@link #ERROR_BAD_VALUE} if an invalid parameter was passed, or {@link #ERROR} if unable to query for output properties, or the minimum buffer size expressed in bytes.

        int channelCount = 0;
        switch(channelConfig) {
        case AudioFormat.CHANNEL_OUT_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            channelCount = 1;
            break;
        case AudioFormat.CHANNEL_OUT_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
            channelCount = 2;
            break;
        default:
            if ((channelConfig & SUPPORTED_OUT_CHANNELS) != channelConfig) {
                // input channel configuration features unsupported channels
                loge("getMinBufferSize(): Invalid channel configuration.");
                return ERROR_BAD_VALUE;
            } else {
                channelCount = Integer.bitCount(channelConfig);
            }
        }

        if (!AudioFormat.isValidEncoding(audioFormat)) {
            loge("getMinBufferSize(): Invalid audio format.");
            return ERROR_BAD_VALUE;
        }

        // sample rate, note these values are subject to change
        if ( (sampleRateInHz < SAMPLE_RATE_HZ_MIN) || (sampleRateInHz > SAMPLE_RATE_HZ_MAX) ) {
            loge("getMinBufferSize(): " + sampleRateInHz + " Hz is not a supported sample rate.");
            return ERROR_BAD_VALUE;
        }

        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
        if (size <= 0) {
            loge("getMinBufferSize(): error querying hardware");
            return ERROR;
        }
        else {
            return size;
        }
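Since the same int carries both sizes and error codes, defensive callers should check the result before sizing a track; the doubling below is an arbitrary headroom choice.

        int minSize = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (minSize <= 0) {   // covers ERROR and ERROR_BAD_VALUE
            throw new IllegalStateException("cannot determine a minimum buffer size");
        }
        int bufferSize = minSize * 2;   // headroom for bursty producers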
    
public static float getMinVolume()
Returns the minimum gain value, which is the constant 0.0. Gain values less than 0.0 will be clamped to 0.0.

The word "volume" in the API name is historical; this is actually a linear gain.

return
the minimum value, which is the constant 0.0.

        return GAIN_MIN;
    
protected int getNativeFrameCount()
Returns the "native frame count", derived from the bufferSizeInBytes specified at creation time and converted to frame units. If track's creation mode is {@link #MODE_STATIC}, it is equal to the specified bufferSizeInBytes converted to frame units. If track's creation mode is {@link #MODE_STREAM}, it is typically greater than or equal to the specified bufferSizeInBytes converted to frame units; it may be rounded up to a larger value if needed by the target device implementation.

deprecated
Only accessible by subclasses, which are not recommended for AudioTrack. See {@link AudioManager#getProperty(String)} for key {@link AudioManager#PROPERTY_OUTPUT_FRAMES_PER_BUFFER}.

        return native_get_native_frame_count();
    
public static int getNativeOutputSampleRate(int streamType)
Returns the output sample rate in Hz for the specified stream type.

        return native_get_output_sample_rate(streamType);
    
public int getNotificationMarkerPosition()
Returns marker position expressed in frames.

return
marker position in wrapping frame units similar to {@link #getPlaybackHeadPosition}, or zero if marker is disabled.

        return native_get_marker_pos();
    
public int getPlayState()
Returns the playback state of the AudioTrack instance.

see
#PLAYSTATE_STOPPED
see
#PLAYSTATE_PAUSED
see
#PLAYSTATE_PLAYING

        synchronized (mPlayStateLock) {
            return mPlayState;
        }
    
public int getPlaybackHeadPosition()
Returns the playback head position expressed in frames. Though the "int" type is signed 32-bits, the value should be reinterpreted as if it is unsigned 32-bits. That is, the next position after 0x7FFFFFFF is (int) 0x80000000. This is a continuously advancing counter. It will wrap (overflow) periodically, for example approximately once every 27:03:11 hours:minutes:seconds at 44.1 kHz. It is reset to zero by flush(), reload(), and stop().

        return native_get_position();
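To treat the wrapping counter as unsigned, mask it into a long; a sketch assuming an initialized AudioTrack named track:

        // Reinterpret the signed int as unsigned 32 bits, then convert frames
        // to seconds at the configured sample rate. Still wraps every 2^32 frames.
        long frames = track.getPlaybackHeadPosition() & 0xFFFFFFFFL;
        double seconds = frames / (double) track.getSampleRate();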
    
public int getPlaybackRate()
Returns the current playback rate in Hz.

        return native_get_playback_rate();
    
public int getPositionNotificationPeriod()
Returns the notification update period expressed in frames. Zero means that no position update notifications are being delivered.

        return native_get_pos_update_period();
    
public int getSampleRate()
Returns the configured audio data sample rate in Hz.

        return mSampleRate;
    
public int getState()
Returns the state of the AudioTrack instance. This is useful after the AudioTrack instance has been created to check if it was initialized properly. This ensures that the appropriate resources have been acquired.

see
#STATE_INITIALIZED
see
#STATE_NO_STATIC_DATA
see
#STATE_UNINITIALIZED

        return mState;
    
public int getStreamType()
Returns the type of audio stream this AudioTrack is configured for. Compare the result against {@link AudioManager#STREAM_VOICE_CALL}, {@link AudioManager#STREAM_SYSTEM}, {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_MUSIC}, {@link AudioManager#STREAM_ALARM}, {@link AudioManager#STREAM_NOTIFICATION}, or {@link AudioManager#STREAM_DTMF}.

        return mStreamType;
    
public boolean getTimestamp(AudioTimestamp timestamp)
Poll for a timestamp on demand.

If you need to track timestamps during initial warmup or after a routing or mode change, you should request a new timestamp once per second until the reported timestamps show that the audio clock is stable. Thereafter, query for a new timestamp approximately once every 10 seconds to once per minute. Calling this method more often is inefficient. It is also counter-productive to call this method more often than recommended, because the short-term differences between successive timestamp reports are not meaningful. If you need a high-resolution mapping between frame position and presentation time, consider implementing that at application level, based on low-resolution timestamps.

The audio data at the returned position may either already have been presented, or may have not yet been presented but is committed to be presented. It is not possible to request the time corresponding to a particular position, or to request the (fractional) position corresponding to a particular time. If you need such features, consider implementing them at application level.

param
timestamp a reference to a non-null AudioTimestamp instance allocated and owned by caller.
return
true if a timestamp is available, or false if no timestamp is available. If a timestamp is available, the AudioTimestamp instance is filled in with a position in frame units, together with the estimated time when that frame was presented or is committed to be presented. In the case that no timestamp is available, any supplied instance is left unaltered. A timestamp may be temporarily unavailable while the audio clock is stabilizing, or during and immediately after a route change.

        if (timestamp == null) {
            throw new IllegalArgumentException();
        }
        // It's unfortunate, but we have to either create garbage every time or use synchronized
        long[] longArray = new long[2];
        int ret = native_get_timestamp(longArray);
        if (ret != SUCCESS) {
            return false;
        }
        timestamp.framePosition = longArray[0];
        timestamp.nanoTime = longArray[1];
        return true;
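A hedged sketch of the intended low-resolution use: extrapolating the expected presentation time of a nearby frame from the last timestamp. The target frame is a hypothetical frame of interest, and the linear extrapolation assumes a steady audio clock.

        AudioTimestamp ts = new AudioTimestamp();
        if (track.getTimestamp(ts)) {
            long targetFrame = ts.framePosition + 4800;   // hypothetical frame of interest
            long etaNanos = ts.nanoTime
                    + ((targetFrame - ts.framePosition) * 1000000000L) / track.getSampleRate();
        }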
    
private static boolean isMultichannelConfigSupported(int channelConfig)
Convenience method to check that the channel configuration (a.k.a. channel mask) is supported

param
channelConfig the mask to validate
return
false if the AudioTrack can't be used with such a mask

        // check for unsupported channels
        if ((channelConfig & SUPPORTED_OUT_CHANNELS) != channelConfig) {
            loge("Channel configuration features unsupported channels");
            return false;
        }
        final int channelCount = Integer.bitCount(channelConfig);
        if (channelCount > CHANNEL_COUNT_MAX) {
            loge("Channel configuration contains too many channels " +
                    channelCount + ">" + CHANNEL_COUNT_MAX);
            return false;
        }
        // check for unsupported multichannel combinations:
        // - FL/FR must be present
        // - L/R channels must be paired (e.g. no single L channel)
        final int frontPair =
                AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;
        if ((channelConfig & frontPair) != frontPair) {
                loge("Front channels must be present in multichannel configurations");
                return false;
        }
        final int backPair =
                AudioFormat.CHANNEL_OUT_BACK_LEFT | AudioFormat.CHANNEL_OUT_BACK_RIGHT;
        if ((channelConfig & backPair) != 0) {
            if ((channelConfig & backPair) != backPair) {
                loge("Rear channels can't be used independently");
                return false;
            }
        }
        final int sidePair =
                AudioFormat.CHANNEL_OUT_SIDE_LEFT | AudioFormat.CHANNEL_OUT_SIDE_RIGHT;
        if ((channelConfig & sidePair) != 0
                && (channelConfig & sidePair) != sidePair) {
            loge("Side channels can't be used independently");
            return false;
        }
        return true;
    
private boolean isRestricted()

        try {
            final int usage = AudioAttributes.usageForLegacyStreamType(mStreamType);
            final int mode = mAppOps.checkAudioOperation(AppOpsManager.OP_PLAY_AUDIO, usage,
                    Process.myUid(), ActivityThread.currentPackageName());
            return mode != AppOpsManager.MODE_ALLOWED;
        } catch (RemoteException e) {
            return false;
        }
    
private static void logd(java.lang.String msg)

        Log.d(TAG, msg);
    
private static void loge(java.lang.String msg)

        Log.e(TAG, msg);
    
private final native int native_attachAuxEffect(int effectId)

private final native void native_finalize()

private final native void native_flush()

private final native int native_get_latency()

private final native int native_get_marker_pos()

private static final native int native_get_min_buff_size(int sampleRateInHz, int channelConfig, int audioFormat)

private final native int native_get_native_frame_count()

private static final native int native_get_output_sample_rate(int streamType)

private final native int native_get_playback_rate()

private final native int native_get_pos_update_period()

private final native int native_get_position()

private final native int native_get_timestamp(long[] longArray)

private final native void native_pause()

private final native void native_release()

private final native int native_reload_static()

private final native int native_setAuxEffectSendLevel(float level)

private final native void native_setVolume(float leftVolume, float rightVolume)

private final native int native_set_loop(int start, int end, int loopCount)

private final native int native_set_marker_pos(int marker)

private final native int native_set_playback_rate(int sampleRateInHz)

private final native int native_set_pos_update_period(int updatePeriod)

private final native int native_set_position(int position)

private final native int native_setup(java.lang.Object audiotrack_this, java.lang.Object attributes, int sampleRate, int channelMask, int audioFormat, int buffSizeInBytes, int mode, int[] sessionId)

private final native void native_start()

private final native void native_stop()

private final native int native_write_byte(byte[] audioData, int offsetInBytes, int sizeInBytes, int format, boolean isBlocking)

private final native int native_write_float(float[] audioData, int offsetInFloats, int sizeInFloats, int format, boolean isBlocking)

private final native int native_write_native_bytes(java.lang.Object audioData, int positionInBytes, int sizeInBytes, int format, boolean blocking)

private final native int native_write_short(short[] audioData, int offsetInShorts, int sizeInShorts, int format)

public void pause()
Pauses the playback of the audio data. Data that has not been played back will not be discarded. Subsequent calls to {@link #play} will play this data back. See {@link #flush()} to discard this data.

throws
IllegalStateException

        if (mState != STATE_INITIALIZED) {
            throw new IllegalStateException("pause() called on uninitialized AudioTrack.");
        }
        //logd("pause()");

        // pause playback
        synchronized(mPlayStateLock) {
            native_pause();
            mPlayState = PLAYSTATE_PAUSED;
        }
    
public void play()
Starts playing an AudioTrack. If the track's creation mode is {@link #MODE_STATIC}, you must have called one of the write() methods beforehand.

throws
IllegalStateException

        if (mState != STATE_INITIALIZED) {
            throw new IllegalStateException("play() called on uninitialized AudioTrack.");
        }
        if (isRestricted()) {
            setVolume(0);
        }
        synchronized(mPlayStateLock) {
            native_start();
            mPlayState = PLAYSTATE_PLAYING;
        }
    
private static void postEventFromNative(java.lang.Object audiotrack_ref, int what, int arg1, int arg2, java.lang.Object obj)

        //logd("Event posted from the native side: event="+ what + " args="+ arg1+" "+arg2);
        AudioTrack track = (AudioTrack)((WeakReference)audiotrack_ref).get();
        if (track == null) {
            return;
        }

        NativeEventHandlerDelegate delegate = track.mEventHandlerDelegate;
        if (delegate != null) {
            Handler handler = delegate.getHandler();
            if (handler != null) {
                Message m = handler.obtainMessage(what, arg1, arg2, obj);
                handler.sendMessage(m);
            }
        }

    
public void release()
Releases the native AudioTrack resources.

        // even though native_release() stops the native AudioTrack, we need to stop
        // AudioTrack subclasses too.
        try {
            stop();
        } catch(IllegalStateException ise) {
            // don't raise an exception, we're releasing the resources.
        }
        native_release();
        mState = STATE_UNINITIALIZED;
    
public int reloadStaticData()
Notifies the native resource to reuse the audio data already loaded in the native layer, that is, to rewind to the start of the buffer. The track's creation mode must be {@link #MODE_STATIC}.

return
error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}

        if (mDataLoadMode == MODE_STREAM || mState != STATE_INITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }
        return native_reload_static();
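A static-mode round trip, as a sketch; the one-second 8 kHz mono clip is illustrative. Note that it is the first write() that moves the track from STATE_NO_STATIC_DATA to STATE_INITIALIZED.

        short[] clip = new short[8000];   // one second of 16-bit mono at 8 kHz
        AudioTrack sound = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                clip.length * 2, AudioTrack.MODE_STATIC);
        sound.write(clip, 0, clip.length);
        sound.play();
        // ... later, to replay the same clip without writing it again:
        sound.stop();
        if (sound.reloadStaticData() == AudioTrack.SUCCESS) {
            sound.play();
        }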
    
public int setAuxEffectSendLevel(float level)
Sets the send level of the audio track to the attached auxiliary effect (see {@link #attachAuxEffect(int)}). Effect levels are clamped to the closed interval [0.0, max] where max is the value of {@link #getMaxVolume}. A value of 0.0 results in no effect, and a value of 1.0 is full send.

By default the send level is 0.0f, so even if an effect is attached to the player this method must be called for the effect to be applied.

Note that the passed level value is a linear scalar. UI controls should be scaled logarithmically: the gain applied by the audio framework ranges from -72 dB to at least 0 dB, so an appropriate conversion from a linear UI input x in [0, R] to a level is:
    x == 0      ->  level = 0
    0 < x <= R  ->  level = 10^(72*(x-R)/20/R)

param
level linear send level
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_INVALID_OPERATION}, {@link #ERROR}

        if (isRestricted()) {
            return SUCCESS;
        }
        if (mState == STATE_UNINITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }
        level = clampGainOrLevel(level);
        int err = native_setAuxEffectSendLevel(level);
        return err == 0 ? SUCCESS : ERROR;
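The piecewise conversion above, written out as a helper; uiToSendLevel and maxX are hypothetical names for the UI position and its maximum R.

        static float uiToSendLevel(float x, float maxX) {
            // x == 0        -> level = 0
            // 0 < x <= maxX -> level = 10^(72*(x-maxX)/20/maxX)
            if (x <= 0f) {
                return 0f;
            }
            return (float) Math.pow(10.0, 72.0 * (x - maxX) / 20.0 / maxX);
        }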
    
public int setLoopPoints(int startInFrames, int endInFrames, int loopCount)
Sets the loop points and the loop count. The loop can be infinite. Similarly to setPlaybackHeadPosition, the track must be stopped or paused for the loop points to be changed, and must use the {@link #MODE_STATIC} mode.

param
startInFrames loop start marker expressed in frames. Zero corresponds to the start of the buffer. The start marker must be non-negative and less than the buffer size in frames.
param
endInFrames loop end marker expressed in frames. The total buffer size in frames corresponds to the end of the buffer. The end marker must not be greater than the buffer size in frames. For looping, the end marker must be greater than the start marker, but to disable looping it is permitted for start marker, end marker, and loop count to all be 0.
param
loopCount the number of times the loop is looped. A value of -1 means infinite looping, and 0 disables looping.
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}

        if (mDataLoadMode == MODE_STREAM || mState == STATE_UNINITIALIZED ||
                getPlayState() == PLAYSTATE_PLAYING) {
            return ERROR_INVALID_OPERATION;
        }
        if (loopCount == 0) {
            ;   // explicitly allowed as an exception to the loop region range check
        } else if (!(0 <= startInFrames && startInFrames < mNativeBufferSizeInFrames &&
                startInFrames < endInFrames && endInFrames <= mNativeBufferSizeInFrames)) {
            return ERROR_BAD_VALUE;
        }
        return native_set_loop(startInFrames, endInFrames, loopCount);
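A looping sketch for a static track; the frame values are illustrative and must satisfy the range checks above, and the track is assumed to be stopped or paused.

        // Loop frames [1000, 5000) three times, then play on to the end of the clip.
        int status = sound.setLoopPoints(1000, 5000, 3 /* -1 would loop forever */);
        if (status == AudioTrack.SUCCESS) {
            sound.play();
        }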
    
public int setNotificationMarkerPosition(int markerInFrames)
Sets the position of the notification marker. At most one marker can be active.

param
markerInFrames marker position in wrapping frame units similar to {@link #getPlaybackHeadPosition}, or zero to disable the marker. To set a marker at a position which would appear as zero due to wraparound, a workaround is to use a non-zero position near zero, such as -1 or 1.
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}

        if (mState == STATE_UNINITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }
        return native_set_marker_pos(markerInFrames);
    
public int setPlaybackHeadPosition(int positionInFrames)
Sets the playback head position. The track must be stopped or paused for the position to be changed, and must use the {@link #MODE_STATIC} mode.

param
positionInFrames playback head position expressed in frames. Zero corresponds to the start of the buffer. The position must be non-negative and must not be greater than the buffer size in frames.
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}

        if (mDataLoadMode == MODE_STREAM || mState == STATE_UNINITIALIZED ||
                getPlayState() == PLAYSTATE_PLAYING) {
            return ERROR_INVALID_OPERATION;
        }
        if (!(0 <= positionInFrames && positionInFrames <= mNativeBufferSizeInFrames)) {
            return ERROR_BAD_VALUE;
        }
        return native_set_position(positionInFrames);
    
public void setPlaybackPositionUpdateListener(android.media.AudioTrack.OnPlaybackPositionUpdateListener listener)
Sets the listener the AudioTrack notifies when a previously set marker is reached or for each periodic playback head position update. Notifications will be received in the same thread as the one in which the AudioTrack instance was created.

param
listener

        setPlaybackPositionUpdateListener(listener, null);
    
public void setPlaybackPositionUpdateListener(android.media.AudioTrack.OnPlaybackPositionUpdateListener listener, android.os.Handler handler)
Sets the listener the AudioTrack notifies when a previously set marker is reached or for each periodic playback head position update. Use this method to receive AudioTrack events in the Handler associated with another thread than the one in which you created the AudioTrack instance.

param
listener
param
handler the Handler that will receive the event notification messages.

        if (listener != null) {
            mEventHandlerDelegate = new NativeEventHandlerDelegate(this, listener, handler);
        } else {
            mEventHandlerDelegate = null;
        }
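A sketch wiring up both notification kinds; the frame values are illustrative, and new Handler() assumes the calling thread has a Looper.

        track.setPlaybackPositionUpdateListener(
                new AudioTrack.OnPlaybackPositionUpdateListener() {
                    @Override
                    public void onMarkerReached(AudioTrack t) {
                        Log.d("Example", "marker reached at frame " + t.getPlaybackHeadPosition());
                    }
                    @Override
                    public void onPeriodicNotification(AudioTrack t) {
                        Log.d("Example", "head now at frame " + t.getPlaybackHeadPosition());
                    }
                }, new Handler());
        track.setNotificationMarkerPosition(44100);   // one second in, at 44.1 kHz
        track.setPositionNotificationPeriod(4410);    // ten updates per second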
    
public int setPlaybackRate(int sampleRateInHz)
Sets the playback sample rate for this track. This sets the sampling rate at which the audio data will be consumed and played back (as set by the sampleRateInHz parameter in the {@link #AudioTrack(int, int, int, int, int, int)} constructor), not the original sampling rate of the content. For example, setting it to half the sample rate of the content will cause the playback to last twice as long, but will also result in a pitch shift down by one octave. The valid sample rate range is from 1 Hz to twice the value returned by {@link #getNativeOutputSampleRate(int)}.

param
sampleRateInHz the sample rate expressed in Hz
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}

        if (mState != STATE_INITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }
        if (sampleRateInHz <= 0) {
            return ERROR_BAD_VALUE;
        }
        return native_set_playback_rate(sampleRateInHz);
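For example, halving the rate doubles the duration and lowers the pitch by an octave, as described above; a sketch assuming an initialized track playing music content:

        int halfRate = track.getSampleRate() / 2;
        int maxRate = 2 * AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
        if (halfRate >= 1 && halfRate <= maxRate) {
            track.setPlaybackRate(halfRate);
        }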
    
public int setPositionNotificationPeriod(int periodInFrames)
Sets the period for the periodic notification event.

param
periodInFrames update period expressed in frames
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_INVALID_OPERATION}

        if (mState == STATE_UNINITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }
        return native_set_pos_update_period(periodInFrames);
    
protected void setState(int state)
Sets the initialization state of the instance. This method was originally intended to be used in an AudioTrack subclass constructor to set a subclass-specific post-initialization state. However, subclasses of AudioTrack are no longer recommended, so this method is obsolete.

param
state the state of the AudioTrack instance
deprecated
Only accessible by subclasses, which are not recommended for AudioTrack.

        mState = state;
    
public int setStereoVolume(float leftGain, float rightGain)
Sets the specified left and right output gain values on the AudioTrack.

Gain values are clamped to the closed interval [0.0, max] where max is the value of {@link #getMaxVolume}. A value of 0.0 results in zero gain (silence), and a value of 1.0 means unity gain (signal unchanged). The default value is 1.0 meaning unity gain.

The word "volume" in the API name is historical; this is actually a linear gain.

param
leftGain output gain for the left channel.
param
rightGain output gain for the right channel
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_INVALID_OPERATION}
deprecated
Applications should use {@link #setVolume} instead, as it more gracefully scales down to mono, and up to multi-channel content beyond stereo.

        if (isRestricted()) {
            return SUCCESS;
        }
        if (mState == STATE_UNINITIALIZED) {
            return ERROR_INVALID_OPERATION;
        }

        leftGain = clampGainOrLevel(leftGain);
        rightGain = clampGainOrLevel(rightGain);

        native_setVolume(leftGain, rightGain);

        return SUCCESS;
    
public int setVolume(float gain)
Sets the specified output gain value on all channels of this track.

Gain values are clamped to the closed interval [0.0, max] where max is the value of {@link #getMaxVolume}. A value of 0.0 results in zero gain (silence), and a value of 1.0 means unity gain (signal unchanged). The default value is 1.0 meaning unity gain.

This API is preferred over {@link #setStereoVolume}, as it more gracefully scales down to mono, and up to multi-channel content beyond stereo.

The word "volume" in the API name is historical; this is actually a linear gain.

param
gain output gain for all channels.
return
error code or success, see {@link #SUCCESS}, {@link #ERROR_INVALID_OPERATION}

        return setStereoVolume(gain, gain);
    
public void stop()
Stops playing the audio data. When used on an instance created in {@link #MODE_STREAM} mode, audio will stop playing after the last buffer that was written has been played. For an immediate stop, use {@link #pause()}, followed by {@link #flush()} to discard audio data that hasn't been played back yet.

throws
IllegalStateException

        if (mState != STATE_INITIALIZED) {
            throw new IllegalStateException("stop() called on uninitialized AudioTrack.");
        }

        // stop playing
        synchronized(mPlayStateLock) {
            native_stop();
            mPlayState = PLAYSTATE_STOPPED;
        }
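The immediate-stop idiom from the description above, as a two-line sketch:

        track.pause();   // stop rendering right away
        track.flush();   // discard whatever is still queued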
    
public int write(byte[] audioData, int offsetInBytes, int sizeInBytes)
Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). In streaming mode, will block until all data has been written to the audio sink. In static buffer mode, copies the data to the buffer starting at offset 0. Note that the actual playback of this data might occur after this function returns. This function is thread safe with respect to {@link #stop} calls, in which case all of the specified data might not be written to the audio sink.

param
audioData the array that holds the data to play.
param
offsetInBytes the offset expressed in bytes in audioData where the data to play starts.
param
sizeInBytes the number of bytes to read in audioData after the offset.
return
the number of bytes that were written or {@link #ERROR_INVALID_OPERATION} if the object wasn't properly initialized, or {@link #ERROR_BAD_VALUE} if the parameters don't resolve to valid data and indexes, or {@link AudioManager#ERROR_DEAD_OBJECT} if the AudioTrack is not valid anymore and needs to be recreated.


        if (mState == STATE_UNINITIALIZED || mAudioFormat == AudioFormat.ENCODING_PCM_FLOAT) {
            return ERROR_INVALID_OPERATION;
        }

        if ( (audioData == null) || (offsetInBytes < 0 ) || (sizeInBytes < 0)
                || (offsetInBytes + sizeInBytes < 0)    // detect integer overflow
                || (offsetInBytes + sizeInBytes > audioData.length)) {
            return ERROR_BAD_VALUE;
        }

        int ret = native_write_byte(audioData, offsetInBytes, sizeInBytes, mAudioFormat,
                true /*isBlocking*/);

        if ((mDataLoadMode == MODE_STATIC)
                && (mState == STATE_NO_STATIC_DATA)
                && (ret > 0)) {
            // benign race with respect to other APIs that read mState
            mState = STATE_INITIALIZED;
        }

        return ret;
    
public int write(short[] audioData, int offsetInShorts, int sizeInShorts)
Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). In streaming mode, will block until all data has been written to the audio sink. In static buffer mode, copies the data to the buffer starting at offset 0. Note that the actual playback of this data might occur after this function returns. This function is thread safe with respect to {@link #stop} calls, in which case all of the specified data might not be written to the audio sink.

param
audioData the array that holds the data to play.
param
offsetInShorts the offset expressed in shorts in audioData where the data to play starts.
param
sizeInShorts the number of shorts to read in audioData after the offset.
return
the number of shorts that were written or {@link #ERROR_INVALID_OPERATION} if the object wasn't properly initialized, or {@link #ERROR_BAD_VALUE} if the parameters don't resolve to valid data and indexes.


        if (mState == STATE_UNINITIALIZED || mAudioFormat == AudioFormat.ENCODING_PCM_FLOAT) {
            return ERROR_INVALID_OPERATION;
        }

        if ( (audioData == null) || (offsetInShorts < 0 ) || (sizeInShorts < 0)
                || (offsetInShorts + sizeInShorts < 0)  // detect integer overflow
                || (offsetInShorts + sizeInShorts > audioData.length)) {
            return ERROR_BAD_VALUE;
        }

        int ret = native_write_short(audioData, offsetInShorts, sizeInShorts, mAudioFormat);

        if ((mDataLoadMode == MODE_STATIC)
                && (mState == STATE_NO_STATIC_DATA)
                && (ret > 0)) {
            // benign race with respect to other APIs that read mState
            mState = STATE_INITIALIZED;
        }

        return ret;
    
public int write(float[] audioData, int offsetInFloats, int sizeInFloats, int writeMode)
Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). In static buffer mode, copies the data to the buffer starting at offset 0, and the write mode is ignored. In streaming mode, the blocking behavior will depend on the write mode.

Note that the actual playback of this data might occur after this function returns. This function is thread safe with respect to {@link #stop} calls, in which case all of the specified data might not be written to the audio sink.

param
audioData the array that holds the data to play. The implementation does not clip for sample values within the nominal range [-1.0f, 1.0f], provided that all gains in the audio pipeline are less than or equal to unity (1.0f), and in the absence of post-processing effects that could add energy, such as reverb. For the convenience of applications that compute samples using filters with non-unity gain, sample values +3 dB beyond the nominal range are permitted. However such values may eventually be limited or clipped, depending on various gains and later processing in the audio path. Therefore applications are encouraged to provide samples values within the nominal range.
param
offsetInFloats the offset, expressed as a number of floats, in audioData where the data to play starts.
param
sizeInFloats the number of floats to read in audioData after the offset.
param
writeMode one of {@link #WRITE_BLOCKING}, {@link #WRITE_NON_BLOCKING}. It has no effect in static mode.
With {@link #WRITE_BLOCKING}, the write will block until all data has been written to the audio sink.
With {@link #WRITE_NON_BLOCKING}, the write will return immediately after queuing as much audio data for playback as possible without blocking.
return
the number of floats that were written, or {@link #ERROR_INVALID_OPERATION} if the object wasn't properly initialized, or {@link #ERROR_BAD_VALUE} if the parameters don't resolve to valid data and indexes.


        if (mState == STATE_UNINITIALIZED) {
            Log.e(TAG, "AudioTrack.write() called in invalid state STATE_UNINITIALIZED");
            return ERROR_INVALID_OPERATION;
        }

        if (mAudioFormat != AudioFormat.ENCODING_PCM_FLOAT) {
            Log.e(TAG, "AudioTrack.write(float[] ...) requires format ENCODING_PCM_FLOAT");
            return ERROR_INVALID_OPERATION;
        }

        if ((writeMode != WRITE_BLOCKING) && (writeMode != WRITE_NON_BLOCKING)) {
            Log.e(TAG, "AudioTrack.write() called with invalid blocking mode");
            return ERROR_BAD_VALUE;
        }

        if ( (audioData == null) || (offsetInFloats < 0 ) || (sizeInFloats < 0)
                || (offsetInFloats + sizeInFloats < 0)  // detect integer overflow
                || (offsetInFloats + sizeInFloats > audioData.length)) {
            Log.e(TAG, "AudioTrack.write() called with invalid array, offset, or size");
            return ERROR_BAD_VALUE;
        }

        int ret = native_write_float(audioData, offsetInFloats, sizeInFloats, mAudioFormat,
                writeMode == WRITE_BLOCKING);

        if ((mDataLoadMode == MODE_STATIC)
                && (mState == STATE_NO_STATIC_DATA)
                && (ret > 0)) {
            // benign race with respect to other APIs that read mState
            mState = STATE_INITIALIZED;
        }

        return ret;
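A float-PCM sketch; floatTrack is a hypothetical track created with {@link AudioFormat#ENCODING_PCM_FLOAT}, and the 440 Hz tone at 48 kHz with 0.5 amplitude is an arbitrary illustration.

        float[] tone = new float[48000];
        for (int i = 0; i < tone.length; i++) {
            tone[i] = 0.5f * (float) Math.sin(2.0 * Math.PI * 440.0 * i / 48000.0);
        }
        int written = floatTrack.write(tone, 0, tone.length, AudioTrack.WRITE_BLOCKING);
        if (written < 0) {
            Log.e("Example", "write failed: " + written);
        }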
    
public int write(java.nio.ByteBuffer audioData, int sizeInBytes, int writeMode)
Writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode). In static buffer mode, copies the data to the buffer starting at its 0 offset, and the write mode is ignored. In streaming mode, the blocking behavior will depend on the write mode.

param
audioData the buffer that holds the data to play, starting at the position reported by audioData.position().
Note that upon return, the buffer position (audioData.position()) will have been advanced to reflect the amount of data that was successfully written to the AudioTrack.
param
sizeInBytes number of bytes to write.
Note this may differ from audioData.remaining(), but cannot exceed it.
param
writeMode one of {@link #WRITE_BLOCKING}, {@link #WRITE_NON_BLOCKING}. It has no effect in static mode.
With {@link #WRITE_BLOCKING}, the write will block until all data has been written to the audio sink.
With {@link #WRITE_NON_BLOCKING}, the write will return immediately after queuing as much audio data for playback as possible without blocking.
return
0 or a positive number of bytes that were written, or {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}


        if (mState == STATE_UNINITIALIZED) {
            Log.e(TAG, "AudioTrack.write() called in invalid state STATE_UNINITIALIZED");
            return ERROR_INVALID_OPERATION;
        }

        if ((writeMode != WRITE_BLOCKING) && (writeMode != WRITE_NON_BLOCKING)) {
            Log.e(TAG, "AudioTrack.write() called with invalid blocking mode");
            return ERROR_BAD_VALUE;
        }

        if ( (audioData == null) || (sizeInBytes < 0) || (sizeInBytes > audioData.remaining())) {
            Log.e(TAG, "AudioTrack.write() called with invalid size (" + sizeInBytes + ") value");
            return ERROR_BAD_VALUE;
        }

        int ret = 0;
        if (audioData.isDirect()) {
            ret = native_write_native_bytes(audioData,
                    audioData.position(), sizeInBytes, mAudioFormat,
                    writeMode == WRITE_BLOCKING);
        } else {
            ret = native_write_byte(NioUtils.unsafeArray(audioData),
                    NioUtils.unsafeArrayOffset(audioData) + audioData.position(),
                    sizeInBytes, mAudioFormat,
                    writeMode == WRITE_BLOCKING);
        }

        if ((mDataLoadMode == MODE_STATIC)
                && (mState == STATE_NO_STATIC_DATA)
                && (ret > 0)) {
            // benign race with respect to other APIs that read mState
            mState = STATE_INITIALIZED;
        }

        if (ret > 0) {
            audioData.position(audioData.position() + ret);
        }

        return ret;
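A non-blocking sketch: because the buffer position advances by the amount actually consumed, the unwritten remainder can simply be retried later. The direct buffer here is assumed to have been filled elsewhere.

        ByteBuffer pcm = ByteBuffer.allocateDirect(8192);   // filled elsewhere
        int written = track.write(pcm, pcm.remaining(), AudioTrack.WRITE_NON_BLOCKING);
        if (written >= 0 && pcm.hasRemaining()) {
            // Partial write: keep pcm and retry the remainder on the next pass.
        }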