Fields Summary |
---|
private static final String | TAG |
private long | mNativeContext |
private android.view.Surface | mSurface |
private String | mPath |
private FileDescriptor | mFd |
private EventHandler | mEventHandler |
private OnErrorListener | mOnErrorListener |
private OnInfoListener | mOnInfoListener |
public static final int | MEDIA_RECORDER_ERROR_UNKNOWN: Unspecified media recorder error. |
public static final int | MEDIA_ERROR_SERVER_DIED: Media server died. In this case, the application must release the
MediaRecorder object and instantiate a new one. |
public static final int | MEDIA_RECORDER_INFO_UNKNOWN: Unspecified media recorder informational event. |
public static final int | MEDIA_RECORDER_INFO_MAX_DURATION_REACHED: A maximum duration has been set up and has now been reached. |
public static final int | MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED: A maximum file size has been set up and has now been reached. |
public static final int | MEDIA_RECORDER_TRACK_INFO_LIST_START: Informational events for individual tracks, for testing purposes.
The track informational event usually contains two parts in the ext1
arg of the onInfo() callback: bits 31-28 contain the track id, and
the remaining 28 bits contain the informational event defined here.
For example, ext1 = (1 << 28 | MEDIA_RECORDER_TRACK_INFO_TYPE) if the
track id is 1 for informational event MEDIA_RECORDER_TRACK_INFO_TYPE,
while ext1 = (0 << 28 | MEDIA_RECORDER_TRACK_INFO_TYPE) if the track
id is 0. The application should extract the track id and the type of
informational event from ext1 accordingly.
FIXME:
Please also update the comment for onInfo when these
events are unhidden, so that applications know how to extract the track
id and the informational event type from the onInfo callback.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_COMPLETION_STATUS: Signals the completion of the track for the recording session.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_PROGRESS_IN_TIME: Indicates the recording progress in time (ms) during recording.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_TYPE: Indicates the track type: 0 for audio and 1 for video.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_DURATION_MS: Provides the track duration information.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_MAX_CHUNK_DUR_MS: Provides the max chunk duration in time (ms) for the given track.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_ENCODED_FRAMES: Provides the total number of recorded frames.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INTER_CHUNK_TIME_MS: Provides the max spacing between neighboring chunks for the given track.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_INITIAL_DELAY_MS: Provides the elapsed time measured from the start of the recording
until the first output frame of the given track is received, excluding
any intentional start-time offset of a recording session for the
purpose of eliminating the recording sound in the recorded file.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_START_OFFSET_MS: Provides the start time difference (delay) between this track and
the start of the movie.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_DATA_KBYTES: Provides the total amount of data (in kilobytes) encoded.
{@hide} |
public static final int | MEDIA_RECORDER_TRACK_INFO_LIST_END {@hide} |
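The bit packing documented for MEDIA_RECORDER_TRACK_INFO_LIST_START can be decoded with a shift and a mask. A minimal sketch (the helper class is ours, not part of the framework):

```java
/** Hypothetical helper mirroring the ext1 layout described above:
 *  bits 31-28 carry the track id, bits 27-0 carry the event code. */
public class TrackInfoExt1 {
    /** Extracts the track id from the top four bits of ext1. */
    public static int trackId(int ext1) {
        return (ext1 >>> 28) & 0xF;
    }

    /** Extracts the informational event code from the low 28 bits. */
    public static int eventCode(int ext1) {
        return ext1 & 0x0FFFFFFF;
    }
}
```

For ext1 = (1 << 28 | code), trackId() yields 1 and eventCode() yields code.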
Methods Summary |
---|
private native void | _prepare()
|
private native void | _setOutputFile(java.io.FileDescriptor fd, long offset, long length)
|
protected void | finalize()
native_finalize();
|
public static final int | getAudioSourceMax(): Gets the maximum value for audio sources.
return AudioSource.REMOTE_SUBMIX;
|
public native int | getMaxAmplitude(): Returns the maximum absolute amplitude that was sampled since the last
call to this method. Call this only after setAudioSource().
|
public native android.view.Surface | getSurface(): Gets the surface to record from when using the SURFACE video source.
May only be called after {@link #prepare}. Frames rendered to the Surface before
{@link #start} will be discarded.
|
private final native void | native_finalize()
|
private static final native void | native_init()
|
private native void | native_reset()
|
private final native void | native_setup(java.lang.Object mediarecorder_this, java.lang.String clientName)
|
private static void | postEventFromNative(java.lang.Object mediarecorder_ref, int what, int arg1, int arg2, java.lang.Object obj): Called from native code when an interesting event happens. This method
just uses the EventHandler system to post the event back to the main app thread.
We use a weak reference to the original MediaRecorder object so that the native
code is safe from the object disappearing from underneath it. (This is
the cookie passed to native_setup().)
MediaRecorder mr = (MediaRecorder)((WeakReference)mediarecorder_ref).get();
if (mr == null) {
return;
}
if (mr.mEventHandler != null) {
Message m = mr.mEventHandler.obtainMessage(what, arg1, arg2, obj);
mr.mEventHandler.sendMessage(m);
}
|
public void | prepare(): Prepares the recorder to begin capturing and encoding data. This method
must be called after setting up the desired audio and video sources,
encoders, file format, etc., but before start().
if (mPath != null) {
RandomAccessFile file = new RandomAccessFile(mPath, "rws");
try {
_setOutputFile(file.getFD(), 0, 0);
} finally {
file.close();
}
} else if (mFd != null) {
_setOutputFile(mFd, 0, 0);
} else {
throw new IOException("No valid output file");
}
_prepare();
|
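The call ordering required across these setters can be summarized in one sketch for an audio-only session; the wrapper method and path parameter are illustrative, not part of the class:

```java
import java.io.IOException;
import android.media.MediaRecorder;

// Illustrative audio recording session showing the documented call order.
void startVoiceRecording(String path) throws IOException {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);         // 1. source
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP); // 2. format
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);    // 3. encoder
    recorder.setOutputFile(path);                                   // 4. output file
    recorder.prepare();                                             // 5. prepare
    recorder.start();                                               // 6. start
    // later: recorder.stop(); recorder.release();
}
```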
public native void | release(): Releases resources associated with this MediaRecorder object.
It is good practice to call this method when you're done
using the MediaRecorder. In particular, whenever an Activity
of an application is paused (its onPause() method is called)
or stopped (its onStop() method is called), this method should be
invoked to release the MediaRecorder object, unless the application
has a special need to keep the object around. Besides holding
unnecessary resources (such as memory and codec instances),
failing to call this method promptly when a MediaRecorder is
no longer needed may also lead to continuous battery drain on
mobile devices, and to recording failures in other applications
if the device does not support multiple instances of the same
codec. Even if multiple instances of the same codec are
supported, some performance degradation may be expected when
unnecessary multiple instances are used at the same time.
|
public void | reset(): Restarts the MediaRecorder to its idle state. After calling
this method, you will have to configure it again as if it had just been
constructed.
native_reset();
// make sure none of the listeners get called anymore
mEventHandler.removeCallbacksAndMessages(null);
|
public void | setAudioChannels(int numChannels): Sets the number of audio channels for recording. Call this method before prepare().
prepare() may perform additional checks on the parameter to verify that the
specified number of audio channels is applicable.
if (numChannels <= 0) {
throw new IllegalArgumentException("Number of channels is not positive");
}
setParameter("audio-param-number-of-channels=" + numChannels);
|
public native void | setAudioEncoder(int audio_encoder): Sets the audio encoder to be used for recording. If this method is not
called, the output file will not contain an audio track. Call this after
setOutputFormat() but before prepare().
|
public void | setAudioEncodingBitRate(int bitRate): Sets the audio encoding bit rate for recording. Call this method before prepare().
prepare() may perform additional checks on the parameter to verify that the
specified bit rate is applicable; sometimes the passed bitRate will be clipped
internally to ensure the audio recording can proceed smoothly based on the
capabilities of the platform.
if (bitRate <= 0) {
throw new IllegalArgumentException("Audio encoding bit rate is not positive");
}
setParameter("audio-param-encoding-bitrate=" + bitRate);
|
public void | setAudioSamplingRate(int samplingRate): Sets the audio sampling rate for recording. Call this method before prepare().
prepare() may perform additional checks on the parameter to verify that
the specified audio sampling rate is applicable. The sampling rate really depends
on the format of the audio recording, as well as the capabilities of the platform.
For instance, the sampling rate supported by the AAC audio coding standard ranges
from 8 to 96 kHz, the sampling rate supported by AMR-NB is 8 kHz, and the sampling
rate supported by AMR-WB is 16 kHz. Please consult the relevant audio coding
standard for the supported audio sampling rates.
if (samplingRate <= 0) {
throw new IllegalArgumentException("Audio sampling rate is not positive");
}
setParameter("audio-param-sampling-rate=" + samplingRate);
|
public native void | setAudioSource(int audio_source): Sets the audio source to be used for recording. If this method is not
called, the output file will not contain an audio track. The source needs
to be specified before setting recording-parameters or encoders. Call
this only before setOutputFormat().
|
public void | setAuxiliaryOutputFile(java.io.FileDescriptor fd): Currently not implemented. It does nothing.
Log.w(TAG, "setAuxiliaryOutputFile(FileDescriptor) is no longer supported.");
|
public void | setAuxiliaryOutputFile(java.lang.String path): Currently not implemented. It does nothing.
Log.w(TAG, "setAuxiliaryOutputFile(String) is no longer supported.");
|
public native void | setCamera(android.hardware.Camera c): Sets a {@link android.hardware.Camera} to use for recording.
Use this function to switch quickly between preview and capture mode without a teardown of
the camera object. {@link android.hardware.Camera#unlock()} should be called before
this. Must be called before {@link #prepare}.
|
public void | setCaptureRate(double fps): Sets the video frame capture rate. This can be used to set a different video frame capture
rate than the recorded video's playback rate. This method also sets the recording mode
to time lapse. In time lapse video recording, only video is recorded. Audio related
parameters are ignored when a time lapse recording session starts, if an application
sets them.
// Make sure that time lapse is enabled when this method is called.
setParameter("time-lapse-enable=1");
double timeBetweenFrameCapture = 1 / fps;
long timeBetweenFrameCaptureUs = (long) (1000000 * timeBetweenFrameCapture);
setParameter("time-between-time-lapse-frame-capture=" + timeBetweenFrameCaptureUs);
|
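The interval arithmetic in setCaptureRate() can be exercised in isolation; this standalone class mirrors the conversion above (the class name is ours, not the framework's):

```java
/** Mirrors the computation in setCaptureRate(): the time between
 *  captured frames, in microseconds, for a capture rate given in fps. */
public class TimeLapseMath {
    public static long microsBetweenFrames(double fps) {
        double secondsBetweenFrames = 1 / fps;          // e.g. 0.5 fps -> 2.0 s
        return (long) (1000000 * secondsBetweenFrames); // truncated toward zero
    }
}
```

For a capture rate of 0.5 fps this gives 2,000,000 µs between captured frames.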
public void | setLocation(float latitude, float longitude): Sets and stores the geodata (latitude and longitude) in the output file.
This method should be called before prepare(). The geodata is
stored in the udta box if the output format is OutputFormat.THREE_GPP
or OutputFormat.MPEG_4, and is ignored for other output formats.
The geodata is stored according to the ISO-6709 standard.
int latitudex10000 = (int) (latitude * 10000 + 0.5);
int longitudex10000 = (int) (longitude * 10000 + 0.5);
if (latitudex10000 > 900000 || latitudex10000 < -900000) {
String msg = "Latitude: " + latitude + " out of range.";
throw new IllegalArgumentException(msg);
}
if (longitudex10000 > 1800000 || longitudex10000 < -1800000) {
String msg = "Longitude: " + longitude + " out of range";
throw new IllegalArgumentException(msg);
}
setParameter("param-geotag-latitude=" + latitudex10000);
setParameter("param-geotag-longitude=" + longitudex10000);
|
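The fixed-point rounding above can be reproduced outside the framework. A small sketch (class name hypothetical); note that the +0.5 offset rounds non-negative values to nearest but lets negative values truncate toward zero after the offset, exactly as the method body does:

```java
/** Mirrors setLocation()'s encoding: degrees scaled by 10000,
 *  with a +0.5 offset applied before the int cast. */
public class GeoFixedPoint {
    public static int scale(float degrees) {
        return (int) (degrees * 10000 + 0.5);
    }

    /** Range check matching the latitude bounds in setLocation(). */
    public static boolean latitudeInRange(float latitude) {
        int x = scale(latitude);
        return x >= -900000 && x <= 900000;
    }
}
```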
public native void | setMaxDuration(int max_duration_ms): Sets the maximum duration (in ms) of the recording session.
Call this after setOutputFormat() but before prepare().
After recording reaches the specified duration, a notification
will be sent to the {@link android.media.MediaRecorder.OnInfoListener}
with a "what" code of {@link #MEDIA_RECORDER_INFO_MAX_DURATION_REACHED}
and recording will be stopped. Stopping happens asynchronously; there
is no guarantee that the recorder will have stopped by the time the
listener is notified.
|
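A listener for the max-duration notification described above might look like the following fragment, assuming recorder is an already configured MediaRecorder:

```java
import android.media.MediaRecorder;
import android.util.Log;

// Illustrative: react when the duration limit fires. Stopping is
// asynchronous, so only update bookkeeping or UI state here.
recorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    @Override
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
            Log.i("Recorder", "Max duration reached; recorder is stopping");
        }
    }
});
```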
public native void | setMaxFileSize(long max_filesize_bytes): Sets the maximum file size (in bytes) of the recording session.
Call this after setOutputFormat() but before prepare().
After recording reaches the specified file size, a notification
will be sent to the {@link android.media.MediaRecorder.OnInfoListener}
with a "what" code of {@link #MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED}
and recording will be stopped. Stopping happens asynchronously; there
is no guarantee that the recorder will have stopped by the time the
listener is notified.
|
public void | setOnErrorListener(android.media.MediaRecorder$OnErrorListener l): Registers a callback to be invoked when an error occurs while
recording.
mOnErrorListener = l;
|
public void | setOnInfoListener(android.media.MediaRecorder$OnInfoListener listener): Registers a callback to be invoked when an informational event occurs while
recording.
mOnInfoListener = listener;
|
public void | setOrientationHint(int degrees): Sets the orientation hint for output video playback.
This method should be called before prepare(). It will not
cause the source video frames to rotate during video recording, but
adds a composition matrix containing the rotation angle to the output
video if the output format is OutputFormat.THREE_GPP or
OutputFormat.MPEG_4, so that a video player can choose the proper
orientation for playback. Note that some video players may choose
to ignore the composition matrix in a video during playback.
if (degrees != 0 &&
degrees != 90 &&
degrees != 180 &&
degrees != 270) {
throw new IllegalArgumentException("Unsupported angle: " + degrees);
}
setParameter("video-param-rotation-angle-degrees=" + degrees);
|
public void | setOutputFile(java.io.FileDescriptor fd): Passes in the file descriptor of the file to be written. Call this after
setOutputFormat() but before prepare().
mPath = null;
mFd = fd;
|
public void | setOutputFile(java.lang.String path): Sets the path of the output file to be produced. Call this after
setOutputFormat() but before prepare().
mFd = null;
mPath = path;
|
public native void | setOutputFormat(int output_format): Sets the format of the output file produced during recording. Call this
after setAudioSource()/setVideoSource() but before prepare().
It is recommended to always use 3GP format when using the H.263
video encoder and AMR audio encoder. Using an MPEG-4 container format
may confuse some desktop players.
|
private native void | setParameter(java.lang.String nameValuePair)
|
public void | setPreviewDisplay(android.view.Surface sv): Sets a Surface to show a preview of recorded media (video). Call this
before prepare() to make sure that the desired preview display is
set. If {@link #setCamera(Camera)} is used and the surface has
already been set on the camera, the application does not need to call this. If
this is called with a non-null surface, the preview surface of the camera
will be replaced by the new surface. If this method is called with a null
surface or not called at all, the media recorder will not change the preview
surface of the camera.
mSurface = sv;
|
public void | setProfile(CamcorderProfile profile): Uses the settings from a CamcorderProfile object for recording. This method should
be called after the video AND audio sources are set, and before setOutputFile().
If a time lapse CamcorderProfile is used, audio related source or recording
parameters are ignored.
setOutputFormat(profile.fileFormat);
setVideoFrameRate(profile.videoFrameRate);
setVideoSize(profile.videoFrameWidth, profile.videoFrameHeight);
setVideoEncodingBitRate(profile.videoBitRate);
setVideoEncoder(profile.videoCodec);
if (profile.quality >= CamcorderProfile.QUALITY_TIME_LAPSE_LOW &&
profile.quality <= CamcorderProfile.QUALITY_TIME_LAPSE_QVGA) {
// Nothing needs to be done. Call to setCaptureRate() enables
// time lapse video recording.
} else {
setAudioEncodingBitRate(profile.audioBitRate);
setAudioChannels(profile.audioChannels);
setAudioSamplingRate(profile.audioSampleRate);
setAudioEncoder(profile.audioCodec);
}
|
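A typical setProfile() call sequence, following the ordering constraint above (sources first, output file after); the quality level and path variable are illustrative, and recorder is assumed to be a fresh MediaRecorder:

```java
import android.media.CamcorderProfile;
import android.media.MediaRecorder;

// Sources must be set before setProfile(); the output file comes after.
recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
recorder.setOutputFile(outputPath); // then prepare() and start()
```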
public native void | setVideoEncoder(int video_encoder): Sets the video encoder to be used for recording. If this method is not
called, the output file will not contain a video track. Call this after
setOutputFormat() and before prepare().
|
public void | setVideoEncodingBitRate(int bitRate): Sets the video encoding bit rate for recording. Call this method before prepare().
prepare() may perform additional checks on the parameter to verify that the
specified bit rate is applicable; sometimes the passed bitRate will be
clipped internally to ensure the video recording can proceed smoothly based on
the capabilities of the platform.
if (bitRate <= 0) {
throw new IllegalArgumentException("Video encoding bit rate is not positive");
}
setParameter("video-param-encoding-bitrate=" + bitRate);
|
public native void | setVideoFrameRate(int rate): Sets the frame rate of the video to be captured. Must be called
after setVideoSource(). Call this after setOutputFormat() but before
prepare().
|
public native void | setVideoSize(int width, int height): Sets the width and height of the video to be captured. Must be called
after setVideoSource(). Call this after setOutputFormat() but before
prepare().
|
public native void | setVideoSource(int video_source): Sets the video source to be used for recording. If this method is not
called, the output file will not contain a video track. The source needs
to be specified before setting recording-parameters or encoders. Call
this only before setOutputFormat().
|
public native void | start(): Begins capturing and encoding data to the file specified with
setOutputFile(). Call this after prepare().
Since API level 13, if applications set a camera via
{@link #setCamera(Camera)}, the apps can use the camera after this method
call. The apps do not need to lock the camera again. However, if this
method fails, the apps should still lock the camera back. The apps should
not start another recording session during recording.
|
public native void | stop(): Stops recording. Call this after start(). Once recording is stopped,
you will have to configure it again as if it had just been constructed.
Note that a RuntimeException is intentionally thrown to the
application, if no valid audio/video data has been received when stop()
is called. This happens if stop() is called immediately after
start(). The failure lets the application take action accordingly to
clean up the output file (delete the output file, for instance), since
the output file is not properly constructed when this happens.
|
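Because stop() throws a RuntimeException when no valid data was received, callers typically guard it and delete the partial file; a sketch, with recorder and outputPath assumed from the surrounding code:

```java
import java.io.File;

try {
    recorder.stop();
} catch (RuntimeException e) {
    // stop() was called before any valid audio/video arrived; the
    // output file is not properly constructed, so remove it.
    new File(outputPath).delete();
} finally {
    recorder.release(); // always free the recorder's resources
}
```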