AudioFormat
public class AudioFormat extends Object
AudioFormat is the class that specifies a particular arrangement of data in a sound stream.
By examining the information stored in the audio format, you can discover how to interpret the bits in the
binary sound data.
Every data line has an audio format associated with its data stream. The audio format of a source (playback) data line indicates
what kind of data the data line expects to receive for output. For a target (capture) data line, the audio format specifies the kind
of data that can be read from the line.
Sound files also have audio formats, of course. The {@link AudioFileFormat}
class encapsulates an AudioFormat in addition to other,
file-specific information. Similarly, an {@link AudioInputStream} has an
AudioFormat.
The AudioFormat class accommodates a number of common sound-file encoding techniques, including
pulse-code modulation (PCM), mu-law encoding, and a-law encoding. These encoding techniques are predefined,
but service providers can create new encoding types.
The encoding that a specific format uses is named by its encoding field.
In addition to the encoding, the audio format includes other properties that further specify the exact
arrangement of the data.
These include the number of channels, sample rate, sample size, byte order, frame rate, and frame size.
Sounds may have different numbers of audio channels: one for mono, two for stereo.
The sample rate measures how many "snapshots" (samples) of the sound pressure are taken per second, per channel.
(If the sound is stereo rather than mono, two samples are actually measured at each instant of time: one for the left channel,
and another for the right channel; however, the sample rate still measures the number per channel, so the rate is the same
regardless of the number of channels. This is the standard use of the term.)
The sample size indicates how many bits are used to store each snapshot; 8 and 16 are typical values.
For 16-bit samples (or any other sample size larger than a byte),
byte order is important; the bytes in each sample are arranged in
either the "little-endian" or "big-endian" style.
For encodings like PCM, a frame consists of the set of samples for all channels at a given
point in time, and so the size of a frame (in bytes) is always equal to the size of a sample (in bytes) times
the number of channels. However, with some other sorts of encodings a frame can contain
a bundle of compressed data for a whole series of samples, as well as additional, non-sample
data. For such encodings, the sample rate and sample size refer to the data after it is decoded into PCM,
and so they are completely different from the frame rate and frame size.
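To make these relationships concrete, here is a minimal sketch (an illustrative example, not part of this class) that constructs a 16-bit stereo linear PCM format and prints the quantities described above. For PCM, the frame size is the sample size in bytes times the number of channels, and the frame rate equals the sample rate.

    import javax.sound.sampled.AudioFormat;

    public class FrameLayoutExample {
        public static void main(String[] args) {
            // 44.1 kHz, 16-bit, stereo, signed, little-endian linear PCM
            AudioFormat cdQuality = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    44100.0f,  // sample rate in Hz (per channel)
                    16,        // sample size in bits
                    2,         // channels: 2 for stereo
                    4,         // frame size in bytes: (16 / 8) * 2 channels
                    44100.0f,  // frame rate equals the sample rate for PCM
                    false);    // little-endian byte order

            // For PCM: frameSize == (sampleSizeInBits / 8) * channels
            System.out.println("frame size = " + cdQuality.getFrameSize() + " bytes");
            System.out.println("data rate  = "
                    + (int) (cdQuality.getFrameRate() * cdQuality.getFrameSize()) + " bytes/second");
            System.out.println(cdQuality);
        }
    }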
An AudioFormat object can include a set of
properties. A property is a key-value pair: the key
is of type String, and the associated property
value is an arbitrary object. Properties specify
additional format details, such as the bit rate for
compressed formats. Properties are mainly used as a means
to transport additional information about the audio format
to and from the service providers. Therefore, properties
are ignored in the {@link #matches(AudioFormat)} method.
However, methods that rely on the installed service
providers, like {@link AudioSystem#isConversionSupported(AudioFormat, AudioFormat)
isConversionSupported}, may consider properties, depending
on the respective service provider implementation.
The following table lists some common properties which
service providers should use, if applicable:

Property key | Value type | Description
---|---|---
"bitrate" | {@link java.lang.Integer Integer} | average bit rate in bits per second
"vbr" | {@link java.lang.Boolean Boolean} | true, if the file is encoded in variable bit rate (VBR)
"quality" | {@link java.lang.Integer Integer} | encoding/conversion quality, 1..100

Vendors of service providers (plugins) are encouraged
to seek information about other already established
properties in third party plugins, and follow the same
conventions.
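As a hedged sketch of how such properties might be attached and read back, the example below uses the constructor that accepts a Map; the encoding name "MY_CODEC" and the property values are purely illustrative and not mandated by the API.

    import java.util.HashMap;
    import java.util.Map;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;

    public class FormatPropertiesExample {
        public static void main(String[] args) {
            // Illustrative properties for a hypothetical compressed stream
            Map<String, Object> props = new HashMap<>();
            props.put("bitrate", Integer.valueOf(128000)); // average bits per second
            props.put("vbr", Boolean.FALSE);               // constant bit rate

            AudioFormat format = new AudioFormat(
                    new AudioFormat.Encoding("MY_CODEC"),  // hypothetical encoding name
                    44100.0f, 16, 2,
                    AudioSystem.NOT_SPECIFIED,             // frame size unknown for this sketch
                    AudioSystem.NOT_SPECIFIED,             // frame rate unknown for this sketch
                    false,
                    props);

            // getProperty returns null for keys that were never set
            System.out.println("bitrate = " + format.getProperty("bitrate")); // 128000
            System.out.println("quality = " + format.getProperty("quality")); // null
        }
    }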
Fields Summary

Field | Description
---|---
protected Encoding encoding | The audio encoding technique used by this format.
protected float sampleRate | The number of samples played or recorded per second, for sounds that have this format.
protected int sampleSizeInBits | The number of bits in each sample of a sound that has this format.
protected int channels | The number of audio channels in this format (1 for mono, 2 for stereo).
protected int frameSize | The number of bytes in each frame of a sound that has this format.
protected float frameRate | The number of frames played or recorded per second, for sounds that have this format.
protected boolean bigEndian | Indicates whether the audio data is stored in big-endian or little-endian order.
private HashMap properties | The set of properties.
Constructors Summary

public AudioFormat(Encoding encoding, float sampleRate, int sampleSizeInBits, int channels, int frameSize, float frameRate, boolean bigEndian)
Constructs an AudioFormat with the given parameters.
The encoding specifies the convention used to represent the data.
The other parameters are further explained in the {@link AudioFormat
class description}.
this.encoding = encoding;
this.sampleRate = sampleRate;
this.sampleSizeInBits = sampleSizeInBits;
this.channels = channels;
this.frameSize = frameSize;
this.frameRate = frameRate;
this.bigEndian = bigEndian;
this.properties = null;
public AudioFormat(Encoding encoding, float sampleRate, int sampleSizeInBits, int channels, int frameSize, float frameRate, boolean bigEndian, Map properties)
Constructs an AudioFormat with the given parameters.
The encoding specifies the convention used to represent the data.
The other parameters are further explained in the {@link AudioFormat
class description}.
this(encoding, sampleRate, sampleSizeInBits, channels,
frameSize, frameRate, bigEndian);
this.properties = new HashMap<String, Object>(properties);
public AudioFormat(float sampleRate, int sampleSizeInBits, int channels, boolean signed, boolean bigEndian)
Constructs an AudioFormat with a linear PCM encoding and
the given parameters. The frame size is set to the number of bytes
required to contain one sample from each channel, and the frame rate
is set to the sample rate.
this((signed == true ? Encoding.PCM_SIGNED : Encoding.PCM_UNSIGNED),
sampleRate,
sampleSizeInBits,
channels,
(channels == AudioSystem.NOT_SPECIFIED || sampleSizeInBits == AudioSystem.NOT_SPECIFIED)?
AudioSystem.NOT_SPECIFIED:
((sampleSizeInBits + 7) / 8) * channels,
sampleRate,
bigEndian);
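As an illustrative usage sketch, the two constructions below yield formats that match each other; the convenience constructor fills in the frame size (((16 + 7) / 8) * 2 = 4 bytes) and sets the frame rate to the sample rate, exactly as described above.

    import javax.sound.sampled.AudioFormat;

    public class PcmConstructorExample {
        public static void main(String[] args) {
            // Convenience constructor: 44.1 kHz, 16-bit, stereo, signed, little-endian
            AudioFormat shortForm = new AudioFormat(44100.0f, 16, 2, true, false);

            // Equivalent explicit construction
            AudioFormat longForm = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    44100.0f, 16, 2,
                    4,         // frame size: ((16 + 7) / 8) * 2 channels
                    44100.0f,  // frame rate set to the sample rate
                    false);

            System.out.println(shortForm.matches(longForm)); // true
        }
    }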
Methods Summary

public int getChannels()
Obtains the number of channels.
When this AudioFormat is used for queries (e.g. {@link
AudioSystem#isConversionSupported(AudioFormat, AudioFormat)
AudioSystem.isConversionSupported}) or capabilities (e.g. {@link
DataLine.Info#getFormats() DataLine.Info.getFormats}), a return value of
AudioSystem.NOT_SPECIFIED means that any (positive) number of channels is
acceptable.
return channels;
public AudioFormat.Encoding getEncoding()
Obtains the type of encoding for sounds in this format.
return encoding;
public float getFrameRate()
Obtains the frame rate in frames per second.
When this AudioFormat is used for queries (e.g. {@link
AudioSystem#isConversionSupported(AudioFormat, AudioFormat)
AudioSystem.isConversionSupported}) or capabilities (e.g. {@link
DataLine.Info#getFormats() DataLine.Info.getFormats}), a frame rate of
AudioSystem.NOT_SPECIFIED means that any frame rate is
acceptable. AudioSystem.NOT_SPECIFIED is also returned when
the frame rate is not defined for this audio format.
return frameRate;
public int getFrameSize()
Obtains the frame size in bytes.
When this AudioFormat is used for queries (e.g. {@link
AudioSystem#isConversionSupported(AudioFormat, AudioFormat)
AudioSystem.isConversionSupported}) or capabilities (e.g. {@link
DataLine.Info#getFormats() DataLine.Info.getFormats}), a frame size of
AudioSystem.NOT_SPECIFIED means that any frame size is
acceptable. AudioSystem.NOT_SPECIFIED is also returned when
the frame size is not defined for this audio format.
return frameSize;
public Object getProperty(String key)
Obtains the property value specified by the key.
The concept of properties is further explained in
the {@link AudioFileFormat class description}.
If the specified property is not defined for a
particular format, this method returns
null.
if (properties == null) {
return null;
}
return properties.get(key);
public float getSampleRate()
Obtains the sample rate.
For compressed formats, the return value is the sample rate of the uncompressed
audio data.
When this AudioFormat is used for queries (e.g. {@link
AudioSystem#isConversionSupported(AudioFormat, AudioFormat)
AudioSystem.isConversionSupported}) or capabilities (e.g. {@link
DataLine.Info#getFormats() DataLine.Info.getFormats}), a sample rate of
AudioSystem.NOT_SPECIFIED means that any sample rate is
acceptable. AudioSystem.NOT_SPECIFIED is also returned when
the sample rate is not defined for this audio format.
return sampleRate;
public int getSampleSizeInBits()
Obtains the size of a sample.
For compressed formats, the return value is the sample size of the
uncompressed audio data.
When this AudioFormat is used for queries (e.g. {@link
AudioSystem#isConversionSupported(AudioFormat, AudioFormat)
AudioSystem.isConversionSupported}) or capabilities (e.g. {@link
DataLine.Info#getFormats() DataLine.Info.getFormats}), a sample size of
AudioSystem.NOT_SPECIFIED means that any sample size is
acceptable. AudioSystem.NOT_SPECIFIED is also returned when
the sample size is not defined for this audio format.
return sampleSizeInBits;
public boolean isBigEndian()
Indicates whether the audio data is stored in big-endian or little-endian
byte order. If the sample size is not more than one byte, the return value is
irrelevant.
return bigEndian;
public boolean matches(AudioFormat format)
Indicates whether this format matches the one specified. To match,
two formats must have the same encoding, the same number of channels,
and the same number of bits per sample and bytes per frame.
The two formats must also have the same sample rate,
unless the specified format has the sample rate value AudioSystem.NOT_SPECIFIED,
which matches any sample rate. The frame rates must
similarly be equal, unless the specified format has the frame rate
value AudioSystem.NOT_SPECIFIED. The byte order (big-endian or little-endian)
must match if the sample size is greater than one byte.
if (format.getEncoding().equals(getEncoding()) &&
( (format.getSampleRate() == (float)AudioSystem.NOT_SPECIFIED) || (format.getSampleRate() == getSampleRate()) ) &&
(format.getSampleSizeInBits() == getSampleSizeInBits()) &&
(format.getChannels() == getChannels() &&
(format.getFrameSize() == getFrameSize()) &&
( (format.getFrameRate() == (float)AudioSystem.NOT_SPECIFIED) || (format.getFrameRate() == getFrameRate()) ) &&
( (format.getSampleSizeInBits() <= 8) || (format.isBigEndian() == isBigEndian()) ) ) )
return true;
return false;
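As a small sketch of the wildcard behavior described above (the concrete values are illustrative), a query format whose sample rate and frame rate are AudioSystem.NOT_SPECIFIED matches a fully specified format, but not the other way around:

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;

    public class MatchesExample {
        public static void main(String[] args) {
            // A fully specified 16-bit stereo PCM format
            AudioFormat concrete = new AudioFormat(48000.0f, 16, 2, true, false);

            // A query format that leaves sample rate and frame rate unspecified
            AudioFormat anyRate = new AudioFormat(
                    AudioFormat.Encoding.PCM_SIGNED,
                    AudioSystem.NOT_SPECIFIED,  // any sample rate
                    16, 2, 4,
                    AudioSystem.NOT_SPECIFIED,  // any frame rate
                    false);

            System.out.println(concrete.matches(anyRate)); // true: NOT_SPECIFIED acts as a wildcard
            System.out.println(anyRate.matches(concrete)); // false: 48000 Hz does not equal NOT_SPECIFIED
        }
    }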
public Map<String, Object> properties()
Obtains an unmodifiable map of properties.
The concept of properties is further explained in
the {@link AudioFileFormat class description}.
Map<String,Object> ret;
if (properties == null) {
ret = new HashMap<String,Object>(0);
} else {
ret = (Map<String,Object>) (properties.clone());
}
return (Map<String,Object>) Collections.unmodifiableMap(ret);
public String toString()
Returns a string that describes the format, such as:
"PCM SIGNED 22050 Hz 16 bit mono big-endian". The contents of the string
may vary between implementations of Java Sound.
String sEncoding = "";
if (getEncoding() != null) {
sEncoding = getEncoding().toString() + " ";
}
String sSampleRate;
if (getSampleRate() == (float) AudioSystem.NOT_SPECIFIED) {
sSampleRate = "unknown sample rate, ";
} else {
sSampleRate = "" + getSampleRate() + " Hz, ";
}
String sSampleSizeInBits;
if (getSampleSizeInBits() == (float) AudioSystem.NOT_SPECIFIED) {
sSampleSizeInBits = "unknown bits per sample, ";
} else {
sSampleSizeInBits = "" + getSampleSizeInBits() + " bit, ";
}
String sChannels;
if (getChannels() == 1) {
sChannels = "mono, ";
} else
if (getChannels() == 2) {
sChannels = "stereo, ";
} else {
if (getChannels() == AudioSystem.NOT_SPECIFIED) {
sChannels = " unknown number of channels, ";
} else {
sChannels = ""+getChannels()+" channels, ";
}
}
String sFrameSize;
if (getFrameSize() == (float) AudioSystem.NOT_SPECIFIED) {
sFrameSize = "unknown frame size, ";
} else {
sFrameSize = "" + getFrameSize()+ " bytes/frame, ";
}
String sFrameRate = "";
if (Math.abs(getSampleRate() - getFrameRate()) > 0.00001) {
if (getFrameRate() == (float) AudioSystem.NOT_SPECIFIED) {
sFrameRate = "unknown frame rate, ";
} else {
sFrameRate = getFrameRate() + " frames/second, ";
}
}
String sEndian = "";
if ((getEncoding().equals(Encoding.PCM_SIGNED)
|| getEncoding().equals(Encoding.PCM_UNSIGNED))
&& ((getSampleSizeInBits() > 8)
|| (getSampleSizeInBits() == AudioSystem.NOT_SPECIFIED))) {
if (isBigEndian()) {
sEndian = "big-endian";
} else {
sEndian = "little-endian";
}
}
return sEncoding
+ sSampleRate
+ sSampleSizeInBits
+ sChannels
+ sFrameSize
+ sFrameRate
+ sEndian;
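Finally, a hedged end-to-end sketch of how an AudioFormat is typically used with a data line: the format describes the data a source (playback) line expects to receive. This sketch assumes the default mixer supports 16-bit stereo PCM at 44.1 kHz, and error handling is reduced to a thrown exception.

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.LineUnavailableException;
    import javax.sound.sampled.SourceDataLine;

    public class PlaybackLineExample {
        public static void main(String[] args) throws LineUnavailableException {
            // The format the source (playback) data line should expect to receive
            AudioFormat format = new AudioFormat(44100.0f, 16, 2, true, false);

            DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
            if (!AudioSystem.isLineSupported(info)) {
                System.out.println("No line supports " + format);
                return;
            }

            SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(format);  // the line will interpret written bytes using this format
            System.out.println("Opened line with format: " + line.getFormat());
            line.close();
        }
    }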