ChunkFactory

public class ChunkFactory extends Object

Factory class to create the appropriate chunks, which needs the version of the file to process the chunk header and trailer areas. Makes use of chunks_parse_cmds.tbl from vsdump to be able to process the chunk value area.
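For orientation, here is a minimal sketch of how the factory is typically driven over a decompressed chunk stream. The `data` and `version` variables are placeholders, handling of the IOException from the constructor is omitted, and `Chunk.getOnDiskSize()` is assumed to report the full number of bytes the chunk occupies on disk (header, contents, trailer and separator):

    // Minimal sketch, not POI's actual stream-walking code
    ChunkFactory factory = new ChunkFactory(version);
    int offset = 0;
    while (offset < data.length) {
        Chunk chunk = factory.createChunk(data, offset);
        // Assumed accessor: full on-disk size of header + contents + trailer/separator
        offset += chunk.getOnDiskSize();
    }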
Fields Summary
---

| Type | Name | Description |
| --- | --- | --- |
| private int | version | The version of the currently open document |
| private Hashtable | chunkCommandDefinitions | Key is a Chunk's type, value is an array of its CommandDefinitions |
| private static String | chunkTableName | The name of the chunk table |
| private POILogger | logger | For logging problems we spot with the file |
Constructors Summary
---

public ChunkFactory(int version) throws IOException {
    this.version = version;
    processChunkParseCommands();
}
Methods Summary
---

/**
 * Creates the appropriate chunk at the given location.
 */
public org.apache.poi.hdgf.chunks.Chunk createChunk(byte[] data, int offset) {
// Create the header
ChunkHeader header =
ChunkHeader.createChunkHeader(version, data, offset);
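// endOfDataPos marks the first byte after the chunk contents (header size + declared length)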
int endOfDataPos = offset + header.getLength() + header.getSizeInBytes();
// Check we have enough data, and tweak the header size
// as required
if(endOfDataPos > data.length) {
logger.log(POILogger.WARN,
"Header called for " + header.getLength() +" bytes, but that would take us passed the end of the data!");
endOfDataPos = data.length;
header.length = data.length - offset - header.getSizeInBytes();
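// A trailer occupies 8 bytes and a separator 4, so remove those from the truncated length too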
if(header.hasTrailer()) {
header.length -= 8;
endOfDataPos -= 8;
}
if(header.hasSeparator()) {
header.length -= 4;
endOfDataPos -= 4;
}
}
// Create the trailer and separator, if required
ChunkTrailer trailer = null;
ChunkSeparator separator = null;
if(header.hasTrailer()) {
if(endOfDataPos <= data.length-8) {
trailer = new ChunkTrailer(
data, endOfDataPos);
endOfDataPos += 8;
} else {
System.err.println("Header claims a length to " + endOfDataPos + " there's then no space for the trailer in the data (" + data.length + ")");
}
}
if(header.hasSeparator()) {
if(endOfDataPos <= data.length-4) {
separator = new ChunkSeparator(
data, endOfDataPos);
} else {
System.err.println("Header claims a length to " + endOfDataPos + " there's then no space for the separator in the data (" + data.length + ")");
}
}
// Now, create the chunk
byte[] contents = new byte[header.getLength()];
System.arraycopy(data, offset+header.getSizeInBytes(), contents, 0, contents.length);
Chunk chunk = new Chunk(header, trailer, separator, contents);
// Feed in the stuff from chunks_parse_cmds.tbl
CommandDefinition[] defs = (CommandDefinition[])
chunkCommandDefinitions.get(new Integer(header.getType()));
if(defs == null) defs = new CommandDefinition[0];
chunk.commandDefinitions = defs;
// Now get the chunk to process its commands
chunk.processCommands();
// All done
return chunk;
}

public int getVersion() {
    return version;
}

/**
 * Open chunks_parse_cmds.tbl and process it, to get the definitions
 * of all the different possible chunk commands.
 */
private void processChunkParseCommands() throws IOException {
String line;
InputStream cpd = ChunkFactory.class.getResourceAsStream(chunkTableName);
BufferedReader inp = new BufferedReader(new InputStreamReader(cpd));
while( (line = inp.readLine()) != null ) {
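// Skip comment lines, indented lines and blank lines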
if(line.startsWith("#")) continue;
if(line.startsWith(" ")) continue;
if(line.startsWith("\t")) continue;
if(line.length() == 0) continue;
// Start xxx
if(!line.startsWith("start")) {
throw new IllegalStateException("Expecting start xxx, found " + line);
}
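// "start " is 6 characters long; the rest of the line is the chunk type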
int chunkType = Integer.parseInt(line.substring(6));
ArrayList defsL = new ArrayList();
// Data entries
while( ! (line = inp.readLine()).startsWith("end") ) {
StringTokenizer st = new StringTokenizer(line, " ");
int defType = Integer.parseInt(st.nextToken());
int offset = Integer.parseInt(st.nextToken());
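// The rest of the line is the command name; drop the leading space left by the tokenizer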
String name = st.nextToken("\uffff").substring(1);
CommandDefinition def = new CommandDefinition(defType,offset,name);
defsL.add(def);
}
CommandDefinition[] defs = (CommandDefinition[])
defsL.toArray(new CommandDefinition[defsL.size()]);
// Add to the hashtable
chunkCommandDefinitions.put(new Integer(chunkType), defs);
}
inp.close();
cpd.close();
}
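For reference, the parsing loop above implies the following shape for chunks_parse_cmds.tbl: comment lines starting with #, a start line carrying the chunk type, one definition per line (definition type, offset, name), and a closing end line. The values below are invented purely for illustration; the real table ships alongside POI:

    # commands for chunk type 70 (illustrative values only)
    start 70
    10 0 SomeCommand
    4 4 AnotherCommand
    end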