So, after getting some suggestions for Java packages that can create video files, I decided to try modifying JpegImagesToMovie, one of the JMF samples. I got everything to compile, and it looks like it should run. The new class takes a Vector of BufferedImages, and instead of reading each frame from a file and converting it to a byte array as the sample does, I convert directly from the BufferedImage to a byte array. The problem is that the processor never finishes configuring (it locks up or something like that). Can anyone see an obvious flaw that would cause this?
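For context, the DataSource gets handed to a Processor roughly the way the original sample does it; a simplified sketch of that pattern follows (the class and variable names here are just for illustration). The wait for the Configured state is the part that never completes:

import java.util.Vector;
import javax.media.*;

public class Driver {
    public static void main(String[] args) throws Exception {
        Vector images = new Vector(); // filled with BufferedImages elsewhere
        ImageDataSource ids = new ImageDataSource(320, 240, 30, images);

        final Object waitSync = new Object();
        Processor p = Manager.createProcessor(ids);
        p.addControllerListener(new ControllerListener() {
            public void controllerUpdate(ControllerEvent evt) {
                if (evt instanceof ConfigureCompleteEvent) {
                    synchronized (waitSync) {
                        waitSync.notifyAll();
                    }
                }
            }
        });

        p.configure();
        synchronized (waitSync) {
            while (p.getState() < Processor.Configured) {
                waitSync.wait(); // hangs here -- the processor stays unconfigured
            }
        }
        System.out.println("Configured.");
    }
}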
Also, I'm still completely open to suggestions for a better/easier/simpler framework.
Edit: Here is the code that actually does the work, cleaned up a bit for readability.
import java.awt.Dimension;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.util.Vector;
import javax.media.*;
import javax.media.format.*;
import javax.media.protocol.*;

class ImageDataSource extends PullBufferDataSource {
    ImageSourceStream streams[];

    ImageDataSource(int width, int height, int frameRate, Vector images) {
        streams = new ImageSourceStream[1];
        streams[0] = new ImageSourceStream(width, height, frameRate, images);
    }

    // Remaining PullBufferDataSource methods (getStreams, connect, start,
    // etc.) omitted for brevity.
}
/**
* The source stream to go along with ImageDataSource.
*/
class ImageSourceStream implements PullBufferStream {
    Vector images;
    int width, height;
    VideoFormat format;
    int nextImage = 0; // index of the next image to be read.
    boolean ended = false;

    public ImageSourceStream(int width, int height, int frameRate, Vector images) {
        this.width = width;
        this.height = height;
        this.images = images;
        // The encoding is null for now -- see the question at the end.
        format = new VideoFormat(null,
                new Dimension(width, height),
                Format.NOT_SPECIFIED,
                Format.byteArray,
                (float) frameRate);
    }
    /**
     * This is called from the Processor to read a frame worth
     * of video data.
     */
    public void read(Buffer buf) throws IOException {
        // Check if we've finished all the frames.
        if (nextImage >= images.size()) {
            // We are done. Set EndOfMedia.
            System.err.println("Done reading all images.");
            buf.setEOM(true);
            buf.setOffset(0);
            buf.setLength(0);
            ended = true;
            return;
        }

        BufferedImage image = (BufferedImage) images.elementAt(nextImage);
        nextImage++;

        // Convert the BufferedImage straight to a byte array (the original
        // sample read the bytes from a JPEG file instead).
        byte data[] = ImageToByteArray.convertToBytes(image);

        // Hand the freshly encoded frame to the buffer.
        buf.setData(data);
        buf.setOffset(0);
        buf.setLength(data.length);
        buf.setFormat(format);
        buf.setFlags(buf.getFlags() | Buffer.FLAG_KEY_FRAME);
    }

    // Remaining PullBufferStream methods (getFormat, willReadBlock,
    // endOfStream, etc.) omitted for brevity.
}
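The ImageToByteArray helper isn't included above; assume a minimal version along these lines, JPEG-encoding the frame with ImageIO (a sketch, not necessarily the exact implementation):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

class ImageToByteArray {
    // Encode the frame as JPEG so the bytes match a JPEG VideoFormat.
    static byte[] convertToBytes(BufferedImage image) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "jpeg", out);
        return out.toByteArray();
    }
}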
One thing I'm not sure about is what to use as the encoding in the VideoFormat.
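For reference, the original JpegImagesToMovie sample passes VideoFormat.JPEG there, so if the converted byte arrays hold JPEG-compressed frames, presumably the constructor would become:

format = new VideoFormat(VideoFormat.JPEG,
        new Dimension(width, height),
        Format.NOT_SPECIFIED,
        Format.byteArray,
        (float) frameRate);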