
Edit: If anyone has any other recommendations for increasing screen-capture performance, please feel free to share, as it might fully address my problem!

Hello Fellow Developers,

I'm working on some basic screen capture software for myself. Right now I have some proof-of-concept code that uses java.awt.Robot to capture the screen as a BufferedImage. I run this capture for a specified amount of time and afterwards dump all of the images to disk. From my tests I'm getting about 17 frames per second.

Trial #1

Length: 15 seconds Images Captured: 255

Trial #2

Length: 15 seconds Images Captured: 229

Obviously this isn't nearly good enough for a real screen capture application, especially since these captures were of me just selecting some text in my IDE and nothing graphically intensive.

I have two classes right now: a Main class and a Monitor class. The Monitor class contains the method for capturing the screen. My Main class has a time-based loop that calls the Monitor class and stores the returned BufferedImage in an ArrayList of BufferedImages. If I modify my Main class to spawn several threads that each execute that loop, and also record the system time at which each image was captured, could I increase performance? My idea is to use a shared data structure that automatically sorts the frames by capture time as I insert them, instead of a single loop that appends successive images to an ArrayList.
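For what it's worth, the shared time-sorted structure could be sketched roughly like this. (ConcurrentCapture, record, and the injected frame supplier are illustrative names I made up; the supplier stands in for monitor.captureScreen() so the structure itself can be shown without a Robot.)

```java
import java.awt.image.BufferedImage;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class ConcurrentCapture {

    // Shared structure: frames stay sorted by capture timestamp as threads insert them.
    static final ConcurrentSkipListMap<Long, BufferedImage> frames = new ConcurrentSkipListMap<>();

    public static void record(int threads, long durationMillis, Supplier<BufferedImage> capture)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.currentTimeMillis();
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                while (System.currentTimeMillis() - start <= durationMillis) {
                    BufferedImage img = capture.get();  // stand-in for monitor.captureScreen()
                    frames.put(System.nanoTime(), img); // nanoTime keys rarely collide; ties overwrite
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(durationMillis + 1000, TimeUnit.MILLISECONDS);
    }
}
```

Iterating frames.entrySet() afterwards yields the images already in capture order, so no separate sort pass is needed.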

Code:

Monitor

import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class Monitor {

    /**
     * Captures the full screen.
     * @return the captured frame, or null if the capture failed
     */
    public BufferedImage captureScreen() {
        Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage capture = null;

        try {
            capture = new Robot().createScreenCapture(screenRect);
        } catch (AWTException e) {
            e.printStackTrace();
        }

        return capture;
    }
}

Main

import java.awt.image.BufferedImage;
import java.util.ArrayList;

public class Main {

    public static void main(String[] args) throws InterruptedException {
        String outputLocation = "C:\\Users\\ewillis\\Pictures\\screenstreamer\\";
        String namingScheme = "image";
        String mediaFormat = "jpeg";
        DiscreteOutput output = DiscreteOutputFactory.createOutputObject(outputLocation, namingScheme, mediaFormat);

        ArrayList<BufferedImage> images = new ArrayList<BufferedImage>();
        Monitor m1 = new Monitor();

        long startTimeMillis = System.currentTimeMillis();
        long recordTimeMillis = 15000;

        while ((System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis) {
            images.add(m1.captureScreen());
        }

        output.saveImages(images);
    }
}
EthanLWillis
  • You keep creating new rectangles and new robots. Try making those final for the instance and just use the field instead of making new ones. That should help a little. – Obicere Nov 07 '13 at 19:12
  • @Obicere I did that and I'm still getting similar performance. But do you know or think the multithreading idea is worth pursuing? – EthanLWillis Nov 07 '13 at 19:49
  • 1
    I'm not well versed in what you are doing but it would seem that if you created one robot/core that you'd be able to loop through each robot and take a screenshot on a different core while the primary core is busy. – Colton Nov 12 '13 at 16:16
  • 1
    If you do the multi-threading perhaps you can use a Queue and synchronize on it when you go to add the images to it. You may be able to avoid sorting the images as you enter them this way. Alternatively you can sort after all the images are done. If your concern is FPS you don't want to perform any sort of sorting calculation while you add images. – hankd Nov 12 '13 at 16:31
  • I guess I should update this question. I'm doing what Hank suggests and saving sorting until after the capture is done. So far the ideas I've read from other sources that I haven't implemented yet are using multiple Robots in different threads to capture different portions of the screen. I'm working on converting bufferedimages to byte[] and storing all images after the first image as delta compressed/coded images. Then I'm also multithreading Output to write those byte[] to disk. Other than this, I'm not sure what else I can do. – EthanLWillis Nov 12 '13 at 16:43
  • 2
    It's not with awt.Robot, but is faster http://stackoverflow.com/a/4843247/1018903. – André Nov 12 '13 at 19:33
  • I will write later a small API with JNI to capture the screen using C++ for Windows, if you need other OS tell me. – André Nov 19 '13 at 15:47

2 Answers


Re-using the screen rectangle and Robot instances will save you a little overhead. The real bottleneck is storing all your BufferedImages in an ArrayList.

I would first benchmark how fast the robot.createScreenCapture(screenRect) call is without any IO (no saving or storing the buffered image). That will give you an ideal throughput for the Robot class.

long frameCount = 0;
BufferedImage image;
while ((System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis) {
    image = m1.captureScreen();
    if (image != null) {
        frameCount++;
    }
    Thread.yield();
}

If it turns out that captureScreen can reach the FPS you want, there is no need to multi-thread Robot instances.

Rather than having an ArrayList of buffered images, I'd have an ArrayList of Futures returned by AsynchronousFileChannel.write.

  • Capture loop
    • Get BufferedImage
    • Convert BufferedImage to byte array containing JPEG data
    • Create an async channel to the output file
    • Start a write and add the immediate return value (the future) to your ArrayList
  • Wait loop
    • Go through your ArrayList of Futures and make sure they all finished
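The capture-loop steps above could be sketched roughly like this (AsyncFrameWriter and writeFrameAsync are illustrative names I chose, not part of the original code):

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncFrameWriter {

    /** Encodes the frame to JPEG in memory, then kicks off an async write and returns immediately. */
    public static Future<Integer> writeFrameAsync(BufferedImage frame, Path file) throws Exception {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(frame, "jpeg", baos);
        // Note: a real implementation should close the channel once the future completes;
        // that bookkeeping is omitted here for brevity.
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                file, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        return channel.write(ByteBuffer.wrap(baos.toByteArray()), 0);
    }
}
```

The wait loop then just collects the returned Futures in an ArrayList and calls get() on each one to confirm the writes finished.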
Louis Ricci
  • Good answer. One modification that I feel needs to be made to the capture loop is to convert the byte array into a delta-encoded byte array of the first image captured. Then you should have a sparse array of data that can be compressed and stored much more easily. – EthanLWillis Nov 22 '13 at 16:54
  • @EthanWillis do you know of a delta-encoded byte array implementation for Java that you would recommend? – eSniff Jan 04 '14 at 22:20
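A minimal delta codec along the lines discussed in these comments can be written by hand; this sketch (DeltaCodec is an illustrative name, not an existing library) XORs each frame against the key frame, so unchanged regions become runs of zeros that a generic compressor such as java.util.zip.Deflater shrinks well:

```java
import java.util.Arrays;

public class DeltaCodec {

    /** Byte-wise XOR against the key frame; identical regions become zeros. */
    public static byte[] encode(byte[] keyFrame, byte[] frame) {
        byte[] delta = new byte[frame.length];
        for (int i = 0; i < frame.length; i++) {
            delta[i] = (byte) (frame[i] ^ keyFrame[i]);
        }
        return delta;
    }

    /** XOR is its own inverse, so decoding reuses encode. */
    public static byte[] decode(byte[] keyFrame, byte[] delta) {
        return encode(keyFrame, delta);
    }
}
```

This assumes the key frame and every subsequent frame have the same length (i.e. the screen resolution does not change mid-recording).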

I suspect that intensive memory usage is an issue here. In your tests you are capturing about 250 screenshots. Depending on the screen resolution, at 3 bytes per pixel this is:

1280x800 : 250 * 1280*800  * 3/1024/1024 ==  732 MB data
1920x1080: 250 * 1920*1080 * 3/1024/1024 == 1483 MB data

Try capturing without keeping all those images in memory.

As @Obicere said, it is a good idea to keep the Robot instance alive.
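A minimal sketch of that idea: encode and write each frame as soon as it is captured, so only one BufferedImage is ever live at a time. (StreamingRecorder and the injected capture supplier are illustrative names; the supplier stands in for robot.createScreenCapture(rect).)

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.function.Supplier;

public class StreamingRecorder {

    /** Encodes and writes each frame immediately; returns the number of frames captured. */
    public static int record(Supplier<BufferedImage> capture, File dir, long durationMillis)
            throws Exception {
        long start = System.currentTimeMillis();
        int frame = 0;
        while (System.currentTimeMillis() - start <= durationMillis) {
            BufferedImage img = capture.get(); // stand-in for robot.createScreenCapture(rect)
            // Write immediately; the image becomes garbage right after this call.
            ImageIO.write(img, "jpeg", new File(dir, "image" + frame++ + ".jpeg"));
        }
        return frame;
    }
}
```

The trade-off is that the synchronous ImageIO.write call now sits inside the capture loop and will cost some FPS; combining this with the asynchronous writes from the other answer recovers most of that.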

Martijn Courteaux
  • Yea. From all of the reading I've been doing it seems what needs to be done is to capture the initial image. Then get the byte[] data for that image and all subsequent images. For all subsequent images convert their byte[] data into a delta-encoded version of the first byte[] substantially lowering space requirements. Then if you need to write to disk it will also be faster. – EthanLWillis Nov 22 '13 at 16:53