
I want to make a twin-screen view using the built-in camera on iOS.

I tried the following code, but it shows just one view.

That's the natural result, I know.

Here's the code I used:

- (void)prepareCameraView:(UIView *)window
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    CALayer *viewLayer = window.layer;
    NSLog(@"viewLayer = %@", viewLayer);

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] 
                                                            initWithSession:session];
    captureVideoPreviewLayer.frame = window.bounds;
    [window.layer addSublayer:captureVideoPreviewLayer];    
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (!input)
    {
        NSLog(@"ERROR : trying to open camera : %@", error);
        return; // don't add a nil input to the session
    }

    [session addInput:input];

    [session startRunning];
}

How can I get double screen on iOS?

J.K Jeon

2 Answers

Use this code:

AVCaptureSession *session = [AVCaptureSession new];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ([session canAddInput:deviceInput])
{
    [session addInput:deviceInput];
}

// One preview layer, replicated into two stacked instances below.
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height)];

NSUInteger replicatorInstances = 2;
CGFloat replicatorViewHeight = (self.view.bounds.size.height - 64) / replicatorInstances;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0.0, 0.0, self.view.bounds.size.width, replicatorViewHeight);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, replicatorViewHeight, 0.0);

[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
[session startRunning];
Ramani Hitesh

Try this:

- (void)prepareCameraView:(UIView *)window
{
    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    {
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        session.sessionPreset = AVCaptureSessionPresetMedium;

        CALayer *viewLayer = window.layer;
        NSLog(@"viewLayer = %@", viewLayer);

        AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        captureVideoPreviewLayer.frame = CGRectMake(0.0f, 0.0f, window.bounds.size.width/2.0f, window.bounds.size.height);
        [window.layer addSublayer:captureVideoPreviewLayer];

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:[captureDevices objectAtIndex:0] error:&error];
        if (!input) 
        {
            NSLog(@"ERROR : trying to open camera : %@", error);
        }

        [session addInput:input];

        [session startRunning];
    }

    {
        AVCaptureSession *session = [[AVCaptureSession alloc] init];
        session.sessionPreset = AVCaptureSessionPresetMedium;

        CALayer *viewLayer = window.layer;
        NSLog(@"viewLayer = %@", viewLayer);

        AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
        captureVideoPreviewLayer.frame = CGRectMake(window.bounds.size.width/2.0f, 0.0f, window.bounds.size.width/2.0f, window.bounds.size.height);
        [window.layer addSublayer:captureVideoPreviewLayer];

        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:[captureDevices objectAtIndex:1] error:&error];
        if (!input) 
        {
            NSLog(@"ERROR : trying to open camera : %@", error);
        }

        [session addInput:input];

        [session startRunning];
    }

}

Note that it makes absolutely no check that there are actually two cameras, and it splits the view vertically, so this is probably best viewed in landscape. You'll want to add some checks to that code and work out exactly how you want to lay out each camera's layer before using it.
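As a rough illustration of the missing guard, the device check could be sketched like this (untested, and to be placed at the top of `prepareCameraView:` before either block runs):

    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    if ([captureDevices count] < 2)
    {
        // A split preview over two inputs needs two physical cameras.
        NSLog(@"Need two cameras for a split preview, found %lu",
              (unsigned long)[captureDevices count]);
        return;
    }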

mattjgalloway
  • If you run this code, your first session is going to be interrupted immediately by your second session (assuming your second session even allows you to start adding inputs yet). Also, this code won't compile. – Jason Coco Jan 04 '12 at 09:06
  • It compiles for me. And what do you mean by the first session being interrupted immediately? Are you sure? To be honest I've not actually tried this out properly but I don't see why it won't work. – mattjgalloway Jan 04 '12 at 09:08
  • It compiles for you because you edited so that it would :p and yes, I'm sure. You can't run two video capture sessions at the same time. Either the first would get an interruption notification or the second would fail to start depending on what mediaserverd decides to do with you. – Jason Coco Jan 04 '12 at 09:11
  • mattjgalloway, your answer is very welcome, but it doesn't work. Jason Coco is right. But I really appreciate your answer^^ – J.K Jeon Jan 04 '12 at 09:12
  • @JasonCoco, what about the FaceTime app? How is it possible for it to show two video capture sessions? – J.K Jeon Jan 04 '12 at 09:14
  • 2
    FaceTime app doesn't show two video capture sessions. It will stop the session, change inputs, start it again. Apologies for thinking 2 sessions would work, I could have sworn I'd done that before but obviously not! – mattjgalloway Jan 04 '12 at 09:15
  • 2
    @J.KJeon The FaceTime app doesn't take video from both cameras at the same time. It takes video from one or the other and uses a single session and manages multiple inputs (however, only one video stream from a camera on the device at a time is going to work). The other image is a super-imposed video stream from another device, which is doable. I suggest you use Matt's answer as a starting point. You'll be able to get most of the way just reading through the AVFoundation stuff. I suggest you look at the AVCaptureVideoDataOutput class as well... it has links to decent sample code. – Jason Coco Jan 04 '12 at 09:18
  • @mattjgalloway, You don't have to apologize ^^ It's really hard to make a double video screen. I don't think it's impossible, but I don't know how. – J.K Jeon Jan 04 '12 at 09:23
  • @JasonCoco, Your answers are really helpful!! I appreciate it !! – J.K Jeon Jan 04 '12 at 09:24
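The approach Jason describes — one session that swaps the active camera input — might be sketched roughly like this. This is untested and hedged: `session` and `currentInput` are assumed instance variables, and `switchToDevice:` is a hypothetical helper, not taken from either answer.

    // Sketch: a single AVCaptureSession, swapping which camera feeds it.
    - (void)switchToDevice:(AVCaptureDevice *)device
    {
        [session beginConfiguration];
        [session removeInput:currentInput];

        NSError *error = nil;
        AVCaptureDeviceInput *newInput =
            [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
        if (newInput && [session canAddInput:newInput])
        {
            [session addInput:newInput];
            currentInput = newInput;
        }
        else
        {
            // Fall back to the previous camera rather than leave no input.
            [session addInput:currentInput];
            NSLog(@"Could not switch camera: %@", error);
        }
        [session commitConfiguration];
    }

Wrapping the swap in `beginConfiguration`/`commitConfiguration` applies the input change atomically, so the session keeps running instead of being torn down and restarted.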