The video recording path of the camera system spans several layers: the camera driver, the camera HAL, the camera service, and the Java camera framework.
Recording generally runs at the same time as preview, and in the Samsung HAL the recording work is carried out by the preview thread. In that thread, FIMC0 fetches the raw data used for preview while FIMC2 fetches the raw data needed for video. Two FIMC controllers are used because preview and video require different raw data, and the S5PV210 can route one sensor's BT.656 signal to two FIMC controllers simultaneously.
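Concretely, the two FIMC controllers appear to the HAL as two separate V4L2 device nodes that are opened independently. A minimal sketch (the /dev/videoN numbering is an assumption for illustration and depends on the board's kernel configuration):

```cpp
#include <fcntl.h>   // open, O_RDWR
#include <string>

// Hypothetical helper: map a FIMC controller id to its V4L2 device node.
// The actual node numbering depends on the kernel configuration.
static std::string fimc_dev_path(int fimc_id) {
    return "/dev/video" + std::to_string(fimc_id);
}

// Open one FIMC controller; returns the fd, or -1 on failure.
// In the Samsung HAL, FIMC0 (preview) and FIMC2 (recording) are each
// opened this way, so two independent capture streams can run at once.
static int open_fimc(int fimc_id) {
    return open(fimc_dev_path(fimc_id).c_str(), O_RDWR);
}
```

Because each stream has its own fd, preview and recording can be configured with different resolutions and pixel formats without interfering with each other.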
libcamera/SecCameraHWInterface.cpp
802 status_t CameraHardwareSec::startRecording()
803 {
804     LOGV("%s :", __func__);
805
806     if (mRecordRunning == false) {
807         if (mSecCamera->startRecord() < 0) {
808             LOGE("ERR(%s):Fail on mSecCamera->startRecord()", __func__);
809             // sw5771.park : temporary fix
810             // with HDMI, fimc2 is conflict...
811 #ifdef BOARD_USES_HDMI
812 #else
813             return UNKNOWN_ERROR;
814 #endif
815         }
816         mRecordRunning = true;
817     }
818     return NO_ERROR;
819 }
807 mSecCamera->startRecord(): configures and starts the FIMC controller used for recording.
816 sets the running flag. In previewThread(), mRecordRunning determines whether a recording frame also needs to be processed.
libcamera/SecCamera.cpp: SecCamera::startRecord()
1084 int SecCamera::startRecord(void)
1085 {
1086     int ret, i;
1087
1088     LOGV("%s :", __func__);
1089
1090     // aleady started
1091     if (m_flag_record_start > 0) {
1092         LOGE("ERR(%s):Recording was already started\n", __func__);
1093         return 0;
1094     }
1095
1096     if (m_cam_fd2 <= 0) {
1097         LOGE("ERR(%s):Camera was closed\n", __func__);
1098         return -1;
1099     }
1100
1101     /* enum_fmt, s_fmt sample */
1102     ret = fimc_v4l2_enum_fmt(m_cam_fd2, V4L2_PIX_FMT_NV12T);
1103     CHECK(ret);
1104
1105     LOGI("%s: m_recording_width = %d, m_recording_height = %d\n",
1106          __func__, m_recording_width, m_recording_height);
1107
1108     ret = fimc_v4l2_s_fmt(m_cam_fd2, m_recording_width, m_recording_height,
1109                           V4L2_PIX_FMT_NV12T, 0);
1110     CHECK(ret);
1111
1112     ret = fimc_v4l2_reqbufs(m_cam_fd2, V4L2_BUF_TYPE_VIDEO_CAPTURE, MAX_BUFFERS);
1113     CHECK(ret);
1114
1115     ret = this->m_setCameraAngle(m_cam_fd2);
1116     CHECK(ret);
1117
1118     /* start with all buffers in queue */
1119     for (i = 0; i < MAX_BUFFERS; i++) {
1120         ret = fimc_v4l2_qbuf(m_cam_fd2, i);
1121         CHECK(ret);
1122     }
1123
1124     ret = fimc_v4l2_streamon(m_cam_fd2);
1125     CHECK(ret);
1126
1127     // Get and throw away the first frame since it is often garbled.
1128     memset(&m_events_c2, 0, sizeof(m_events_c2));
1129     m_events_c2.fd = m_cam_fd2;
1130     m_events_c2.events = POLLIN | POLLERR;
1131     ret = fimc_poll(&m_events_c2);
1132     CHECK(ret);
1133
1134     m_flag_record_start = 1;
1135
1136     LOGE("(%s): end\n", __func__);
1137     return 0;
1138 }
1102 checks whether the device supports the NV12T format. Note that m_cam_fd2 is used here, which corresponds to the FIMC2 controller.
The rest of the flow mirrors the device initialization done for normal still capture.
1108–1109 set the recording width, height, and pixel format.
1112 requests capture buffers from the driver (REQBUFS).
1118–1122 queue all of these buffers into the driver's incoming queue (QBUF).
1124 starts the capture stream on FIMC2 (STREAMON), and FIMC2 begins capturing data. Each time FIMC2 completes a frame, it wakes up any process blocked in fimc_poll().
1128–1132 wait for the first frame and discard it, because the first frame is often garbled.
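The fimc_v4l2_* helpers above are thin wrappers over standard V4L2 ioctls. For orientation, here is a minimal sketch of the same start-up sequence using raw ioctls (the NV12T fourcc value and the MMAP memory type are assumptions here, and error handling is trimmed):

```cpp
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstring>

// Samsung's tiled NV12 format; not in every mainline header, so define
// the fourcc ourselves (assumed value, matching Samsung kernels).
#ifndef V4L2_PIX_FMT_NV12T
#define V4L2_PIX_FMT_NV12T v4l2_fourcc('T', 'V', '1', '2')
#endif

// Build the format request that SecCamera::startRecord() issues via
// fimc_v4l2_s_fmt() for the recording stream on FIMC2.
static v4l2_format make_record_fmt(int width, int height) {
    v4l2_format fmt;
    std::memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = width;
    fmt.fmt.pix.height      = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12T;
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;
    return fmt;
}

// The full sequence, given an already-open FIMC2 fd (untested sketch).
static int start_record_stream(int fd, int width, int height, int nbufs) {
    v4l2_format fmt = make_record_fmt(width, height);
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) return -1;        // s_fmt

    v4l2_requestbuffers req = {};
    req.count  = nbufs;                                      // reqbufs
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) return -1;

    for (int i = 0; i < nbufs; i++) {                        // qbuf all
        v4l2_buffer buf = {};
        buf.index  = i;
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;
    }

    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;                  // streamon
    return ioctl(fd, VIDIOC_STREAMON, &type);
}
```

The order matters: the format must be negotiated before buffers are requested, and every buffer must be queued before STREAMON so the driver never starves for a place to write a frame.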
821 void CameraHardwareSec::stopRecording()
822 {
823     LOGV("%s :", __func__);
824
825     if (mRecordRunning == true) {
826         if (mSecCamera->stopRecord() < 0) {
827             LOGE("ERR(%s):Fail on mSecCamera->stopRecord()", __func__);
828             return;
829         }
830         mRecordRunning = false;
831     }
832 }
826 stops the capture stream of the FIMC2 device.
830 mRecordRunning = false: the recording branch in previewThread() is no longer taken.
841 void CameraHardwareSec::releaseRecordingFrame(const sp<IMemory>& mem)
842 {
843     ssize_t offset;
844     sp<IMemoryHeap> heap = mem->getMemory(&offset, NULL);
845     struct ADDRS *addrs = (struct ADDRS *)((uint8_t *)heap->base() + offset);
846
847     mSecCamera->releaseRecordFrame(addrs->buf_idx);
848 }
Buffer data must be handed to the upper layer for processing. Until the upper layer finishes and calls the releaseRecordingFrame() interface to release the buffer, neither the driver layer nor the HAL may modify it.
847 mSecCamera->releaseRecordFrame() requeues the buffer identified by addrs->buf_idx back to the driver.
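Under the hood, releasing a recording frame presumably amounts to requeueing that buffer index to FIMC2 with a standard V4L2 QBUF. A minimal sketch of the idea (assumed memory type and structure, not the verbatim HAL code):

```cpp
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Fill the v4l2_buffer that requeues buffer `index` to the driver.
// Splitting preparation from the ioctl keeps the setup testable.
static v4l2_buffer make_qbuf(int index) {
    v4l2_buffer buf = {};
    buf.index  = index;
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    return buf;
}

// What releaseRecordFrame(buf_idx) boils down to: hand the buffer back
// to FIMC2 so the driver may capture into it again.
static int release_record_frame(int fd, int buf_idx) {
    v4l2_buffer buf = make_qbuf(buf_idx);
    return ioctl(fd, VIDIOC_QBUF, &buf);
}
```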
542 int CameraHardwareSec::previewThread()
543 {
544     int index;
545     nsecs_t timestamp;
546     unsigned int phyYAddr;
547     unsigned int phyCAddr;
548     struct ADDRS *addrs;
549
550     index = mSecCamera->getPreview();
551     if (index < 0) {
552         LOGE("ERR(%s):Fail on SecCamera->getPreview()", __func__);
553         return UNKNOWN_ERROR;
554     }
555     mSkipFrameLock.lock();
556     if (mSkipFrame > 0) {
557         mSkipFrame--;
558         mSkipFrameLock.unlock();
559         return NO_ERROR;
560     }
561     mSkipFrameLock.unlock();
562     gInterlace++;
563     //if (gInterlace % 8) {
564     //    return NO_ERROR;
565     //}
566
567     timestamp = systemTime(SYSTEM_TIME_MONOTONIC);
568
569     phyYAddr = mSecCamera->getPhyAddrY(index);
570     phyCAddr = mSecCamera->getPhyAddrC(index);
571
572     if (phyYAddr == 0xffffffff || phyCAddr == 0xffffffff) {
573         LOGE("ERR(%s):Fail on SecCamera getPhyAddr Y addr = %0x C addr = %0x", __func__, phyYAddr, phyCAddr);
574         return UNKNOWN_ERROR;
575     }
576
577     int width, height, frame_size, offset;
578
579     mSecCamera->getPreviewSize(&width, &height, &frame_size);
580
581     offset = (frame_size + mSizeOfADDRS) * index;
582     sp<MemoryBase> buffer = new MemoryBase(mPreviewHeap, offset, frame_size);
583
584     memcpy(static_cast<unsigned char *>(mPreviewHeap->base()) + (offset + frame_size    ), &phyYAddr, 4);
585     memcpy(static_cast<unsigned char *>(mPreviewHeap->base()) + (offset + frame_size + 4), &phyCAddr, 4);
586
587 #if defined(BOARD_USES_OVERLAY)
588     if (mUseOverlay) {
589         int ret;
590         overlay_buffer_t overlay_buffer;
591
592         mOverlayBufferIdx ^= 1;
593         memcpy(static_cast<unsigned char*>(mPreviewHeap->base()) + offset + frame_size + sizeof(phyYAddr) + sizeof(phyCAddr),
594                &mOverlayBufferIdx, sizeof(mOverlayBufferIdx));
595
596         ret = mOverlay->queueBuffer((void*)(static_cast<unsigned char *>(mPreviewHeap->base()) + (offset + frame_size)));
597
598         if (ret == -1) {
599             LOGE("ERR(%s):overlay queueBuffer fail", __func__);
600         } else if (ret != ALL_BUFFERS_FLUSHED) {
601             ret = mOverlay->dequeueBuffer(&overlay_buffer);
602             if (ret == -1) {
603                 LOGE("ERR(%s):overlay dequeueBuffer fail", __func__);
604             }
605         }
606     }
607 #endif
608
609     // Notify the client of a new frame.
610     if (mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME) {
611         mDataCb(CAMERA_MSG_PREVIEW_FRAME, buffer, mCallbackCookie);
612     }
613
614     if (mRecordRunning == true) {
615         // sw5771.park : temporary fix
616         // with HDMI, fimc2 is conflict...
617 #ifdef BOARD_USES_HDMI
618         int preview_index = index;
619
620         index = mSecCamera->getRecordFrame();
621         if (index < 0) {
622             LOGE("ERR(%s):Fail on SecCamera->getRecordFrame()", __func__);
623             index = preview_index;
624             //return UNKNOWN_ERROR;
625         } else {
626             phyYAddr = mSecCamera->getRecPhyAddrY(index);
627             phyCAddr = mSecCamera->getRecPhyAddrC(index);
628         }
629 #else
630         index = mSecCamera->getRecordFrame();
631         if (index < 0) {
632             LOGE("ERR(%s):Fail on SecCamera->getRecordFrame()", __func__);
633             return UNKNOWN_ERROR;
634         }
635
636         phyYAddr = mSecCamera->getRecPhyAddrY(index);
637         phyCAddr = mSecCamera->getRecPhyAddrC(index);
638 #endif
639
640         if (phyYAddr == 0xffffffff || phyCAddr == 0xffffffff) {
641             LOGE("ERR(%s):Fail on SecCamera getRectPhyAddr Y addr = %0x C addr = %0x", __func__, phyYAddr, phyCAddr);
642             return UNKNOWN_ERROR;
643         }
644
645         addrs = (struct ADDRS *)mRecordHeap->base();
646
647         sp<MemoryBase> buffer = new MemoryBase(mRecordHeap, mSizeOfADDRS * index, mSizeOfADDRS);
648         addrs[index].addr_y = phyYAddr;
649         addrs[index].addr_cbcr = phyCAddr;
650         addrs[index].buf_idx = index;
651
652         // Notify the client of a new frame.
653         if (mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
654             mDataCbTimestamp(timestamp, CAMERA_MSG_VIDEO_FRAME, buffer, mCallbackCookie);
655         } else {
656             mSecCamera->releaseRecordFrame(index);
657         }
658     }
659
660     return NO_ERROR;
661 }
614–658 is the recording-related code.
630 gets a recording frame. getRecordFrame() blocks in poll() on the FIMC2 device node until data is available; the returned index identifies the buffer that holds the new frame.
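getRecordFrame() presumably pairs that blocking poll() with a V4L2 dequeue. A minimal sketch of the pattern (assumed structure and memory type, not the verbatim SecCamera code):

```cpp
#include <poll.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Prepare the dequeue request; the driver fills in .index on DQBUF.
static v4l2_buffer make_dqbuf() {
    v4l2_buffer buf = {};
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    return buf;
}

// Block until FIMC2 signals a completed frame, then dequeue it.
// Returns the index of the filled buffer, or -1 on error/timeout.
static int get_record_frame(int fd) {
    pollfd ev = {};
    ev.fd     = fd;
    ev.events = POLLIN | POLLERR;          // same mask as m_events_c2
    if (poll(&ev, 1, 10000) <= 0)          // sleep until a frame is ready
        return -1;

    v4l2_buffer buf = make_dqbuf();
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) // take the ready buffer
        return -1;
    return buf.index;                      // index handed back to the HAL
}
```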
636–637 obtain the physical addresses backing the buffer. It is not entirely clear why the video path needs physical addresses; presumably, once they are passed to the upper layer, the video encoding stage can operate on them directly.
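What actually travels through the recording callback, then, is not pixel data but a small ADDRS record describing where the frame lives in physical memory. A sketch of that per-frame metadata (field names taken from the code above; the exact struct layout is assumed):

```cpp
#include <cstdint>

// Per-frame metadata handed to the upper layer instead of pixel data.
// Field names follow the HAL code above; the exact layout of the real
// struct in the Samsung headers is an assumption here.
struct ADDRS {
    uint32_t addr_y;     // physical address of the Y plane
    uint32_t addr_cbcr;  // physical address of the interleaved CbCr plane
    uint32_t buf_idx;    // driver buffer index, echoed back on release
};

// Fill slot `index` of the record heap the way previewThread() does at
// lines 645-650: the heap is simply an array of ADDRS, one per buffer.
static void fill_record_slot(ADDRS* heap, int index,
                             uint32_t phyY, uint32_t phyC) {
    heap[index].addr_y    = phyY;
    heap[index].addr_cbcr = phyC;
    heap[index].buf_idx   = static_cast<uint32_t>(index);
}
```

Since only this small record crosses the callback, a downstream hardware encoder could presumably read the frame by DMA from the physical address with no copy, and buf_idx tells releaseRecordingFrame() which buffer to requeue.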
653 if the upper layer wants the data, it enables CAMERA_MSG_VIDEO_FRAME in mMsgEnabled, so the HAL delivers frames through mDataCbTimestamp(). After processing, the upper layer is responsible for calling the releaseRecordingFrame() interface to release the buffer.
656 if the upper layer does not consume the frame, the buffer is released (requeued) immediately, although it is hard to imagine a scenario in which recording frames would go unprocessed.
Thanks to a hardware feature particular to the Samsung S5PV210, two capture streams can run simultaneously during preview: the two FIMC controllers, FIMC0 and FIMC2, each pull data from the same camera sensor, producing one copy for preview and one for recording, whereas on most other platforms preview and recording share a single stream of raw data.