Camera hardware and Linux
The Linux operating system supports real-time video and audio hardware, such as cameras, TV tuners, video capture cards, FM broadcast tuners and video output devices. The main API for applications to access these devices is Video4Linux.
Video4Linux is a kernel API, so there must be a kernel driver for each supported device. At the user level, device access is standardized through device files. This document focuses on video capture devices, such as cameras, whose device files are /dev/video0, /dev/video1 and so on, for as many devices as are connected.
The data format exchanged between the device files of each device type and user-level applications is standardized, so that an application is immediately compatible with every video capture device that has a Linux driver.
The built-in cameras currently present in some Nokia Internet Tablet devices are compatible with the Video4Linux version 2 API (http://www.thedirks.org/v4l2). In principle, any application compatible with this API can be easily ported to the maemo platform.
Because the maemo platform delegates all multimedia handling to the GStreamer framework, applications needing access to the built-in camera should use GStreamer for this, via the v4l2src GStreamer module, instead of accessing the Video4Linux device directly.
Thanks to the flexibility of GStreamer, a developer can fully test a given application on a common desktop PC with a connected webcam, and then perform the final test on the Internet Tablet itself without a single change to the source code, because GStreamer refers to its modules by textual names.
One important note about the camera on the tablet: only one application can use it at any given time. So, while the camera is in use by an application, other tasks that make use of it (such as video calls) are blocked.
A sample application and discussion are provided below to demonstrate how camera operations are performed.
In this example, the initialize_pipeline() function is the most interesting one, since it is responsible for creating the GStreamer pipeline, sourcing data from Video4Linux and sinking it to xvimagesink (an X-Windows optimized framebuffer). The pipeline scheme is as follows:
                           |Screen|   |Screen|
                         ->|queue |-->|sink  |--> Display
|Camera|  |CSP   |  |Tee|/
|src   |->|Filter|->|   |\
                         ->|Image |-->|Image |-->|Image|--> JPEG file
                           |queue |   |filter|   |sink |
This sample application is not much different from any other GStreamer application, be it a generic Linux or a maemo-specific application:
static gboolean initialize_pipeline(AppData *appdata,
		int *argc, char ***argv)
{
	GstElement *pipeline, *camera_src, *screen_sink, *image_sink;
	GstElement *screen_queue, *image_queue;
	GstElement *csp_filter, *image_filter, *tee;
	GstCaps *caps;
	GstBus *bus;

	/* Initialize GStreamer */
	gst_init(argc, argv);

	/* Create the pipeline and attach a watch to its message bus.
	 * bus_callback (not shown here) is the bus watcher that receives
	 * the application messages posted by buffer_probe_callback */
	pipeline = gst_pipeline_new("test-camera");

	bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
	gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);
	gst_object_unref(GST_OBJECT(bus));

	/* Save the pipeline to the AppData structure */
	appdata->pipeline = pipeline;

	/* Create elements */
	/* Camera video stream comes from a Video4Linux driver */
	camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
	/* Colorspace filter is needed to make sure that the sinks
	 * understand the stream coming from the camera */
	csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
	/* Tee that copies the stream to multiple outputs */
	tee = gst_element_factory_make("tee", "tee");
	/* Queue creates a new thread for the stream */
	screen_queue = gst_element_factory_make("queue", "screen_queue");
	/* Sink that shows the image on screen. Xephyr doesn't support the
	 * XVideo extension, so it needs to use ximagesink, but the device
	 * uses xvimagesink */
	screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");
	/* Creates a separate thread for the stream from which the image
	 * is captured */
	image_queue = gst_element_factory_make("queue", "image_queue");
	/* Filter to convert the stream to a format that the gdkpixbuf
	 * library can use */
	image_filter = gst_element_factory_make("ffmpegcolorspace", "image_filter");
	/* A dummy sink for the image stream. Goes to bitheaven */
	image_sink = gst_element_factory_make("fakesink", "image_sink");

	/* Check that the elements are correctly initialized */
	if(!(pipeline && camera_src && screen_sink && csp_filter && screen_queue
		&& tee && image_queue && image_filter && image_sink))
	{
		g_critical("Couldn't create pipeline elements");
		return FALSE;
	}

	/* Set the image sink to emit handoff-signal before throwing away
	 * its buffer */
	g_object_set(G_OBJECT(image_sink),
			"signal-handoffs", TRUE, NULL);

	/* Add elements to the pipeline. This has to be done prior to
	 * linking them */
	gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
			tee, screen_queue, screen_sink, image_queue,
			image_filter, image_sink, NULL);

	/* Specify what kind of video is wanted from the camera */
	caps = gst_caps_new_simple("video/x-raw-rgb",
			"width", G_TYPE_INT, 640,
			"height", G_TYPE_INT, 480,
			NULL);

	/* Link the camera source and colorspace filter using the
	 * capabilities specified */
	if(!gst_element_link_filtered(camera_src, csp_filter, caps))
	{
		return FALSE;
	}
	gst_caps_unref(caps);

	/* Connect Colorspace Filter -> Tee -> Screen Queue -> Screen Sink.
	 * This finalizes the initialization of the screen part of the
	 * pipeline */
	if(!gst_element_link_many(csp_filter, tee, screen_queue, screen_sink, NULL))
	{
		return FALSE;
	}

	/* gdkpixbuf requires 8 bits per sample, which is 24 bits per
	 * pixel */
	caps = gst_caps_new_simple("video/x-raw-rgb",
			"width", G_TYPE_INT, 640,
			"height", G_TYPE_INT, 480,
			"bpp", G_TYPE_INT, 24,
			"depth", G_TYPE_INT, 24,
			"framerate", GST_TYPE_FRACTION, 15, 1,
			NULL);

	/* Link the image branch of the pipeline. The pipeline is
	 * ready after this */
	if(!gst_element_link_many(tee, image_queue, image_filter, NULL))
		return FALSE;
	if(!gst_element_link_filtered(image_filter, image_sink, caps))
		return FALSE;

	gst_caps_unref(caps);

	/* As soon as the screen is exposed, the window ID will be advised
	 * to the sink */
	g_signal_connect(appdata->screen, "expose-event", G_CALLBACK(expose_cb),
			screen_sink);

	gst_element_set_state(pipeline, GST_STATE_PLAYING);

	return TRUE;
}
When the user has pressed the "photo" button and the image sink receives a buffer, the following function is called back. It forwards the image buffer to create_jpeg():
static gboolean buffer_probe_callback(GstElement *image_sink,
		GstBuffer *buffer, GstPad *pad, AppData *appdata)
{
	GstMessage *message;
	gchar *message_name;
	/* This is the raw RGB-data that the image sink is about
	 * to discard */
	unsigned char *data_photo =
		(unsigned char *) GST_BUFFER_DATA(buffer);

	/* Create a JPEG of the data and check the status */
	if(!create_jpeg(data_photo))
		message_name = "photo-failed";
	else
		message_name = "photo-taken";

	/* Disconnect the handler so that no more photos are taken */
	g_signal_handler_disconnect(G_OBJECT(image_sink),
			appdata->buffer_cb_id);

	/* Create and send an application message, which will be
	 * caught in the bus watcher function. This has to be
	 * sent as a message because this callback is called in
	 * a GStreamer thread, and calling GUI functions here would
	 * lead to X-server synchronization problems */
	message = gst_message_new_application(GST_OBJECT(appdata->pipeline),
			gst_structure_new(message_name, NULL));
	gst_element_post_message(appdata->pipeline, message);

	/* Returning TRUE means that the buffer is OK to be
	 * sent forward. When using fakesink this doesn't really
	 * matter because the data is discarded anyway */
	return TRUE;
}
The xvimagesink GStreamer module would normally create a new window of its own. Because the video should be shown inside the main application window, the X-Window window ID needs to be passed to the module as soon as the ID exists:
static gboolean expose_cb(GtkWidget *widget, GdkEventExpose *event,
		gpointer data)
{
	/* Tell the xvimagesink/ximagesink the X-window-id of the screen
	 * widget in which the video is shown. After this the video
	 * is shown in the correct widget */
	gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(data),
			GDK_WINDOW_XWINDOW(widget->window));
	return FALSE;
}
For the sake of completeness, the JPEG encoding function is presented below. It is worth mentioning that the buffer received from GStreamer is a simple linear framebuffer:
static gboolean create_jpeg(unsigned char *data)
{
	GdkPixbuf *pixbuf = NULL;
	GError *error = NULL;
	guint height, width, bpp;
	const gchar *directory;
	GString *filename;
	guint base_len, i;
	struct stat statbuf;

	width = 640; height = 480; bpp = 24;

	/* Define the save folder */
	directory = SAVE_FOLDER_DEFAULT;
	if(directory == NULL)
	{
		directory = g_get_tmp_dir();
	}

	/* Create a unique file name */
	filename = g_string_new(g_build_filename(directory, PHOTO_NAME_DEFAULT, NULL));
	base_len = filename->len;
	g_string_append(filename, PHOTO_NAME_SUFFIX_DEFAULT);
	for(i = 1; !stat(filename->str, &statbuf); ++i)
	{
		g_string_truncate(filename, base_len);
		g_string_append_printf(filename, "%d%s", i, PHOTO_NAME_SUFFIX_DEFAULT);
	}

	/* Create a pixbuf object from the data */
	pixbuf = gdk_pixbuf_new_from_data(data,
			GDK_COLORSPACE_RGB, /* RGB-colorspace */
			FALSE, /* No alpha-channel */
			bpp/3, /* Bits per RGB-component */
			width, height, /* Dimensions */
			3*width, /* Number of bytes between lines (ie stride) */
			NULL, NULL); /* Callbacks */

	/* Save the pixbuf contents into a JPEG file and check for
	 * errors */
	if(!gdk_pixbuf_save(pixbuf, filename->str, "jpeg", &error, NULL))
	{
		g_warning("%s\n", error->message);
		g_error_free(error);
		gdk_pixbuf_unref(pixbuf);
		g_string_free(filename, TRUE);
		return FALSE;
	}

	/* Free the allocated resources and return TRUE, which means
	 * that the operation was successful */
	g_string_free(filename, TRUE);
	gdk_pixbuf_unref(pixbuf);
	return TRUE;
}
For more information, see the official documentation: http://maemo.org/development/documentation/manuals/4-0-x/how_to_use_camera_api/