Chapter 10: Machine Vision (Artificial Intelligence)

Source: Internet
Author: User
Tags: knowledge base

Chapter 10: Machine Vision

Teaching content: Machine vision, the subject of this chapter, is one of the richest, most complex and most important of the many kinds of sensory information, and also one of the most widely applied machine senses. The content covers image understanding and analysis, visual knowledge representation and control strategies, and object shape analysis and recognition.

Teaching emphases: Computing object edges and distances, computing surface orientation, and methods of object shape recognition

Teaching difficulties: The graph matching method, the relaxation labeling method, the multilevel matching method, etc.

Teaching methods: Explain the material of machine vision thoroughly in accessible language, illustrating the different line-labeling methods with diagrams, and relate the material to common phenomena of daily life so that students gain a deeper understanding of the knowledge.

Teaching requirements: Master the methods of representing visual information, including the primal sketch, the 2.5-D sketch and the 3-D model; master the physiological basis, computational principles and computational methods for object edges, distances and surface orientation; understand the representation of complex-shaped objects and the shape-description methods for three-dimensional objects; gain a general understanding of the structure of machine-vision application systems and the design of vision systems.

10.1 Image Understanding and Analysis

Teaching content: Understanding and interpreting images is the central research problem of computer vision, and also one of the focuses of artificial-intelligence research.

Teaching emphases: The primal sketch, the 2.5-D sketch and the 3-D model

Teaching difficulty: The relaxation algorithm; edge and distance calculation

Teaching method: Classroom teaching of the textbook material as the main approach, using questions, discussion and so on to raise students' enthusiasm, independence and creativity.

Teaching requirements: Focus on the representation of visual information, including the primal sketch, the 2.5-D sketch and the 3-D model; master the physiological basis, computational principles and computational methods for edges, distances and surface orientation.

 

10.1.1 Methods of Representing Visual Information

 

According to the hypothesis put forward by Marr, visual information processing involves three main levels of representation: the primal sketch, the 2.5-D sketch and the 3-D model, as shown in Figure 10.1.

Fig. 10.1 The levels of representation of visual information

 

1. The basic concept of the primal sketch

A brightness image contains two kinds of important information: the brightness changes in the image and the local geometric features. The primal sketch is a primitive representation that expresses this information fully and explicitly. Most of the information in the primal sketch is concentrated at the sharp gray-level changes corresponding to actual edges and to edge endpoints. For each brightness change along an edge there is a corresponding description in the primal sketch, including the rate of brightness change across the edge, the total brightness change, and the edge's length, curvature and orientation. Roughly speaking, the primal sketch represents the brightness changes of the image in the form of a line drawing.


Fig. 10.2 A gray-level change and its primal sketch    Fig. 10.3 An example of a 2.5-D sketch

 

2. The basic concept of the 2.5-D sketch

The 2.5-D sketch contains information about the visible surfaces of the scene and can be regarded as mixed information about several intrinsic characteristics. It represents the orientation of the object's surfaces explicitly: each surface normal is drawn as a needle that appears to stick out of the surface, so that the sketch looks like a pincushion.

3. The representation method of the 3-D model

A three-dimensional representation expresses information about the shape of an object fully and explicitly; one such method is the generalized cylinder. The concept of the generalized cylinder is very important, yet its presentation is very simple, as shown in Figure 10.4: the cross-section of the cylinder is swept unchanged along the axis. An ordinary cylinder can be thought of as a circle moved along the line through its centre perpendicular to it; a wedge is a triangle moved along its perpendicular, and so on. Generally speaking, a generalized cylinder is a two-dimensional contour moved along its axis. During the motion, the angle between the contour and the axis stays fixed. The contour may have any shape, its size may vary during the motion, and the axis need not be perpendicular to the contour or even straight, as shown in Figure 10.5.


Fig. 10.4 A generalized cylinder    Fig. 10.5 Generalized cones whose cross-section changes shape or whose axis is curved
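The sweep construction just described can be sketched in code. This is an illustrative sketch only (the function names, and the simplification of keeping the contour perpendicular to a straight vertical axis, are assumptions, not part of the text): a 2-D contour is swept along an axis, and a scale function lets the cross-section vary, producing an ordinary cylinder or a cone.

```python
import math

def generalized_cylinder(contour, axis_points, scale=lambda t: 1.0):
    """Sweep a 2-D contour along an axis; scale(t) lets the cross-section
    grow or shrink along the way (t runs from 0 to 1)."""
    n = len(axis_points)
    slices = []
    for i, (ax, ay, az) in enumerate(axis_points):
        t = i / (n - 1) if n > 1 else 0.0
        s = scale(t)
        # place a scaled copy of the contour at this point of the axis
        slices.append([(ax + s * cx, ay + s * cy, az) for cx, cy in contour])
    return slices

# an ordinary cylinder: a circle swept unchanged along the z-axis
circle = [(math.cos(a * math.pi / 8), math.sin(a * math.pi / 8)) for a in range(16)]
axis = [(0.0, 0.0, z / 10.0) for z in range(11)]
cyl = generalized_cylinder(circle, axis)

# a cone: the same circle, shrinking linearly to a point
cone = generalized_cylinder(circle, axis, scale=lambda t: 1.0 - t)
```

Varying `scale` or bending `axis_points` gives the more general shapes of the figure; only the perpendicular-cross-section case is handled here.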

 

10.1.2 Calculation of Edges and Distances

1. Averaging and differencing for image brightness edges

Noisy edges are caused by brightness fluctuations in the sensor, errors in image coordinate information, electronic noise, light-source disturbances, and the inability to capture a wide range of brightness changes when the image is acquired. Another cause is the complexity of the image itself: real edges are not sharp but gradual transitions, and there may be mutual illumination effects, accidental scratches and dust.

A method for dealing with noise edges consists of the following four steps:

(1) An average luminance array is established from the image.

(2) An average first order difference array is produced from the average luminance array.

(3) A second average difference array is established from the first average difference array.

(4) In the resulting arrays, note the peak points, abrupt slopes and zero crossings to extract a set of edge signals.

2. Characteristics of the primate retina

          

Fig. 10.6 The retinal input-output characteristics of primates    Fig. 10.7 Comparison of retinal experimental data with the output of the Mexican-hat filter

  

The Mexican-hat filter is consistent with some of the experiments probing early primate vision. The key experiments are illustrated in Figure 10.6. The subjects viewed various stimuli set off against a white background, including a narrow black band, a wide black band, and a single black-white edge. Recording probes measured the corresponding neural responses, which were then compared with the predictions of the Mexican-hat filter.

Figure 10.7 shows the comparison. In Figure 10.7, (a) shows the brightness distribution curves of the three stimuli moving from left to right; (b) shows the result of filtering each brightness distribution with a Mexican-hat filter of appropriate width; (c) shows the experimental data recorded from the so-called X ganglion cells. Comparing Figures 10.7(b) and (c) shows that the two are extremely similar. This suggests that the primate visual system does perform processing very similar to the Mexican-hat filter. Slightly modifying the Mexican-hat filter improves the similarity further, as shown in Figure 10.7(d).

The comparison is so similar that we have sufficient grounds for the following assumptions:

(1) The filtering performed by the primate retina resembles filtering with a Mexican-hat-shaped point-spread function.

(2) There are two kinds of retinal cells, one transmitting the positive part of the filtered image and the other transmitting the negative part.

(3) For each cell, the Mexican-hat filtering is achieved by a combination of the two operations of excitation and inhibition. This filter is equivalent to the difference of two images obtained with Gaussian filters.
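The "difference of two Gaussian-filtered images" of point (3) can be sketched in one dimension. The σ values and the 1.6 ratio below are illustrative assumptions (the ratio is a common choice for approximating the Mexican-hat shape), not values from the text:

```python
import math

def gaussian_kernel(sigma, radius):
    # normalized discrete Gaussian of the given width
    ks = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def convolve_same(xs, kernel):
    # same-length convolution; contributions outside the signal are dropped
    r = len(kernel) // 2
    out = []
    for i in range(len(xs)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(xs):
                acc += k * xs[idx]
        out.append(acc)
    return out

def mexican_hat_filter(xs, sigma=1.0, ratio=1.6, radius=5):
    # difference of a narrow and a wide Gaussian (DoG)
    narrow = convolve_same(xs, gaussian_kernel(sigma, radius))
    wide = convolve_same(xs, gaussian_kernel(sigma * ratio, radius))
    return [a - b for a, b in zip(narrow, wide)]
```

Applied to a step edge, the response is negative on one side of the edge and positive on the other, the signature of the Mexican-hat operator.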

3. Measurement of object distance

Figure 10.8 shows the relative positions in binocular stereo vision. In the diagram, P is an object point. The axes of the two lenses are parallel. f is the distance between the lenses and the image plane, i.e. the focal length. b is the baseline distance between the two lens axes, that is, the distance between the two eyes. L and R are the distances of point P from the left and right lens axes respectively. α and β are the distances of the left and right image points from their corresponding lens axes.

From two pairs of similar triangles, the distance D from the observer's eyes to the object can be obtained:

D = f · b / (α + β)

Since the baseline b is known and the focal length f is fixed, the distance to an object is inversely proportional to (α + β). The quantity (α + β), the displacement of one image point of P relative to the other, is called the disparity.

The real problem in stereo vision is to find corresponding points in the left and right images so that the disparity can be measured. Many different stereo-vision systems have succeeded, to varying degrees, in finding these correspondences.
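The distance computation can be wrapped in a small helper (the function name, units and error handling are illustrative assumptions):

```python
def stereo_distance(f, b, alpha, beta):
    """Distance D from the lens plane to point P by similar triangles:
    D = f * b / (alpha + beta), where (alpha + beta) is the disparity."""
    disparity = alpha + beta
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return f * b / disparity

# e.g. f = 50 mm focal length, b = 65 mm baseline, 1.3 mm total disparity
# (all in metres) gives a distance of 2.5 m
d = stereo_distance(0.05, 0.065, 0.001, 0.0003)
```

Doubling the disparity halves the computed distance, as the inverse proportionality in the text requires.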

10.1.3 Calculation of Surface Orientation

1. Constraints of reflected light on image brightness

A surface whose observed brightness is the same from all possible viewing positions is called a Lambertian surface; its brightness is determined only by the direction of the light source. This relationship follows the formula E = ρ cos i, where E is the observed brightness, ρ is the surface reflectance (a constant for a given surface material), and i is the angle of incidence.
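The formula E = ρ cos i translates directly into code (the function name and the degree-based interface are illustrative):

```python
import math

def lambertian_brightness(rho, incidence_deg):
    """E = rho * cos(i): brightness of a Lambertian surface, where rho is
    the reflectance and i the angle of incidence of the light."""
    return rho * math.cos(math.radians(incidence_deg))

# head-on illumination gives the full reflectance; oblique light gives less
full = lambertian_brightness(0.8, 0)      # 0.8
oblique = lambertian_brightness(0.8, 60)  # ~0.4
```

Note that the viewing direction does not appear anywhere in the formula; that is exactly the Lambertian property.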

2. Determination of the surface orientation

Above we studied how surface orientation predicts surface brightness. We now study the converse problem: computing the surface parameters f and g from the sensed brightness.

Determining the surface orientation from f and g seems impossible at first, because a small surface patch determines only a curve in the f-g plane, not a single point. In fact it is possible, because most surfaces are smooth, with only a few discontinuities of depth and orientation. Two constraints can therefore be exploited:

(1) Brightness. The surface orientation determined by f and g should not differ much from the orientation required by the observed surface brightness.

(2) Smoothness of the surface. The surface orientation at a point should not differ much from the surface orientation at adjacent points.

For each point, the computed f and g values should respect both constraints: the brightness constraint requires the f and g of a given point to fall on the appropriate iso-brightness line, while the smoothness constraint requires them to be close to the average f and g of the adjacent points.

3. Relaxation algorithm

(1) For all non-boundary points, set f = 0 and g = 0. For all boundary points, set f and g to a vector of length 2 perpendicular to the boundary. Call the input array the current array.

(2) Perform the following steps repeatedly, until all values change slowly enough:

(a) For each point in the current array:

(i) if it is a boundary point, do nothing;

(ii) if it is a non-boundary point, compute new f and g values using the relaxation formula.

(b) The resulting new array is referred to as the current array.
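The loop structure of steps (1) and (2) can be sketched as follows. The text's relaxation formula for f and g is not reproduced here, so this sketch shows only the smoothness part of the update (each non-boundary point moves toward the average of its neighbours); that simplification, and all names, are assumptions:

```python
def relax(current, boundary, iters=20):
    """`current` is one array of values (f or g); `boundary` is the set of
    (row, col) positions whose values stay fixed."""
    rows, cols = len(current), len(current[0])
    cur = [row[:] for row in current]
    for _ in range(iters):
        new = [row[:] for row in cur]
        for i in range(rows):
            for j in range(cols):
                if (i, j) in boundary:
                    continue  # step (a)(i): boundary points are left alone
                nbrs = [cur[x][y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < rows and 0 <= y < cols]
                new[i][j] = sum(nbrs) / len(nbrs)  # step (a)(ii), smoothness only
        cur = new  # step (b): the new array becomes the current array
    return cur

# a 3x3 array whose border is fixed at 1.0: the interior converges to 1.0
border = {(i, j) for i in range(3) for j in range(3) if i in (0, 2) or j in (0, 2)}
result = relax([[1.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]], border)
```

The full method would add a brightness-error correction term to each update, pulling f and g toward the appropriate iso-brightness line.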

10.2 Scene Analysis of the Blocks World

Teaching content: From sensor-encoded images of a visible scene, detectors find the main components of the image (such as line segments, simple curves and corners), and knowledge is then used to infer three-dimensional information about the scene.

Teaching emphases: The method of labeling trihedral vertices without cracks and shadows, and the analysis of line drawings with cracks and shadows.

Teaching difficulty: The method of labeling trihedral vertices without cracks or shadows.

Teaching method: Classroom teaching as the main approach, developing students' enthusiasm for study in various ways and combining theory with practice.

Teaching requirements: A basic understanding of line labeling for blocks-world scenes; mastery of the method of labeling trihedral vertices without cracks and shadows, and of the analysis of line drawings with cracks and shadows.

10.2.1 Line Labeling of Blocks-World Scenes

Figure 10.9 A few typical line drawings

The main goal of blocks-world vision research is to obtain a description of the scene from an image of a pile of toy blocks. The description is a grouping of the many lines in the image into sets that represent the individual blocks in the scene. When studying blocks-world scenes, the input image may be a photograph of a block scene, a television image, or a line drawing. In the first two cases, the first step is to obtain a line drawing from the image. This task belongs to the scope of Marr's primal sketch, although the edge-detection operators need not be as complex. In the following discussion we assume that the line drawing of the blocks world has already been obtained.

The blocks-world scene is a narrow and deliberately simplified object of study, but it remains the preliminary target of computer-vision research, and work in this field has produced useful results. The blocks world can be generalized to polyhedra resembling industrial parts, and understanding simple three-dimensional engineering drawings is the first step toward building a vision-guided industrial assembly robot.

 

10.2.2 Labeling Trihedral Vertices without Cracks and Shadows

1. Classification of lines and junctions

We first study trihedral vertices without cracks, assuming illumination arranged so that all shadows are avoided. In such an environment, every line in the drawing represents some naturally occurring edge. The simple classification of these lines is as follows.

2. Labeling trihedral junctions

To classify the lines surrounding a junction, every physically possible trihedral vertex must be examined from every possible viewing direction. Done naively, this runs into the difficulty of too many candidate directions, so accidental viewing positions are excluded (the general-viewpoint assumption) to reduce the number of possibilities. Suppose, for the remainder of this section, that the line drawings contain only trihedral vertices. The three faces of any trihedral vertex define three intersecting planes, and these three planes divide space into eight octants. Clearly, an object forming such a vertex occupies one or more of the eight octants, and the junction label shows how the object fills them. A complete dictionary containing all possible junction labels can be formed in two steps: first, consider all the ways the eight octants can be filled by objects; then, observe each such vertex from every unfilled octant.

 

10.2.3 Analysis of Line Drawings with Cracks and Shadows

Improving the line descriptions increases the number of constraints and so speeds up the analysis, so we investigate whether line interpretations can be classified still further. Before introducing specific methods, one problem should be noted: as the set of line labels is extended, the set of physically realizable junction labels grows dramatically. There will be thousands of legal junction labels, not just 18. It is therefore no longer feasible to construct the table of legal junction labels by hand and have the computer simply look things up in it.

Here are two ways to further classify line interpretations:

1. Further classification of concave labels and introduction of crack labels

Consider that objects are often placed together. Concave labels can accordingly be divided into three classes, representing the number of objects that meet at the edge and identifying which object is in front. Suppose a concave edge marks where two objects touch, and imagine pulling the two objects slightly apart: the concave edge becomes a boundary whose label points in one of two possible directions. These two possibilities are represented by compound labels consisting of the original minus sign plus a new arrow. If three objects meet, a compound label can likewise indicate what would be seen if the objects were drawn slightly apart. Crack lines are handled similarly: each crack line is labeled with a c and an arrow indicating how the two objects concerned fit together.

2. Using lighting conditions to increase the number of labels and tighten the constraints

Another way to improve the line descriptions is to incorporate the illumination conditions of a single light source.

To sum up, every improvement in line interpretation prompts a large expansion of the set of line labels. Initially only the basic lines were considered: boundary lines, interior concave lines and convex lines. These initial line types were extended to include shadow lines. The concave lines were subdivided to reflect the number of objects in contact and how those objects can obscure one another. Crack lines were then introduced and divided into classes in a manner similar to the concave lines. Finally, line information was combined with illumination information. This last extension generates some 50 line labels.

Thinking: How does the number of legal junction labels grow relative to the number of illegal ones?

 

10.3 Visual knowledge representation and control strategy

Teaching content: The application to vision of knowledge-representation methods developed in other fields of artificial intelligence, chiefly the semantic network.

Teaching Focus: Semantic network, location network

Teaching Difficulties: Location network

Teaching methods: Mainly classroom teaching, developing students' enthusiasm for learning in a variety of ways, such as classroom exercises, reflection, discussion and questioning, combined with practice to deepen understanding of the material.

Teaching requirements: Master the semantic network and the location network; gain a general understanding of vision-system control strategies.

10.3.1 Semantic Network Representation of Visual Information

This section focuses on the semantic network, which has the following characteristics:

(1) As a data structure, it provides a conveniently accessible analogical representation of knowledge as well as a representation of propositional-logic knowledge.

(2) It can serve as an analogical structure that mirrors the relationships between things in the relevant domain.

(3) It can serve as a propositional-logic representation equipped with special inference rules.

Exercise: Represent the following scene with a semantic network:

"The bridge at the intersection of the road (ROAD57) and the river (RIVER3) is located near the building (BUILDING30)."
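One minimal way to encode the exercise's scene is as a list of (node, relation, node) triples. The node name BRIDGE1 and the relation names are hypothetical, invented here for illustration; only ROAD57, RIVER3 and BUILDING30 come from the text:

```python
semantic_net = [
    ("BRIDGE1", "is-a", "bridge"),       # BRIDGE1 is a made-up identifier
    ("ROAD57", "is-a", "road"),
    ("RIVER3", "is-a", "river"),
    ("BUILDING30", "is-a", "building"),
    ("BRIDGE1", "at-intersection-of", "ROAD57"),
    ("BRIDGE1", "at-intersection-of", "RIVER3"),
    ("BRIDGE1", "near", "BUILDING30"),
]

def related(net, subject, relation):
    """follow all arcs with the given label leaving `subject`"""
    return [obj for s, r, obj in net if s == subject and r == relation]
```

A query such as `related(semantic_net, "BRIDGE1", "near")` then retrieves the building, following the arcs of the network.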

 

10.3.2 Location Network Representation

In typical applications, the relative positions of the desired features of the scene are expressed in the network, so that the network models the expected structure of the image. There are four basic operations on geometric relations between objects:

(1) Directional operations (left of, right of, north of, above, below): specify a point set by its position and direction relative to another point set.

(2) Region operations (near, inside a quadrilateral, within a circle, etc.): establish a point set whose relation to other point sets does not involve direction.

(3) Set operations: union, intersection and difference of point sets.

(4) Predicate operations: a predicate applied to a region can remove certain point sets by measuring characteristics of the data.
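The four kinds of operation can be sketched on point sets as follows (the function names and signatures are illustrative; the set operations of (3) come free with Python's `set` type):

```python
def left_of(points, x0):
    """directional operation: points strictly to the left of the line x = x0"""
    return {p for p in points if p[0] < x0}

def near(points, centre, radius):
    """region operation: points within `radius` of `centre`"""
    cx, cy = centre
    return {(x, y) for x, y in points if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2}

def predicate_filter(points, pred):
    """predicate operation: keep only points with a measured property"""
    return {p for p in points if pred(p)}

pts = {(0, 0), (2, 0), (5, 5)}
# set operations: union (|), intersection (&), difference (-)
combined = left_of(pts, 1) | near(pts, (0, 0), 2.5)
```

Chaining such operations is what lets a location network narrow down where a feature should be looked for in the image.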

10.3.3 Control Strategies of the Vision System

The visual control strategy governs the flow of information and activity through the various levels of representation. Which mechanism triggers processing: a low-level input such as a retinal colour patch, or a high-level expectation? The different emphasis placed on these two extremes is a fundamental control problem. The two extremes are characterized as follows:

(1) Image-data driven. Here control proceeds from building the generalized image, to the segmented image structure, and finally to the description; this is called bottom-up control.

(2) Internal-model driven. High-level models in the knowledge base produce expectations or predictions about the incoming geometric, segmented or generalized image, and processing then verifies these predictions; this is called top-down control.

(3) Heterarchical (non-hierarchical) control. The term appears to have been proposed by McCulloch, who used it to describe the behaviour implied by the connectivity of neural responses. The idea is to use, at any given moment, whichever expert is most helpful as a means of accomplishing the final task.

10.4 Object Shape Analysis and Recognition

Teaching content: Extending the description of polyhedra to non-polyhedral scenes, and analyzing and recognizing the shapes of objects on the basis of these descriptions.

Teaching focus: The analysis of non-polyhedral objects, with particular emphasis on shape analysis.

Teaching Difficulties: Relaxation labeling method, multi-layer matching method.

Teaching Method: Classroom explanation

Teaching Requirements: Understanding the basic concepts of object shape analysis and recognition

 

10.4.1 Representation of Complex-Shaped Objects

 

A good shape representation allows an object to be identified from a partial view of it, and small changes in the object's shape should cause only small changes in the shape description. The representation should make it convenient to express the connections between the parts of an object, and it should support comparing the differences and similarities of two objects, not merely simple classification.

These requirements are more easily met if a complex object is represented as a decomposition into simpler parts together with the relations between those parts.

Shape recognition is achieved by matching two related descriptions. A partial view of an object produces a description graph that is only part of the complete object description, and the matching process must be able to handle such partial matches appropriately.

1. Description and measurement of curve shape

Curve description is important both for the analysis of planar objects (such as alphabetic characters) and for three-dimensional scenes (such as the roads in an aerial photograph of a region). In addition, the shape description of three-dimensional objects is often simplified to a line structure of "contours".

(1) Methods of storing curves. The easiest way to describe a curve is the coordinate sequence of its points. Considerable computer memory can be saved by storing the starting coordinates of the curve followed by the coordinate increments of each subsequent point.

(2) Approximate description of curves. Approximation methods give compact, structural descriptions of a curve. One method expands the curve in an orthogonal series; another segments the curve into relatively simple pieces. Piecewise-linear approximation is the most common, and spline functions (piecewise polynomials with continuity conditions at the joints) are of general usefulness.

(3) Analytic measures of curve shape. Coefficients of an analytic approximation of a curve can be used to represent its shape characteristics: curves of different shapes have different coefficients. These coefficients, however, may vary greatly with scale, rotation and fragmentation, so such analytic measures apply only when the number of curves is small and the expected variations are small.
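The incremental storage scheme of (1) can be sketched as a pair of encode/decode helpers (the names are illustrative):

```python
def encode_curve(points):
    """store the first coordinate, then the increment to each next point"""
    start = points[0]
    deltas = [(x - px, y - py) for (px, py), (x, y) in zip(points, points[1:])]
    return start, deltas

def decode_curve(start, deltas):
    """rebuild the full coordinate sequence from the start and the increments"""
    pts = [start]
    for dx, dy in deltas:
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts
```

Since neighbouring points on a digitized curve differ by small increments, the deltas need far fewer bits per point than full coordinates, which is the memory saving the text describes.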

2. Description and measurement of region shape

Describing a figure by its interior points is more robust than describing it by its boundary, because relatively small changes in a region can cause much larger changes in its boundary.

(1) Measurement of simple shapes. A rough measure of shape formed from the area and perimeter of a planar figure, area/(perimeter)², is a metric invariant independent of the figure's size, position and orientation. The minimum bounding rectangle of a shape is defined as a rectangle that completely encloses the shape and is not itself enclosed by any other such rectangle, as shown in Figure 10.10.

An improved approximate description of a figure's shape is given by its convex hull, defined as the smallest convex shape that encloses the given shape. The original figure is then described by the shape of the convex hull together with the number and shapes of the concavities (indentations), as shown in Fig. 10.11.

Fig. 10.10 The minimum bounding rectangle of a figure    Fig. 10.11 The convex hull and concavities of a figure

(2) Analytic measures of region shape. As with curve description, coefficients derived from an expansion or approximation in basic functions, such as a two-dimensional Fourier series, can be used to measure the shape of a figure. For some basic functions, these coefficients can be combined to obtain invariants with respect to scale, position and orientation.
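The area/(perimeter)² measure of (1) is easy to check for invariance: for a circle it equals 1/(4π) whatever the radius, and other shapes score lower (a quick sketch; the function name is illustrative):

```python
import math

def compactness(area, perimeter):
    """area / perimeter**2 -- independent of size, position and orientation"""
    return area / perimeter ** 2

# circle of radius r: (pi * r^2) / (2 * pi * r)^2 = 1 / (4 * pi), for every r
c1 = compactness(math.pi * 1.0, 2 * math.pi * 1.0)   # radius 1
c9 = compactness(math.pi * 81.0, 2 * math.pi * 9.0)  # radius 9, same value
square = compactness(1.0, 4.0)  # unit square: 1/16, below the circle's score
```

Invariance holds because scaling a figure by k multiplies the area by k² and the perimeter by k, so the ratio is unchanged.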

 

10.4.2 Shape Description of Three-Dimensional Objects

 

The shape of a three-dimensional object can be described by its outer surfaces, or by the volume enclosed by those surfaces (holes can be described as negative volumes).

The particular difficulty of describing three-dimensional objects is that three-dimensional surfaces or volumes must be inferred from two-dimensional images, especially for invisible surfaces. Below we concentrate on analyzing two-dimensional images to obtain volume descriptions.

1. The generalized-cone representation of object shape

A generalized cylinder (sometimes called a generalized cone) can be used to represent the shape of an object. Since a volume can be described by generalized cones, a complex shape is naturally divided into several simpler generalized cones for description. The screwdriver shown in Figure 10.13 can be described by four generalized cones: one corresponds to the blade, with a varying rectangular cross-section; another corresponds to the shank, with a circular cross-section; and two generalized cones describe the handle. The criterion for dividing a shape into generalized cones should be that the shape, size and axis direction of the cross-section do not change abruptly.

Fig. 10.13 The generalized cone representation of a screwdriver

2. Computing the generalized-cone description

The generalized-cone representation is not unique: the same input may admit many alternative descriptions, and one or several best descriptions must be chosen among them.

(1) Fitting surface data. The best generalized cone can be fitted from the three-dimensional positions of the visible surfaces together with constraints on the axis and the cross-section shape. For cross-sections of known shape, a simple iterative solution may be obtained. Consider a right circular cylinder whose axis direction and cross-section are at first unknown. Once a direction is chosen, elliptical cross-sections can be fitted to the visible surface. An axis is then passed through the centroids of these cross-sections; it need not be perpendicular to them. Next, new cross-sections are made perpendicular to this axis. The process is repeated until the cross-sections change only slightly between iterations. For right cylinders and right cones it converges very quickly. For arbitrary shapes convergence is not guaranteed, and the fitting technique must then assume that the cross-sections are approximately elliptical.

(2) Using object boundaries. A two-dimensional cone can be computed from the boundary of the object. If the two-dimensional contour is the projection of a three-dimensional object, the computed cone is the projection of a three-dimensional cone.

 

10.4.3 Object Shape Recognition Methods

 

Objects, or assemblies composed of several objects, can be recognized by comparing their descriptions with model descriptions stored in the computer. The models may be obtained by storing machine-made descriptions of objects encountered earlier, by learning directly from sequences of view data, or simply by being entered by an operator. If the object description is a feature list, that is, a feature vector, general mathematical pattern-recognition techniques can be used for recognition. For structural descriptions, more complex matching techniques must be adopted. In addition, it should not be necessary to match a description against every model in a large memory; since an exact match may not exist, a suitable subset of models should be selected for retrieval.

1. The graph matching method

A structural description can be regarded as a graph or network, and we are interested in evaluating the similarity of two graphs. Several measures of similarity follow.

A graph G = 〈N, P, R〉 is defined by a set N of nodes (representing the parts of an object), a set P of properties bound to those nodes, and a set R of relations between nodes. Given two graphs G = 〈N, P, R〉 and G′ = 〈N′, P′, R′〉, the pair (n, n′) is said to form an assignment if and only if P(n) and P′(n′) are similar under a given similarity measure (i.e. the properties of node n are similar to those of node n′). Two assignments (n1, n1′) and (n2, n2′) are compatible if all the relations between n1 and n2 in R also hold between n1′ and n2′ in R′, that is, R(n1, n2) = R′(n1′, n2′). Here we assume that the relations are binary.

If the two graphs G and G′ have a one-to-one assignment under which all pairs of assignments are compatible, the two graphs are called isomorphic; in this case each assignment (n, n′) is further required to satisfy P(n) = P′(n′). If a subgraph of G is isomorphic to a subgraph of G′, then G and G′ are said to be subisomorphic.
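The definition of isomorphism can be tested directly, if very inefficiently, by trying every one-to-one pairing (a brute-force sketch that ignores the node properties P and checks only the relations R; all names are illustrative):

```python
from itertools import permutations

def is_isomorphic(nodes_a, edges_a, nodes_b, edges_b):
    """Is there a one-to-one pairing under which every ordered pair of
    nodes is related in G exactly when its image is related in G'?
    Exponential brute force -- a sketch of the definition only."""
    if len(nodes_a) != len(nodes_b):
        return False
    ea, eb = set(edges_a), set(edges_b)
    for perm in permutations(nodes_b):
        m = dict(zip(nodes_a, perm))  # candidate assignment n -> n'
        if all(((m[u], m[v]) in eb) == ((u, v) in ea)
               for u in nodes_a for v in nodes_a):
            return True
    return False
```

Practical matchers prune candidate assignments using the node properties and backtracking rather than enumerating all n! pairings.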

2. The relaxation labeling method

The labeling problem is defined as pairing a set of labels with a set of nodes (or units) so that the label pairing is consistent with given constraints. This formulation has many applications and includes the graph-matching problem, in which the labels are the nodes of the other graph.

Let N be the set of nodes to be labeled and L the set of labels. For each node ni we wish to assign a label set Li, a subset of L, whose labels are compatible with the given constraints. In the unambiguous case, each set Li contains exactly one element. The simplest constraints are unary, restricting the labels that may be given to a particular node regardless of the other nodes in the network. A binary constraint stipulates a relationship between the labels of a pair of nodes. The label set Li of node ni is compatible with the label set Lj of node nj if every label in Li is compatible with at least one label in Lj. This kind of compatibility is called arc consistency.

In general, constraints are n-ary, and arc consistency does not necessarily imply global consistency. Figure 10.14 gives an example: each node must be labeled red or green, and adjacent nodes must receive different colors. Whenever one node is given a color, we can assign a compatible color to each of its neighbors, yet there is no way to make all three nodes satisfy the constraints simultaneously.

A stronger condition is path consistency (path consistency). Two nodes ni and nj, labeled lk and ll respectively, are path consistent if, for every path from ni to nj in the network, there exists a set of labels for the nodes along the path that is pairwise consistent (under the binary constraints) with both endpoint labels lk and ll. The network in Figure 10.14 is not path consistent.
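The gap between local and global consistency in the Figure 10.14 example can be checked directly by enumeration. This small sketch (node and edge names are illustrative) verifies that every label of every node has a compatible neighbor label, yet no single assignment satisfies all three "adjacent colors differ" constraints at once:

```python
from itertools import product

# The triangle of Fig. 10.14: three mutually adjacent nodes, each to be
# colored red or green; adjacent nodes must receive different colors.
nodes = ["n1", "n2", "n3"]
edges = [("n1", "n2"), ("n2", "n3"), ("n3", "n1")]
labels = ["red", "green"]

# Arc consistency: every label of a node has at least one compatible
# label at each neighbor (the other color always exists).
arc_consistent = all(
    any(a != b for b in labels)
    for (x, y) in edges
    for a in labels
)

# Global consistency: does any single assignment satisfy every edge?
globally_consistent = any(
    all(assign[nodes.index(x)] != assign[nodes.index(y)] for (x, y) in edges)
    for assign in product(labels, repeat=len(nodes))
)
print(arc_consistent, globally_consistent)  # prints: True False
```

The odd cycle with only two colors is the smallest network that is arc consistent but globally inconsistent, which is why stronger tests such as path consistency are needed.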

In practice, often only arc consistency is enforced, since it is usually effective at pruning the available options.

Fig. 10.14 An arc-consistent but globally inconsistent labeling

 

3. Multi-layer matching method (multilevel matching)

Graph matching and relaxation labeling are widely used in scene analysis. However, by themselves they do not provide a satisfactory description of similarities and differences: combining numerical weights over unrelated features (such as color and size) may not be meaningful. An alternative is the multilevel matching method, in which the result of matching two descriptions is itself a description of their similarities and differences. If matching against two different models yields similar differences, the scene may need to be re-examined at a finer level of detail.

This method has been used to identify objects. In some cases two models have similar connectivity; they can then be distinguished by the features of their individual parts. In general, a more detailed analysis is required.

When the number of models is large, matching against every model is impractical; instead, memory should be indexed so that only a few candidate models need to be retrieved. Retrieval can exploit relations such as the observer's viewpoint and knowledge of which objects are expected in the environment.

The retrieval process should tolerate both changes in an object's description caused by different viewing conditions and the variability introduced by the description process itself. This variability can be handled by indexing on the observed description as well as on versions of the description perturbed according to the expected variations.

 

10.5 Summary

 

Machine vision, the subject of this chapter, deals with one of the richest, most complex, and most important kinds of sensory information, and is one of the most widely applied machine senses. Understanding and analyzing images is one of its central research topics.

Object shape is one of the most important kinds of visual information, and shape recognition and analysis are central problems in applications ranging from industrial and agricultural production to transportation and national defense.

 

From:http://netclass.csu.edu.cn/jpkc2003/rengongzhineng/rengongzhineng/jiaoan/chapter10.htm
