[Reprinted] production process of the "Xiaoqiang No. 1" Experimental Robot


Author: mindroid · Source: http://www.mindroid.com · Last updated: 19:47:10

After becoming interested in robots, I bought a pile of books, read a lot of material, and decided to start with the simplest wheeled robot. An ordinary PC is powerful, but it can neither collect physical data from the outside environment directly nor drive motors directly. I found that USB data-acquisition modules can read sensor data and PCI motion-control cards can drive various motors, but the cost is somewhat high, so I set those aside for now. After shopping around, the microcontroller (MCU) solution turned out to be cheap, and it would also let me study MCU control and do some circuit experiments. So I finally decided to build the experiment around a microcontroller.

Parts

The purchased parts are as follows:

An 8-bit ATmega168 microcontroller development board (Arduino).

Circuit boards and cables

Two geared motors to drive the wheels. I could not find suitable gears and have no machining equipment, so to keep things simple I used motors with integrated gearboxes instead.

A distance sensor, the robot's main sensor, used to measure the distance to obstacles. A common infrared distance sensor serves as the robot's "eye".

A servo, used to steer the direction of the robot's "eye".

A pile of parts:

 

Experiments

Next, let's figure out how to use the microcontroller.

Hello World

First, get the microcontroller running. I'm used to programming inside an operating system, and this was my first time on a bare microcontroller, so I was worried about problems, but it went quite smoothly. This program lights an LED on the board.

Code:

int ledPin = 13; // LED connected to digital pin 13
void setup()
{
 pinMode(ledPin, OUTPUT); // sets the digital pin as output
}

void loop()
{
 digitalWrite(ledPin, HIGH); // sets the LED on
 delay(1000); // waits for a second
 digitalWrite(ledPin, LOW); // sets the LED off
 delay(1000); // waits for a second
}

Pin 13 of the microcontroller is driven high and low in a loop at 1000 ms intervals, turning the LED on and off. This makes it easy to verify that the microcontroller works.

Controlling the Servo

A servo is the kind of actuator used in radio-controlled models. It is really a motor with closed-loop control that can rotate to a specified angle.

How it works: the control signal enters the signal-modulation chip through the receiver channel and is converted to a DC bias voltage. The chip contains a reference circuit that generates a reference pulse with a 20 ms period and a width of 1.5 ms. The DC bias voltage is compared with the voltage of the feedback potentiometer to produce a voltage difference.

The sign of this voltage difference, fed to the motor-driver chip, determines the motor's direction of rotation. As the motor turns at a given speed, it drives the potentiometer through a reduction gear train; when the voltage difference reaches 0, the motor stops.

In software, you control the servo's rotation angle by generating PWM pulses of different widths.

The code that generates the PWM to drive the servo is fairly simple; it uses delay() to implement the timing. If you need to control several servos at once, that approach cannot be used and you must track the timing yourself, because while delay() runs the entire microcontroller is "resting" and no other code can execute. In a real program you also need to keep reading sensor input and doing other processing.

Infrared distance sensor

Sensor introduction:

A Sharp infrared distance sensor, used in modeling and robot building, measures distance. Each module comes with a 15 cm single-ended cable with a PH2.0 connector.

Technical specifications:

Test distance: 10-80 cm

Operating voltage: 4-5.5 V

Standard current consumption: 33-50 mA

Output: analog. The output voltage varies with the measured distance.

int potPin = 2;   // analog input connected to the sensor
int ledPin = 13;  // LED on digital pin 13
int val = 0;      // value read from the sensor

void setup() {
 pinMode(ledPin, OUTPUT);
}

void loop() {
 val = analogRead(potPin);    // read the sensor (0-1023)
 digitalWrite(ledPin, HIGH);
 delay(val);                  // blink interval follows the reading
 digitalWrite(ledPin, LOW);
 delay(val);
}


As a result, the blink interval varies with the distance the sensor measures.

Controlling the Motor

DC motors are commonly driven with an L293D. The L293D circuit is as follows:

However, for small motors it turns out to be feasible to drive them straight from one of the microcontroller's PWM pins, outputting different effective voltages to control speed. It isn't precise, but it is enough to demonstrate the motion. The microcontroller cannot output a true linear analog voltage; PWM simulates different voltage values by adjusting the duty cycle.

I first measured the motor's current with and without load, and then used this simple direct-drive method for verification.

Connect the motor directly between GND and PWM pin 9 of the microcontroller, then write output values to pin 9 in the program.

int value = 0;     // PWM duty value
int motorPin = 9;  // motor on PWM pin 9

void setup()
{
 // nothing for setup
}

void loop()
{
 // ramp the speed up...
 for (value = 0; value <= 255; value += 5)
 {
  analogWrite(motorPin, value);
  delay(30);
 }
 // ...and back down
 for (value = 255; value >= 0; value -= 5)
 {
  analogWrite(motorPin, value);
  delay(30);
 }
}

The program above drives the motor with different output values, and you can watch the motor's speed change. With the microcontroller and the main components verified, the next step is to start programming.

Software design ideas

Before coding, I wanted to survey briefly how low-level intelligent robots are implemented. Industrial robots today use precise control methods that solidify every piece of processing logic in a program: the programmer's thinking drives the robot's operations.

To a programmer, a robot is nothing more than a computer with physical sensor input and motion-control output; whether it runs on a 64-bit multi-core CPU or an 8-bit microcontroller is only a difference in performance. In theory, we can reach the intended goal through programming either way. But if we design it like an ordinary PC program, large amounts of functional code become coupled together. Every new motion function or task requires new code, and the new code adds new logic branches to the original feedback loop, all of which the programmer must design and verify. That is fine for PC programs doing precise computation, or for industrial robots making precise movements in a fixed scene. The "Xiaoqiang No. 1" robot, however, operates in a dynamic, changing environment; traditional software design would produce a tangle of coupled logic branches and poor extensibility. For example, to implement wall-following, you would have to enumerate the input conditions of every sensor and then script preset action sequences.

If there are many actions to execute and many situations to judge, the program's logic branches become complicated and hard to extend. Is there a better approach to robot software design?

Think about it differently: what we are building is not a computer program but an experimental robot, and reaching natural insect-level "intelligence" would already be very good. Would an ant or a cockroach reason through such complicated logic branches, deciding what action to take in every possible situation? Unlikely. We were using a complicated idea to solve a simple problem.

The main difference between a robot and a computer is that a computer runs pre-planned instructions, while a robot must select appropriate actions as the environment changes. First, we can simplify the action model. I divide what the robot executes into three layers: task – behavior – action.

Task: a goal we want the robot to accomplish. For example, "collect the garbage on the ground within a certain area" can be defined as a task.

Behavior: a decomposition of a task; a behavior is what the robot chooses to do in response to environmental changes. For example, avoiding a wall, or seeking a charger when power is low, can each be a behavior.

Action: what the robot's motors and other output devices finally do, such as moving forward, backing up, turning, or rotating the camera; each of these is an action unit.

Note: a task consists of one or more behaviors, and actions are the minimum units of immediate execution. At any point in time, only one action can be executed.

The goal of the software design is to decompose the robot into simple behaviors and implement each behavior independently, with no interference or coupling between them. When several behaviors want to output conflicting actions, a priority table decides which behavior wins. This is the theory of behavior-based programming; search for "behavior-based robotics" on the Internet for detailed descriptions.
With this design, we can mimic the reflex characteristics of animals. We no longer need to focus on complicated logic processing; we only need to implement small behavior units according to the basic behavior decomposition. Each behavior unit cares only about its own sensor input and the control action it needs to output. With the design concept settled, the next part begins the software design for Xiaoqiang No. 1.

For more information, see:
http://www-robotics.usc.edu/~maja/publications/mitcs.ps.gz (describes how to decompose behaviors; you need gsview to read it)
http://www.research.ibm.com/people/j/jhc/pubs/jhc-design.pdf
http://www-robotics.usc.edu/~maja/bbs.html
Behavior-Based Robotics (a practical guide)
 

Software Structure

Xiaoqiang No. 1 has a very simple structure. The only sensor is an infrared distance sensor used as the robot's "eye", and two motors provide motion. The distance sensor is mounted on a servo as the "head", so it can sweep left and right. The robot's simplest behavior is cruising: give the motors current and they turn. What about collisions? The cruise behavior does not need to consider collisions at all; otherwise it would grow into complex judgment logic. Instead, an independent "escape" behavior handles collision avoidance: it reads the obstacle distance from the distance sensor and simply checks whether there is an obstacle ahead to avoid. Because several behaviors may try to control the motors, an arbiter is needed to decide, by priority, which behavior's commands are output. The following figure shows the arrangement:

To add more complex intelligence later, you only need to add corresponding behavior units; the existing logical structure does not change. Each behavior unit only has to handle the sensors it cares about and the actions it wants to control. The benefit is that the program logic stays very simple and the functional units do not affect one another.

The following code implements the "cruise" behavior:

class BhCruis : public Behavior
{
public:
    BhCruis(const char* name) : Behavior(name) { }
    ~BhCruis();
    void Run();
    void Setup();
};

void BhCruis::Setup()
{
}

void BhCruis::Run()
{
    int bid = GetId();
    GO_speed_left[bid] = 255;   // full speed on both wheels
    GO_speed_right[bid] = 255;
    GT_beh_action[ACTION_TYPE_MOTOR][bid] = true;  // request motor control
}

The code above is very simple: it just makes the two motors spin at the same speed and considers nothing else. Some behaviors need a slightly more complex process. For example, on finding an obstacle the robot must first stop, then decide whether to go left or right, and only then carry out the specific action. A state machine handles this kind of behavior that cannot finish in one step and must pass through several different states. Here is the state machine for the "escape" behavior:

The behavior's state machine lives entirely inside that behavior's implementation and does not affect other behaviors. This structure lets each behavior focus on its own implementation, minimizing software complexity and making it easy to add or modify functions. Prototype code for the escape behavior's state machine:

void BhEscape::Run() {
    int bid = GetId();

    // State machine implementation
    switch (state) {
    case 0: // obstacle too close ahead?
        if (GI_distance[DISTANCE_FORWORD] < 30) {
            GO_speed_left[bid] = 0;   // stop both wheels
            GO_speed_right[bid] = 0;
            GT_beh_action[ACTION_TYPE_MOTOR][bid] = true;
            timestart = millis();
            state++;                  // status change
        } else {
            GT_beh_action[ACTION_TYPE_MOTOR][bid] = false;
        }
        break;

    case 1: // look to one side
        if (millis() - timestart > 1000) {
            GO_eyeAngle[bid] = 0;
            GT_beh_action[ACTION_TYPE_HEAD][bid] = true;
            timestart = millis();
            state++;
        }
        break;

    case 2: // look to the other side
        if (millis() - timestart > 1000) {
            GO_eyeAngle[bid] = 180;
            GT_beh_action[ACTION_TYPE_HEAD][bid] = true;
            timestart = millis();
            state++;
        }
        break;

    case 3: // decide which way to turn
        if (millis() - timestart > 1000) {
            GO_eyeAngle[bid] = ANGLE_CENTER_90;  // recenter the head
            GT_beh_action[ACTION_TYPE_HEAD][bid] = true;

            if (GI_distance[DISTANCE_LEFT_1] > GI_distance[DISTANCE_RIGHT_1]) {
                // turn left
                GO_speed_left[bid] = 0;
                GO_speed_right[bid] = 255;
            } else {
                // turn right
                GO_speed_left[bid] = 255;
                GO_speed_right[bid] = 0;
            }
            GT_beh_action[ACTION_TYPE_MOTOR][bid] = true;

            state = 0;
            delay(2000);
        }
        break;

    default:
        break;
    }
}

This behavior-unit implementation resembles biological reflex behavior. The complete code can be downloaded from the download area. With the basic code finished, the next part builds the robot's body and tests it.

The test robot's chassis was improvised from materials at hand, reusing part of an assembled toy car, like this:

To mount the motors and sensor, I reassembled it to look like this:

With the basic structure in place, you can test whether everything works properly.

After the function tests pass, it can run on the ground:

It can even patrol the house and watch for thieves.
