
Mobile Robot Combination Autonomous Behavior Strategy to Inspect Hazardous Gases in Relatively Narrow Man–Machine Environment

Abstract

Selecting the optimal speed for dynamic obstacle avoidance in complex man–machine environments is a challenging problem for mobile robots inspecting hazardous gases. Consideration of personal space is important, especially in relatively narrow man–machine dynamic environments such as warehouses and laboratories. In this study, human and robot behaviors in man–machine environments are analyzed, and a man–machine social force model is established to study the robot obstacle avoidance speed. Four typical man–machine behavior patterns are investigated to design the robot behavior strategy. Based on the social force model and man–machine behavior patterns, a fuzzy-PID trajectory tracking control method and an autonomous obstacle avoidance behavior strategy are proposed for a mobile robot inspecting hazardous gases in a relatively narrow man–machine dynamic environment, determining the optimal robot speed for obstacle avoidance. The simulation results show that, compared with the traditional PID control method, the proposed controller has a position error of less than 0.098 m, an angle error of less than 0.088 rad, a smaller steady-state error, and a shorter convergence time. The crossing and encountering pattern experiments show that the proposed behavior strategy ensures that the robot maintains a safe distance from humans while performing trajectory tracking. This research proposes a combination autonomous behavior strategy for mobile robots inspecting hazardous gases, ensuring that the robot maintains the optimal speed for dynamic obstacle avoidance, reducing human anxiety and increasing comfort in a relatively narrow man–machine environment.

1 Introduction

In chemical plants, laboratories, warehouses, and other indoor environments, inspection spaces are relatively narrow. A gas leak can cause human, equipment, and property losses. Using mobile robots for autonomous detection of dangerous gases in indoor environments allows digitization of the production process, reduces accident risk, protects human life and property, and has broad application prospects. The detection range of a fixed detector is limited, and a handheld detector cannot ensure inspector safety. Mobile robots have therefore become a worldwide research focus [1,2,3]. Using mobile robots instead of handheld sensors to detect hazardous gases can reduce operator risk and ensure efficient real-time gas leak detection in unmanned conditions. However, typical mobile robots for hazardous-gas inspection are large, so some areas are likely to go uninspected. Robots generally adjust their direction and speed for dynamic obstacle avoidance, which is challenging in narrow spaces shared with humans, where the robots may cause human discomfort and affect work efficiency.

Dynamic obstacle avoidance technology is essential for inspection robots [4, 5]. Standard traditional obstacle avoidance algorithms include the artificial potential field (APF) [6], essential visibility graph (EVG) [7], vector field histogram (VFH) [8], and fuzzy logic control [9,10,11] algorithms. In the 1980s, Khatib [12] proposed the artificial potential field method based on a virtual force field, which generates a relatively smooth path with good obstacle avoidance; however, when the attractive and repulsive forces are equal, it falls into a local optimum and produces an oscillating route [13]. The essential visibility graph proposed by Janet et al. [14] in the 1990s required large data storage and sensors to collect data in advance, and the reliability of its obstacle avoidance is strongly affected by sensor performance [15, 16]. To avoid the defects of purely local obstacle avoidance, Ulrich et al. [17] proposed the VFH* method and verified that a specific direction could successfully guide the robot away from the local optimal solution; however, its application remained restricted [18]. Fuzzy logic can achieve dynamic obstacle avoidance in an unstructured environment [19, 20] without constructing complex motion models and environmental data models, effectively reducing the data calculation burden and improving obstacle avoidance efficiency [21]. Thus, this study uses fuzzy logic to achieve dynamic obstacle avoidance. The main advantage of the proposed design is that it allows flexible movement and comprehensive detection in a relatively narrow man–machine space. Based on the social force model and typical man–machine behavior patterns, a combination autonomous behavior strategy is proposed that includes a fuzzy-PID trajectory tracking method and an autonomous obstacle-avoidance strategy to choose the optimal robot speed and ensure sufficient human space. A control system is designed to improve the accuracy and timeliness of controller information transmission.

In the 1970s, Saridis [22] divided the control system into organization, coordination, and execution layers based on decreasing control accuracy [23]. With the rapid development of sensor systems and their widespread application in robots, the classic three-tier sense-plan-act (SPA) layered architecture emerged [24, 25]. However, completing each task requires hierarchical calculation and transmission, resulting in control delays and a lack of flexibility and real-time operation. The behavior-based subsumption architecture avoids long-link information transmission and improves the rapid-response capability of the robot; its shortcomings include insensitivity to information accuracy, a high error rate, and a lack of initiative in target tasks owing to the absence of task guidance and coordinated planning. This study uses a hybrid architecture in the control system design that fully reflects the advantages of the two classic architectures and effectively overcomes the limitations of single-structure control.

The research includes five main steps: (1) introducing the robot mechanical structure and composition; (2) establishing a man–machine social force model, studying four typical man–machine behavior patterns to help the robot achieve dynamic obstacle avoidance, and explaining the robot combination autonomous behavior strategy; (3) designing the robot control system; (4) performing simulations and experiments; (5) analyzing the simulation and experimental results and drawing conclusions.

2 Overall Design of Mobile Inspection Robot

The design of the mobile inspection robot is based on the robot behavior strategy. The robot is primarily composed of a chassis, a shell, two driving wheels, two driven wheels, and other electronic components. Figure 1 shows a schematic of the overall robot structure. The driving wheels use hub motors and are fixed to the chassis by mounting seats, bolts, and nuts. The two front driving wheels control forward and backward movement and use a differential drive for steering. The rear wheels form a universal wheel unit through the wheel shaft to complete passive movement. Following the least-friction principle, a servo was installed to help the rear wheel turn through a gear, reducing pure sliding between the rear wheels and the ground. A group of short and long support columns was installed above the chassis to mount the outer shell of the robot. The hub motor driver, controller, battery, and other components were installed between the shell and the chassis.

Figure 1  Schematic of robot structure

A grayscale sensor was installed below the chassis. Four ultrasonic sensors were placed on the robot: two in front of the shell and one on each side of the shell. A PTZ double-head camera was attached to the robot by a gimbal support and gimbal shell.

3 Kinematics Analysis of Robot

3.1 Forward Kinematics Model of Robot

To establish a robot motion model according to the application environment and robot mechanical structure, the following assumptions are made for the robot system: (1) the robot chassis is a rigid structure; the planes of the four wheels are perpendicular to the ground; the center of mass and centroid of each wheel overlap and are in the same plane; (2) there is only relative rotation between the two driving wheels and the chassis, between the driven wheel shaft and the chassis, and between the driven wheel and driven wheel shaft; (3) each wheel has only one contact point with the ground, and the slight wheel and tire deformation are ignored; (4) wheel skidding on the ground is ignored.

The global reference coordinate system {XOY} and robot reference coordinate system {xoy} are defined, where the vector ξ = [x, y, θ]T represents the robot pose, with the robot heading taken as the forward direction; o is at the midpoint of the two driving wheels, and x and y are the displacements of the robot on the X and Y axes, respectively; θ is the angle between the global and robot reference coordinate systems. The rotation speed of the left driving wheel is ω1, the rotation speed of the right driving wheel is ω2, and the radius of the driving wheels is r. L denotes the vertical distance between the center of the drive wheel and the midpoint M of the shaft center line. The distance from the center of the two front wheels to the center of mass is L1; the distance between the rear wheel axis and the center of mass is L2; L3 is the distance from the rear axle to the rotating shaft. The rotation angle of the rear wheel is β. The speed at the midpoint of the two front wheels is v. Figure 2 shows the kinematic model of the robot.

Figure 2  Kinematics model of robot

Velocities v and ω of the body can be calculated from the left and right wheel speeds v1 = rω1 and v2 = rω2 as [26]:

$$v(t) = \frac{v_{1} + v_{2}}{2},$$
(1a)
$$\omega(t) = \frac{v_{1} - v_{2}}{L}.$$
(1b)

In the global reference coordinate system {XOY}, the M-point motion equation is

$$\dot{\boldsymbol{\xi}} = \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix} = \begin{pmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} v \\ \omega \end{pmatrix} = \begin{pmatrix} \dfrac{r\cos\theta}{2} & \dfrac{r\cos\theta}{2} \\ \dfrac{r\sin\theta}{2} & \dfrac{r\sin\theta}{2} \\ \dfrac{r}{L} & -\dfrac{r}{L} \end{pmatrix} \begin{pmatrix} \omega_{1} \\ \omega_{2} \end{pmatrix}.$$
(2)

Integrating each component yields Eq. (3):

$$\left\{ \begin{aligned} x &= \frac{r}{2}\int_{0}^{t} \cos\theta\,(\omega_{1} + \omega_{2})\,\mathrm{d}t, \\ y &= \frac{r}{2}\int_{0}^{t} \sin\theta\,(\omega_{1} + \omega_{2})\,\mathrm{d}t, \\ \theta &= \frac{r}{L}\int_{0}^{t} (\omega_{1} - \omega_{2})\,\mathrm{d}t. \end{aligned} \right.$$
(3)

When ω1 = ω2, the two driving wheels rotate at the same speed and the robot moves in a straight line. When ω1 ≠ ω2, the robot turns according to the speed differential: when ω1 > ω2, the robot turns left; when ω1 < ω2, the robot turns right. When ω1 = −ω2, the robot rotates in place and the turning radius is zero. As the rotation speeds of the left and right driving wheels can be set independently, the robot turning radius is calculated as

$$R = \frac{\left(1 + \dfrac{\omega_{1}}{\omega_{2}}\right)L}{2\left(\dfrac{\omega_{1}}{\omega_{2}} - 1\right)}.$$
(4)
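The forward kinematics above is straightforward to exercise numerically. Below is a minimal Python sketch of Eqs. (1)–(4), assuming a simple Euler integration over one sample interval; the parameter values are those given after Eq. (6).

```python
import math

# Differential-drive forward kinematics, Eqs. (1)-(4).
# r (wheel radius) and L are the robot parameters given after Eq. (6).
r, L = 0.200, 0.438  # m

def body_velocity(w1, w2):
    """Body linear and angular velocity from wheel speeds (rad/s), Eqs. (1a)-(1b)."""
    v1, v2 = r * w1, r * w2      # wheel rim speeds
    return (v1 + v2) / 2.0, (v1 - v2) / L

def integrate_pose(pose, w1, w2, dt):
    """One Euler step of Eq. (3); pose = (x, y, theta) in {XOY}."""
    x, y, th = pose
    v, w = body_velocity(w1, w2)
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def turning_radius(w1, w2):
    """Eq. (4), rewritten as L(w1 + w2) / (2(w1 - w2)) to avoid dividing by w2."""
    if math.isclose(w1, w2):
        return math.inf          # straight-line motion, infinite radius
    return L * (w1 + w2) / (2.0 * (w1 - w2))
```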

For this robot, the rear-wheel rotation angle β is related to the rotation speed ratio of the left and right wheels, as expressed in Eq. (5):

$$\frac{\omega_{1}}{\omega_{2}} = \frac{2\left(L_{1} + L_{2} + \dfrac{L_{3}}{\cos\beta}\right) + L\tan\beta}{2\left(L_{1} + L_{2} + \dfrac{L_{3}}{\cos\beta}\right) - L\tan\beta}.$$
(5)

The rotation angle β is

$$\tan\beta = \left(L_{1} + L_{2} + \frac{L_{3}}{\cos\beta}\right)\bigg/\frac{\left(1 + \dfrac{\omega_{1}}{\omega_{2}}\right)L}{2\left(\dfrac{\omega_{1}}{\omega_{2}} - 1\right)},$$
(6)

where r = 0.200 m, L = 0.438 m, L1 = 0.214 m, L2 = 0.172 m, and L3 = 0.046 m. Accordingly, a greater rear wheel turning angle produces a smaller turning radius.

The driven wheel is not a single universal wheel; it acts as a unit formed by combining two universal wheels. To reduce friction, two gears and a servo were installed to assist the rotation of the rear wheel. For a 90° turn, the above equations give a rear-wheel rotation angle of approximately 20°, a left/right wheel rotation speed ratio of approximately 1.44, and a minimum turning radius R of approximately 1.17 m, as shown in Figure 3.
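As a check of these values, Eq. (6) can be solved for β by fixed-point iteration; the choice of solver here is ours, not the paper's.

```python
import math

# Geometric parameters from Section 3.1 (m).
L, L1, L2, L3 = 0.438, 0.214, 0.172, 0.046

def rear_wheel_angle(k, iters=20):
    """Solve Eq. (6) for beta given the wheel speed ratio k = w1/w2."""
    R = (1 + k) * L / (2 * (k - 1))          # turning radius, Eq. (4)
    beta = 0.0
    for _ in range(iters):                    # fixed-point iteration on Eq. (6)
        beta = math.atan((L1 + L2 + L3 / math.cos(beta)) / R)
    return beta, R

beta, R = rear_wheel_angle(1.44)
print(f"beta = {math.degrees(beta):.1f} deg, R = {R:.2f} m")
# beta comes out near 20 deg, consistent with the value above; R evaluates
# to roughly 1.2 m.
```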

Figure 3  Schematic of robot turning motion

3.2 Dynamic Obstacle Avoidance Strategy

When the robot performs inspection tasks in a narrow indoor space, it should maintain a reasonable safety distance from obstacles, especially humans, to prevent contact and human anxiety. Thus, when formulating the robot inspection rules, humans are considered interfering objects in the shared space. Because the robot operates in relatively narrow man–machine environments, it may not have sufficient space to plan a new trajectory, so adjusting the moving direction is not the best way to avoid obstacles. To improve safety, a behavior strategy should be developed from the human and robot force models and behavior patterns, obtaining appropriate speed control parameters from the social force model and fuzzy rules suited to the environment.

3.2.1 Social Force Model

Generally, each person and robot in the man–machine behavior model has a local motion target, individual motion speed, and acceleration. Three force terms should be considered in the modeling process [27]: (1) the acceleration term Facc, reflecting the tendency to reach the desired speed; (2) the interaction term Fint, reflecting the boundary condition of maintaining a certain distance during human–robot interaction; (3) the attraction term Fatt, drawing the human/robot toward its target.

Assuming that these three components simultaneously affect human decision-making, as shown in Figure 4, according to the traditional force superposition principle, we obtain the total effect force Ftotal:

$$\sum F_{\text{total}} = F_{\text{acc}} + F_{\text{int}} + F_{\text{att}}.$$
(7)
Figure 4  Social force model

The social force model can predict local human motion trends and guide robot behavior strategies to ensure a communication space between robots and humans. The social space is calculated as

$$\left\{ \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}v(t) &= \frac{\sum F_{\text{total}}}{m}, \\ R_{\theta,\text{avoid}} &= R_{\text{r},\text{avoid}} - v(t)\,t, \end{aligned} \right.$$
(8)

where Rθ,avoid and Rr,avoid are the robot avoidance distance and the human avoidance distance, respectively; m is the robot mass (34 kg); v(t) is the robot moving speed.

In 1966, American anthropologist Dr. Edward Hall reported that humans need space to prevent interference and danger, especially in social situations; when space invasion occurs, it causes human discomfort, affecting work efficiency and emotional health. Interpersonal distance was divided into four types in the study [28]. (1) Intimate Distance: Within this range, the stimulus intensity is extremely high, divided into 0–0.15 m and 0.15–0.45 m ranges. (2) Personal Distance: This distance is suitable for harmonious acquaintances. Strangers entering this space constitute an invasion of space that causes discomfort, divided into 0.45–0.75 m and 0.75–1.20 m. (3) Social Distance: A commonly used distance in general working conditions that is convenient for work tasks, divided into 1.20–2.10 m and 2.10–3.60 m. (4) Public Distance: This space is often used in formal conditions, divided into 3.60–7.50 m and greater than 7.50 m. As the robot is in a narrow indoor environment, a human avoidance range of 1.50–3.0 m is considered to ensure human comfort.
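A minimal sketch of the speed update implied by Eqs. (7)–(8) follows; the three force terms are left as placeholder arguments, since the model shapes them from the desired speed, the man–machine distance, and the goal direction.

```python
M_ROBOT = 34.0    # robot mass (kg), Section 3.2.1
SENSITIVE = 1.5   # lower bound of the adopted human avoidance range (m)
MAX_AVOID = 3.0   # upper bound of the adopted human avoidance range (m)

def step_speed(v, f_acc, f_int, f_att, dt):
    """First line of Eq. (8): integrate dv/dt = sum(F)/m over one interval."""
    f_total = f_acc + f_int + f_att          # Eq. (7)
    return v + (f_total / M_ROBOT) * dt

def robot_avoid_distance(r_avoid, v, t):
    """Second line of Eq. (8): the avoidance distance shrinks as the robot advances."""
    return r_avoid - v * t
```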

3.2.2 Behavioral Pattern Analysis

According to the man–machine relative distance, relative motion direction, relative position, and relative speed factors in a shared space, empirical and heuristic methods are used to categorize the human (or interfering body) behavior into four typical types [29]: crossing pattern, encountering pattern, leading pattern, and confronting pattern. Based on these patterns, the robot follows the corresponding rules to ensure safety and does not invade sensitive space.

(1) Crossing behavior: In man–machine interaction, crossing behavior usually appears at a passage intersection, where an interfering body crosses in front of the robot, as shown in Figure 5a. (2) Encountering behavior: In a passage, the robot and the interfering body meet face-to-face; their motion directions do not affect each other, but each may enter the sensitive area of the other, as shown in Figure 5b. (3) Leading behavior: The interfering body appears directly in front of the robot, and both move in the same direction, as shown in Figure 5c. (4) Confronting behavior: The interfering body appears directly in front of the robot, but the two move in opposite directions, as shown in Figure 5d.

Figure 5  Four man–machine behavior patterns

To keep the robot motion range from invading human sensitive areas, four behavior rules are designed to ensure no collision and no interference.
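One plausible reading of this classification in code is sketched below; the π/4 sector width and 0.5 m lane half-width are illustrative assumptions, not values from the paper.

```python
import math

def classify_pattern(rel_pos, robot_heading, human_heading, lane=0.5):
    """Classify the interferer into one of the four patterns of Figure 5.

    rel_pos: interferer position in the robot frame (x forward, y left).
    """
    # Angle between the two motion directions, wrapped to [0, pi].
    d = abs((human_heading - robot_heading + math.pi) % (2 * math.pi) - math.pi)
    lateral = abs(rel_pos[1])        # offset from the robot's path line
    if d < math.pi / 4:
        return 'leading'             # ahead of the robot, moving the same way
    if d > 3 * math.pi / 4:
        # Opposite directions: head-on if on the path line, else passing by.
        return 'confronting' if lateral < lane else 'encountering'
    return 'crossing'                # roughly perpendicular motion
```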

3.2.3 Fuzzy Controller Design

The fuzzy logic control method is used to formulate behavioral movement rules for interfering objects and robot-specific strategies based on analysis of fundamental human behavior patterns. Three steps are required for the robot to have decision-making capabilities similar to those of humans through speed adjustment: fuzzification of input and output, fuzzy reasoning, and defuzzification of output variables.

(1) Fuzzification of input and output

The input variables include the man–machine distance Rr, the linear velocity of the interfering object Vj, and the interference behavior pattern Bj. The output is the robot linear velocity increment ΔVrj. The hub encoder can linearly adjust the robot linear velocity. Except for the behavior pattern, these variables are continuous, so the domains of the input/output variables are continuous. The membership functions are described using Gaussian functions.

When the man–machine distance is less than the maximum avoidance distance of the robot, a control signal is generated; the minimum cannot be less than the sensitive human distance. The man–machine distance Rr domain is divided into {DS, DE, DN, DF}, where DS represents the sensitive human distance, DE represents 75% of the avoidance distance, DN represents 90% of the avoidance distance, and DF represents the maximum robot avoidance distance. The membership function is shown in Figure 6a. The interfering object linear velocity Vj domain is divided into {VS, VF}, representing "slow" and "fast", respectively; the division was determined according to the average human walking speed. The membership function is shown in Figure 6b. The behavior patterns Bj are divided into {BS, BE, BL, BF}, representing crossing, encountering, leading, and confronting behavior, respectively. The membership function is shown in Figure 6c. The robot linear velocity increment ΔVrj is divided into {VD, VZ, VI, VR, VT}, representing a stop in place and 25%, 50%, 75%, and 100% of the original inspection speed, respectively. The membership function is shown in Figure 6d.

(2) Fuzzy reasoning

A fuzzy rule library for the four behavior patterns was established in natural language based on expert experience and knowledge. Fuzzy reasoning takes the input language as the premise and searches the rule base for an optimal conclusion. Table 1 shows the formulated fuzzy control rules. The input variables Rr, Vj, and Bj are connected with the logical operator 'and'.

(3) Defuzzification of output variables

The result obtained from the fuzzy control rules is a fuzzy quantity; in actual fuzzy control, the fuzzy quantity cannot directly drive the actuator and must be converted into a precise quantity. The Mamdani reasoning method was used to obtain the weighted mean using Eq. (9):

$$\hat{v}(t) = \left(1 - \Delta V_{\text{ro}}\right)v(t) = \left(1 - \frac{\sum_{j=1}^{n}\mu(\Delta V_{\text{r}j})\,\Delta V_{\text{r}j}}{\sum_{j=1}^{n}\mu(\Delta V_{\text{r}j})}\right)v(t),$$
(9)

where ΔVro is the calculated robot linear velocity increment corresponding to the jth element, and \(\hat{v}(t)\) is the robot moving speed output by the fuzzy controller (a minimal code sketch of this speed adjustment follows Table 1).

Figure 6  Membership function graphs of input/output variables

Table 1 Fuzzy control rules
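A minimal sketch of this speed-adjustment controller is shown below: Gaussian memberships, 'min' for the 'and' connective, and the weighted mean of Eq. (9). The term centres/widths and the two example rules are illustrative stand-ins for Figure 6 and Table 1, and the behavior-pattern input Bj is omitted for brevity.

```python
import math

def gauss(x, c, sigma):
    """Gaussian membership value."""
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

# Distance terms (m) over the 1.5-3.0 m avoidance range: DS, DE, DN, DF.
DIST = {'DS': (1.5, 0.3), 'DE': (2.25, 0.3), 'DN': (2.7, 0.3), 'DF': (3.0, 0.3)}
# Interferer speed terms (m/s): VS (slow), VF (fast) -- illustrative centres.
SPEED = {'VS': (0.3, 0.2), 'VF': (1.0, 0.2)}
# Output terms as speed increments: VD = stop in place, ..., VT = full speed.
DV = {'VD': 1.0, 'VZ': 0.75, 'VI': 0.5, 'VR': 0.25, 'VT': 0.0}

# Two example rules: (distance term, speed term) -> output term.
RULES = [(('DS', 'VF'), 'VD'),   # close, fast interferer -> stop in place
         (('DF', 'VS'), 'VT')]   # far, slow interferer -> keep full speed

def adjusted_speed(v, rr, vj):
    """Eq. (9): v_hat = (1 - dV_ro) * v, with dV_ro a firing-strength-weighted mean."""
    num, den = 0.0, 0.0
    for (dterm, vterm), out in RULES:
        mu = min(gauss(rr, *DIST[dterm]), gauss(vj, *SPEED[vterm]))  # 'and' = min
        num += mu * DV[out]
        den += mu
    dv_ro = num / (den + 1e-9)
    return (1.0 - dv_ro) * v
```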

3.3 Combination Autonomous Behavior Strategy

From the analysis and calculation, a fuzzy-PID controller is designed to achieve autonomous robot movement along a fixed trajectory. We assume that the vector ξr = [xr, yr, θr]T is the target position vector of the robot and the vector ξe = [xe, ye, θe]T is the robot position error vector; [v, ω] denotes the robot velocity vector; [vr, ωr] and [vk, ωk] represent the target and auxiliary velocity vectors, respectively; [ve, ωe] is the robot velocity error. The robot coordinate error is represented as

$$\boldsymbol{\xi}_{\text{e}} = \begin{bmatrix} x_{\text{e}} \\ y_{\text{e}} \\ \theta_{\text{e}} \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x - x_{\text{r}} \\ y - y_{\text{r}} \\ \theta - \theta_{\text{r}} \end{bmatrix}.$$
(10)

The differential form of the position error is used to obtain the robot velocity error and angular velocity error.

$$\left\{ \begin{aligned} \dot{x}_{\text{e}} &= \omega y_{\text{e}} - v_{\text{r}}\cos\theta_{\text{e}} + v, \\ \dot{y}_{\text{e}} &= -\omega x_{\text{e}} + v_{\text{r}}\sin\theta_{\text{e}}, \\ \dot{\theta}_{\text{e}} &= \omega - \omega_{\text{r}}. \end{aligned} \right.$$
(11)

To reduce system interference, the auxiliary kinematics controller is designed as

$$\begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} v_{\text{k}}\cos\theta_{\text{e}} + a_{1}v_{\text{e}} \\ \omega_{\text{k}} + a_{2}v_{\text{r}}\sin\theta_{\text{e}} + a_{3}\omega_{\text{e}} \end{bmatrix},$$
(12)

where a1, a2, and a3 are auxiliary control parameters, set as a1 = −1, a2 = −1.2, and a3 = −0.5. From the robot linear and angular velocities, the angular speeds of the left and right driving wheels, ω1 and ω2, can be determined:

$$\left\{ \begin{aligned} \omega_{1} &= \left(v + \tfrac{1}{2}\omega L\right)/r, \\ \omega_{2} &= \left(v - \tfrac{1}{2}\omega L\right)/r. \end{aligned} \right.$$
(13)
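The tracking loop of Eqs. (10)–(13) can be sketched directly; the gains are those stated above, while the function boundaries are our own decomposition.

```python
import math

r, L = 0.200, 0.438              # wheel radius and track parameters (m)
a1, a2, a3 = -1.0, -1.2, -0.5    # auxiliary control parameters, Section 3.3

def pose_error(pose, ref):
    """Eq. (10): rotate the global pose error into the robot frame."""
    x, y, th = pose
    xr, yr, thr = ref
    c, s = math.cos(th), math.sin(th)
    xe = c * (x - xr) + s * (y - yr)
    ye = -s * (x - xr) + c * (y - yr)
    return xe, ye, th - thr

def auxiliary_control(theta_e, vk, wk, vr, ve, we):
    """Eq. (12): commanded body velocities (v, w)."""
    v = vk * math.cos(theta_e) + a1 * ve
    w = wk + a2 * vr * math.sin(theta_e) + a3 * we
    return v, w

def wheel_speeds(v, w):
    """Eq. (13): left/right driving-wheel angular speeds (rad/s)."""
    return (v + 0.5 * w * L) / r, (v - 0.5 * w * L) / r
```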

To minimize the error, the driving wheel angular speed error is taken as e(t) = ωe. Using a PID controller, the controller output u(t) can be calculated as [30]:

$$u(t) = k_{\text{p}}e(t) + k_{\text{i}}\int e(t)\,\mathrm{d}t + k_{\text{d}}\frac{\mathrm{d}e(t)}{\mathrm{d}t}.$$
(14)

The fuzzy-PID controller establishes the fuzzy relationship between the PID parameters kp, ki, and kd (the proportional, integral, and differential coefficients, respectively) and the deviation e and deviation change rate ec. The three parameters are adjusted according to the fuzzy control rules through continuous detection of e and ec, and are modified online so that the controlled object exhibits good dynamic and static performance:

$$\left\{ \begin{aligned} k_{\text{p}} &= k_{\text{p}}' + k_{\text{p}x}, \\ k_{\text{i}} &= k_{\text{i}}' + k_{\text{i}x}, \\ k_{\text{d}} &= k_{\text{d}}' + k_{\text{d}x}, \end{aligned} \right.$$
(15)

where kp', ki', and kd' are the initial values of the PID parameters; kpx, kix, and kdx are the fuzzy reasoning output values. The three PID control parameter values were automatically adjusted according to the movement of the robot.

MATLAB/Simulink software was used to design the controller; e, ec, and the outputs kpx, kix, and kdx were divided into {NB, NM, NS, ZO, PS, PM, PB}, representing negative large, negative medium, negative small, zero, positive small, positive medium, and positive large, respectively. The domain was [−3, 3]; the membership function of e and ec was 'gaussmf', and the membership function of kpx, kix, and kdx was 'trimf'. Forty-nine fuzzy control rules were formulated.

The 'and' method is 'min'; the 'or' method is 'max'; the implication method is 'min'; the aggregation method is 'max'; the defuzzification method is 'centroid'. The resulting output surfaces of kpx, kix, and kdx are shown in Figure 7.

Figure 7  Output surfaces of kpx, kix, and kdx

The robot is a nonlinear system. To overcome system uncertainty and improve the convergence response speed, the gains of e and ec were fixed as ke = 0.9 and kec = 0.1; the defuzzification factors were set as k1 = 3, k2 = 1, and k3 = 1. The initial values of kp, ki, and kd were kp' = 9, ki' = 2, and kd' = 3, as shown in Figure 8a. The fuzzy-PID controller is shown in Figure 8b; a minimal code sketch of this update law follows Figure 8.

Figure 8  Design of fuzzy-PID controller
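A compact sketch of the update law of Eqs. (14)–(15) is given below. The fuzzy inference mapping (e, ec) to the gain increments is abstracted behind `fuzzy_gains`, a placeholder for the 49-rule Mamdani system built in MATLAB/Simulink; with the placeholder returning zeros, the controller reduces to a plain PID with the stated initial gains.

```python
class FuzzyPID:
    """Fuzzy-PID controller of Eqs. (14)-(15) with the settings of Figure 8."""

    def __init__(self, kp0=9.0, ki0=2.0, kd0=3.0,
                 ke=0.9, kec=0.1, k1=3.0, k2=1.0, k3=1.0):
        self.kp0, self.ki0, self.kd0 = kp0, ki0, kd0   # initial PID values
        self.ke, self.kec = ke, kec                    # input scaling gains
        self.k1, self.k2, self.k3 = k1, k2, k3         # defuzzification factors
        self.integral, self.prev_e = 0.0, 0.0

    def fuzzy_gains(self, e, ec):
        # Placeholder: the real system evaluates 49 Mamdani rules over
        # {NB, ..., PB} on the domain [-3, 3] and defuzzifies by centroid.
        return 0.0, 0.0, 0.0

    def update(self, e, dt):
        ec = (e - self.prev_e) / dt                    # deviation change rate
        kpx, kix, kdx = self.fuzzy_gains(self.ke * e, self.kec * ec)
        kp = self.kp0 + self.k1 * kpx                  # Eq. (15)
        ki = self.ki0 + self.k2 * kix
        kd = self.kd0 + self.k3 * kdx
        self.integral += e * dt
        self.prev_e = e
        return kp * e + ki * self.integral + kd * ec   # Eq. (14)
```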

The robot combination autonomous behavior strategy controller is illustrated in Figure 9. The target position vector ξr is input to the fuzzy-PID controller, and the robot trajectory tracking speed [v, ω] is obtained through the auxiliary kinematics controller. Using the man–machine position data from the sensors, the social force model fuzzy controller adjusts the robot behavior strategy to control the robot moving speed, achieving dynamic obstacle avoidance.

Figure 9  Combination autonomous behavior strategy design
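Reading Figure 9 as a single control cycle, the pieces sketched above compose as follows. The composition, the argument plumbing, and `read_wheel_speed` (a stand-in for the hub encoder read) are our illustration, not code from the paper.

```python
def read_wheel_speed(side):
    """Stand-in for the hub-motor encoder read (hypothetical interface)."""
    return 0.0

def control_step(pose, ref, vk, wk, vr, ve, we, human, pid_l, pid_r, dt):
    """One control cycle of Figure 9, composed from the earlier sketches."""
    _, _, theta_e = pose_error(pose, ref)                  # Eq. (10)
    v, w = auxiliary_control(theta_e, vk, wk, vr, ve, we)  # Eq. (12)
    rr, vj = human                                         # man-machine distance, speed
    v = adjusted_speed(v, rr, vj)                          # social-force scaling, Eq. (9)
    w1_ref, w2_ref = wheel_speeds(v, w)                    # Eq. (13)
    # The fuzzy-PID closes the loop on each wheel's angular-speed error, Eq. (14).
    u1 = pid_l.update(w1_ref - read_wheel_speed('left'), dt)
    u2 = pid_r.update(w2_ref - read_wheel_speed('right'), dt)
    return u1, u2
```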

4 Robot Control System Structure

This section introduces the robot control system, building on the overall structure, kinematic model, and moving strategies described above.

The robot control system uses a hybrid architecture, with a hierarchical architecture as the basic framework [31] and subsystems with subsumption characteristics. This organic fusion improves system responsiveness, produces overall coordination and decision-making ability, and effectively overcomes the limitations of single-structure control.

As shown in Figure 10, the robot control system contains perception, decision-making, and executive layers.

Figure 10  Structure of robot control system

The Raspberry Pi and PC serve as the client and server, respectively, in the robot decision-making layer. The Raspberry Pi sends and receives control commands and displays them on the terminal interface through Wi-Fi established by a router. After the camera image data are collected, they are hardware-encoded and compressed into an H.264-format video stream; the output video frame rate should be greater than ten frames per second. Thus, the UDP protocol was used in the transport layer to improve video transmission efficiency and reduce video delay. After receiving the information, the decision-making layer processes it and sends it to the executive layer.
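As a rough illustration of the latency-over-reliability choice described above, already-encoded frame bytes can be chunked under a typical MTU and sent as numbered UDP datagrams; the address, port, and framing here are placeholders, not the system's actual protocol.

```python
import socket

MTU_PAYLOAD = 1400   # keep each datagram under a typical Ethernet MTU
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(frame_bytes, addr=('192.168.1.10', 5000)):
    """Send one H.264-encoded frame as sequence-numbered UDP chunks.

    Lost or reordered chunks are simply not retransmitted, trading
    reliability for the low latency the video link needs.
    """
    for seq, i in enumerate(range(0, len(frame_bytes), MTU_PAYLOAD)):
        header = seq.to_bytes(2, 'big')   # minimal chunk header
        sock.sendto(header + frame_bytes[i:i + MTU_PAYLOAD], addr)
```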

The STM32F103 microcontroller unit (MCU) runs the embedded system, sends motion information including rotation speed and displacement, and uploads the sensing-layer information to the upper computer through serial communication. After the corresponding instructions are received, the hub motor and PTZ are driven to complete the action. The MCU can obtain and analyze ultrasonic sensor distance information to prevent collision accidents using an internal obstacle avoidance strategy.

5 Simulations and Experiments

5.1 Dynamic Obstacle Avoidance Behavior Strategy

For robot dynamic obstacle avoidance and trajectory tracking, the robot speed, movement angle, and human–robot distance must be obtained by the robot sensors. The sensor accuracy experiment is shown in Figure 11.

Figure 11  Sensor accuracy experiment platform

The sampling time interval was t = 0.1 s; the speed of the driving wheel was 0.5 m/s; the robot angle was 90°, and the distance between the ultrasonic sensor and the wall was 1 m. The experimental results are shown in Table 2. The speed error of the hub motor is fed back through its current loop. The hub motor error, angle accuracy, and ultrasonic sensor error were ± 0.002 m/s, ± 0.002 rad, and ± 0.03 m, respectively.

Table 2 Sensor error measurement results (partial)

The simulation analysis considers the two main behavior patterns of human (interfering object) movement: the crossing pattern and the encountering pattern. The relative positions of the human and robot and the starting and ending movement points are set randomly according to the behavior pattern. The hollow-circle trajectory represents the human walking route; the solid-square trajectory represents the robot. The robot motion trajectory is preset as a straight line. Human path points were sampled at intervals of t = 1 s. The experimental platform included a PC and a robot; the robot provided real-time measurement data to the PC, as shown in Figure 12.

Figure 12  Experimental robot platform

5.1.1 Crossing Pattern Simulation and Experiment

Simulation experiment 1 considered typical crossing behavior at an intersection. With a sampling time interval of t = 0.1 s, the human moved along the set route at an initial speed of 0.5 m/s, and the robot performed a constant-speed inspection at an initial speed of 0.5 m/s. The initial distance between the robot and the human was approximately 5.5 m. At t = 4.6 s, the robot detected an interfering object entering the maximum avoidance distance area from the left side, as shown by the dotted line in Figure 13. At t = 8.6 s, the robot reached the sensitive human area and stopped, as indicated by the solid line.


Figure 13  Crossing behavior pattern simulation

In the crossing behavior pattern experiment, the initial distance between the experimenter and the robot was approximately 5.5 m. The experimenter crossed from the front left to the front right of the robot at an average speed of approximately 0.5 m/s, as shown in Figure 14. At approximately t = 5 s, the robot began to decelerate, and it then waited for the person to pass. After the person passed and departed, the ultrasonic sensors detected that the man–machine distance exceeded the sensitive human distance, and the robot accelerated. When the robot entered the interference area of the front wall at approximately t = 9 s, it decelerated until it stopped.

Figure 14  Crossing behavior pattern experiment: a approaching robot, b robot decelerates to avoid, c human departure

5.1.2 Encountering Pattern Simulation and Experiment

Simulation experiment 2 considered the typical encountering behavior pattern in an aisle. The sampling time interval was t = 0.1 s; the human movement speed and robot movement speed were both 0.5 m/s. At t = 1.6 s, the robot detected an obstacle entering the maximum avoidance area in its moving direction and decelerated, as shown in Figure 15. At t = 3.4 s, the robot linear velocity dropped to zero.

Figure 15  Encountering behavior pattern simulation

In the encountering behavior pattern experiment, the initial distance between the experimenter and robot was approximately 5.5 m. The experimenter approached the robot from the front at a speed of 0.5 m/s. Figure 16 shows the experimenter and robot approaching each other. At approximately 2 s, the robot began to decelerate; at approximately 4 s, the robot reached its lowest speed and waited for the person to pass. When the measured distance exceeded the sensitive human distance, the robot speed gradually increased to the initial speed. In both experiments, the robot achieved obstacle avoidance smoothly.

Figure 16  Encountering behavior pattern experiment: a approaching robot, b robot decelerates to avoid, c human departure

The crossing and encountering interference experiments verified that the proposed dynamic avoidance strategy with the fuzzy-PID controller allowed the robot to produce matching avoidance behaviors in both interference conditions, successfully avoiding collisions with humans.

The experimental results show that the robot obstacle avoidance strategy reacts in time, and the small time delay in the speed change does not affect obstacle avoidance. When the robot moves in a man–machine environment, it always maintains a safe distance from humans to reduce anxiety.

5.2 Combination Autonomous Behavior Strategy

To verify the effectiveness of the proposed strategy, we set up the robot in an indoor man–machine environment. Based on the experimental accuracy of the sensor, the sampling time interval in the trajectory tracking experiment was set as t = 0.01 s, and the accuracy of the position error and angle error reached three decimal places. A circular trajectory with a radius of approximately 3.5 m was used as the target trajectory to monitor robot movement. The starting point of the circle was set as (0, 0, 0); the robot was placed at this position. The initial position error and angle error were zero, as shown in Figure 17. Real-time robot position and the marking trajectory of robot movement were measured and recorded using a grayscale sensor. The actual movement trajectory of the robot was obtained after analysis and processing on the PC.

Figure 17  Target trajectory of robot

The target trajectory and actual robot trajectory in the tracking experiment are shown in Figure 18. The robot can move flexibly. The relative positions of the corresponding points in the target trajectory were calculated from the eight light states of the grayscale sensor at each point. The actual trajectory of the robot was drawn; the coordinate position of the robot and the position and angle errors of the critical point were recorded.

Figure 18  Target and actual robot trajectories

As the robot moved along the target trajectory, it detected that a pedestrian had entered the avoidance distance, assessed as the encountering pattern, and decelerated on the predetermined trajectory, as shown in Figure 19. As the pedestrian gradually moved away from the robot to the avoidance distance, the robot gradually returned to its initial speed and continued to follow the target trajectory.

Figure 19  Robot trajectory tracking experiment

Table 3 shows the robot motion data and position errors.

Table 3 Target and actual positions of critical points

Using the fuzzy-PID controller, the maximum position error was less than 0.098 m, and the position error converged at t = 18.25 s, as shown in Figure 20a; with the PID controller, the maximum position error was 0.1 m, and the position error converged at t = 25.76 s. The maximum angle error was less than 0.088 rad with the fuzzy-PID controller, and the angle error converged at t = 20.05 s; with the PID controller, the maximum angle error was 0.175 rad, and the angle error converged at t = 20.52 s, as shown in Figure 20b.

Figure 20  Robot trajectory tracking convergence errors

The experimental results show that the combination behavior strategy proposed in this study can protect humans and robots and ensure that human space is not invaded. The robot runs smoothly and can complete indoor inspection with target tracking, with a position error within 0.098 m and an angle error within 0.088 rad. Compared with the PID controller, the proposed fuzzy-PID controller is faster, more accurate, and more stable.

6 Conclusions

(1) A mobile robot that can perform inspection tasks in a narrow man–machine environment was designed. A robot kinematic model was built, and a hybrid control system architecture was established to produce smooth motion in a narrow space with overall coordination and decision-making ability.

(2) A robot dynamic obstacle avoidance behavior strategy was proposed based on fuzzy logic. For the narrow man–machine dynamic environment, human and robot behavior patterns were analyzed, a social force model for humans and robots was established, and four typical man–machine behavior patterns were introduced to build the fuzzy control rules. The crossing and encountering behavior pattern simulation and experimental results were similar, indicating that the proposed robot behavior strategy maintains a proper distance from humans during dynamic obstacle avoidance and verifying its effectiveness.

(3) Combining the social force model and man–machine behavior patterns, the fuzzy-PID trajectory tracking control method and combination autonomous behavior strategy were proposed for a mobile robot inspecting hazardous gases. The simulation analysis shows that the proposed behavior strategy has a position error of less than 0.098 m and an angle error of less than 0.088 rad, with faster and steadier convergence than the PID controller. The trajectory tracking simulation and experimental results were similar, indicating that the proposed behavior strategy helps robots and humans maintain a safe distance from each other to reduce anxiety.

References

  1. F G Pratticò, F Lamberti. Mixed-reality robotic games: design guidelines for effective entertainment with consumer robots. IEEE Consumer Electronics Magazine, 2021, 10(1): 6–15.

  2. D Y Huang, C G Yang, Y P Pan, et al. Composite learning enhanced neural control for robot manipulator with output error constraints. IEEE Transactions on Industrial Informatics, 2021, 17(1): 209–218.

  3. T Yang, X S Gao, F Q Dai. New hybrid AD methodology for minimizing the total amount of information content: a case study of rehabilitation robot design. Chinese Journal of Mechanical Engineering, 2020, 33(1): 1–10.

  4. W Zhang, S L Wei, Y B Teng, et al. Dynamic obstacle avoidance for unmanned underwater vehicles based on an improved velocity obstacle method. Sensors, 2017, 17(12): 2742.

  5. J López, P Sanchez-Vilariño, M D Cacho, et al. Obstacle avoidance in dynamic environments based on velocity space optimization. Robotics and Autonomous Systems, 2020, 131: 103569.

  6. Y Chen, J D Han, H Y Wu. Quadratic programming-based approach for autonomous vehicle path planning in space. Chinese Journal of Mechanical Engineering, 2012, 25(4): 665–673.

  7. T Lv, M Feng. A smooth local path planning algorithm based on modified visibility graph. Modern Physics Letters B, 2017, 31(19–21): 1740091.

  8. E Ferrera, J Capitan, A R Castano, et al. Decentralized safe conflict resolution for multiple robots in dense scenarios. Robotics and Autonomous Systems, 2017, 91: 179–193.

  9. M P Polverini, A M Zanchettin, P Rocco. A computationally efficient safety assessment for collaborative robotics applications. Robotics and Computer-Integrated Manufacturing, 2017, 46: 25–37.

  10. R Singh, T K Bera. Walking model of jansen mechanism-based quadruped robot and application to obstacle avoidance. Arabian Journal for Science and Engineering, 2020, 45(2): 653–664.

  11. N Takahashi, N Shibata, K Nonaka. Optimal configuration control of planar leg/wheel mobile robots for flexible obstacle avoidance. Control Engineering Practice, 2020, 101: 104503.

  12. O Khatib. Real-time obstacle avoidance for manipulators and mobile robots. Proceedings of the Autonomous Robot Vehicles. New York, USA, Springer, 1986: 396–404.

  13. C T Diao, S M Jia, G L Zhang, et al. Design and realization of a novel obstacle avoidance algorithm for intelligent wheelchair bed using ultrasonic sensors. Proceedings of the 2017 Chinese Automation Congress (CAC), IEEE, Jinan, China, Oct., 2017: 4153–4158.

  14. J A Janet, R C Luo, M G Kay. The Essential Visibility Graph: an approach to global motion planning for autonomous mobile robots. Proceedings of the 1995 IEEE International Conference on Robotics and Automation, IEEE, Nagoya, Japan, May, 1995, 2: 1958–1963.

  15. L Blasi, E D'Amato, M Mattei, et al. Path planning and real-time collision avoidance based on the essential visibility graph. Applied Sciences, 2020, 10(16): 5613.

  16. H Shin, J Chae. A performance review of collision-free path planning algorithms. Electronics, 2020, 9(2): 316.

  17. I Ulrich, J Borenstein. VFH*: local obstacle avoidance with look-ahead verification. Proceedings of the 2000 ICRA. Millennium Conference, IEEE, San Francisco, CA, USA, Apr., 2000, 3: 24–28.

  18. X Y Li, F Liu, J Liu, et al. Obstacle avoidance for mobile robot based on improved dynamic window approach. Turkish Journal of Electrical Engineering & Computer Sciences, 2017, 25(2): 666–676.

  19. P K Mohanty. An intelligent navigational strategy for mobile robots in uncertain environments using smart cuckoo search algorithm. Journal of Ambient Intelligence and Humanized Computing, 2020, 11(12): 6387–6402.

  20. N T Thinh, N T Tuan, L P Huang. Predictive controller for mobile robot based on fuzzy logic. Proceedings of the 2016 3rd International Conference on Green Technology and Sustainable Development (GTSD), IEEE, Kaohsiung, Taiwan, China, Nov., 2016: 141–144.

  21. Y L Fu, S G Wang, Z C Cao. Behavior-based robot fuzzy motion planning approach in unknown environments. Chinese Journal of Mechanical Engineering, 2006, 42(5): 120–125. (in Chinese)

  22. G N Saridis, H E Stephanou. A hierarchical approach to the control of a prosthetic arm. IEEE Transactions on Systems, Man, and Cybernetics, 1977, 7(6): 407–420.

  23. C M Ye, J Li, H Jiang, et al. Semi-automated generation of road transition lines using mobile laser scanning data. IEEE Transactions on Intelligent Transportation Systems, 2019, 21(5): 1877–1890.

  24. K Ren, Q Wang, C Wang, et al. The security of autonomous driving: threats, defenses, and future directions. Proceedings of the IEEE, IEEE, Feb., 2020, 108(2): 357–372.

  25. X Y Li, N Xu, Q Li, et al. A fusion methodology for sideslip angle estimation on the basis of kinematics-based and model-based approaches. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 2020, 234(7): 1930–1943.

  26. V Dolk, J den Ouden, S Steeghs, et al. Cooperative automated driving for various traffic scenarios: experimental validation in the GCDC 2016. IEEE Transactions on Intelligent Transportation Systems, 2017, 19(4): 1308–1321.

  27. A Hacinecipoglu, E I Konukseven, A B Koku. Multiple human trajectory prediction and cooperative navigation modeling in crowded scenes. Intelligent Service Robotics, 2020, 13(4): 479–493.

  28. E Hall. The silent language. New York: Anchor Books, 1973.

  29. S D Lynch, R Kulpa, L A Meerhoff, et al. Influence of path curvature on collision avoidance behavior between two walkers. Experimental Brain Research, 2021, 239(1): 329–340.

  30. J D Han, Z Q Zhu, Z Y Jiang, et al. Simple PID parameter tuning method based on outputs of the closed loop system. Chinese Journal of Mechanical Engineering, 2016, 29(3): 465–474.

  31. K P Valavanis. The entropy based approach to modeling and evaluating autonomy and intelligence of robotic systems. Journal of Intelligent & Robotic Systems, 2018, 91(1): 7–22.


Acknowledgements

Not applicable.

Funding

Supported by Research and Development Program of Xi'an Modern Chemistry Research Institute of China (Grant No. 204J201916234/6), and Key Project of Liuzhou Science and Technology Bureau of China (Grant No. 2020PAAA0601).

Author information

Contributions

XG and QZ were in charge of the entire trial; QZ and ML wrote the manuscript; BL, XF, and JL conducted the experimental analysis and numerical simulations; XG and QZ guided the experiments. All authors read and approved the final manuscript.

Authors’ information

Xueshan Gao, born in 1966, is currently a professor at School of Mechatronical Engineering, Beijing Institute of Technology, China. His research interests include special mobile and medical rehabilitation robots. E-mail: xueshan.gao@bit.edu.cn

Qingfang Zhang, born in 1994, is currently a Ph.D. candidate at School of Mechatronical Engineering, Beijing Institute of Technology, China. She received her Master’s degree from North China University of Technology, China, in 2016. Her research interests include intelligent control technology and special mobile robots. E-mail: 3120195161@bit.edu.cn.

Mingkang Li, born in 1998, is currently pursuing his Master's degree at School of Mechatronical Engineering, Beijing Institute of Technology, China. He is currently working on the autonomous control of special mobile robots.

Bingqing Lan was born in 1995. He received his Master’s degree from School of Mechatronical Engineering, Beijing Institute of Technology, China, in 2021. His research interests include mechanical design and vehicular dynamics.

Xiaolong Fu, born in 1982, is currently a researcher at Xi'an Modern Chemistry Research Institute, China. His research interests include unmanned monitoring technology and material science.

Jingye Li was born in 1995. He received his Master's degree from School of Mechatronical Engineering, Beijing Institute of Technology, China, in 2020. His research interests include robotic dynamics.

Corresponding authors

Correspondence to Xueshan Gao or Qingfang Zhang.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Gao, X., Zhang, Q., Li, M. et al. Mobile Robot Combination Autonomous Behavior Strategy to Inspect Hazardous Gases in Relatively Narrow Man–Machine Environment. Chin. J. Mech. Eng. 35, 135 (2022). https://doi.org/10.1186/s10033-022-00798-x
