
  • Original Article
  • Open Access

Novel Door-opening Method for Six-legged Robots Based on Only Force Sensing

Chinese Journal of Mechanical Engineering (2017) 30:172

https://doi.org/10.1007/s10033-017-0172-7

  • Received: 25 June 2016
  • Accepted: 20 July 2017
  • Published:

Abstract

Current door-opening methods are mainly developed for tracked, wheeled and biped robots, which apply multi-DOF manipulators and vision systems. However, door-opening methods for six-legged robots are seldom studied, especially methods that operate with a 0-DOF tool and detect with only force sensing. A novel door-opening method for six-legged robots is developed and implemented on a six-parallel-legged robot. The kinematic model of the six-parallel-legged robot is established, and a model for measuring the positional relationship between the robot and the door is proposed. The measurement model is based entirely on force sensing. A real-time trajectory planning method and a control strategy are designed. The trajectory planning method allows a maximum angle of 45° between the sagittal axis of the robot body and the normal of the door plane. A 0-DOF tool mounted on the robot body performs the operation. By integrating with the body, the tool gains 6 DOFs and sufficient workspace to operate. The loose grasp achieved by the tool helps release the inner force in the tool. Experiments are carried out to validate the method. The results show that the method is effective and robust in opening doors wider than 1 m. This paper thus proposes a novel door-opening method for six-legged robots, which notably uses a 0-DOF tool and only force sensing to detect and open the door.

Keywords

  • Door-opening
  • Six-legged robots
  • Force sensing
  • 0-DOF tool

1 Introduction

Legged robots are believed to have better mobility in rough terrain than tracked and wheeled robots, because they can use isolated footholds to optimize support and traction [1]. So in disasters such as earthquakes and nuclear or toxic explosions, which are too dangerous for humans, legged robots are expected to take the place of humans in performing rescue tasks. In indoor rescue, door-opening is a fundamental and essential task, which has been studied for more than two decades [2]. However, current research on door-opening mostly focuses on tracked [3, 4] and wheeled [5–7] robots. In the field of legged robots, only a few examples of biped and quadruped robots can be found. An early example is the HRP2 [8] in 2009, which was allowed to hit a door open with its whole body. In recent years, under the influence of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge, more examples of biped robots opening doors have appeared, such as the HUBO [9], the ATLAS [10] and the COMAN [11]. In 2015, González-Fierro, et al. [12] proposed a method for humanoid robots to learn from demonstrations of humans opening doors, and defined a multi-objective reward function as a measurement of goal optimality. Boston Dynamics' MINISPOT [13] and Ghost Robotics' MINITAUR [14] can open doors, but no paper details how, and no related research on six-legged robots can be found. On the other hand, six-legged robots also adapt well to complicated scenarios and are more stable when walking and operating. Therefore, it is essential and helpful to develop a new method for six-legged robots to realize the function of opening doors.

When opening doors, robots mainly encounter two issues. The first is how to recognize and locate the door and the handle precisely in real time in unknown environments. In order to recognize and locate the handle, vision systems such as laser scanners, cameras and infrared sensors are commonly used. A few related works realize the recognition of door handles of unknown geometries. Moreno, et al. [15] investigated different handle types and applied a morphological filter adapted to the characteristic shape of different handles to realize the recognition. Klingbeil, et al. [16] used computer vision and supervised learning to identify 3D key locations on any handle, thus choosing a manipulation strategy. Ignakov, et al. [17] extracted the 3D point cloud of an unknown handle by using the optical flow calculated from images taken with a single CCD camera. Most other methods assume the geometry of the handle is already known, and the vision systems are only used for localization. Adiwahono, et al. [18] used a Microsoft Kinect sensor and a 2D laser scanner to estimate the handle position, thus planning the trajectory to open the door. Petrovskaya, et al. [19] presented a unified, real-time algorithm that simultaneously modeled the position of the robot within the environment, as well as the door and the handle. Kobayashi, et al. [20] applied an IP camera and IR distance sensors to calculate the position of the handle, which could be cylindrical with a diameter of 48 mm to 56 mm, or lever type. However, vision systems are frequently subject to calibration errors, occlusions and limited sight ranges, so scholars inevitably apply force sensing to additionally confirm the contact position with the handle [21–23]. In fact, robots are fully capable of using only force sensing to detect the positional relationship with the door and the handle by touching at different positions and in different directions, just as humans act in darkness.
To simplify the system and supplement the relevant studies, it is essential to develop a new door-opening method based on only force sensing. If the robot is far away from the door in an unexplored room, vision systems [24], human-computer interaction or other methods may be applied to help the robot distinguish the door from the wall and navigate to the door, but these are not involved in measuring the positional relationship.

The second issue is how to release the inner force in the manipulator that arises while turning the handle and pushing the door, caused by positional error and imprecise modeling of the environment. The inner force occurs because, due to the positional error, the motion of the manipulator cannot follow the position of the handle exactly. In order to meet the positional accuracy requirements, the manipulator must theoretically have at least three DOFs, and specific mechanisms or control strategies need to be applied. Farelo, et al. [25] designed a 9-DOF wheelchair-mounted robotic arm system to open doors by keeping the end-effector stationary while moving the base through the door. Ahmad, et al. and Zhang, et al. developed a compact wrist which could switch between active mode and passive mode as task requirements differed [26], and applied the wrist to a modular re-configurable robot mounted on both a tracked mobile platform [27] and a wheeled one [28] to open doors. Winiarski, et al. [29] applied a direct impedance controller and a local stiffness controller to a 7-DOF manipulator to robustly open doors. Karayiannidis, et al. [30] proposed a dynamic force/velocity controller which adaptively estimated the door hinge's position in real time, thus properly regulating the forces and velocities in the radial and tangential directions while opening doors. Guo, et al. [31] simulated a hybrid position/force controller for a manipulator mounted on a wheeled platform to open doors. The PR2 [32, 33] could push and pull both room doors and cabinets open by applying vision systems, tactile sensors and an impedance controller. However, the positional error cannot be eliminated completely, and the inner force always occurs as long as the manipulator is compelled to follow the handle exactly by a firm grasp.
Considering that a firm grasp is not essential in all cases, this paper applies a 0-DOF tool which effectively releases the inner force by providing a loose grasp and allowing relative movement between the handle and the tool. By integrating with the 6-DOF body of the robot, the 0-DOF tool mounted on the body has enough DOFs and workspace to operate.

In this paper, a novel method for six-legged robots to open doors autonomously is proposed and implemented on the six-parallel-legged robot [34, 35]. The method makes the following contributions:
  1. It is a novel method developed for six-legged robots to open doors.
  2. The robot autonomously identifies its positional relationship with the door and the handle in real time based on only force sensing.
  3. The robot uses a 0-DOF tool to operate, making good use of the robot's DOFs and workspace. The loose grasp of the tool effectively releases the inner force.
  4. Experiments are carried out to validate the accuracy and robustness of the method in unknown environments.

The rest of this paper is organized as follows: Section 2 introduces the system of the six-parallel-legged robot; Section 3 defines the coordinate systems and builds the kinematic model of the robot; Section 4 presents the door-opening approach and introduces the subtasks in detail; Section 5 provides the experimental results and discusses them; Section 6 concludes this paper.

2 System Overview

Parallel mechanisms have been researched intensively and applied widely [36–38], but few robots with parallel legs can be found [39, 40]. The platform we study is a six-parallel-legged robot, as shown in Figure 1. The robot is a 6-DOF mobile platform with six legs arranged symmetrically along the sagittal plane of the body. Each leg of the robot is a 3-DOF parallel mechanism with three chains: one universal joint - prismatic joint (UP) chain, and two universal joint - prismatic joint - spherical joint (UPS) chains. The prismatic joint of each chain is the active input joint driven by a servo motor. A resolver is mounted on each motor to feed back the real position of the motor. At the head of the robot body, a 0-DOF tool with a 6D force sensor is mounted. The tool is composed of a horizontal rod and a vertical rod, which are parallel to the sagittal and vertical axes of the robot body respectively. The 6D force sensor is the ATI Mini58 IP68 F/T Sensor. On top of the body, a cabinet contains the components of the onboard control system, including the battery, the onboard computer and the drivers.
Figure 1

Model of the six-parallel-legged robot

Users control the robot by sending commands via a remote terminal unit, which communicates with the onboard computer via Wi-Fi. The resolvers provide the real positions of all motors, and the 6D force sensor feeds back the contact forces with the environment. The onboard computer analyzes the position and force data, and accordingly plans the trajectories of the body and the feet. According to the planned trajectories, the computer calculates the parameters of all motors every millisecond by running a real-time Linux OS. After the calculation, the onboard computer sends the parameters to the drivers via EtherCAT. Finally, each driver generates a current proportional to the received parameter and provides the current to drive the relevant motor.
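As a rough sketch of this sense-plan-actuate loop (all interface names here are hypothetical placeholders, not from the paper), one 1 ms cycle could look like:

```python
import time

def control_cycle(read_sensors, plan_body_feet, inverse_kinematics,
                  send_drivers, period_s=0.001):
    """One 1 ms cycle of the onboard computer (hypothetical interfaces).

    read_sensors()       -> (motor_positions, wrench): resolver and F/T data.
    plan_body_feet(...)  -> (body_pose, feet): planned body and feet targets.
    inverse_kinematics() -> the 18 prismatic-joint targets for six legs.
    send_drivers(...)    : pushes the targets to the drivers (EtherCAT in the paper).
    """
    t0 = time.perf_counter()
    positions, wrench = read_sensors()
    body_pose, feet = plan_body_feet(positions, wrench)
    targets = inverse_kinematics(body_pose, feet)
    send_drivers(targets)
    # Wait out the rest of the period; a real-time OS schedules this
    # deterministically, a plain sleep() is only illustrative.
    remaining = period_s - (time.perf_counter() - t0)
    if remaining > 0:
        time.sleep(remaining)
    return targets
```

On the real robot the planner and inverse kinematics would be the models of Sections 3 and 4; here they are injected as callables so the cycle structure stands on its own.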

3 Coordinate Systems and Kinematic Model

3.1 Coordinate Systems Definition

In order to express the positional relationships among the door, the robot and the ground, it is essential to establish five coordinate systems (Figures 2 and 3). The first is the Robot Coordinate System (RCS), which is located at the center of the body and fixed to the body. \(Y_{\text{R}}\) and \(Z_{\text{R}}\) are parallel to the vertical and sagittal axes of the body respectively. The second is the Ground Coordinate System (GCS), which superposes the RCS wherever the user sets it and is fixed to the ground. So the RCS moves together with the body, while the GCS remains still relative to the ground. Here the GCS is set to superpose the RCS as the door-opening task starts. The third is the Door Coordinate System (DCS), which is located at the intersection of the handle axis and the door plane. The DCS is fixed to the door, with \(Z_{\text{D}}\) normal to the door plane and \(Y_{\text{D}}\) parallel to the door hinge. The fourth is the Leg Coordinate System (LCS), which is located at the \(U_{i1}\) joint of the UP chain and fixed to the body. When \(U_{i1}\) is at its initial position, where every prismatic joint of leg i shrinks to its shortest, \(X_{i\text{L}}\) is along the prismatic joint, and \(Y_{i\text{L}}\) and \(Z_{i\text{L}}\) are along the first and second axes of \(U_{i1}\) respectively. The LCS has a fixed relationship with the RCS defined by the geometry of the robot, which can be denoted by \({}_{i\text{L}}^{\text{R}} {\mathbf{T}}\;(i = 1,2, \ldots ,6)\). The fifth is the Ankle Coordinate System (ACS), which is located at each ankle with the same orientation as the LCS when \(U_{i1}\) is at its initial position. The ACS is fixed to the foot and moves as the leg moves.
Figure 2

Definition of the RCS, GCS and DCS

Figure 3

Definition of the LCS and ACS

3.2 Kinematic Model

Based on these coordinate systems, the kinematic model of the robot can be built in the GCS, which is indispensable for controlling the robot. The inverse kinematic model is essential for assigning the position of each actuator in real time to generate the planned trajectories of the body and the feet, and the forward kinematic model is essential for calculating the real-time position of the robot.

As shown in Figure 3, let \(s_i\) denote \(O_{i\text{A}}S_{i1}\), \(\theta_{i1}\) and \(\theta_{i2}\) denote the first and second angles of \(U_{i1}\), \(2d_{\text{U}i}\) and \(2d_{\text{S}i}\) denote the lengths of \(U_{i2}U_{i3}\) and \(S_{i2}S_{i3}\), and \(h_{\text{U}i}\) and \(h_{\text{S}i}\) denote the distances from \(O_{i\text{L}}\) to \(U_{i2}U_{i3}\) and from \(O_{i\text{A}}\) to \(S_{i2}S_{i3}\). Let \(\mathbf{L}_i(l_{i1}, l_{i2}, l_{i3})^{\text{T}}\) denote the lengths of \(U_{i1}S_{i1}\), \(U_{i2}S_{i2}\) and \(U_{i3}S_{i3}\), which are the inputs of leg i. Let \(\mathbf{S}_{i1}(x_i, y_i, z_i)^{\text{T}}\) denote the coordinates of foot i, which is the output of leg i. The output of the body can be denoted by the pose matrix of the RCS in the GCS:
$${}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}} & {{}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ {{\mathbf{O}}_{{1 \times 3}} } & {{\mathbf{I}}_{{1 \times 1}} } \\ \end{array} } \right),$$
(1)
where \({}_{\text{R}}^{\text{G}} {\mathbf{T}}\)—pose matrix of the RCS in the GCS, \({}_{\text{R}}^{\text{G}} {\mathbf{R}}\)—orientation matrix of the RCS in the GCS, \({}^{\text{G}}{\mathbf{O}}_{\text{R}}\)—origin of the RCS expressed in the GCS.

3.2.1 Inverse Kinematic Model

When given the output of the coordinates of all six feet and the pose matrix of the RCS, the input of all prismatic joints can be calculated in real time by
$$l_{{ij}} = \left\| {{}_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}}{}^{{i{\text{A}}}}{\mathbf{S}}_{{ij}} + {}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} - {}^{{i{\text{L}}}}{\mathbf{U}}_{{ij}} } \right\|_{2} ,$$
(2)
where i—Leg number, \(i = 1,2, \ldots ,6,\) j—Chain number of leg i, j = 1, 2, 3,
$$_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}} = \left( {\begin{array}{*{20}c} {\cos \theta_{i 1} \cos \theta_{i 2} } & { - \cos \theta_{i 1} \sin \theta_{i 2} } & {\sin \theta_{i 1} } \\ {\sin \theta_{i 2} } & {\cos \theta_{i 2} } & 0 \\ { - \sin \theta_{i 1} \cos \theta_{i 2} } & {\sin \theta_{i 1} \sin \theta_{i 2} } & {\cos \theta_{i 1} } \\ \end{array} } \right),$$
$$\left( {\begin{array}{*{20}c} {{}^{{i{\text{A}}}}{\mathbf{S}}_{i 1} } & {{}^{{i{\text{A}}}}{\mathbf{S}}_{i 2} } & {{}^{{i{\text{A}}}}{\mathbf{S}}_{i 3} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {s_{i} } & 0 & 0 \\ 0 & {h_{{{\text{S}}i}} } & {h_{{{\text{S}}i}} } \\ 0 & {d_{{{\text{S}}i}} } & { - d_{{{\text{S}}i}} } \\ \end{array} } \right),$$
$${}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} = \left( {\sqrt {{}^{{i{\text{L}}}}x_{i}^{2} + {}^{{i{\text{L}}}}y_{i}^{2} + {}^{{i{\text{L}}}}z_{i}^{2} } - s_{i} } \right)\left( {\begin{array}{*{20}c} {\cos \theta_{i1} \cos \theta_{i2} } \\ {\sin \theta_{i2} } \\ { - \sin \theta_{i1} \cos \theta_{i2} } \\ \end{array} } \right),$$
$$\left( {\begin{array}{*{20}c} {{}^{{i{\text{L}}}}{\mathbf{U}}_{{i1}} } & {{}^{{i{\text{L}}}}{\mathbf{U}}_{{i2}} } & {{}^{{i{\text{L}}}}{\mathbf{U}}_{{i3}} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ 0 & {h_{{{\text{U}}i}} } & {h_{{{\text{U}}i}} } \\ 0 & {d_{{{\text{U}}i}} } & { - d_{{{\text{U}}i}} } \\ \end{array} } \right).$$
The \(\theta_{i1}\) and \(\theta_{i2}\) here can be calculated from the outputs \({}_{\text{R}}^{\text{G}} {\mathbf{T}}\) and \({}^{\text{G}}{\mathbf{S}}_{i1}({}^{\text{G}}x_i, {}^{\text{G}}y_i, {}^{\text{G}}z_i)^{\text{T}}\) by
$$\left\{ {\begin{array}{*{20}l} {\theta _{{i{\text{1}}}} = \arctan \left( {\frac{{ - {}^{{i{\text{L}}}}z_{i} }}{{{}^{{i{\text{L}}}}x_{i} }}} \right),} \hfill \\ {\theta _{{i{\text{2}}}} = \arcsin (\frac{{{}^{{i{\text{L}}}}y_{i} }}{{\sqrt {{}^{{i{\text{L}}}}x_{i}^{2} + {}^{{i{\text{L}}}}y_{i}^{2} + {}^{{i{\text{L}}}}z_{i}^{2} } }}),} \hfill \\ {\left( {\begin{array}{*{20}c} {{}^{{i{\text{L}}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right) = {}_{{i{\text{L}}}}^{{\text{R}}} {\mathbf{T}}^{{ - 1}} {}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}}^{{ - 1}} \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right).} \hfill \\ \end{array} } \right.$$
(3)
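Eqs. (2) and (3) for a single leg can be sketched as a minimal NumPy implementation (the geometric parameters passed in below are illustrative, not the robot's real dimensions):

```python
import numpy as np

def leg_inverse_kinematics(foot_L, s, hU, dU, hS, dS):
    """Chain lengths (l1, l2, l3) of one 3-DOF (UP + 2 UPS) leg, Eqs. (2), (3).

    foot_L : foot position S_i1 expressed in the leg's LCS.
    s, hU, dU, hS, dS : geometric parameters of the leg (see Figure 3).
    """
    x, y, z = foot_L
    r = np.sqrt(x * x + y * y + z * z)
    theta1 = np.arctan2(-z, x)          # first U-joint angle, Eq. (3)
    theta2 = np.arcsin(y / r)           # second U-joint angle, Eq. (3)

    c1, s1 = np.cos(theta1), np.sin(theta1)
    c2, s2 = np.cos(theta2), np.sin(theta2)
    R = np.array([[c1 * c2, -c1 * s2, s1],       # {}^{iL}_{iA}R
                  [s2,       c2,      0.0],
                  [-s1 * c2, s1 * s2, c1]])

    O_A = (r - s) * np.array([c1 * c2, s2, -s1 * c2])  # {}^{iL}O_{iA}

    S_A = np.array([[s, 0.0, 0.0],               # {}^{iA}S_ij as columns
                    [0.0, hS, hS],
                    [0.0, dS, -dS]])
    U_L = np.array([[0.0, 0.0, 0.0],             # {}^{iL}U_ij as columns
                    [0.0, hU, hU],
                    [0.0, dU, -dU]])

    # Eq. (2): l_ij = || R S_ij + O_iA - U_ij ||_2 for the three chains
    return np.linalg.norm(R @ S_A + O_A[:, None] - U_L, axis=0)
```

A quick sanity check of the model: since \(U_{i1}\) sits at the LCS origin, the UP chain length \(l_{i1}\) must equal the distance from the LCS origin to the foot, which the returned first component indeed does.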

3.2.2 Forward Kinematic Model

When given the input of all prismatic joints, either the output coordinates of all six feet or the pose matrix of the RCS must be known so that the other one can be derived. If the pose matrix of the RCS is known, the output coordinates of all six feet can be expressed by
$$\left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right) = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}}{}_{{i{\text{L}}}}^{{\text{R}}} {\mathbf{T}}\left( {\begin{array}{*{20}c} {{}^{{i{\text{L}}}}{\mathbf{S}}_{{i{\text{1}}}} } \\ 1 \\ \end{array} } \right),} \hfill \\ {{}^{{i{\text{L}}}}{\mathbf{S}}_{{i{\text{1}}}} = {}_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}}{}^{{i{\text{A}}}}{\mathbf{S}}_{{i{\text{1}}}} + {}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} ,} \hfill \\ \end{array} } \right.$$
(4)
where i—Leg number, \(i = 1,2, \ldots ,6.\)
If the output coordinates of all six feet are known, the pose matrix of the RCS can be calculated by the coordinates of three stance-feet in 3-3 gait. Here we derive the equation using legs 1, 3, 5, and it is similar for legs 2, 4, 6:
$$\left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {{}^{{\text{R}}}{\mathbf{S}}_{{i{\text{1}}}} = {}_{{i{\text{L}}}}^{{\text{R}}} {\mathbf{T}}\left( {{}_{{i{\text{A}}}}^{{i{\text{L}}}} {\mathbf{R}}{}^{{i{\text{A}}}}{\mathbf{S}}_{{i{\text{1}}}} + {}^{{i{\text{L}}}}{\mathbf{O}}_{{i{\text{A}}}} } \right),} \hfill \\ {\left\| {{}^{{\text{G}}}{\mathbf{S}}_{{i{\text{1}}}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \right\|_{2} = \left\| {{}^{{\text{R}}}{\mathbf{S}}_{{i{\text{1}}}} } \right\|_{2} ,} \hfill \\ \end{array} } \hfill \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}} = \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{S}}_{{{\text{11}}}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ {{}^{{\text{G}}}{\mathbf{S}}_{{31}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ {{}^{{\text{G}}}{\mathbf{S}}_{{51}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \\ \end{array} } \right)^{{\text{T}}} \left( {\begin{array}{*{20}c} {{}^{{\text{R}}}{\mathbf{S}}_{{11}} } \\ {{}^{{\text{R}}}{\mathbf{S}}_{{31}} } \\ {{}^{{\text{R}}}{\mathbf{S}}_{{51}} } \\ \end{array} } \right)^{{ - {\text{T}}}} ,} \hfill \\ \end{array} } \right.$$
(5)
where i—Leg number, i = 1, 3, 5.
Here in Eqs. (4) and (5), \({}_{i\text{A}}^{i\text{L}} {\mathbf{R}}\), \({}^{i\text{L}}{\mathbf{S}}_{i1}\) and \({}^{i\text{L}}{\mathbf{O}}_{i\text{A}}\) are defined the same as in Eq. (2), but the \(\theta_{i1}\) and \(\theta_{i2}\) here are calculated from the input \(\mathbf{L}_i(l_{i1}, l_{i2}, l_{i3})^{\text{T}}\) by
$$\left\{ {\begin{array}{*{20}l} {\theta _{{i{\text{2}}}} = \arcsin \left( {\omega _{i} } \right) - \arctan \left( {\frac{{h_{{{\text{S}}i}} }}{{l_{{i{\text{1}}}} - s_{i} }}} \right),} \hfill \\ {\theta _{{i{\text{1}}}} = \arcsin \left( {\frac{{\phi _{i} }}{{d_{{{\text{U}}i}} \left( {\left( {l_{{i{\text{1}}}} - s_{i} } \right)\cos \theta _{{i{\text{2}}}} - h_{{{\text{S}}i}} \sin \theta _{{i{\text{2}}}} } \right)}}} \right),} \hfill \\ \end{array} } \right.$$
(6)
where
$$\left\{ {\begin{array}{*{20}l} {\begin{array}{*{20}l} {\omega_{i}^{4} + a_{i} \omega_{i}^{3} + b_{i} \omega_{i}^{2} - a_{i} \omega_{i} + c_{i} = 0,} \hfill \\ {a_{i} = - \frac{{2\varphi_{i} }}{{h_{{{\text{U}}i}} \sqrt {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } }},} \hfill \\ \end{array} } \hfill \\ {\begin{array}{*{20}l} {b_{i} = \frac{{\varphi_{i}^{2} - d_{{{\text{U}}i}}^{2} d_{{{\text{S}}i}}^{2} }}{{h_{{{\text{U}}i}}^{2} \left( {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } \right)}} - 1,} \hfill \\ {c_{i} = - \frac{{\phi_{i}^{2} d_{{{\text{S}}i}}^{2} + \left( {\varphi_{i}^{2} - d_{{{\text{U}}i}}^{2} d_{{{\text{S}}i}}^{2} } \right)\left( {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } \right)}}{{h_{{{\text{U}}i}}^{2} \left( {\left( {l_{i1} - s_{i} } \right)^{2} + h_{{{\text{S}}i}}^{2} } \right)^{2} }},} \hfill \\ \end{array} } \hfill \\ {\begin{array}{*{20}l} {\varphi_{i} = \frac{{\left( {l_{i1} - s_{i} } \right)^{2} + d_{{{\text{U}}i}}^{2} + d_{{{\text{S}}i}}^{2} + h_{{{\text{U}}i}}^{2} + h_{{{\text{S}}i}}^{2} }}{2} - \frac{{l_{i2}^{2} + l_{i3}^{2} }}{4},} \hfill \\ {\phi_{i} = \frac{{l_{i2}^{2} - l_{i3}^{2} }}{4}.} \hfill \\ \end{array} } \hfill \\ \end{array} } \right.$$
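Since Eq. (6) hinges on the quartic in \(\omega_i\), the forward angles can be sketched numerically as follows. The quartic generally has several roots, so this sketch returns all admissible \((\theta_{i1}, \theta_{i2})\) candidates and leaves branch selection to the caller; parameter values in the usage are illustrative only:

```python
import numpy as np

def leg_forward_angles(l, s, hU, dU, hS, dS):
    """Candidate U-joint angles (theta1, theta2) from chain lengths, Eq. (6)."""
    l1, l2, l3 = l
    m = (l1 - s) ** 2 + hS ** 2
    # varphi and phi as defined under Eq. (6)
    varphi = ((l1 - s) ** 2 + dU ** 2 + dS ** 2 + hU ** 2 + hS ** 2) / 2 \
        - (l2 ** 2 + l3 ** 2) / 4
    phi = (l2 ** 2 - l3 ** 2) / 4

    # coefficients of omega^4 + a*omega^3 + b*omega^2 - a*omega + c = 0
    a = -2 * varphi / (hU * np.sqrt(m))
    b = (varphi ** 2 - dU ** 2 * dS ** 2) / (hU ** 2 * m) - 1
    c = -(phi ** 2 * dS ** 2 + (varphi ** 2 - dU ** 2 * dS ** 2) * m) \
        / (hU ** 2 * m ** 2)

    candidates = []
    for w in np.roots([1.0, a, b, -a, c]):
        if abs(w.imag) > 1e-9 or abs(w.real) > 1.0:
            continue  # discard complex or out-of-range roots
        th2 = np.arcsin(w.real) - np.arctan(hS / (l1 - s))
        denom = dU * ((l1 - s) * np.cos(th2) - hS * np.sin(th2))
        arg = phi / denom
        if np.isfinite(arg) and abs(arg) <= 1.0:
            candidates.append((float(np.arcsin(arg)), float(th2)))
    return candidates
```

For a symmetric input (\(l_{i2} = l_{i3}\)), \(\phi_i = 0\) and every candidate has \(\theta_{i1} = 0\), which is a convenient consistency check.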

4 Door-opening Method

The approach of the door-opening method is shown in Figure 4; with some revisions to the trajectory planning, it can also be applied to pull doors open. The task can be decomposed into five subtasks: locating the door, rotating to align, locating the handle, opening the door and walking through. Here we use Q to denote the end-effector of the tool, and Q is fixed to the tool. The positions of Q at different transients along the trajectory are marked by other letters, and these marks are fixed to the ground.
Figure 4

Approach of the door-opening method

4.1 Locating the Door

The first subtask is locating the door (O→A→B→C), in which the robot identifies the orientation matrix of the DCS in the GCS (\({}_{\text{D}}^{\text{G}} {\mathbf{R}}\)) by touching three non-collinear points on the door, as shown in Figure 5.
Figure 5

Locating the door plane

Three non-collinear points define a plane. According to this basic principle, the robot first moves its body forward (\(-Z_{\text{R}}\)) until Q touches the first point A on the door (O→A). Then the robot moves its body both backward and leftward to a different point (A→O′), and forward again to touch the second point B (O′→B). Finally, the robot moves its body both backward and upward (B→O″), and forward again to touch the third point C (O″→C). By making the backward and leftward distances during A→O′ equal, and the backward and upward distances during B→O″ equal, the maximum allowed angle between the sagittal axis of the robot body and the normal line of the door plane is \(45^\circ\).

4.1.1 Trajectory Generation

The 6D trajectory of the robot body is generated by a discrete force control model:
$$\left\{ {\begin{array}{*{20}l} {{\mathbf{M}}{\ddot{\mathbf{S}}}_{k} = {\mathbf{F}}_{k} - {\mathbf{C}}{\dot{\mathbf{S}}}_{{k - 1}} ,} \hfill \\ {\begin{array}{*{20}l} {{\dot{\mathbf{S}}}_{k} = {\dot{\mathbf{S}}}_{{k - 1}} + {\ddot{\mathbf{S}}}_{k} \Delta t,} \hfill \\ {{\mathbf{S}}_{k} = {\mathbf{S}}_{{k - 1}} + {\dot{\mathbf{S}}}_{k} \Delta t,} \hfill \\ \end{array} } \hfill \\ \end{array} } \right.$$
(7)
where \(\mathbf{S}_k\)—6D coordinates of the robot body at time k,
$${\mathbf{S}}_{k} = \left( {x_{k} ,y_{k} ,z_{k} ,\alpha_{k} ,\beta_{k} ,\gamma_{k} } \right)^{\text{T}} ,$$
\(\mathbf{F}_k\)—6D force at time k,
$${\mathbf{F}}_{k} = \left( {F_{xk} ,F_{yk} ,F_{zk} ,M_{xk} ,M_{yk} ,M_{zk} } \right)^{\text{T}} ,$$
M—Mass matrix, C—Damping matrix.
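Eq. (7) is a simple discrete admittance integration: the commanded force is turned into an acceleration against viscous damping, then integrated twice. A minimal NumPy sketch (diagonal M and C chosen purely for illustration):

```python
import numpy as np

def admittance_step(S, dS, F, M_inv, C, dt=1e-3):
    """One 1 ms update of the discrete force-control model, Eq. (7).

    S, dS : 6D pose (x, y, z, alpha, beta, gamma) and its velocity.
    F     : 6D generalized force F_k driving the motion.
    M_inv : inverse of the mass matrix M.
    C     : damping matrix.
    """
    ddS = M_inv @ (F - C @ dS)   # M * S_ddot = F_k - C * S_dot
    dS = dS + ddS * dt           # integrate acceleration
    S = S + dS * dt              # integrate velocity
    return S, dS
```

Under a constant F, the velocity converges to \(\mathbf{C}^{-1}\mathbf{F}\), which is how the choice of M and C sets the required accelerations and steady velocities mentioned below.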
The robot determines M and C according to the required accelerations and velocities, and generates different trajectories by applying different \(\mathbf{F}_k\). During locating the door, \(\mathbf{F}_k\) is
$${\mathbf{F}}_{k} = \left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0, - 1} \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0,0} \right)^{{\text{T}}} } \\ \end{array} } \right),\;{\text{if }}Q \in OA \cup O^{\prime}B \cup O^{\prime\prime}C,} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( { - 1,0,1} \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0,0} \right)^{{\text{T}}} } \\ \end{array} } \right),\;{\text{if }}Q \in AO^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,1,1} \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} \left( {0,0,0} \right)^{{\text{T}}} } \\ \end{array} } \right),\;{\text{if }}Q \in BO^{\prime\prime},} \hfill \\ \end{array} } \right.$$
(8)
where \({}_{\text{R}}^{\text{G}} {\mathbf{R}}_{O}\)—\({}_{\text{R}}^{\text{G}} {\mathbf{R}}\) at point O, derived by Eq. (5), \(Q \in OA \cup O^{\prime}B\)—Q is on line segment OA or O′B.

4.1.2 Orientation Matrix Calculation

By applying Eq. (5), \({}^{\text{G}}{\mathbf{O}}_{\text{R}}\) at A, B and C can be derived, denoted \({}^{\text{G}}{\mathbf{O}}_{\text{R}A}(x_A, y_A, z_A)^{\text{T}}\), \({}^{\text{G}}{\mathbf{O}}_{\text{R}B}(x_B, y_B, z_B)^{\text{T}}\) and \({}^{\text{G}}{\mathbf{O}}_{\text{R}C}(x_C, y_C, z_C)^{\text{T}}\). Let \(\mathbf{n}(x_n, y_n, z_n)^{\text{T}}\) denote the normal vector of the door plane. Then n can be calculated by
$$\left( {\begin{array}{*{20}c} {\begin{array}{*{20}c} {x_{A} - x_{B} } \\ {y_{A} - y_{B} } \\ {z_{A} - z_{B} } \\ \end{array} } & {\begin{array}{*{20}c} {x_{C} - x_{B} } \\ {y_{C} - y_{B} } \\ {z_{C} - z_{B} } \\ \end{array} } \\ \end{array} } \right)^{{\text{T}}} \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}x_{n} } \\ {{}^{{\text{G}}}y_{n} } \\ {{}^{{\text{G}}}z_{n} } \\ \end{array} } \right) = 0.$$
(9)
Transferring the vector n from the GCS to the RCS, we can get
$$\left( {\begin{array}{*{20}c} {{}^{{\text{R}}}x_{n} } & {{}^{{\text{R}}}y_{n} } & {{}^{{\text{R}}}z_{n} } \\ \end{array} } \right)^{{\text{T}}} = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O}^{{ - 1}} {}^{{\text{G}}}{\mathbf{n}}.$$
(10)
Then projecting n onto the \(X_{\text{R}}O_{\text{R}}Z_{\text{R}}\) plane, we can get
$${}^{{\text{R}}}{\mathbf{n}}_{{\text{p}}} = (\begin{array}{*{20}c} {{}^{{\text{R}}}x_{n} } & 0 & {{}^{{\text{R}}}z_{n} } \\ \end{array} )^{{\text{T}}} .$$
(11)
Here the Tait-Bryan angles, i.e., roll, pitch and yaw, are used to express the orientation of the DCS in the GCS. The yaw angle is
$$Y_{{\text{a}}} = \theta = \arctan \left( {\frac{{{}^{{\text{R}}}x_{n} }}{{{}^{{\text{R}}}z_{n} }}} \right).$$
(12)
Taking into account that there may be stairs or slopes along the direction of \(Z_{\text{D}}\) in front of the door, which make the door plane not normal to the ground plane, a pitch angle exists between n and \(\mathbf{n}_{\text{p}}\):
$$P_{{\text{i}}} = \alpha = - \arcsin \left( {\frac{{{}^{{\text{R}}}y_{n} }}{{\left\| {\mathbf{n}} \right\|_{2} }}} \right).$$
(13)

Considering that there are hardly any doors with stairs or slopes along the direction of \(X_{\text{D}}\), we can reasonably assume that the roll angle \(R_{\text{o}} = 0.\)

So, the orientation matrix can be calculated from the Tait-Bryan angle by
$$\left\{ {\begin{array}{*{20}l} {{}_{{\text{D}}}^{{\text{G}}} {\mathbf{R}} = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{O} {}_{{\text{R}}}^{{\text{D}}} {\mathbf{R}}_{{YX^{\prime}Z^{\prime\prime}}}^{{ - 1}} ,} \hfill \\ {{}_{{\text{R}}}^{{\text{D}}} {\mathbf{R}}_{{YX^{\prime}Z^{\prime\prime}}} \left( {Y_{{\text{a}}} ,P_{{\text{i}}} ,R_{{\text{o}}} } \right) = \left( {\begin{array}{*{20}c} {\cos \theta } & {\sin \theta \sin \alpha } & {\sin \theta \cos \alpha } \\ 0 & {\cos \alpha } & { - \sin \alpha } \\ { - \sin \theta } & {\cos \theta \sin \alpha } & {\cos \theta \cos \alpha } \\ \end{array} } \right).} \hfill \\ \end{array} } \right.$$
(14)
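Eqs. (9)–(14) amount to a cross product followed by two inverse trigonometric functions. A compact NumPy sketch (the sign convention of the computed normal depends on the touch order, which is an assumption here):

```python
import numpy as np

def door_orientation(OA, OB, OC, R_GO):
    """Yaw, pitch and {}^{G}_{D}R of the door from three touch points, Eqs. (9)-(14).

    OA, OB, OC : body origin {}^{G}O_R recorded at touches A, B, C (in the GCS).
    R_GO       : body orientation {}^{G}_{R}R at the start point O.
    """
    n_G = np.cross(OA - OB, OC - OB)         # door-plane normal, Eq. (9)
    n_R = R_GO.T @ n_G                       # express n in the RCS, Eq. (10)
    xn, yn, zn = n_R

    yaw = np.arctan2(xn, zn)                         # Y_a, Eq. (12)
    pitch = -np.arcsin(yn / np.linalg.norm(n_R))     # P_i, Eq. (13)

    ct, st = np.cos(yaw), np.sin(yaw)
    ca, sa = np.cos(pitch), np.sin(pitch)
    R_DR = np.array([[ct, st * sa, st * ca],         # {}^{D}_{R}R, Eq. (14)
                     [0.0, ca, -sa],
                     [-st, ct * sa, ct * ca]])
    R_GD = R_GO @ np.linalg.inv(R_DR)                # {}^{G}_{D}R
    return yaw, pitch, R_GD
```

For a robot already aligned with a vertical door, both angles come out zero and \({}_{\text{D}}^{\text{G}}\mathbf{R}\) reduces to the identity, as expected.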

4.2 Rotating to Align

The second subtask is rotating around \(X_{\text{R}}\) and \(Y_{\text{R}}\) to align with the door plane (C→D). As shown in Figure 6, the robot moves its body, both translating and rotating, from C to D, and at the same time moves the feet to follow the body (\(C_{\text{L}}\)→\(D_{\text{L}}\)). The point \(O_{\text{R}}\) at D superposes \(O_{\text{R}}\) at O, which determines the 6D trajectory as
$${\mathbf{CD}} = \left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}O}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}C}} } \\ {\left( {\begin{array}{*{20}c} {R_{{\text{o}}} } & {P_{{\text{i}}} } & {Y_{{\text{a}}} } \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right).$$
(15)
After the alignment, the horizontal rod of the tool always stays normal to the door plane when the body translates, which guarantees that the tool does not collide with the door plane during the process of locating the handle.
Figure 6

Rotating to align and approaching the handle

4.3 Locating the Handle

The third subtask is locating the handle (D→E→F→G), in which the robot identifies the translational parameter \({}^{\text{G}}{\mathbf{O}}_{\text{D}}\) by three touches in three orthogonal directions.

In order to touch the handle, the robot first has to decide the height and the moving direction of the tool. A statistical analysis [15] of the most frequent sizes of handles shows that the height of the handle ranges from 99 cm to 103 cm. Based on this knowledge, the robot keeps the vertical rod of the tool in this range. The robot then chooses right as its target direction to touch the handle. If the robot confirms that the handle is not in the current direction, it changes its direction to left and performs the process of locating the handle again. Here we present the process of localizing and confirming the handle in the rightward direction; it is similar for the leftward one.

As shown in Figure 6, since the robot may start far from the handle, it needs to move rightward cyclically. In every cycle except the final one, the robot successively moves the body forward (\(-Z_{\text{R}}\)) until touching the door plane, backward for a short constant distance to avoid rubbing against the door plane, and rightward for a constant distance decided by the workspace of the tool, and then moves the legs to follow the body, thus finishing the process D→E. The purpose of moving forward to touch and then backward for a constant distance at the beginning of every cycle is to initialize the distance of the current cycle and eliminate the error accumulated in the last one. When the robot starts too far from the handle, even a very small angular error will cause a large translational error along \(Z_{\text{D}}\), even though the robot has already rotated to align. Such a large error makes it highly likely that the tool fails to enter the narrow space between the door and the handle, and thus fails to open the door. Because the distance of every rightward cycle is limited to an acceptable constant value, which is never too far, the translational error along \(Z_{\text{D}}\) can be limited well. Furthermore, by applying multiple three-point contacts for locating the door and reducing the distance of every rightward cycle, the detection accuracy can be guaranteed even if there are embossments or grooves on the door.

In the final cycle (E→F→G in Figure 7), after touching something at F, the robot moves its body leftward for a constant distance shorter than the handle, and then downward to touch the handle. If the tool touches nothing before it gets lower than the minimum height, the robot treats this as confirmation that the handle is not in this direction. If the tool touches something, the robot treats this as the signal of successfully locating the handle and goes on to the next subtask.
Figure 7

Final cycle of locating the handle
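The cyclic rightward search described above can be sketched as the following loop (the callback API and the step sizes are hypothetical, introduced only for illustration; on the robot, each primitive would be an admittance move generated by Eq. (7)):

```python
def search_handle(move_until_contact, move, handle_found,
                  step_right=0.15, back_off=0.02, left_step=0.05,
                  max_cycles=10):
    """Cyclic rightward search for the handle (Section 4.3), hypothetical API.

    move_until_contact(axis) : move along a body axis until the F/T sensor
                               reports contact, capped at one step along +x;
                               returns the distance travelled.
    move(axis, dist)         : free move for a fixed distance.
    handle_found()           : True once the vertical rod rests on the handle.
    """
    for _ in range(max_cycles):
        move_until_contact('-z')           # touch the door to re-zero depth
        move('+z', back_off)               # back off to avoid rubbing the door
        travel = move_until_contact('+x')  # slide rightward at most one step
        if travel < step_right:            # contact before a full step: final cycle
            move('-x', left_step)          # leftward, shorter than the handle
            move_until_contact('-y')       # downward onto the handle
            return handle_found()
        # full step completed without meeting anything: start the next cycle
    return False
```

Re-touching the door at the top of every iteration is what bounds the accumulated \(Z_{\text{D}}\) error to a single step's worth, as the text argues.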

The trajectory is generated by Eq. (7) in every cycle. The \(\mathbf{F}_k\) of every cycle during D→E is similar to the \(\mathbf{F}_k\) of E→F in the final cycle, and in the final cycle \(\mathbf{F}_k\) is
$${\mathbf{F}}_{k} = \left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & { - 1} \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in EE^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 1 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in E^{\prime}F^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 1 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in F^{\prime}F,} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} { - 1} & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in FG^{\prime},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & { - 1} & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{E} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in G^{\prime}G.} \hfill \\ \end{array} } \right.$$
(16)
The location of \({\mathbf{O}}_{\text{D}}\) on the handle can be expressed by
$$\left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}^{{\text{G}}}{\mathbf{O}}_{{\text{D}}} } \\ 1 \\ \end{array} } \right) = {}_{{\text{R}}}^{{\text{G}}} {\mathbf{T}}_{E} \left( {\begin{array}{*{20}c} {{}^{{\text{R}}}{\mathbf{O}}_{{\text{D}}} } \\ 1 \\ \end{array} } \right),} \hfill \\ {{}^{{\text{R}}}{\mathbf{O}}_{{\text{D}}} = {}^{{\text{R}}}{\mathbf{O}}_{{{\text{R}}E}} + \left( {\begin{array}{*{20}c} {\left\| {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}F}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}F'}} } \right\|_{2} } \\ { - \left\| {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}G}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}G'}} } \right\|_{2} } \\ { - \left\| {{}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}E}} - {}^{{\text{G}}}{\mathbf{O}}_{{{\text{R}}E'}} } \right\|_{2} } \\ \end{array} } \right).} \hfill \\ \end{array} } \right.$$
(17)
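As a concrete illustration, Eq. (17) can be evaluated numerically: the three signed offsets are the Euclidean distances between the paired touch points recorded along the body axes, and the result is then mapped into the GCS by the homogeneous transform. This is only a sketch; the point values and the identity transform used in the usage example are placeholders, not data from the paper.

```python
import numpy as np

def handle_center_rcs(o_re, pairs):
    """Estimate the handle location O_D in the robot frame (RCS) per Eq. (17).

    o_re  : tool position at touch point E, expressed in the RCS
    pairs : dict of paired touch points in the GCS, e.g. {'F': (O_RF, O_RF2), ...}
    The offsets are signed according to the probing directions in Eq. (17):
    +X_R from the F pair, -Y_R from the G pair, -Z_R from the E pair.
    """
    dF = np.linalg.norm(pairs['F'][0] - pairs['F'][1])
    dG = np.linalg.norm(pairs['G'][0] - pairs['G'][1])
    dE = np.linalg.norm(pairs['E'][0] - pairs['E'][1])
    return o_re + np.array([dF, -dG, -dE])

def to_gcs(T_E, o_d_rcs):
    """Map O_D from the RCS to the GCS with the homogeneous transform T_E."""
    p = T_E @ np.append(o_d_rcs, 1.0)
    return p[:3]

# Usage with placeholder touch points (identity transform for brevity):
pairs = {'F': (np.array([1.0, 0, 0]), np.array([0.95, 0, 0])),
         'G': (np.array([0, 1.0, 0]), np.array([0, 0.9, 0])),
         'E': (np.array([0, 0, 1.0]), np.array([0, 0, 0.8]))}
o_d = to_gcs(np.eye(4), handle_center_rcs(np.array([0.1, 0.2, 0.3]), pairs))
```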

4.4 Opening the Door

The fourth subtask is opening the door (G→H→I), in which the robot first moves along a circular path in the door plane to turn the handle until it reaches the end (G→H in Figure 8), and then moves forward to push the door open (H→I in Figure 8). While moving forward, the robot keeps monitoring the contact force. If the force exceeds the maximum the robot can apply, the robot treats this as a sign that the door is blocked and stops the task. The trajectories of turning and pushing are both generated by Eq. (7), and \({\mathbf{F}}_{k}\) is
$${\mathbf{F}}_{k} = \left\{ {\begin{array}{*{20}l} {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} {\sin \left( {\frac{{2\overline{v} k}}{{\pi r}}} \right)} & { - \cos \left( {\frac{{2\overline{v} k}}{{\pi r}}} \right)} & {\text{0}} \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in {\mathop {\frown}\limits_{GH}},} \hfill \\ {\left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} 0 & 0 & { - 1} \\ \end{array} } \right)^{{\text{T}}} } \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{G} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),{\text{if }}Q \in HI,} \hfill \\ \end{array} } \right.$$
(18)
where \(\bar{v}\)—Average linear speed planned along the arc, r—Radius of the arc, \(Q \in {\mathop {\frown}\limits_{GH}}\)—Q is on the arc \({\mathop {\frown}\limits_{GH}}\).
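The translational part of Eq. (18) is a unit tangent vector that rotates with the cycle index k while turning the handle, and a fixed push direction afterwards; the rotational part is zero in both cases. A sketch of the per-cycle direction, with \(\bar{v}\), r and the rotation matrix as hypothetical inputs:

```python
import numpy as np

def turn_direction(k, v_bar, r, R_G):
    """Translational direction of the tool in cycle k while turning the
    handle along arc GH, per Eq. (18). R_G is the body-to-GCS rotation,
    v_bar the planned linear speed, r the arc radius."""
    theta = 2.0 * v_bar * k / (np.pi * r)
    return R_G @ np.array([np.sin(theta), -np.cos(theta), 0.0])

def push_direction(R_G):
    """Translational direction while pushing the door open along H->I:
    -Z in the body frame, mapped to the GCS."""
    return R_G @ np.array([0.0, 0.0, -1.0])
```

Because the tangent is always a unit vector, scaling it by the cycle step length gives a constant-speed motion along the arc.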
Figure 8

Turning the handle and pushing the door

The simple mechanism of the 0-DOF tool plays a significant role here: it releases the inner force in the tool when turning the handle. The open-loop structure of the end-effector cannot grasp the handle as firmly as the widely used closed-loop multi-DOF grippers, but in this subtask that is an advantage rather than a drawback, because the inner force is effectively released. The inner force arises because, due to positional errors and the imprecise model of the door, the motion of the manipulator cannot follow the position of the handle exactly, while a firm grasp forces the manipulator to follow it. This conflict can hardly be resolved completely as long as a firm grasp is applied. However, a firm grasp is not essential in all cases. With a loose grasp, the contact point between the tool and the handle can slide, so the tool does not have to follow the handle exactly, and the inner force is released. Moreover, because of the large areas the tool can move in (red areas in Figure 8), the tool can easily stay in contact with the handle while moving.

4.5 Walking Through

The fifth subtask is walking through (I→J→K→L), in which the robot adjusts its body back to the sagittal plane, walks leftward into the door range, and then walks forward to get through the door (Figure 9). The robot keeps monitoring the contact force during the whole process. If the contact force exceeds the maximum the robot can apply, the robot treats this as a sign that the door is blocked and stops the task.
Figure 9

Walking through the door

When adjusting, the tool translates parallel to the wall plane (I→J) to release the handle and prepare for the leftward walking. The point J is in the sagittal plane like the point E, but higher than E by h to avoid colliding with the handle, so the adjustment trajectory is
$${\mathbf{IJ}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{I} \left( {\begin{array}{*{20}c} {\begin{array}{*{20}c} {\left( {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{I} {}^{{\text{R}}}{\mathbf{X}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{E}} - {}^{{\text{G}}}{\mathbf{I}}} \right)} \\ {\left( {{}_{R}^{G} {\mathbf{R}}_{I} {}^{{\text{R}}}{\mathbf{Y}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{E}} - {}^{{\text{G}}}{\mathbf{I}}} \right) + h} \\ \end{array} } \\ 0 \\ \end{array} } \right)} \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{I} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),$$
(19)
where \({}^{\text{R}}{\mathbf{X}}_{\text{R}} ,{}^{\text{R}}{\mathbf{Y}}_{\text{R}}\)—Basis vectors of the X- and Y-axes of the RCS,
$${}^{\text{R}}{\mathbf{X}}_{\text{R}} = \left( {1,0,0} \right)^{\text{T}} ,\;{}^{\text{R}}{\mathbf{Y}}_{\text{R}} = \left( {0,1,0} \right)^{\text{T}} ,$$
\({}^{\text{G}}{\mathbf{E}},{}^{\text{G}}{\mathbf{I}}\)—Coordinates of points E and I in the GCS.
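The adjustment trajectory of Eq. (19) is a projection of the offset from I to E onto the body axes, plus the extra lift h, with no motion normal to the wall. A sketch (all numeric values in the usage are hypothetical):

```python
import numpy as np

def adjust_trajectory(R_I, E_g, I_g, h):
    """Translation I->J of Eq. (19): project the offset (E - I) onto the
    body X- and Y-axes expressed in the GCS, lift the Y component by h to
    clear the handle, and keep zero motion along Z (normal to the wall)."""
    x_r = R_I @ np.array([1.0, 0.0, 0.0])   # X-axis of the RCS in the GCS
    y_r = R_I @ np.array([0.0, 1.0, 0.0])   # Y-axis of the RCS in the GCS
    d = E_g - I_g
    local = np.array([x_r @ d, y_r @ d + h, 0.0])
    return R_I @ local

# Usage with a hypothetical pose (identity orientation):
ij = adjust_trajectory(np.eye(3), np.array([1.0, 2.0, 0.0]), np.zeros(3), 0.1)
```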
During the leftward walking, the robot keeps the end-effector Q touching the door plane to prevent the door closer from shutting the door. Owing to its high load capacity, the robot can cope with doors with large rebounding forces. The trajectory is
$${\mathbf{JK}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{J} \left( {\begin{array}{*{20}c} {\left( {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{J} {}^{{\text{R}}}{\mathbf{X}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{O}}_{{\text{D}}} - {}^{{\text{G}}}{\mathbf{O}}_{{\text{R}}} } \right) - \frac{{w_{{\text{R}}} }}{2}} \\ {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \\ \end{array} } \right)} \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{J} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),$$
(20)
where \(w_{\text{R}}\)—Width of the robot.
While walking forward, the robot uses its body to push the door open, making good use of its high load capacity. The forward trajectory is
$${\mathbf{KL}} = \left( {\begin{array}{*{20}c} {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{K} \left( {\begin{array}{*{20}c} 0 \\ 0 \\ {\left( {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{K} {}^{{\text{R}}}{\mathbf{Z}}_{{\text{R}}} } \right)^{{\text{T}}} \left( {{}^{{\text{G}}}{\mathbf{O}}_{{\text{D}}} - {}^{{\text{G}}}{\mathbf{S}}_{{41}} } \right) - l_{{\text{R}}} } \\ \end{array} } \right)} \\ {{}_{{\text{R}}}^{{\text{G}}} {\mathbf{R}}_{K} \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ \end{array} } \right)^{{\text{T}}} } \\ \end{array} } \right),$$
(21)
where \({}^{\text{R}}{\mathbf{Z}}_{\text{R}}\)—Basis vector of Z-axis of the RCS,
$${}^{\text{R}}{\mathbf{Z}}_{\text{R}} = \left( {0,0,1} \right)^{\text{T}} ,$$
\({}^{\text{G}}{\mathbf{S}}_{41}\)—Derived by Eq. (3), \(l_{\text{R}}\)—Length of the robot.
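Eqs. (20) and (21) each reduce to a single scalar projection: how far the body center must slide to clear the handle by half the robot width, and how far forward the robot must walk given the foot tip \({}^{\text{G}}{\mathbf{S}}_{41}\). A sketch, with \(w_{\text{R}}\), \(l_{\text{R}}\) and all points as hypothetical values:

```python
import numpy as np

def leftward_step(R_J, O_D, O_R, w_R):
    """Translation J->K of Eq. (20): slide sideways until the body centre
    O_R clears the handle O_D by half the robot width w_R."""
    x_r = R_J @ np.array([1.0, 0.0, 0.0])
    return R_J @ np.array([x_r @ (O_D - O_R) - 0.5 * w_R, 0.0, 0.0])

def forward_step(R_K, O_D, S41, l_R):
    """Translation K->L of Eq. (21): walk forward by the distance from
    foot tip S41 to the door plane minus the robot length l_R."""
    z_r = R_K @ np.array([0.0, 0.0, 1.0])
    return R_K @ np.array([0.0, 0.0, z_r @ (O_D - S41) - l_R])

# Usage with hypothetical geometry (identity orientation):
jk = leftward_step(np.eye(3), np.array([2.0, 0.0, 0.0]), np.zeros(3), 1.0)
kl = forward_step(np.eye(3), np.array([0.0, 0.0, 3.0]), np.zeros(3), 1.0)
```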

5 Experiment Results

In order to verify the proposed method, experiments were carried out on the robot. The robot did not know the detailed parameters of the environment and autonomously planned its motion to open the door based only on real-time force feedback. The unknown environment here means that the size of the door, the position of the handle, the required force, etc., are all unknown. The door is 2025 mm high and 1130 mm wide, with a door closer that provides a rebounding tendency, as shown in Figure 10. Figure 11 shows the process of opening the door in the experiment.
Figure 10

Door and door closer in the experiments

Figure 11

Snapshots of the experiment

While locating the door, the robot adjusted its position to prepare for the next touch whenever the force sensor detected a force pulse along \(Z_{\text{R}}\), which indicated that the robot had touched the door. The robot moved only its body to touch, keeping its feet still on the ground. After detecting the third touch, the robot calculated its positional relationship with the door and rotated to keep the tool normal to the door plane. During the alignment, the robot moved both its body and its feet. Figure 12 shows the positions of the feet and the tool, and Figure 13 shows the detected force. The motions of feet 2 and 5 represent the motions of all the feet, because feet 2 and 5 move alternately in the 3-3 gait.
Figure 12

Feet and tool positions during locating the door and rotating to align

Figure 13

Feedback force during locating the door and rotating to align
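The touch events described above are recognized as force pulses along one axis of the force sensor. A minimal sketch of such pulse detection on a sampled force signal; the threshold is an assumed value, and the real controller presumably filters the raw readings first:

```python
def detect_pulse(forces, threshold):
    """Return the index of the first sample whose jump from the previous
    sample exceeds `threshold` (the first force pulse), or None if the
    signal never jumps. `forces` is a sequence of force readings along
    one axis, e.g. Z_R while probing the door plane."""
    for k in range(1, len(forces)):
        if abs(forces[k] - forces[k - 1]) > threshold:
            return k
    return None

# Usage: a flat signal with a sudden contact jump at sample 3.
touch_at = detect_pulse([0.0, 0.1, 0.2, 5.0, 5.1], 1.0)
```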

Then, the robot moved rightward to touch the handle and made different decisions based on the force feedback. If no force pulse was fed back, the robot moved its legs to follow the body. If a force pulse along \(X_{\text{R}}\) was detected, the robot knew it had touched the handle and started to adjust its position to detect the handle along \(Y_{\text{R}}\). During this process, the robot moved its body and feet separately. Figure 14 shows the positions of the feet and the tool, and Figure 15 shows the detected force.
Figure 14

Feet and tool positions during locating the handle

Figure 15

Feedback force during locating the handle

After detecting the force pulse along \(Y_{\text{R}}\), which indicated that the robot had touched the handle, the robot started to turn the handle. Once the force feedback from the handle exceeded the threshold, indicating that the handle had reached its end, the robot moved forward to push the door open. Finally, the robot walked leftward into the door range according to the calculated position of the handle and then walked through. Figure 16 shows the positions of the feet and the tool, and Figure 17 shows the detected force.
Figure 16

Feet and tool positions during turning the handle and walking through

Figure 17

Feedback force during turning the handle and walking through
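The experimental sequence of turning, pushing and walking can be summarized as a simple finite-state loop. This is only an abstraction of the behavior reported above; the motion primitives and sensor interface here are hypothetical, not the paper's actual controller API:

```python
def open_door(read_force, turn, push, walk_left, walk_through,
              handle_limit, block_limit):
    """Abstracted decision sequence of the experiment: turn the handle
    until the reaction force exceeds handle_limit (handle at its end),
    then push while the contact force stays below block_limit, then walk
    leftward and through. push() returns False once the pushing motion
    is complete; a force above block_limit means the door is blocked."""
    while read_force() < handle_limit:      # handle not yet at its end
        turn()
    while push():                           # still pushing the door open
        if read_force() > block_limit:      # door blocked: stop the task
            return False
    walk_left()                             # step into the door range
    walk_through()                          # walk through the doorway
    return True
```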

6 Conclusions

  1. (1)

     The method of measuring the positional relationship between the robot and the door is developed, which uses only force sensing and the 0-DOF tool to detect and open the door.

  2. (2)

     The real-time trajectory planning method for the robot to open the door is proposed, which is completely based on real-time force sensing.

  3. (3)

     The proposed door-opening method is implemented on the six-parallel-legged robot. Experiments are carried out to validate the method, and the results show that the method is effective and robust in opening doors wider than the robot (1 m) in unknown environments.

Notes

Declarations

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, 200240, China
(2)
Shanghai GQY Robot Limited Company, Shanghai, 201206, China

References

  1. M H Raibert. Legged robots that balance. Cambridge: MIT Press, 1986.
  2. K Nagatani, S I Yuta. Designing a behavior to open a door and to pass through a door-way using a mobile robot equipped with a manipulator. Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, Munich, Germany, September 12–16, 1994: 847–853.
  3. J Craft, J Wilson, W H Huang, et al. Aladdin: a semi-autonomous door opening system for EOD-class robots. Proceedings of the SPIE Unmanned Systems Technology XIII, Orlando, USA, April 25, 2011: 804509–1.
  4. B Axelrod, W H Huang. Autonomous door opening and traversal. Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Boston, USA, May 11–12, 2015: 1–6.
  5. A Jain, C C Kemp. Behavior-based door opening with equilibrium point control. Proceedings of the RSS Workshop on Mobile Manipulation in Human Environments, Seattle, USA, June 28, 2009: 1–8.
  6. W Chung, C Rhee, Y Shim, et al. Door-opening control of a service robot using the multifingered robot hand. IEEE Transactions on Industrial Electronics, 2009, 56(10): 3975–3984.
  7. D Kim, J H Kang, C S Hwang, et al. Mobile robot for door opening in a house. Proceedings of the International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Wellington, New Zealand, September 20–25, 2004: 596–602.
  8. H Arisumi, J R Chardonnet, K Yokoi. Whole-body motion of a humanoid robot for passing through a door-opening a door by impulsive force. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Saint Louis, USA, October 11–15, 2009: 428–434.
  9. M Zucker, Y Jun, B Killen, et al. Continuous trajectory optimization for autonomous humanoid door opening. Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Boston, USA, April 22–23, 2013: 1–5.
  10. N Banerjee, X C Long, R X Du, et al. Human-supervised control of the ATLAS humanoid robot for traversing doors. Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Seoul, Korea, November 3–5, 2015: 722–729.
  11. J Lee, A Ajoudani, E M Hoffman, et al. Upper-body impedance control with variable stiffness for a door opening task. Proceedings of the IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, November 18–20, 2014: 713–719.
  12. M González-Fierro, D Hernández-García, T Nanayakkara, et al. Behavior sequencing based on demonstrations: a case of a humanoid opening a door while walking. Advanced Robotics, 2015, 29(5): 315–329.
  13. E Ackerman. Boston Dynamics’ SpotMini Is All Electric, Agile, and Has a Capable Face-Arm. New York: IEEE Spectrum, 2016[2016-10-17]. http://spectrum.ieee.org/automaton/robotics/home-robots/boston-dynamicsspotmini.
  14. E Ackerman. Ghost Robotics’ Minitaur Quadruped Conquers Stairs, Doors, and Fences and Is Somehow Affordable. New York: IEEE Spectrum, 2016[2016-10-17]. http://spectrum.ieee.org/automaton/robotics/roboticshardware/ghost-robotics-minitaur-quadruped.
  15. J Moreno, D Martínez, M Tresanchez, et al. A combined approach to the problem of opening a door with an assistant mobile robot. Proceedings of the International Conference on Ubiquitous Computing & Ambient Intelligence, Belfast, Northern Ireland, December 2–5, 2014: 9–12.
  16. E Klingbeil, A Saxena, A Y Ng. Learning to open new doors. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 18–22, 2010: 2751–2757.
  17. D Ignakov, G Okouneva, G J Liu. Localization of a door handle of unknown geometry using a single camera for door-opening with a mobile manipulator. Autonomous Robots, 2012, 33(4): 415–426.
  18. A H Adiwahono, Y Chua, K P Tee, et al. Automated door opening scheme for non-holonomic mobile manipulator. Proceedings of the International Conference on Control, Automation and Systems, Gwangju, Korea, October 20–23, 2013: 839–844.
  19. A Petrovskaya, A Y Ng. Probabilistic mobile manipulation in dynamic environments, with application to opening doors. Proceedings of the International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6–12, 2007: 2178–2184.
  20. S Kobayashi, Y Kobayashi, Y Yamamoto, et al. Development of a door opening system on rescue robot for search “UMRS-2007”. Proceedings of the SICE Annual Conference, Tokyo, Japan, August 20–22, 2008: 2062–2065.
  21. T Winiarski, K Banachowicz, D Seredyński. Multi-sensory feedback control in door approaching and opening. Proceedings of the International Conference on Intelligent Systems, Warsaw, Poland, September 24–26, 2014: 57–70.
  22. A J Schmid, N Gorges, D Goger, et al. Opening a door with a humanoid robot using multi-sensory tactile feedback. Proceedings of the International Conference on Robotics and Automation, Pasadena, USA, May 19–23, 2008: 285–291.
  23. M Prats, P J Sanz, A P del Pobil. Reliable non-prehensile door opening through the combination of vision, tactile and force feedback. Autonomous Robots, 2010, 29(2): 201–218.
  24. Y Pan, F Gao, C K Qi, et al. Human-tracking strategies for a six-legged rescue robot based on distance and view. Chinese Journal of Mechanical Engineering, 2016, 29(2): 219–230.
  25. F Farelo, R Alqasemi, R Dubey. Task-oriented control of a 9-DoF WMRA system for opening a spring-loaded door task. Proceedings of the International Conference on Rehabilitation Robotics, Zurich, Switzerland, June 29–July 01, 2011: 1–6.
  26. H W Zhang, Y G Liu, G J Liu. Multiple mode control of a compact wrist with application to door opening. Mechatronics, 2013, 23(1): 10–20.
  27. S Ahmad, G J Liu. A door opening method by modular re-configurable robot with joints working on passive and active modes. Proceedings of the International Conference on Robotics and Automation, Anchorage, USA, May 03–08, 2010: 1480–1485.
  28. S Ahmad, H W Zhang, G J Liu. Multiple working mode control of door-opening with a mobile modular and reconfigurable robot. IEEE/ASME Transactions on Mechatronics, 2013, 18(3): 833–844.
  29. T Winiarski, K Banachowicz. Opening a door with a redundant impedance controlled robot. Proceedings of the Workshop on Robot Motion and Control, Kuslin, Poland, July 03–05, 2013: 221–226.
  30. Y Karayiannidis, C Smith, P Ögren, et al. Adaptive force/velocity control for opening unknown doors. Proceedings of the International IFAC Symposium on Robot Control, Dubrovnik, Croatia, September 05–07, 2012: 753–758.
  31. W Guo, J C Wang, W D Chen. A manipulability improving scheme for opening unknown doors with mobile manipulator. Proceedings of the International Conference on Robotics and Biomimetics, Hanoi, Vietnam, December 5–10, 2014: 1362–1367.
  32. T Rühr, J Sturm, D Pangercic, et al. A generalized framework for opening doors and drawers in kitchen environments. Proceedings of the International Conference on Robotics and Automation, Saint Paul, USA, May 14–18, 2012: 3852–3858.
  33. S Chitta, B Cohen, M Likhachev. Planning for autonomous door opening with a mobile manipulator. Proceedings of the International Conference on Robotics and Automation, Anchorage, USA, May 03–08, 2010: 1799–1806.
  34. Y Pan, F Gao. A new 6-parallel-legged walking robot for drilling holes on the fuselage. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2014, 228(4): 753–764.
  35. Y L Xu, F Gao, Y Pan, et al. Method for six-legged robot stepping on obstacles by indirect force estimation. Chinese Journal of Mechanical Engineering, 2016, 29(4): 669–679.
  36. J He, F Gao, X D Meng, et al. Type synthesis for 4-DOF parallel press mechanism using GF set theory. Chinese Journal of Mechanical Engineering, 2015, 28(4): 851–859.
  37. C Z Wang, Y F Fang, S Guo. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm. Chinese Journal of Mechanical Engineering, 2015, 28(4): 702–715.
  38. H B Qu, Y F Fang, S Guo. Theory of degrees of freedom for parallel mechanisms with three spherical joints and its applications. Chinese Journal of Mechanical Engineering, 2015, 28(4): 737–746.
  39. X L Ding, K Xu. Gait analysis of a radial symmetrical hexapod robot based on parallel mechanisms. Chinese Journal of Mechanical Engineering, 2014, 27(5): 867–879.
  40. M F Wang, M Ceccarelli. Topology search of 3-DOF translational parallel manipulators with three identical limbs for leg mechanisms. Chinese Journal of Mechanical Engineering, 2015, 28(4): 666–675.
