Technische Universität Braunschweig

Diplomarbeit

Development and Verification of a Robot Programming Interface Based on Skill Primitives

Torsten Kröger

Advisor: Dipl.-Ing. Bernd Finkemeyer

December 6, 2002

Institut für Robotik und Prozessinformatik
Prof. Dr. F. Wahl
Declaration

I hereby declare that I have written this thesis independently, using only the listed resources.

Braunschweig, December 6, 2002

Signature
Kurzfassung

A glance at today's highly automated industrial assembly lines quickly reveals that articulated robots are employed in a purely position-controlled manner. However, due to the permanently increasing degree of automation, the demands on robots and their control architectures are becoming ever more sophisticated. Programmers need more than simple position feedback; a wide variety of sensors such as force/torque sensors, distance sensors, or computer vision systems must be integrated into a robot controller. A programming interface based on very simple robot motions offers enormous possibilities here. Such an interface could be connected to an assembly planning system, which then automatically generates the code for the corresponding robot tasks. These simple robot motions are called skill primitives. Such a concept was developed, implemented, and verified for functionality in this diploma thesis. This manuscript documents the complete thesis; in particular, the introduction to the skill primitive concept is described in detail. Owing to the different sensors, several controllers have to be designed and merged, i.e. a hybrid controller emerges that enables position, velocity, and force/torque control in six degrees of freedom. Regarding position control, an on-line trajectory planner is required, which is likewise implemented and presented. A further goal is to create such a system with very low maintenance effort and easy extensibility, which is only possible through a high degree of modularity. For interprocess communication, the real-time middleware MiRPA provides excellent preconditions here.

Beyond all control architecture aspects, it is desirable to provide the developer with an easy-to-learn programming environment that hides all details of the control architecture. To verify the functionality of the concept, two very common robot tasks were chosen: object placing and the insertion of a light bulb into a bayonet socket. This document describes both the control architecture and the programming interface. Implementation details are largely omitted; only a few excerpts are mentioned for better understanding.
Abstract
Purely position-controlled robots are still state of the art in industrial assembly lines. Due to a permanently increasing degree of automation, robot control architectures are becoming more and more sophisticated. Since program developers demand more than simple position feedback, several sensors such as force/torque sensors, distance sensors, or computer vision systems are supposed to be integrated into a robot control architecture. Here, a programming interface based on very simple manipulator movements offers enormous possibilities. Such an interface could be connected to an assembly planning system, which automatically generates the required robot programs. These simple movements are called skill primitives. Within a student research project, this scheme has been developed, implemented, and verified. This document records the whole work; skill primitives in particular are discussed in great detail. Due to multiple sensors, several controllers have to be consolidated, i.e. a hybrid controller is presented that enables position, velocity, and force/torque control in six degrees of freedom. Regarding position control, an on-line trajectory planner is introduced. Another aim is to obtain a modular, low-maintenance software architecture, which is supported by the use of MiRPA, a real-time middleware, for interprocess communication. Besides all control architecture aspects, the program developer is supposed to obtain an easy-to-learn programming environment free of any control engineering details. To verify this concept, two well-known robot tasks have been chosen: object placing and bayonet insertion. As a matter of course, the control structure is described as well as the programming interface. Implementation details are largely omitted; only very few parts are presented to introduce the new programming environment.
Acknowledgements
This is the final work completing my studies of electrical engineering at the Technical University of Braunschweig, and I would like to thank the many people who contributed their time to me. First, my thanks go to the whole staff of the Institute for Robotics and Process Control; the friendly and cooperative environment is unbeatable. In particular, I owe a debt to Bernd Finkemeyer, who supported me in an excellent way and provided an awesome and stimulating environment during all stages of this work. I have interacted with many others around Braunschweig and elsewhere. Many thanks to all my friends in Braunschweig; we really had a great time during the past five years, and I hope never to forget this lovely period of life. Wishing I had had more time to spend on leisure in my homeland, I am going to visit my friends there more often in the future. Finally, I thank my family for their outstanding support all along my years of study.
Torsten Kröger
Contents
1 Introduction  1
  1.1 Motivation  1
  1.2 Conceptional Formulation  2
  1.3 Basics  4
    1.3.1 Robot Kinematics  4
    1.3.2 Hybrid Control  4
    1.3.3 Trajectory Planning  6
  1.4 Overview  9
2 About the Nomenclature within this Thesis  10
3 Skill Primitives  14
  3.1 From Task Planning to Execution  14
  3.2 Skill Primitive Specification  14
  3.3 Setpoint Specification  18
  3.4 Stop Conditions  21
4 Trajectory Planning  27
  4.1 Introduction  27
  4.2 Online Trajectory Planning  28
  4.3 Trajectory Planning for Velocity-Controlled DOFs  34
  4.4 Cartesian Interpolation  34
    4.4.1 Orientation Representation: RPY Angles  36
    4.4.2 Geometric Problems with Cartesian Paths  37
5 Software Architecture  40
  5.1 Introduction  40
  5.2 Overview  40
  5.3 Driver of the JR3 Force/Torque Sensor  42
  5.4 The Skill Primitive Controller  46
    5.4.1 Overview  46
    5.4.2 Start-Up  47
    5.4.3 Initialization  48
    5.4.4 The Hybrid Controller  54
    5.4.5 About Transformations  58
    5.4.6 Checking Stop Conditions  60
    5.4.7 Answering the User Application  63
    5.4.8 Error Handling  64
    5.4.9 Reception of a Skill Primitive  65
    5.4.10 Resetting Force/Torque Values  68
    5.4.11 Request any Control Values  68
    5.4.12 Request Buffer Time and Default Exit Condition  69
    5.4.13 Ending the Skill Primitive Controller  69
    5.4.14 The Output Files  69
  5.5 The class "skillprim"  71
  5.6 Sample Code for a Single Skill Primitive  79
6 Experimental Results  83
  6.1 Trajectory Planning  83
  6.2 Bayonet Insertion  84
  6.3 Object Placing  86
7 Real Time Problems  89
8 Suggestions for Future Work  93
  8.1 About Force Control w.r.t. a Task Frame Given w.r.t. the World Frame  93
  8.2 Trajectory Planning  94
  8.3 Skill Primitives  94
  8.4 Software Architecture  97
  8.5 Skill Primitive Controller  98
9 Conclusion  101
A Transformations  103
  A.1 Calculation of Position, Velocity and Acceleration Values  103
  A.2 Computation of a New Position Setpoint  107
B Error Messages  109
C Sample Output Files  118
  C.1 General Output File  118
  C.2 Value Output File  120
D Contents of the Enclosed CD  121
List of Figures  122
List of Tables  124
Bibliography  125
1 Introduction
The matter in hand is part of a student research project at the Institute for Robotics and Process Control at the Technical University of Braunschweig. Amongst others, the institute's research fields cover computer vision, assembly planning, and robot control architectures.
1.1 Motivation
The degree of automation in industrial manufacturing systems is increasing rapidly. To maintain flexibility, the usage of robots is becoming more and more important, but it is still state of the art to use purely position-controlled robots. Of course, these provide a high accuracy, they can be programmed very easily, and a process may be repeated countless times without any loss of accuracy. On the other hand, industrial processes are becoming so sophisticated that the demands on a robot control architecture grow. The robot programmer wants more than position feedback, i.e. a system that supports the integration of any kind of sensor. Force/torque sensors, various distance sensors, vision systems, or acceleration sensors are supposed to be embedded in a very flexible manner, i.e. the sensors have to be exchangeable without much effort. As a consequence, a highly complex control system has to be developed. The advantages of such an architecture might vanish in comparison to the disadvantages. A program for such a system cannot be implemented on-line, i.e. developed by the use of a robot in its real work cell. Only off-line programming is possible, but for that a developing engineer needs well-founded knowledge of control systems. Another major disadvantage is that a single command for such a system would contain so many parameters that it becomes very error-prone, and due to complex robot tasks as they appear in industrial assembly facilities, such a program is very voluminous. Regarding all these facts, the idea of an automated programming system is born.
Another research area is the assembly planning field, which has been of a more theoretical nature up to now. Based on CAD data, an assembly plan is generated, but to execute these plans, a human programmer has to implement the corresponding routines. These plans are based on the exact dimensions of the CAD data, i.e. the work cell, all robot dimensions, the robot tools, and of course the assembly parts are determined by a respective CAD model. But the real world exhibits inaccuracies in dimension as well as in position. As a matter of fact, the execution of an automated assembly plan with a purely position-controlled robot is not always successful. A more complex architecture is required to provide a unique solution for executing automatically generated assembly
[Figure 1.1 shows a flowchart: CAD data, assembly planner, skill primitive net, skill primitive net execution, skill primitive, skill primitive execution, control data, hybrid controller, robot. The diagram marks the border between the non-real-time planning part and the real-time execution part.]

Figure 1.1: Simplified overview: from CAD data to robot task execution
plans.
To build a bridge between assembly planning concepts and control approaches, the assembly plan can be decomposed into single robot tasks. According to [SIEGERT96], there are just a few different basic tasks, which recur habitually. A single robot task can be decomposed again. The individual parts of this decomposition are very primitive robot movements, so-called "manipulation primitives" or "skill primitives". Hence, a robot task can be described as a set of skill primitives or as a "skill primitive net". Figure 1.1 shows a complete, but very simplified, overview. Here, the border between the real-time and the non-real-time environment is drawn in, but this is still a point of discussion that will be debated in chapter 3 on page 14.
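The chain from skill primitives to a skill primitive net can be pictured as a small data structure. The following C++ sketch is purely illustrative; the actual skill primitive specification (task frame, setpoints, stop conditions) is developed in chapter 3, and all names here are hypothetical:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of a single skill primitive; the real
// specification is developed in chapter 3, so all members here
// are illustrative placeholders.
struct SkillPrimitive {
    std::string name;           // e.g. "approach" or "insert"
    std::string stopCondition;  // condition that terminates this primitive
};

// A net connects the primitives: each node lists its possible
// successors, and the successor is selected at runtime depending
// on which stop condition fired.
struct SkillPrimitiveNet {
    std::vector<SkillPrimitive> nodes;
    std::vector<std::vector<int>> successors;  // successors[i]: next nodes of node i

    int nodeCount() const { return static_cast<int>(nodes.size()); }
};
```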
1.2 Conceptional Formulation
This thesis presents a new robot control approach, which is based on skill primitives. Firstly, a contradiction-free definition of skill primitives has to be developed. This is a very important part, since the exact definition of a skill primitive constitutes the interface between the assembly planning process and the execution process. To test new approaches within this area, a manutec r2 robot is provided. Preceding assignments have developed a PC-based real-time control system for this six-joint industrial manipulator. The power electronics of the robot are linked to an adaptive electronic assembly that is coupled to a customary PC containing a motion control card. To fulfill real-time demands, the operating system QNX is used on this PC, and to provide a convenient programming environment, the QNX PC is connected to a Windows PC via an Ethernet network. The feature of a very modular and, in addition, very flexible system is supported
[Figure 1.2 shows the layered architecture: on the software side, the application, the position controller, and the robot driver communicate via MiRPA; on the hardware side, a motion control board is connected to the adaptive electronic assembly, the power electronics, and finally the robot and its sensors.]

Figure 1.2: Robot control architecture
by a self-developed middleware called MiRPA (Middleware for Robotic and Process Control Applications, [BORCHARD01]). [HANNA01] implemented a position controller and a driver for the adaptive electronic assembly that was developed by [SCHROEDER99]. The position controller operates in joint space and receives the desired robot position via MiRPA. An application process is responsible for the generation of a position setpoint and has to send this data to MiRPA. There are several examples of such an application program, e.g. a teach-in program to control the robot via joystick or space mouse. Figure 1.2 shows a summarized overview. A first version of a force controller as well as a driver for the JR3 force/torque sensor used was presented in [KROEGER02]. To embed the skill primitive concept, a new MiRPA module has to be implemented. This new module, the skill primitive controller, receives a skill primitive from a user application that contains the skill primitive net, uses this data to feed a hybrid controller, and acknowledges the skill primitive net application when the execution of one skill primitive is finished. The position setpoint generation for the position controller requires an on-line trajectory planner, which is connected to the hybrid controller. Since the interface to the skill primitive net program is not specified, and to be able to test the skill primitive controller, a dedicated interface has to be created. This interface acts as a new
task-based robot programming language. That is, the user implements his own skill primitive net program, which communicates via MiRPA with the skill primitive controller; the latter generates joint positions and sends them to the position controller. In the future, the skill primitive net program is supposed to be generated automatically, but for this first approach the respective program has to be implemented by hand.
A long-term aim is to provide multi-sensor integration possibilities, but as a first step, only force/torque feedback is used. For a universal usage of the sensor data, the force/torque values have to be transformed from the sensor frame to any other frame in space. This transformation functionality has to be added to the existing force/torque driver. The opportunity of an easy extension by other sensors, e.g. a triangulation distance sensor, has to be maintained. Lastly, the complete functionality of the skill primitive controller has to be verified by implementing example applications. As already presented in [WAHL02], the insertion of a light bulb into a bayonet socket as well as the well-known object placing task might be used as test applications.
Due to all these aspects, the major part of this student research project is the implementation of the introduced concept, followed by the composition of this documentation.
1.3 Basics
Prior to the detailed description of the implemented software, this chapter delivers a short introduction to three major subjects: the kinematic robot model, hybrid control concepts, and trajectory planning.
1.3.1 Robot Kinematics
The kinematic model of a robot provides the ability to transform joint values into a six-degree-of-freedom (DOF) Cartesian position and vice versa. Since a joint position control interface is used to control the robot and since the hybrid controller operates in Cartesian space, this is an important key feature. Here, a general weakness appears: while the forward kinematic transformation is always unique, the inverse kinematic transformation might provide several or even an infinite number of solutions. And even if a joint solution for given Cartesian coordinates is found, the robot is not allowed to change its configuration. Introductions can be found in [MCKERROW91], [HO90], and [CRAIG89]. More details concerning this work are illustrated in section 4.4.2 on page 37.
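The difference between the unique forward transformation and the ambiguous inverse transformation can be shown with a minimal example. The sketch below is not the manutec r2 kinematics, but a hypothetical planar two-link arm, which already yields two inverse solutions (elbow-down and elbow-up):

```cpp
#include <cmath>
#include <utility>

// Illustration only (not the manutec r2 model): a planar two-link
// arm with link lengths l1 and l2.
struct Pose2D { double x, y; };

// The forward transformation is always unique.
Pose2D forward(double l1, double l2, double q1, double q2) {
    return { l1 * std::cos(q1) + l2 * std::cos(q1 + q2),
             l1 * std::sin(q1) + l2 * std::sin(q1 + q2) };
}

// The inverse transformation already yields two joint configurations
// here; assumes (x, y) lies inside the reachable workspace.
std::pair<std::pair<double, double>, std::pair<double, double>>
inverse(double l1, double l2, double x, double y) {
    const double c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    const double q2a = std::acos(c2);   // elbow-down solution
    const double q2b = -q2a;            // elbow-up solution
    const double q1a = std::atan2(y, x)
                     - std::atan2(l2 * std::sin(q2a), l1 + l2 * std::cos(q2a));
    const double q1b = std::atan2(y, x)
                     - std::atan2(l2 * std::sin(q2b), l1 + l2 * std::cos(q2b));
    return { { q1a, q2a }, { q1b, q2b } };
}
```

Both returned configurations reach the same Cartesian point, which is exactly the ambiguity mentioned above; a real six-joint manipulator has up to eight such configurations.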
1.3.2 Hybrid Control
One fundamental part of this work is the hybrid controller, i.e. a Cartesian six-DOF controller, which is able to control each DOF by the output of any given controller. E.g.
[Figure 1.3 shows the block diagram of the force controller: the desired force F0 (w.r.t. the task frame), the desired speed v0 (in case of no contact), and the desired Cartesian position x0 are combined via the compliance selection matrix S and the contact matrix C, and fed through the force controller, transformations into the task frame, the inverse kinematic, and the position controller to the robot and its sensors; the forward kinematic closes the loop. T denotes the cycle time.]

Figure 1.3: Force controller as suggested in [KROEGER02]
some DOFs are force controlled, others position controlled, which constitutes the precondition for the skill primitive concept. To obtain the possibility of multi-sensor integration, the controller must be expandable. Common literature delivers several approaches to force control. Following Mason's concept [MASON83], each DOF can be assigned its own control scheme. Within this work, only position, force, and velocity control are presented. The force controller is taken from [KROEGER02], where various force control concepts are introduced; [MCKERROW91] also illustrates several force control approaches. The force controller shown in figure 1.3 uses the compliance selection matrix S, whose diagonal elements are one if a DOF is force controlled and zero if it is position controlled, to determine the control method for each DOF. As a result, the following equation holds:
\Delta\vec{x} = \bar{S} \cdot \Delta\vec{x}_{P0} + S \cdot C \cdot \vec{f} + S \cdot \bar{C} \cdot \vec{v}_0 \cdot T,
\qquad \bar{S} = I - S,\quad \bar{C} = I - C

{}^{robot}T_{task_{new}} = {}^{robot}T_{task_{old}} \cdot \mathrm{Trans}(\Delta x_0, \Delta x_1, \Delta x_2) \cdot \mathrm{RPY}(\Delta x_3, \Delta x_4, \Delta x_5)
Here, \vec{f} represents the force controller's output; the force controller acts as a spring, i.e. the proportional value is given in mm/N. \Delta\vec{x} is a Cartesian difference vector, {}^{robot}T_{task_{new}} is the homogeneous transformation from the robot base frame to the task frame, which is determined with respect to (w.r.t.) the robot's hand frame, and the contact matrix C determines whether a DOF is in contact with its environment or not. If a force controlled DOF is not in contact, the robot heads for contact with the corresponding speed of the speed vector \vec{v}_0. This rudimentary concept works, but two serious disadvantages appear:

• a mixture of position and force control is not supported; only force and velocity control can be combined

• velocities are reached jerkily, i.e. no speed ramps with given accelerations are applied

By using an on-line trajectory planner for position setpoint generation and for speed ramps, this concept can be improved.
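The per-DOF selection expressed by S and C can be sketched in code. This is an illustrative simplification of the control law above (it assumes the force controller output is already scaled to a position increment in mm), not the actual implementation, and all names are hypothetical:

```cpp
#include <array>

// Illustrative per-DOF evaluation of the hybrid control law: S and C
// hold the diagonals of the selection and contact matrices (1 or 0).
// Assumption for this sketch: the force controller output f is
// already scaled to a position increment in mm.
std::array<double, 6> hybridStep(const std::array<int, 6>& S,        // 1 = DOF is force controlled
                                 const std::array<int, 6>& C,        // 1 = DOF is in contact
                                 const std::array<double, 6>& dxP0,  // position increment per cycle
                                 const std::array<double, 6>& f,     // force controller output (mm)
                                 const std::array<double, 6>& v0,    // approach speed in mm/s
                                 double T)                           // cycle time in s
{
    std::array<double, 6> dx{};
    for (int i = 0; i < 6; ++i) {
        if (S[i] == 0)       dx[i] = dxP0[i];     // position controlled DOF
        else if (C[i] == 1)  dx[i] = f[i];        // force controlled DOF in contact
        else                 dx[i] = v0[i] * T;   // force controlled DOF approaching contact
    }
    return dx;
}
```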
To extend the system with new sensors, we need more than one selection matrix and, above all, more than one (force) controller. Each additional sensor requires its own controller. The installed controllers are numbered from 0 to k, where the index i denotes the number of the controller. A "1" in the diagonal of S_i selects the corresponding DOF to be controlled by controller i. For a valid specification, the following equation must hold:

I = \sum_{i=0}^{k} S_i ,
where I is the identity matrix. S_0 always determines all position controlled DOFs. For each additional sensor, i.e. for each additional controller, a new selection matrix is required. This concept is illustrated in the simplified scheme shown in figure 1.4. Controller 1 could be a force controller, and S_1 would then determine all force controlled DOFs. Controller 2 might be a position controller using a distance sensor for feedback. In the future, the system can also be extended by a computer vision system. A detailed overview of the realized hybrid controller is presented in section 5.4 on page 46.
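The condition that the selection matrices sum to the identity can be checked mechanically. A minimal sketch operating on the diagonals of the selection matrices (all names are illustrative):

```cpp
#include <array>
#include <vector>

// Checks the consistency condition I = sum of all S_i on the
// diagonals of the selection matrices S_0 ... S_k: every DOF must
// be claimed by exactly one controller.
bool selectionMatricesValid(const std::vector<std::array<int, 6>>& S) {
    for (int dof = 0; dof < 6; ++dof) {
        int sum = 0;
        for (const auto& Si : S)
            sum += Si[dof];
        if (sum != 1)
            return false;  // DOF unclaimed or claimed by several controllers
    }
    return true;
}
```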
1.3.3 Trajectory Planning
In this context, trajectory planning means that a path for the robot's hand (or task) frame is computed, i.e. the frame moves from one point in space to another. One important fact is that the planning system hides all details: the user only specifies a target position in some multidimensional space and, optionally, a target speed and a target acceleration. To reach this state, a maximum speed and, if desired, a maximum acceleration and a maximum jerk (the time derivative of acceleration) have to be specified. The computation of the whole course is the planning system's job; collision-free path-finding is not considered here. A one-dimensional example is shown in figure 1.5 on page 8. A
[Figure 1.4 shows the proposed hybrid control architecture: the trajectory planner and the controllers 1 to k, each gated by its selection matrix S0 to Sk and fed back through a transformation into the task frame, are summed and passed through the inverse kinematic to the joint position controller and the robot; the position sensors and the sensors 1 to k close the loops via the forward kinematic and a transformation into the hand frame.]

Figure 1.4: Proposition for a hybrid control architecture
maximum jerk of 31.25 mm/s³ is applied to reach a maximum speed of 10 mm/s and to cover a position difference of 20 mm. Common literature (e.g. [CRAIG89]) proposes several concepts to realize a multidimensional path planning system. The system might act in joint space or in Cartesian space. Some approaches for joint space interpolation are also usable for Cartesian space interpolation, but there are also some trajectory planning concepts that are designed for movements in Cartesian space. Interpolation with polynomials, circular interpolation, and spline interpolation are the fundamentals of the most popular approaches. Many of these concepts are off-line concepts, i.e. all sampling points are planned in advance. For this reason, the user has no influence during runtime, and if the target position changes because of a certain sensor event, the trajectory cannot be changed. This enormous disadvantage of off-line concepts is overcome by on-line trajectory planning approaches. Here, the user can change the target state during runtime, as is demanded by the skill primitive concept. The next sampling point is calculated every control cycle. Of course, this requires a higher system performance, especially for trajectories that are based on higher order polynomials. Trajectory planning is often explained for the one-dimensional case. For Cartesian trajectory planning, the six-dimensional space is considered, of course. But here, the distance to go, the maximum speed, the maximum acceleration, and the maximum jerk are usually different for each DOF, i.e. the maximum speeds and accelerations have to be adapted for all DOFs except the DOF that takes the longest time to reach the target state. Only this way can all DOFs reach the target simultaneously. Hence, first of all, the DOF with the greatest execution time has to be determined, and the new maximum speeds, accelerations,
[Figure 1.5 shows four plots over time (0 s to 3 s): jerk in mm/s³ or °/s³, acceleration in mm/s² or °/s², velocity in mm/s or °/s, and position in mm or °, respectively.]

Figure 1.5: Trajectory planning with a defined jerk, i.e. the time-dependent position behavior is a third order polynomial
decelerations and jerks for all other five DOFs have to be calculated afterwards. Basics
about trajectory planning are introduced in [OLOMSKI89] and in [MCKERROW91].
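The synchronization step described above can be sketched as follows. This is a deliberately simplified illustration that neglects acceleration and jerk phases (constant-speed motion only) and assumes strictly positive maximum speeds; the actually implemented planner is described in chapter 4:

```cpp
#include <algorithm>
#include <array>

// Simplified DOF synchronization sketch: the DOF that needs the
// longest time keeps its maximum speed, and all other maximum speeds
// are scaled down so that all six DOFs reach their targets
// simultaneously. Assumptions: constant-speed motion only, vmax > 0.
std::array<double, 6> synchronizeSpeeds(const std::array<double, 6>& dist,  // |distance to go| per DOF
                                        const std::array<double, 6>& vmax)  // maximum speed per DOF
{
    double tmax = 0.0;  // execution time of the slowest DOF
    for (int i = 0; i < 6; ++i)
        tmax = std::max(tmax, dist[i] / vmax[i]);

    std::array<double, 6> v{};
    for (int i = 0; i < 6; ++i)
        v[i] = (tmax > 0.0) ? dist[i] / tmax : 0.0;  // scaled speed per DOF
    return v;
}
```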
The accurate description of the implemented path planner can be found in chapter 4 on page 27.
1.4 Overview
This introductory chapter is supposed to deliver a short overview of the subjects and problems of this work. To prevent any confusion, chapter 2 explains the applied nomenclature and the quantities used. One of the most important parts is chapter 3, which gives a detailed description of a single skill primitive as currently implemented. Another part is the presentation of an appropriate trajectory planner, which is performed in chapter 4; the general approach of the on-line trajectory planner as well as its usage in joint space and in Cartesian space is described. The software architecture is another major part of this thesis, so chapter 5 provides a precise insight into the implemented programs. To demonstrate practical suitability, two example applications are introduced in chapter 6. Of course, the concept has to work in a real-time environment, but since the implementation became quite complex and the cycle time of the hybrid controller had to be increased, the results of time measurements are illustrated in chapter 7. Since the project is not completely finished by this work, some suggestions for future improvements are presented in chapter 8, which is followed by a short conclusion. Finally, the appendix contains a list of all transformations applied within the skill primitive controller and a list of all error messages with a short description. The very major part of this work is the implementation, followed by development and documentation. All files that have been created during this work are also available on a separate CD, whose contents are shown in appendix D.
2 About the Nomenclature within this Thesis
For a better understanding, the nomenclature, reference frames, and respective units used are introduced in this short chapter. In general, all matrices are printed in bold style, while vectors are labeled by an arrow. As shown in figure 2.1, a homogeneous coordinate transformation requires a reference frame and a target frame:

{}^{reference\ frame}T_{target\ frame} .
[Figure 2.1 shows the frame arrangement: the world frame, the robot base frame, the hand frame, the F/T sensor frame (1), and the task frame, connected by {}^{world}T_{robot} (fixed transformation), {}^{robot}T_{hand} (kinematic forward transformation), {}^{hand}T_{sensor1} (fixed transformation), and {}^{hand}T_{task} (user transformation).]

Figure 2.1: Homogeneous coordinate transformations
Hence, {}^{world}T_{hand} represents the homogeneous transformation from the world frame to the hand frame of the robot. A complete overview is illustrated in figure 2.1; the respective frame arrangement contains all important frames that are used within this work. Usually, the hand, sensor, and task frames are described w.r.t. the world frame, i.e. {}^{world}T_{hand}, {}^{world}T_{sensor}, and {}^{world}T_{task} instead of {}^{robot}T_{hand}, {}^{robot}T_{sensor}, and {}^{robot}T_{task}.
A description of positions, velocities, accelerations, and forces in Cartesian space requires six DOFs, so a six-dimensional vector is used for each quantity. These six values constitute the three translatory and the three rotatory DOFs of the corresponding reference frame. In the implementation, arrays containing six elements are used; the array index runs from zero, for the value of the translatory x direction, to five, for the rotatory z direction.
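As an illustration of this six-element convention, the following sketch expands such an array into a homogeneous 4×4 transform. It uses one common RPY convention, R = Rz(roll)·Ry(pitch)·Rx(yaw), purely as an assumption for the example; the convention actually used in this thesis is the one fixed in section 4.4.1:

```cpp
#include <array>
#include <cmath>

using Pose6 = std::array<double, 6>;
using Mat4  = std::array<std::array<double, 4>, 4>;

// Expands a six-element pose array (indices 0-2: translation in mm,
// indices 3-5: yaw, pitch, roll in degrees) into a homogeneous 4x4
// transform. The convention R = Rz(roll) * Ry(pitch) * Rx(yaw) is an
// assumption for this sketch only; see section 4.4.1.
Mat4 poseToTransform(const Pose6& p) {
    const double kDeg = 3.14159265358979323846 / 180.0;
    const double a = p[5] * kDeg;   // roll  (about z)
    const double b = p[4] * kDeg;   // pitch (about y)
    const double c = p[3] * kDeg;   // yaw   (about x)
    const double ca = std::cos(a), sa = std::sin(a);
    const double cb = std::cos(b), sb = std::sin(b);
    const double cc = std::cos(c), sc = std::sin(c);
    Mat4 T{};
    T[0] = { ca * cb, ca * sb * sc - sa * cc, ca * sb * cc + sa * sc, p[0] };
    T[1] = { sa * cb, sa * sb * sc + ca * cc, sa * sb * cc - ca * sc, p[1] };
    T[2] = { -sb,     cb * sc,                cb * cc,                p[2] };
    T[3] = { 0.0,     0.0,                    0.0,                    1.0 };
    return T;
}
```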
Magnitude / Symbol / Unit / Remarks:

Position: \vec{p} = (p_x, p_y, p_z, \varphi_x, \varphi_y, \varphi_z)^T with [\vec{p}] = (mm, mm, mm, °, °, °)^T.
The angles \varphi_x, \varphi_y, and \varphi_z are roll-pitch-yaw angles. There is always a duality between this six-tuple and a coordinate frame; the respective meanings are described in section 4.4.1 on page 36. This kind of transformation determination is also applied for the task frame, whose transformation is given by \vec{k} = (k_x, k_y, k_z, \kappa_x, \kappa_y, \kappa_z)^T.

Velocity: \vec{v} = (v_x, v_y, v_z, \omega_x, \omega_y, \omega_z)^T with [\vec{v}] = (mm/s, mm/s, mm/s, °/s, °/s, °/s)^T.

Acceleration: \vec{a} = (a_x, a_y, a_z, \alpha_x, \alpha_y, \alpha_z)^T with [\vec{a}] = (mm/s², mm/s², mm/s², °/s², °/s², °/s²)^T.
Time derivative of velocity and setpoint for the trajectory planner, i.e. used for increases of the absolute velocity value.

Deceleration: \vec{d} = (d_x, d_y, d_z, \delta_x, \delta_y, \delta_z)^T with [\vec{d}] = (mm/s², mm/s², mm/s², °/s², °/s², °/s²)^T.
Setpoint for the trajectory planner, i.e. used for decreases of the absolute velocity value.

Jerk: \vec{j} = (j_x, j_y, j_z, \iota_x, \iota_y, \iota_z)^T with [\vec{j}] = (mm/s³, mm/s³, mm/s³, °/s³, °/s³, °/s³)^T.
Setpoint for the trajectory planner, i.e. time derivative of acceleration and deceleration; not considered within this thesis.

Force/torque: \vec{F} = (F_x, F_y, F_z, \tau_x, \tau_y, \tau_z)^T with [\vec{F}] = (N, N, N, Nm, Nm, Nm)^T.
A six-tuple like this is also used to determine a force/torque threshold value \vec{r} = (r_x, r_y, r_z, \rho_x, \rho_y, \rho_z)^T.

Table 2.1: Used symbols and units
2 About the Nomenclature within this Thesis 12
Cartesian space vector = ( translatory x direction, translatory y direction, translatory z direction, yaw (rotatory x direction), pitch (rotatory y direction), roll (rotatory z direction) )^T
Generally, millimeters, degrees, and seconds are used for distances, angles, and times. For positions, there always exists a unique corresponding coordinate frame; the change from a six-dimensional position vector to a position frame and vice versa is discussed in section 4.4 on page 34. Table 2.1 provides a complete overview of the symbols and units used, but there is one exception: the table is only valid for movements in Cartesian space. When moving in joint space, all translatory quantities in millimeters have to be replaced by rotatory ones in degrees, and the vector component indexes change:
{}^{joint}\vec{p} = ({}^{joint}p_1, {}^{joint}p_2, {}^{joint}p_3, {}^{joint}p_4, {}^{joint}p_5, {}^{joint}p_6)^T

{}^{joint}\vec{v} = ({}^{joint}v_1, {}^{joint}v_2, {}^{joint}v_3, {}^{joint}v_4, {}^{joint}v_5, {}^{joint}v_6)^T

{}^{joint}\vec{a} = ({}^{joint}a_1, {}^{joint}a_2, {}^{joint}a_3, {}^{joint}a_4, {}^{joint}a_5, {}^{joint}a_6)^T
To handle the joint space of a robot with more than six joints, the dimension of the
vectors increases. Of course, it is meaningless to transform forces and torques measured
in Cartesian space into joint space.
Similar to the notation of frames, positions, velocities, and accelerations also require a
reference and a goal frame. But since vectors are applied here and we need to access
individual vector components, the following notation has been introduced:

${}^{reference}\vec{p}^{\,target} = ({}^{reference}p^{target}_x,\; {}^{reference}p^{target}_y,\; {}^{reference}p^{target}_z,\; {}^{reference}\varphi^{target}_x,\; {}^{reference}\varphi^{target}_y,\; {}^{reference}\varphi^{target}_z)^T$
For example, ${}^{world}p^{hand}_y$ represents the distance in y direction from the world frame to the hand
frame. In other words: ${}^{world}\vec{p}^{\,hand}$ corresponds to ${}^{world}T_{hand}$.
To calculate new values in a control cycle, we always refer to the old ones from the last
control cycle. To distinguish both, values from the last control cycle are underlined.
E.g. the task frame displacement w.r.t. the task frame of the last cycle is represented by ${}^{\underline{task}}T_{task}$, or in vector notation ${}^{\underline{task}}\vec{p}^{\,task}$, where the x component is denoted by ${}^{\underline{task}}p^{task}_x$.

$${}^{world}T_{task} = {}^{world}T_{\underline{task}} \cdot {}^{\underline{task}}T_{task}$$
The speed caused by this displacement is named ${}^{task}\vec{v}^{\,task}$, its x component
${}^{task}v^{task}_x$. To determine the respective acceleration value ${}^{task}\vec{a}^{\,task}$, we need the speed
value from the last cycle, i.e. ${}^{task}\underline{\vec{v}}^{\,task} = {}^{\underline{task}}\vec{v}^{\,task}$:

$${}^{task}\vec{a}^{\,task} = \frac{{}^{task}\vec{v}^{\,task} - {}^{task}\underline{\vec{v}}^{\,task}}{t_{cycle}}$$

Usually, the first notation ${}^{task}\vec{v}^{\,task}$ is used to keep lucidity. By the way: the example
from above refers to a movement w.r.t. the task frame in normal mode.
Regarding positions, an overlined symbol describes a movement from a reference frame
at the beginning of a skill primitive. E.g.

$${}^{task}\overline{p}^{\,task}_y$$

denotes the covered distance of the task frame since the beginning of the skill primitive
(compare to figure 3.4 on page 23). Position setpoints, i.e. desired positions,
are marked by angle brackets:

$${}^{world}\vec{p}^{\,<task>}$$

represents the task frame's target position w.r.t. the world frame, just as ${}^{world}T_{<task>}$
does.
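To illustrate the duality between a six-tuple and a coordinate frame described above, the following Python sketch builds the homogeneous transformation from the six values. It is purely illustrative (function names are not from the implementation); the roll-pitch-yaw convention is assumed to be successive rotations about z, y, and x, matching the RPY(ϕz, ϕy, ϕx) notation used in this thesis, with angles in degrees:

```python
import numpy as np

def rot(axis, angle_deg):
    """Elementary rotation matrix (angles in degrees, as used throughout the thesis)."""
    c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    m = {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
         "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]
    return np.array(m)

def pose_to_frame(p):
    """Build the 4x4 frame from the six-tuple (px, py, pz, phi_x, phi_y, phi_z):
    Trans(px, py, pz) * RPY(phi_z, phi_y, phi_x)."""
    px, py, pz, phi_x, phi_y, phi_z = p
    T = np.eye(4)
    # RPY(phi_z, phi_y, phi_x): rotate about z, then y, then x
    T[:3, :3] = rot("z", phi_z) @ rot("y", phi_y) @ rot("x", phi_x)
    T[:3, 3] = [px, py, pz]
    return T
```

The inverse direction (frame to six-tuple) requires the angle extraction discussed in 4.4 on page 34.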
3 Skill Primitives
As already introduced in 1.1, skill primitives are used to perform simple robot
movements based on sensor data. This chapter formally defines how the concept
is implemented. After a short overview, the specification of a single skill primitive
follows. For simplification, 3.3 and 3.4 use three-dimensional examples to illustrate the
functionality of the different possibilities to define a position setpoint and how to set up
an exit condition.
3.1 From Task Planning to Execution
Within this approach, a single skill primitive represents the interface between planning
systems and control (i.e. execution) systems. The demands are easy to formulate:
1. All skill primitive parameters have to be interpretable by the control program that
generates parameters for the hybrid controller
2. The planning system has to be able to determine every skill primitive parameter
Actually, another demand is to support all imaginable robot movements, but this item
could not be realized in this first approach (refer to 8.3 on page 94). The project's
long-term aim can be described as follows: a planning system uses CAD data to generate
the skill primitive net, which describes one single robot task. Such a net can be
decomposed into single skill primitives, whose contents are used to feed a hybrid control
system that operates the manipulator. Skill primitive nets are just a minor part of this
work; nevertheless, a quick overview shall be given. A simplified net that
depicts a robot task, which inserts a light bulb into a bayonet socket (compare 6.2 on
page 84), is shown in figure 3.1. Since the complete depiction of a sample skill primitive
net would look much too complex, a very simplified variant was chosen. Here, a single
skill primitive consists of a compliance frame C and a stop condition λ. The task frame or
center of compliance (CoC) is defined beforehand. C is a diagonal matrix whose entries contain
the setpoint for each DOF. Depending on the units, the controller (position, speed, or
force) is chosen. The stop condition is checked every control cycle, i.e. the manipulation
primitive ends depending on sensor values. If it becomes true, post branches decide
which skill primitive to execute next.
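The execution flow just described — run a primitive until its stop condition λ holds, then follow a post branch — can be modeled as a small sketch. The following Python fragment is a hypothetical, minimal model; the class and function names are illustrative and not part of the actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Sensor readings available to stop conditions, e.g. {"Fz": -22.0, "Tz": 0.0}
Sensors = Dict[str, float]

@dataclass
class SkillPrimitive:
    """Simplified node of a skill primitive net: compliance frame + stop condition."""
    name: str
    diag_C: Tuple[float, ...]              # setpoint per DOF; units select the controller
    stop: Callable[[Sensors], bool]        # lambda condition, evaluated every control cycle
    # post branches: (branch condition, name of the successor primitive)
    branches: List[Tuple[Callable[[Sensors], bool], str]] = field(default_factory=list)

def run_net(net: Dict[str, SkillPrimitive], start: str,
            read_sensors: Callable[[], Sensors]) -> List[str]:
    """Execute the net: run each primitive until its stop condition holds,
    then follow the first matching post branch (or terminate)."""
    executed, current = [], start
    while current is not None:
        sp = net[current]
        executed.append(sp.name)
        sensors = read_sensors()
        while not sp.stop(sensors):        # checked once per control cycle
            sensors = read_sensors()       # the hybrid controller would act here
        # The stop condition became true: pick the first matching post branch
        current = next((nxt for cond, nxt in sp.branches if cond(sensors)), None)
    return executed
```

In the real system, the body of the inner loop is the hybrid controller cycle; this sketch only shows the net traversal logic.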
3.2 Skill Primitive Specification
This section delivers a description of a single skill primitive. A detailed specification of
the implementation can be found in 5.4 on page 46. As already mentioned, within this
[Figure 3.1 shows the net with nodes (a)–(e); the legible node annotations are: diag C = (0 mm, 0 mm, −25 N, 0 rad, 0 rad, 0 rad); λ : (Fz ≤ −20 N); λ : (Tz ≤ −0.1 Nm); λ : (Fz ≥ 5 N); diag C = (0 N, 0 N, −15 mm, 0 Nm, 0 Nm, 0 Nm); λ : (((Fz ≤ −15 N) ∧ (Tz ≤ −0.2 Nm)) ∨ ((Fz ≥ −5 N) ∧ (Pz ≥ 3 mm) ∧ (Tz = 0))); diag C = (0 N, 0 N, −25 N, 0 Nm, 0 Nm, 0 Nm); diag C = (0 N, 0 N, −25 N, 0 Nm, 0 Nm, −0.5 Nm); branch condition (Fz ≤ −15 N) ∧ (Tz ≤ −0.2 Nm); "open gripper"; diag C = (0 N, 0 N, −25 N, 0 Nm, 0 Nm, −0.5 Nm); λ : (Fz ≤ −20 N); branch condition (Fz ≥ −5 N) ∧ (Pz ≥ 3 mm) ∧ (Tz = 0 Nm)]
Figure 3.1: A skill primitive net for a light bulb to be inserted into a bayonet socket
implementation, only force, speed, and position control is available, with the caveat
that the embedding of speed control is still a major point of discussion for stability
and real-time reasons (see 8.3 on page 94). A single skill primitive contains the following
data:
1. Reference frame can be:
• World frame
• Hand frame
• Task frame
• Sensor frame
• Joint space
Force control is only possible w.r.t. the hand, sensor, or task frame. All values from
(2) to (8) refer to this frame. The trajectory planner interpolates a path w.r.t.
this frame.
2. Compliance selection matrix S: a 6 × 6 diagonal matrix that determines which
control method is to be applied for each DOF:
• Position
• Velocity
• Force/torque
• Hold position
The individual matrices positionS, velocityS, and forceS are generated from this overall
matrix S.
3. Position setpoint, $\vec{p} \in \mathbb{R}^6$, is given in Cartesian coordinates w.r.t. the reference
frame. Exception: if movements are executed in joint space, the position setpoint
contains the desired joint positions in degrees. To determine the target position, the
following transformation is applied in Cartesian space:

$${}^{Reference}T_{Target} = Trans(p_x, p_y, p_z) \cdot RPY(\varphi_z, \varphi_y, \varphi_x)$$

If a position is not reached yet, the desired speed (5) is applied to approach the setpoint.
4. Force setpoint, $\vec{F} \in \mathbb{R}^6$, is always Cartesian and refers to the frame determined by
(1). Force control can only be applied w.r.t. the hand, sensor, or task frame. The
compliance selection matrix S determines all force controlled DOFs. The switch
from position control (or velocity control, respectively) does not occur until a certain
force/torque threshold value (8) is exceeded for this DOF. Position or velocity setpoints
have to be given w.r.t. the same reference frame as the force setpoint. If a DOF
is supposed to be force controlled but there is no environmental contact, the robot
heads for contact with the speed given by (5).
5. Velocity setpoint, $\vec{v} \in \mathbb{R}^6$, has three functions:

(a) Value for speed controlled DOFs

(b) Maximum speed to reach a position setpoint

(c) Maximum speed to head for contact (force control)

Except for joint space movements, this setpoint is given in Cartesian space w.r.t. the
reference frame from (1). The sign of each vector component is only relevant for
speed control; the signs are adapted for position and force control. All setpoints
have to refer to the same reference frame (1).
6. Acceleration, $\vec{a} \in \mathbb{R}^6$, is used for acceleration ramps of the speed given in (5), i.e.
for increases of the absolute speed value.

7. Deceleration, $\vec{d} \in \mathbb{R}^6$, is applied for defined deceleration ramps of the velocity (5), i.e.
for decreases of the absolute velocity value.
8. Force threshold values, $\vec{r} \in \mathbb{R}^6$: Force/torque control is only engaged if this
force/torque threshold value is exceeded and if the sign of the desired force/torque
equals that of the current value.
9. Tool control data: This interface has not been specified yet. It is supposed to
be used for any tool operation with any kind of tool. Here, a bidirectional interface
might be a smart solution, but this is part of a substantial to-do list. In the
current implementation, the commands for opening and closing the gripper
are sent from the user application straight to the respective controller, i.e. the skill
primitive controller is bypassed.
10. Task frame (center of compliance, CoC), $\vec{k} \in \mathbb{R}^6$, is a user-given transformation,
always given w.r.t. the hand frame. The format is the same as used for position
setpoints, i.e. the six given values are used to generate the transformation frame. In
the future, a task frame may also be given w.r.t. the world frame (see 8.3 on page 94
and 8.1 on page 93).
11. Stop condition is a Boolean expression containing various comparison terms that
can be combined. There are three different stop conditions:

(a) The system stop condition contains the minimum and maximum joint angles, the
maximum forces and torques, and a standard timeout value. All these values
are OR-combined.

(b) The user stop condition contains the user-given exit condition, which is sent with
every skill primitive.

(c) The skill primitive stop condition becomes true if all setpoints are reached (within an
ε-environment); for position controlled DOFs, the velocity must also equal
zero.
All three parts are OR-combined. If the whole exit condition becomes true, the current
state is taken as the new setpoint. The reference system determined in (1) persists in
any case. If the world frame or joint space is submitted in (1), the current position is
maintained. For the hand, sensor, and task frame, the current force/torque is used as the
new setpoint if a defined force/torque threshold value is exceeded; otherwise (i.e. if a
DOF is in free space), the current position is kept for the respective DOF.
12. Hold time, $t_{hold} \in \mathbb{R}$: in order for the skill primitive exit condition (11c) to become
true, all DOFs must have reached their setpoints for this certain time.
Further information about the declaration of setpoint values as well as about the three
kinds of exit condition can be found in the following sections.
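The twelve parameters above can be collected into one data record. The following Python sketch is purely illustrative (field names, the string encoding of frames and control modes, and the consistency check are assumptions, not the actual data format of the implementation):

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Vec6 = Tuple[float, float, float, float, float, float]

@dataclass
class SkillPrimitiveData:
    """Illustrative container for the twelve skill primitive parameters (1)-(12)."""
    reference_frame: str               # (1) "world" | "hand" | "task" | "sensor" | "joint"
    selection_modes: Tuple[str, ...]   # (2) per DOF: "position" | "velocity" | "force" | "hold"
    position_setpoint: Vec6            # (3) Cartesian pose or joint angles in degrees
    force_setpoint: Vec6               # (4) forces in N, torques in Nm
    velocity_setpoint: Vec6            # (5) speed values / transition speeds
    acceleration: Vec6                 # (6) for increases of the absolute speed value
    deceleration: Vec6                 # (7) for decreases of the absolute speed value
    force_thresholds: Vec6             # (8) engage force control only beyond these values
    tool_command: Optional[str]        # (9) not specified yet (e.g. "open_gripper")
    task_frame: Vec6                   # (10) CoC transformation w.r.t. the hand frame
    user_stop_condition: Callable[..., bool]  # (11b); (11a) and (11c) are generated
    hold_time: float                   # (12) time all DOFs must keep their setpoints

    def __post_init__(self):
        # Force control is only admissible w.r.t. the hand, sensor, or task frame.
        if "force" in self.selection_modes:
            assert self.reference_frame in ("hand", "sensor", "task")
```

Such a record is what the user application would send to the skill primitive controller each time a new primitive is commanded.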
3.3 Setpoint Specification
At first view, this section may seem a little confusing, but as will be seen, the presented
way of setpoint specification is extremely helpful for program developers. The
computation of any coordinate transformations disappears completely. Among the twelve
mentioned parameters, the user application sends a position vector, a velocity vector,
and a force vector to the skill primitive controller.

Nomenclature: ${}^{A\,B}\vec{p}^{\,<C>}$, where A is the frame the trajectory planner works in, B is the frame the setpoints are given in, and C is the frame that is supposed to reach the setpoint.
[Table 3.1 enumerates, for each reference system (joint space, world, task, sensor, and hand frame), the possible position setpoint designations in normal mode and in mixed mode, e.g. ${}^{joint}\vec{p}$, ${}^{world}\vec{p}^{\,<hand>}$, ${}^{task}\vec{p}^{\,<task>}$, ${}^{sensor}\vec{p}^{\,<sensor>}$, ${}^{hand}\vec{p}^{\,<hand>}$, ${}^{world*}\vec{p}^{\,<hand>}$, ${}^{world}\vec{p}^{\,<task>}$, and ${}^{world}\vec{p}^{\,<sensor>}$; equal combinations carry the same number so that redundant cases can be recognized easily. Hybrid control is only possible for the shaded cells, where all setpoints refer to the same frame.]

Table 3.1: About setpoint designations
Depending on the compliance selection matrix, the respective values are taken out of
these three vectors. For position control and force control, the speed vector acts as a transition
vector. There are several possibilities to determine a position setpoint. Generally,
we distinguish two modes: the normal mode and the mixed mode. Depending on the
desired robot motion, sometimes the first and sometimes the second one is chosen.
A complete overview is shown in table 3.1, which is discussed below.
World* denotes the world frame translatorily displaced into the hand frame.
Hybrid control becomes possible if all setpoints refer to the same reference frame, i.e.
A = B = C (shaded grey). All other cases (without shading) can only be used for
position control. The numbers in this table have no meaning for the time being;
all different possibilities are merely numbered so that redundant (i.e. equal)
cases can be recognized very easily. For a better understanding, table 3.2 details the meaning of the various
Figure 3.2: Two-dimensional example to bring the hand frame into a desired target position A, B, or C
possibilities to specify a position. This table is motivated by figure 3.2, which illustrates
the advantages of this position setpoint specification by a two-dimensional example. The
left part shows the hand frame (it does not matter whether the hand, the sensor, or the task
frame is considered) and three target positions, A, B, and C. The large coordinate system
represents the world frame; we assume that we know all three positions w.r.t. the world
frame. The easiest way to reach position A is to specify the hand frame target position in
world frame coordinates, (1, 4), but to reach B, it is more convenient to refer to the current hand
frame and determine the hand frame target position by (0, 5). Regarding position C
in the left example, one option is to enter (4.5, 2.5) w.r.t. the world frame, but
we can also define the target position w.r.t. the world frame translatorily displaced into
the current hand frame, i.e. (0, 1). To point out the advantage of this feature, consider the right
part of figure 3.2. The precondition is that none of the three positions
A, B, and C is known beforehand. We only roughly know the position of the grey part within the
work space, but we know the dimensions of this part. The target position is the right-angled
corner. Firstly, the hand frame heads for contact with the slanted edge (position A), the
contact is kept, and we move towards position B. Since we know the part dimensions, we
can determine the movement from B to C by (2, 0) w.r.t. the world frame translatorily
displaced into the hand frame.
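The world* frame used above — the world frame translatorily displaced into the current hand frame — is simple to construct: it takes the hand frame's origin but keeps the world frame's orientation. A minimal illustrative sketch, assuming frames are represented as 4×4 homogeneous matrices (function names are hypothetical):

```python
import numpy as np

def world_star(world_T_hand):
    """World frame translatorily displaced into the hand frame:
    orientation of the world frame, origin of the hand frame."""
    T = np.eye(4)
    T[:3, 3] = world_T_hand[:3, 3]   # take only the hand frame's translation
    return T

def target_in_world(world_T_hand, displacement):
    """Mixed-mode target: a displacement (dx, dy, dz) given w.r.t. world*.
    Since the world* axes are parallel to the world axes, the displacement
    simply adds to the world* origin in world coordinates."""
    T = world_star(world_T_hand).copy()
    T[:3, 3] += displacement
    return T
```

This is exactly why the B-to-C movement in figure 3.2 can be commanded as (2, 0) without knowing the absolute position: the translation is expressed along world axes, anchored at the current hand position.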
Another explanatory item is the matter of velocities within different reference frames.
Movements in free space, from A to B (see figure 3.3), also depend on the reference
frame. A movement executed with Cartesian interpolation always follows the same
path, but the execution time differs depending on the speed parameters and the reference
• ${}^{joint}\vec{p}$: The target hand frame position is given in joint space (all six values in degrees). Hybrid control is inhibited.

• ${}^{world}\vec{p}^{\,<hand>}$: The target hand frame position is given w.r.t. the world frame (normal mode for the world frame). Hybrid control is inhibited.

• ${}^{task}\vec{p}^{\,<task>}$: The target task frame position is given w.r.t. the current task frame (normal mode for the task frame). Hybrid control is enabled with the task frame as reference.

• ${}^{sensor}\vec{p}^{\,<sensor>}$: The target sensor frame position is given w.r.t. the current sensor frame (normal mode for the sensor frame). Hybrid control is enabled with the sensor frame as reference.

• ${}^{hand}\vec{p}^{\,<hand>}$: The target hand frame position is given w.r.t. the current hand frame (normal mode for the hand frame). Hybrid control is enabled with the hand frame as reference.

• ${}^{world*}\vec{p}^{\,<hand>}$: The target hand frame position is given w.r.t. the world frame translatorily displaced into the current hand frame (mixed mode for the world frame). Hybrid control is inhibited.

• ${}^{world}\vec{p}^{\,<task>}$: The target task frame position is given w.r.t. the world frame (mixed mode for the task frame). Hybrid control is inhibited.

• ${}^{world}\vec{p}^{\,<sensor>}$: The target sensor frame position is given w.r.t. the world frame (mixed mode for the sensor frame). Hybrid control is inhibited.

• ${}^{world}\vec{p}^{\,<hand>}$: The target hand frame position is given w.r.t. the world frame (mixed mode for the hand frame). Hybrid control is inhibited.
Table 3.2: Description of reference frames and target positions
Figure 3.3: About skill primitive parameters for movements in free space and different reference frames
frame. The target position can be determined as follows:
• Reference: task frame, normal mode, ${}^{task}\vec{p}^{\,<task>} = (0\,mm,\; 5\,mm)^T$

• Reference: task frame, mixed mode, ${}^{world}\vec{p}^{\,<task>} = (5\,mm,\; 6\,mm)^T$
To obtain simple values, we assume an infinite acceleration here. Depending on the
reference frame the following discrepancies occur:
• Reference: world frame, normal mode, ${}^{world}\vec{v}^{\,hand} = (1\,mm/s,\; 1\,mm/s)^T$, time: 4 seconds

• Reference: world frame, normal mode, ${}^{world}\vec{v}^{\,hand} = (2\,mm/s,\; 1\,mm/s)^T$, time: 3 seconds

• Reference: world frame, mixed mode, ${}^{world*}\vec{v}^{\,hand} = (1\,mm/s,\; 1\,mm/s)^T$, time: 4 seconds

• Reference: task frame, normal mode, ${}^{task}\vec{v}^{\,task} = (1\,mm/s,\; 1\,mm/s)^T$, time: 5 seconds

• Reference: task frame, mixed mode, ${}^{world}\vec{v}^{\,task} = (1\,mm/s,\; 1\,mm/s)^T$, time: 5 seconds
Of course, executing this movement in joint space would yield a completely different path,
incomparable to the above cases.
3.4 Stop Conditions
Another important part of a skill primitive is the stop condition. It is a Boolean combination of
comparisons. A single comparison contains a value from the hybrid controller and a real
number, e.g.

$$({}^{world}p^{hand}_y > 540.5\,mm)$$

means: if the hand frame exceeds 540.5 mm in y direction (w.r.t. the world frame),
the skill primitive ends and the current state is maintained. One term of the exit condition
can consist of a timeout value in milliseconds, which triggers the skill primitive's end
if the execution is not finished after this certain time. In many cases the utilization of
some functions is very helpful. The functions
• absolute value
• average value
• absolute value of the average value
• average value of the absolute value
are provided in this first approach. If any average function is embedded, the average
is calculated over a user-defined time interval given in milliseconds. Table 3.3 shows a complete
overview (without timeout). All comparisons can be nested by AND- and
OR-conjunctions, so that
[Table 3.3 lists the building blocks of one comparison term:

• Reference system: joint space, world frame, hand frame, sensor frame, task frame

• DOF: x, y, z, ϕx, ϕy, ϕz (Cartesian) or 1–6 (joint)

• Magnitude: x, v, a (normal mode); x, v, a (mixed mode); F; d/dt F

• Function: absolute value $|a|$, average value $\overline{a}$, absolute value of the average $|\overline{a}|$, average value of the absolute value $\overline{|a|}$, or no function

• Operator: $>$, $<$, $\ge$, $\le$, $=_\varepsilon$ (equal within ε)]
Table 3.3: Elements of one comparison term within an exit condition
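The nested AND/OR structure of such comparison terms can be modeled as a small expression tree. The following Python fragment is an illustrative sketch (the class names and the string encoding of magnitudes are assumptions, not the actual implementation):

```python
from dataclasses import dataclass
from typing import Dict, Sequence, Union

@dataclass
class Comparison:
    """One comparison term: a hybrid controller magnitude vs. a real number."""
    magnitude: str      # key into the controller values, e.g. "task_Fz"
    operator: str       # ">", "<", ">=", "<="
    threshold: float

    def holds(self, values: Dict[str, float]) -> bool:
        v = values[self.magnitude]
        return {">": v > self.threshold, "<": v < self.threshold,
                ">=": v >= self.threshold, "<=": v <= self.threshold}[self.operator]

@dataclass
class BoolNode:
    """AND/OR node over nested comparisons or further nodes."""
    op: str                                            # "and" | "or"
    children: Sequence[Union["BoolNode", Comparison]]

    def holds(self, values: Dict[str, float]) -> bool:
        results = (c.holds(values) for c in self.children)
        return all(results) if self.op == "and" else any(results)
```

Evaluating the tree root once per control cycle against the current controller values yields the Boolean result of the exit condition.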
```
         Normal mode              Mixed mode
         x   v   a   F  (d/dt)F   x   v   a
Joint    ✓   ✓   ✓   ✗     ✗      +   +   +
World    ✓   ✓   ✓   ✗     ✗      ✓   +   +
Hand     ✓   ✓   ✓   ✓     ✓      +   +   +
Sensor   ✓   ✓   ✓   ✓     ✓      ✓   ✓   ✓
Task     ✓   ✓   ✓   ✓     ✓      ✓   ✓   ✓

✓ possible   ✗ impossible   + redundant
```
Table 3.4: Possible combinations to specify a comparison term of an exit condition
$$\Big( \big( {}^{task}\overline{F}_z^{\,20\,ms} < -50\,N \big) \vee \big( ({}^{hand}\overline{p}^{\,hand}_x > 40\,mm) \wedge (|{}^{world}\underline{v}^{\,task}_x| > 75\,mm/s) \big) \vee (t \ge 5000\,ms) \Big)$$

is an example of a common exit condition. If the average value over 20 milliseconds of
the z-force in the task frame becomes less than -50N, a timeout of 5000ms is reached,
or the movement in x-direction of the hand frame exceeds 40mm and the absolute value
of the task frame speed in x-direction w.r.t. the world frame (mixed mode) of the last
cycle becomes greater than 75mm/s, the exit condition becomes true. Table 3.5 shows
the respective reference frames for the normal and for the mixed mode, whereas table
3.6 delivers a detailed listing of the magnitudes provided by the hybrid controller. There
are two important points to consider. Figure 3.4 illustrates a skill primitive in two-dimensional
space. The compliance selection matrix determines that all three DOFs are
force/torque controlled: $F_x = 0\,N$, $F_y = -5\,N$, and $\tau_z = 0\,Nm$. The stop condition is
$({}^{task}\overline{p}^{\,task}_y > 80\,mm)$. On the left hand side of each part, the world frame is plotted. A
speed vector as well as an acceleration vector are given to enable the heading for contact
Figure 3.4: Example for a force controlled task frame movement in two-dimensional space. Setpoints: $F_x = 0\,N$, $F_y = -5\,N$, and $\tau_z = 0\,Nm$; stop condition: $({}^{task}\overline{p}^{\,task}_y > 80\,mm)$
```
Reference frame   Normal mode   Mixed mode
World frame       WF            WF*
Hand frame        HF            WF
Task frame        TF            WF
Sensor frame      SF            WF
Joint space       JS            JS

JS  joint space        HF  hand frame      TF  task frame
SF  sensor frame       WF  world frame
WF* world frame translatorily displaced into the hand frame
```

Table 3.5: Reference frames in normal and in mixed mode
in y-direction. The position in x-direction and ϕz-direction is kept, because the force
setpoint and the torque setpoint are zero, i.e. the setpoint is already reached while resting
in free space, so these DOFs do not have to head for contact. Because of the
negative force setpoint in y-direction, the robot heads for contact in positive y-direction
(actio = reactio). When the robot starts moving, i.e. the hand frame accelerates with the
given acceleration to search for contact in y-direction, the manipulator covers a certain
distance per cycle. E.g. after a distance of 20 millimeters in y-direction (2), the exit
condition compares 20 millimeters to 80. In part 3, the hand frame gets in contact with
an obstacle. A torque establishes and the torque controller changes the orientation (4).
Once the torque is zero again, the force in y-direction disappears and the movement in
The columns list the task, hand, sensor, world, and joint reference frame, respectively:

• Position (normal): ${}^{task}\vec{p}^{\,task}$, ${}^{hand}\vec{p}^{\,hand}$, ${}^{sensor}\vec{p}^{\,sensor}$, ${}^{world}\vec{p}^{\,hand}$, ${}^{joint}\vec{p}$

• Position (mixed): ${}^{world}\vec{p}^{\,task}$, ${}^{world}\vec{p}^{\,hand}$, ${}^{world}\vec{p}^{\,sensor}$, ${}^{world*}\vec{p}^{\,hand}$, ${}^{joint}\vec{p}$

• Velocity (normal): ${}^{task}\vec{v}^{\,task}$, ${}^{hand}\vec{v}^{\,hand}$, ${}^{sensor}\vec{v}^{\,sensor}$, ${}^{world}\vec{v}^{\,hand}$, ${}^{joint}\vec{v}$

• Velocity (mixed): ${}^{world}\vec{v}^{\,task}$, ${}^{world}\vec{v}^{\,hand}$, ${}^{world}\vec{v}^{\,sensor}$, ${}^{world}\vec{v}^{\,hand}$, ${}^{joint}\vec{v}$

• Acceleration (normal): ${}^{task}\vec{a}^{\,task}$, ${}^{hand}\vec{a}^{\,hand}$, ${}^{sensor}\vec{a}^{\,sensor}$, ${}^{world}\vec{a}^{\,hand}$, ${}^{joint}\vec{a}$

• Acceleration (mixed): ${}^{world}\vec{a}^{\,task}$, ${}^{world}\vec{a}^{\,hand}$, ${}^{world}\vec{a}^{\,sensor}$, ${}^{world}\vec{a}^{\,hand}$, ${}^{joint}\vec{a}$

• Force/torque: ${}^{task}\vec{F}$, ${}^{hand}\vec{F}$, ${}^{sensor}\vec{F}$, –, –

• Time derivative force/torque: $\frac{d}{dt}({}^{task}\vec{F})$, $\frac{d}{dt}({}^{hand}\vec{F})$, $\frac{d}{dt}({}^{sensor}\vec{F})$, –, –

Table 3.6: Magnitudes provided by the hybrid controller
y-direction continues. Regarding the exit condition, there are still 50 millimeters to
go until it becomes true. Because of the change of orientation, the point in space where
the stop condition ends the skill primitive has changed. When the task frame movement
has covered a distance of 80 millimeters w.r.t. the task frame at the beginning of the skill
primitive, ${}^{world}\overline{T}_{task}$, the robot keeps this position, i.e. it does not stop jerkily. The task
frame decelerates with the respective values of the deceleration vector and accelerates
again to reach the desired point in space where the exit condition became true, where
it finally stops. Another possibility to obtain the same movement is to specify the stop
condition w.r.t. the real world frame, but then the absolute position value must be
known. The stop point for a second possibility to interpret the stop condition (compare
to figure 8.3 on page 95) is marked by the blank circle. Here, the stop condition rules
w.r.t. the task frame ${}^{task}\overline{T}_{task}$ at the beginning of the skill primitive.
Figure 3.5: About exit conditions that contain velocity comparisons w.r.t. the world frame or w.r.t. the task (hand, or sensor) frame
With the hand frame, sensor frame, or task frame as reference frame within a
stop condition, the comparisons are applied relative to the values of the hand, sensor, or task
frame at the start of the skill primitive.
The second point to be considered concerns exit conditions which compare velocities.
The hand frame in figure 3.5 moves towards the target position A. If the skill primitive is
supposed to be stopped when a certain speed is exceeded, the following possibilities turn
up:

• $({}^{world}v^{hand}_x > v_{limit})$

• $({}^{task}v^{task}_y > v_{limit})$

• $({}^{world*}v^{hand}_x > v_{limit})$

• $({}^{world}v^{task}_x > v_{limit})$

All exit conditions are equivalent for this special case. The same matter appears when
setting up the parameters for a skill primitive, as shown above on page 20. Regarding
the normal mode, an important difference between stop conditions containing position
comparisons and ones containing velocity or acceleration comparisons is that positions of
the hand frame, task frame, and sensor frame (${}^{hand}\overline{T}_{hand}$, ${}^{task}\overline{T}_{task}$, ${}^{sensor}\overline{T}_{sensor}$, ${}^{world*}\overline{T}_{world*}$)
are compared to the covered distances w.r.t. the respective positions at the beginning of
the skill primitive (${}^{world}T_{hand}$, ${}^{world}T_{task}$, ${}^{world}T_{sensor}$, ${}^{world}T_{world*}$), while velocities and
accelerations w.r.t. the same frame are always determined w.r.t. the last cycle's position,
of course (${}^{hand}\vec{v}^{\,hand}$, ${}^{task}\vec{v}^{\,task}$, ${}^{sensor}\vec{v}^{\,sensor}$, ${}^{world*}\vec{v}^{\,world*}$).
As already mentioned in 3.2 on page 17, there are three different exit conditions. Here,
only the user exit condition has been discussed so far. The system exit condition is generated
during the initialization phase of the skill primitive controller. It contains the
maximum and the minimum joint values for each joint, i.e. if one robot joint exceeds
a certain value, the skill primitive is stopped at the current position. To protect the
sensitive force/torque sensor from damage, the values for the maximum loads of each
DOF are also part of the exit condition. I.e. if one maximum force/torque value in the
sensor frame is exceeded, the force controller engages, and the current forces and torques are
used as the new setpoint if the hand, sensor, or task frame is the reference frame. In case
the skill primitive was defined in joint space or in world frame coordinates, the current
position is kept. Since it is possible to set up skill primitives whose setpoints are unreachable,
e.g. a desired position in free space with a velocity vector containing only zeros, a
standard timeout value must be introduced, i.e. if the execution of a single skill
primitive takes more than this time value, the skill primitive ends. The third exit condition,
named skill primitive exit condition, contains all setpoints of the skill primitive,
i.e. if all setpoints determined by the compliance selection matrix S of the skill primitive
are reached (within an ε-environment), the skill primitive also ends. If a DOF is supposed
to be position controlled, the speed also has to be zero, for the case that the
target position is reached but the robot still has a certain velocity. Here, the trajectory
planner crosses the target position, decelerates, and accelerates again to reach the target.
This exit condition is generated anew for every sent skill primitive. The system and
the skill primitive stop conditions as well as their generation are detailed in 5.4 on page 46.
The most significant reason for this complicated concept with normal mode and mixed
mode is that the easiest way for a user to enable hybrid control is to specify setpoints w.r.t.
the current hand or task frame, respectively, while exit conditions are often specified
in world frame coordinates. Hence, this initially confusing method simplifies the
handling for the user. If a skill primitive net is generated automatically, some features
might be neglected (e.g. the mixed mode), but then the planning system must
compute complex transformations. Another point, which is not categorically necessary,
is the movement in sensor frame coordinates. The embedding of this feature has two
reasons: firstly, the original system from [KROEGER02] used the sensor frame as hand
frame, i.e. the Denavit-Hartenberg parameters were changed. This was changed back
within this work. The second reason is the major one: the force/torque sensor must
be protected from overload. If the forces and torques are measured in some task frame, the
values can increase rapidly, caused just by a slight touch, and the system would conclude that
the sensor is overloaded. In reality, the high values are caused by the transformation, not
by overload. I.e. the system also has to calculate the values for the sensor frame and
compare these values with the maximum ones. Because of this transformation and the
first reason discussed above, the general usage of the sensor frame was introduced.
4 Trajectory Planning
This chapter gives a detailed overview of the functionality of the implemented trajectory
planner.
4.1 Introduction
$$\vec{p}_{start} = (-150,\; 30,\; 80)^T\,mm \qquad \vec{p}_{target} = (250,\; 210,\; -190)^T\,mm$$

$$\vec{v}_{max} = (100,\; 200,\; 90)^T\,mm/s \qquad \vec{a}_{max} = (50,\; 80,\; 90)^T\,mm/s^2$$

[Two columns of speed (v in mm/s) and position (s in mm) plots over time (t in s) for the three DOFs.]

Figure 4.1: Left column: speeds and positions over time, each DOF takes its shortest time; right column: speeds and positions over time, all DOFs finish simultaneously
As already introduced in 1.3.3 on page 6, a trajectory planner computes the
trajectory in multidimensional space which describes the desired motion of a manipulator.
The aim of a path planning concept is to provide a simple user interface, i.e. the
user should not be required to write down complicated functions of time and space to
specify the task. Several approaches can be found in the literature; most of them are based
on off-line concepts, i.e. the trajectory is computed beforehand. Since a skill primitive's exit
condition can become true arbitrarily within any cycle, i.e. in the next cycle another
skill primitive rules, the planned trajectory must be changed ad hoc. To provide this
key feature, an on-line trajectory planning concept must be implemented. At run time,
only the sampling point of the next desired robot position is computed in terms of $\vec{p}$
and $\vec{v}$. Hence, only a concept containing second order functions for the computed path
is implemented. A much better system would be a trajectory planner that computes
second order functions for velocities, i.e. third order polynomials for the calculated path.
An example was already introduced in figure 1.5 on page 8. There, acceleration and deceleration
with a defined jerk would be enabled. Most of the approaches assume that
the target velocity equals zero. The next step for an improved approach would be to
specify a certain target velocity. Thus, continuous paths containing specified via points could
be followed. But since the trajectory computation also takes a certain time, for this
first approach only a second order planning system that accelerates with defined values
has been realized. The target speed is always zero; the start velocity is arbitrary.
There are several possibilities to generate a certain speed profile from the initial
speed to zero. One individual scheme is presented in the next section. The realized
concept uses the same trajectory planner for joint space as well as for Cartesian space.
Section 4.2 describes the concept of the provided joint space trajectory planner, while 4.4
describes the usage of the same trajectory planner for Cartesian space. Regarding the
generation of velocity profiles with defined acceleration values, section 4.3 introduces the
implemented trajectory planning concept for speed controlled DOFs.
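With second order profiles as discussed above, the execution time of one DOF follows from a trapezoid-shaped velocity function, or a triangle-shaped one if the maximum speed cannot be reached. The following sketch illustrates this calculation under the simplifying assumption that the start and target speeds are zero; it is an illustration of the principle, not the thesis' actual implementation, which additionally handles nonzero start speeds:

```python
import math

def execution_time_1dof(distance, v_max, a_max, d_max):
    """Execution time for one DOF: accelerate with a_max, cruise at v_max,
    decelerate with d_max (all given positive); start and target speed are zero."""
    s = abs(distance)
    s_acc = v_max ** 2 / (2.0 * a_max)   # distance to accelerate from 0 to v_max
    s_dec = v_max ** 2 / (2.0 * d_max)   # distance to decelerate from v_max to 0
    if s >= s_acc + s_dec:
        # Trapezoid-shaped speed function: a cruise phase at v_max exists.
        s_cruise = s - s_acc - s_dec
        return v_max / a_max + s_cruise / v_max + v_max / d_max
    # Triangle-shaped speed function: the peak speed stays below v_max.
    v_peak = math.sqrt(2.0 * s * a_max * d_max / (a_max + d_max))
    return v_peak / a_max + v_peak / d_max
```

Synchronizing all DOFs then means computing this time per DOF, taking the maximum, and stretching the faster DOFs' profiles accordingly, as shown in the right column of figure 4.1.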
4.2 Online Trajectory Planning
The on-line trajectory planner’s job is to calculate a new six dimensional position. Its
input magnitudes are the current position vector �pstart, the target position vector �ptarget,
the current speed vector �v, the maximum speed vector �vmax, the maximum acceleration
vector �amax, the maximum deceleration vector �dmax and a six dimensional selection vector
that determines all position controlled DOFs. Output values, which are sent to the
hybrid controller, are the next desired position and the next applied velocity vector. One
occurring problem is, that all DOFs have to reach the target state simultaneously as
demonstrated in figure 4.1. Here, only a three-DOF system is considered. On the left
hand side a trajectory was planned that every DOF needs the shortest possible time
and the right hand side lets all DOFs finish execution simultaneously. In this simple
example, start and stop speed are zero and the jerk is infinite. In practice, at least the
start speed is not supposed to be zero and, depending on the purpose, the target speed
differs from zero, when applying via-trajectories, e.g. movements from A via B to C. To
comply with these demands, the execution time for the DOF that takes the longest time
4 Trajectory Planning 29
Figure 4.2: Scheme to calculate the execution time for a single DOF
to reach the target position has to be calculated as the first step of the trajectory planning algorithm. To prevent any confusion caused by signs, some conventions are made: acceleration values, deceleration values, and maximum speed values are always positive; position values, position differences, and velocities are signed. Since there are several cases to distinguish, a function to calculate the time until one single DOF has reached the target position was implemented. An overall scheme of this function is shown in figure 4.2 and is described in the following. After the position difference is calculated, the algorithm checks whether the DOF moves towards the target position, i.e. whether the sign of v equals the sign of p_remain. If the DOF moves away from the target, it has to decelerate with the given deceleration value until the speed is zero. The required time and the resulting position difference are computed as t_pre-deceleration and p_pre-deceleration and are considered in the following calculations.
t_pre-deceleration = |v| / d_max          (decelerate to zero)

p_pre-deceleration = sign(v) · v² / (2 · d_max)
If this branch is taken, any further computations are applied with a velocity value of zero. If the DOF moves towards the target position (right branch in figure 4.2), we have to check whether the maximum speed is exceeded. In this case, a deceleration to the given maximum speed has to be applied first. The required time and the position difference until this speed is reached are computed as well and considered in further calculations.
t_pre-deceleration = (|v| − v_max) / d_max          (decelerate to v_max)

p_pre-deceleration = sign(v) · (v² − v_max²) / (2 · d_max)
For any subsequent calculations in this branch, a current speed v of v_max is applied. Now, the velocity function can only be triangle- or trapezoid-shaped. The remaining distance is set to

p_remain = p_start + p_pre-deceleration − p_target .
Afterwards, the distance needed to accelerate to the maximum speed and the distance needed to decelerate from maximum speed to zero are calculated:

p_acceleration = (2 · |v| · (v_max − |v|) + (v_max − |v|)²) / (2 · a_max)

p_deceleration = v_max² / (2 · d_max)
The constant-speed distance p_const = p_remain − p_acceleration − p_deceleration is not necessarily positive. A positive sign of p_const indicates that the maximum speed can be reached for this DOF, i.e. the speed function is trapezoid-shaped and the remaining time to reach the target position is

t_execution = (v_max − |v|) / a_max + |p_const| / v_max + v_max / d_max + t_pre-deceleration          (trapezoid case)
where t_pre-deceleration represents the time for any prior deceleration process, either to reach zero speed or to reach the maximum speed. But if p_const is negative, the maximum speed cannot be reached, i.e. the velocity function is triangle-shaped. Therefore, an alternative maximum velocity v'_max is calculated and used to compute the remaining time t_execution:

v'_max = sqrt( (v² / (2 · a_max) + |p_remain|) · (2 · a_max · d_max) / (a_max + d_max) )

t_execution = (v'_max − |v|) / a_max + v'_max / d_max + t_pre-deceleration          (triangle case)
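Put together, the scheme of figure 4.2 can be condensed into a short routine. The following C++ fragment is only an illustrative sketch of the described case distinction, not the thesis implementation; the function name and signature are made up, and the sketch assumes the DOF can still stop within the remaining distance:

```cpp
#include <cmath>

// Illustrative sketch of the execution-time computation for one
// position-controlled DOF (scheme of figure 4.2). Conventions as in the
// text: vmax, amax, dmax are positive, positions and v are signed.
double ExecutionTime(double pStart, double pTarget, double v,
                     double vmax, double amax, double dmax)
{
    double tPre = 0.0;                 // time spent in a prior deceleration
    double d = pTarget - pStart;       // signed remaining distance

    if (v != 0.0 && v * d < 0.0)       // moving away from the target:
    {                                  // decelerate to zero speed first
        tPre = std::fabs(v) / dmax;
        d -= (v > 0.0 ? 1.0 : -1.0) * v * v / (2.0 * dmax);
        v = 0.0;
    }
    else if (std::fabs(v) > vmax)      // too fast: decelerate to vmax first
    {
        tPre = (std::fabs(v) - vmax) / dmax;
        d -= (v > 0.0 ? 1.0 : -1.0) * (v * v - vmax * vmax) / (2.0 * dmax);
        v = (v > 0.0 ? vmax : -vmax);
    }

    double pAcc   = (vmax * vmax - v * v) / (2.0 * amax); // accelerate to vmax
    double pDec   = vmax * vmax / (2.0 * dmax);           // decelerate to zero
    double pConst = std::fabs(d) - pAcc - pDec;

    if (pConst > 0.0)                  // trapezoid-shaped speed profile
        return tPre + (vmax - std::fabs(v)) / amax
                    + pConst / vmax + vmax / dmax;

    // triangle-shaped profile: vmax cannot be reached, use v'max instead
    double vPeak = std::sqrt((v * v / (2.0 * amax) + std::fabs(d))
                             * 2.0 * amax * dmax / (amax + dmax));
    return tPre + (vPeak - std::fabs(v)) / amax + vPeak / dmax;
}
```

For example, a DOF at rest that has to travel 200 units with v_max = 10 and a_max = d_max = 1 yields a trapezoid profile with an execution time of 30 time units.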
Figure 4.3: Flow chart to determine the trajectory planning phase for one single DOF
The procedure of figure 4.3 is applied to all DOFs that are supposed to be position controlled. Here, the current phase of a DOF is determined. The following phases are distinguished:

1. deceleration with a recalculated deceleration value to reach the target position exactly

2. deceleration with the given maximum deceleration value

3. ready, the DOF has already reached the target position, velocity equals zero

4. the maximum speed of this DOF has to be adapted to let all DOFs reach the target simultaneously
Phase 3 is the easiest one: if a DOF has already reached the target position at zero speed, the trajectory planning process is finished. Phase 2 is applied for decelerations to zero speed, which occur if a DOF moves in the reverse direction or if the DOF overshoots the target position. If the DOF can decelerate to the target position within the given deceleration value range, phase 1 is used. A new value for the deceleration

d'_max = v² / (2 · p_remain)

is calculated to ensure that the DOF reaches the target position very precisely. If the given deceleration value were applied, the manipulator would slow down too early or would overshoot. This procedure becomes necessary since the algorithm works in a discrete and quantized environment. If there is enough space to decelerate, a new maximum velocity has to be calculated to ensure that all DOFs reach their target synchronously.
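As a small illustration, the recalculated exact-stop deceleration could look as follows in C++; the helper name is made up, and the sketch assumes a non-zero remaining distance and a DOF that moves towards the target:

```cpp
#include <cmath>

// Sketch: deceleration value that lets a DOF stop exactly on the target
// (phase 1 of figure 4.3): d'max = v^2 / (2 * p_remain).
// Assumes pRemain != 0; v and pRemain point in the same direction.
double ExactStopDeceleration(double v, double pRemain)
{
    return v * v / (2.0 * std::fabs(pRemain));
}
```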
By running through the branches of figure 4.3, the algorithm to calculate the execution time of one DOF is also embedded. By comparing all execution time values, the DOF that would take the longest time t_max to reach the target position is detected. As a reminder: here, the maximum speed, acceleration, and deceleration values are used to determine the execution times. The respective time value defines the point of time at which all DOFs have to reach their target position. All other DOFs, except the one with the longest execution time, have to be recalculated as demonstrated in the following.

If a DOF is in phase 1, i.e. it is supposed to reach the target position exactly, the new velocity and the new position are calculated by applying the respective deceleration value d'_max. Because of discretization and quantization, the target position might nevertheless be overshot due to numerical inaccuracies. In this case, the target position is set exactly and the speed is set to zero, i.e. within the next control cycle the respective DOF turns to the "ready" phase.

DOFs in phase 2 just apply the given deceleration value d_max to compute the next position and the next velocity. DOFs in phase 3 (ready) simply set the velocity to zero and the next position to the target one. The most sophisticated phase is phase 4, where a new velocity value must be calculated. Several approaches are possible for this part of the computation; one of them was already illustrated in figure 4.1 on page 27. Here, the time-dependent velocity behavior is trapezoid-shaped, i.e. the maximum velocity is recalculated to ensure that all DOFs reach the target simultaneously. Alternatively, a triangle shape can be considered; in this case, the maximum acceleration and deceleration values have to be adapted. The next part of this document demonstrates how to adapt the velocity for all phase 4 DOFs. First, we have to check whether a phase 4 DOF needs to accelerate, to decelerate, or to maintain its current velocity. If
Figure 4.4: How to apply the next acceleration or deceleration step for a single DOF
v² / (2 · d_max) + |v| · (t_max − |v| / d_max) < |p_remain|
becomes true, the DOF has to be accelerated. The new maximum speed can then be calculated as

v'_max = (|v| · d_max + t_max · a_max · d_max) / (a_max + d_max)
         − sqrt( ((|v| · d_max + t_max · a_max · d_max) / (a_max + d_max))² − (d_max · v² + 2 · |p_remain| · a_max · d_max) / (a_max + d_max) )          (acceleration case)

For the DOF that provided t_max, this calculation yields v'_max = v_max. If the DOF has to decelerate, v'_max can be computed by

v'_max = (|p_remain| · d_max − v² / 2) / (t_max · d_max − |v|) .          (deceleration case)
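The two formulas can be read as one small decision routine. The following C++ sketch works on magnitudes only (all inputs non-negative) and uses made-up names; it assumes valid planner inputs, i.e. a non-negative radicand and t_max · d_max ≠ |v|:

```cpp
#include <cmath>

// Sketch of the phase-4 synchronization step: decide whether a DOF must
// accelerate or decelerate and compute the adapted peak speed v'max so
// that it arrives after exactly tMax seconds.
double AdaptedPeakSpeed(double v, double pRemain, double tMax,
                        double amax, double dmax)
{
    // distance covered when keeping v and braking at the last moment
    double reachable = v * v / (2.0 * dmax) + v * (tMax - v / dmax);

    if (reachable < pRemain)           // too slow: acceleration case
    {
        double q = (v * dmax + tMax * amax * dmax) / (amax + dmax);
        return q - std::sqrt(q * q - (dmax * v * v
                     + 2.0 * pRemain * amax * dmax) / (amax + dmax));
    }
    // fast enough or too fast: deceleration case
    return (pRemain * dmax - 0.5 * v * v) / (tMax * dmax - v);
}
```

For instance, a DOF at rest with 10 units to travel, t_max = 7, and a_max = d_max = 1 has to accelerate to a peak speed of 2.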
All phase 4 DOFs are treated as shown above. Once the new maximum speed value is calculated, the acceleration or deceleration process is initiated as shown for a single DOF in figure 4.4. Here, overshooting is prevented; the newly calculated maximum speed is reached exactly. v_next is the velocity value for the current control cycle. The maximum difference to the velocity value of the last cycle is the product of cycle time and acceleration value or deceleration value, respectively.
The whole trajectory planning concept must be self-contained and consistent. Experimental results can be found in section 6.1 on page 83.
4.3 Trajectory Planning for Velocity-Controlled DOFs

Another kind of trajectory planner is required to provide speed control. Actually, it is still position control, but speed ramps must be calculated to reach the desired target speed v_target. Figure 4.5 shows a flow chart that covers all occurring cases. If v_target is not reached yet, the algorithm checks whether the respective DOF moves in the wrong direction. In this case, the DOF has to be decelerated to zero speed with the given deceleration value d_max. If the sign of the speed value changes, i.e. the DOF moves in the right direction, we have to check whether the target speed is already exceeded (possible for very low target velocities). Depending on this, either the calculated speed value v_next or the actual target velocity value is applied. If the initial velocity has the correct sign, it has to be checked whether the DOF is slower or faster than the desired velocity. Depending on this, the DOF is either accelerated or decelerated. If the computed value overshoots the setpoint, the target speed is set.

A part of this algorithm was already used to calculate the next speed value for the position trajectory planner, as shown in figure 4.4.
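The per-cycle velocity update of figures 4.4 and 4.5 can be sketched roughly as below. All names are illustrative, and the uniform clamping to the setpoint is a simplification of this sketch rather than the exact implemented branching:

```cpp
#include <cmath>

// Sketch of one control cycle of the velocity ramp: step the current
// speed v towards vTarget without overshooting it. amax and dmax are
// positive, cycleTime is the control period.
double NextVelocity(double v, double vTarget, double amax, double dmax,
                    double cycleTime)
{
    if (v == vTarget)
        return vTarget;                         // setpoint already reached

    bool wrongDirection = (v * vTarget < 0.0);  // moving oppositely
    bool tooFast = !wrongDirection && std::fabs(v) > std::fabs(vTarget);

    // brake with dmax when moving wrongly or too fast, else speed up
    double step = (wrongDirection || tooFast) ? dmax * cycleTime
                                              : amax * cycleTime;
    double vNext = (vTarget > v) ? v + step : v - step;

    // prevent overshooting the setpoint: clamp exactly to vTarget
    if ((vTarget > v && vNext > vTarget) || (vTarget < v && vNext < vTarget))
        vNext = vTarget;
    return vNext;
}
```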
4.4 Cartesian Interpolation

The trajectory planning concept discussed above is suitable for joint space interpolation. Cartesian interpolation poses several problems. The three translatory Cartesian DOFs can be handled easily, but complications occur in the computation of the angular DOFs. The literature proposes a variety of approaches for movements in Cartesian space. In general, all Cartesian approaches can be divided into two groups: the first one uses universal trajectory planners for six-dimensional space, and the second group consists of concepts that are particularly developed for Cartesian space. The second group usually uses approaches that guarantee a single equivalent rotation axis. Within this work, a concept of the first group was chosen to obtain a suitable interface that provides the opportunity to determine angular velocities for each single rotatory DOF. This kind of approach is called "Cartesian straight line motion" in the literature [CRAIG89]. The calculated values are used within the transformation

T_next = Trans(p_x, p_y, p_z) · RPY(ϕ_z, ϕ_y, ϕ_x),

where p = (p_x, p_y, p_z, ϕ_x, ϕ_y, ϕ_z)^T is the calculated position vector for the current control cycle.
Figure 4.5: Flow chart of the velocity calculation for the next control cycle
4.4.1 Orientation Representation: RPY angles

The current position vector p and the target position p_target are sent to the trajectory planner. One slight complication arises from the fact that the RPY representation of orientation is not unique:

ϕ_i  ≡  ϕ_i + n · 360°          with i = x, y, z and n ∈ ℤ.

In order to minimize the total amount of rotation, initial and target orientation are expressed with a minimum difference:

m = ((ϕ_target − ϕ_start) / 360°) div 1

ϕ_1 = ϕ_start + (m − 1) · 360°
ϕ_2 = ϕ_start + m · 360°
ϕ_3 = ϕ_start + (m + 1) · 360°

ϕ'_start = ϕ_1  if (|ϕ_1 − ϕ_target| ≤ |ϕ_2 − ϕ_target|) ∧ (|ϕ_1 − ϕ_target| ≤ |ϕ_3 − ϕ_target|)
           ϕ_2  if (|ϕ_2 − ϕ_target| ≤ |ϕ_1 − ϕ_target|) ∧ (|ϕ_2 − ϕ_target| ≤ |ϕ_3 − ϕ_target|)
           ϕ_3  if (|ϕ_3 − ϕ_target| ≤ |ϕ_1 − ϕ_target|) ∧ (|ϕ_3 − ϕ_target| ≤ |ϕ_2 − ϕ_target|)

The "div" operator denotes integer division. Since there are several possible implementations of this operation (floor, truncate, and ceil), the two neighbours of the resulting value m have to be considered as well. After the comparison, ϕ'_start is the value closest to ϕ_target. After this function is applied, another problem concerning the RPY representation arises. For its explanation, some basics about RPY angles are introduced in the following.
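A possible C++ reading of this normalization (hypothetical names; the integer division is realized here with trunc, and the two neighbours of m are checked as described in the text):

```cpp
#include <cmath>

// Sketch: among the equivalent angles phiStart + n*360 choose the one
// closest to phiTarget, so the planned rotation is minimal (degrees).
double NearestEquivalentAngle(double phiStart, double phiTarget)
{
    double m = std::trunc((phiTarget - phiStart) / 360.0);
    double best = phiStart + (m - 1.0) * 360.0;
    for (int n = 0; n <= 1; ++n)        // candidates for m and m + 1
    {
        double candidate = phiStart + (m + n) * 360.0;
        if (std::fabs(candidate - phiTarget) < std::fabs(best - phiTarget))
            best = candidate;
    }
    return best;
}
```

For example, a rotation from 350° to a target of 10° is re-expressed as starting at −10°, so only 20° have to be traversed instead of 340°.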
In general, a frame T is given as

T = | n1  o1  a1  p1 |
    | n2  o2  a2  p2 |  =  | rpy(ϕ_z, ϕ_y, ϕ_x)  p |  =  Trans(p_x, p_y, p_z) · RPY(ϕ_z, ϕ_y, ϕ_x),
    | n3  o3  a3  p3 |     |    0    0    0      1 |
    |  0   0   0   1 |

where the rotation matrix rpy(ϕ_z, ϕ_y, ϕ_x) is defined as

rpy(ϕ_z, ϕ_y, ϕ_x) =
| c(ϕy)c(ϕz)   s(ϕx)s(ϕy)c(ϕz) − c(ϕx)s(ϕz)   c(ϕx)s(ϕy)c(ϕz) + s(ϕx)s(ϕz) |
| c(ϕy)s(ϕz)   s(ϕx)s(ϕy)s(ϕz) + c(ϕx)c(ϕz)   c(ϕx)s(ϕy)s(ϕz) − s(ϕx)c(ϕz) |
| −s(ϕy)       s(ϕx)c(ϕy)                      c(ϕx)c(ϕy)                    |

= | n1  o1  a1 |
  | n2  o2  a2 |
  | n3  o3  a3 | .
Here, the cosine function is abbreviated by "c", the sine function by "s". The calculation of rpy(ϕz, ϕy, ϕx) from given values ϕx, ϕy, and ϕz is always unique, comparable to the forward kinematics. But if a frame T is given and the angles of rpy(ϕz, ϕy, ϕx) have to be calculated, there are at least two solutions, comparable to the inverse kinematic problem, as demonstrated below. By taking the square root of the sum of the squares of n1 and n2 we can compute the cosine of ϕy. Then, we can solve for ϕy with the arc tangent of −n3 over the computed cosine. As long as c(ϕy) ≠ 0, we can solve for ϕz by taking the arc tangent of n2/c(ϕy) over n1/c(ϕy), and we can solve for ϕx by taking the arc tangent of o3/c(ϕy) over a3/c(ϕy). Since we can use the positive or the negative square root, we obtain two solutions:
ϕ_y1 = atan2(−n3, +√(n1² + n2²))              ϕ_y2 = atan2(−n3, −√(n1² + n2²))

ϕ_z1 = atan2(n2 / c(ϕ_y1), n1 / c(ϕ_y1))      ϕ_z2 = atan2(n2 / c(ϕ_y2), n1 / c(ϕ_y2))

ϕ_x1 = atan2(o3 / c(ϕ_y1), a3 / c(ϕ_y1))      ϕ_x2 = atan2(o3 / c(ϕ_y2), a3 / c(ϕ_y2))
The atan2(y, x) function computes tan⁻¹(y/x) but uses the signs of both x and y to determine the quadrant in which the resulting angle lies. E.g. atan2(−5.0, 5.0) = −45°, whereas atan2(5.0, −5.0) = 135°; a distinction which would be lost with a single-argument arc tangent function. The atan2 function becomes undefined when both arguments are zero. As a result, we always obtain two solutions for ϕx, ϕy, and ϕz. The next problem occurs if the angle ϕy equals ±90°; here the solution degenerates. In practice, we have to prevent this case by keeping the value of ϕy between −90° and 90°. The implemented solution applies a transformation to the respective frame T_original such that the DOF order x, y, z changes to y, z, x in T_transformed, by a rotation of −90° about the z axis and again of −90° about the y axis:

T_transformed = T_original · RPY(−90°, −90°, 0°)

Afterwards the RPY angles are calculated again and transformed back to the order x, y, z. If neither of the two solutions provides a value |ϕy| < 90°, the order is changed by the same transformation to z, x, y. Now, a solution with |ϕy| < 90° can definitely be found. Once again, the values have to be transformed back to the order x, y, z. This concept is only used to calculate RPY angles; the frames themselves remain untouched. As a result, the RPY angles and the orientation of a given frame can be mapped one-to-one.
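For illustration, the extraction of the first RPY solution (positive square root) might be sketched in C++ as below; the struct and function names are made up, and the degenerate case |ϕy| = 90° is deliberately not handled:

```cpp
#include <cmath>

// RPY angles extracted from a 3x3 rotation matrix R (row-major); the
// first column of R is the n vector, the second o, the third a.
struct RPYAngles { double phiX, phiY, phiZ; };

RPYAngles ExtractRPY(const double R[3][3])
{
    RPYAngles rpy;
    double c = std::sqrt(R[0][0] * R[0][0] + R[1][0] * R[1][0]); // c(phiY)
    rpy.phiY = std::atan2(-R[2][0], c);
    // assumes c != 0, i.e. |phiY| != 90 degrees; the thesis avoids the
    // degenerate case by re-expressing the frame in a rotated axis order
    rpy.phiZ = std::atan2(R[1][0] / c, R[0][0] / c);
    rpy.phiX = std::atan2(R[2][1] / c, R[2][2] / c);
    return rpy;
}
```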
4.4.2 Geometric Problems with Cartesian Paths

Because of the continuous correspondence between a path shape described in Cartesian space and the resulting joint space path, Cartesian paths are prone to various problems relating to workspace and singularities. Generally, these problems can be separated into three different types, as illustrated in figures 4.6, 4.7, and 4.8.
Figure 4.6: Cartesian path problem type I: intermediate points unreachable
Figure 4.7: Cartesian path problem type II: high joint rates near singularity
Type I: intermediate points out of workspace. The initial position as well as the goal position are in the robot's workspace, but it is quite possible that not all sample points of the trajectory are in the workspace. Figure 4.6 demonstrates this problem for a simple two-link manipulator in two-dimensional space.

Type II: singularity causes high joint velocities. If the Cartesian straight line approaches a singular manipulator configuration, one or more joint rates may increase towards infinity. A planar two-link manipulator with equal link lengths is shown in figure 4.7; for better visualization, some intermediate manipulator configurations have been drawn. All points along the path are reachable, but as the robot goes past the middle portion of the path, the velocity of joint 1 becomes very high. The closer the path comes to the axis of joint 1, the higher this rate will be.
Figure 4.8: Cartesian path problem type III: start and goal reachable in different solutions
Type III: different configurations for initial and target position. Like figure 4.7, figure 4.8 shows a planar two-link manipulator with equal link lengths. The robot's joint angles are limited, and so the number of solutions with which it can reach a given point in space is restricted. If the goal position cannot be reached in the same configuration the robot is in at the start position, a problem arises.

Due to these problems with Cartesian paths, the skill primitive concept has to provide both joint space movements and Cartesian interpolation. In general, movements in free space should be executed in joint space, while any movements using the Cartesian hybrid controller have to be accomplished in Cartesian space, of course. But even for joint space movements, the robot's sensors, such as force or distance sensors, should be interrogated to monitor free space movements.
5 Software Architecture

5.1 Introduction

The previous two chapters described two major parts of this work: trajectory planning and the skill primitive concept itself. Naturally, the implementation of the whole concept requires more than these two parts. Devoting a chapter of its own to every detail would make this document too voluminous. Hence, this chapter describes the complete software implementation. The overview illustrates all software modules as well as all processes. For force control, the driver for the JR3 force/torque sensor had to be adapted; it is described before the main module of this work, the skill primitive controller. Afterwards, the user interface, i.e. the class skillprim, is presented.
5.2 Overview
Figure 5.1: The four processes run to execute a robot task: MiRPA, (joint) position controller, skill primitive controller, and the application program containing the robot task (as a kind of skill primitive net)
For the execution of a robot task, a software architecture as shown in figure 5.1 is provided. Every single block in this figure represents a process running on the QNX platform that was introduced in 1.2 on page 2. The middleware MiRPA is responsible for interprocess communication. The joint position controller including the hardware driver is also a single process. Generally, this process receives a position setpoint from the hybrid controller of the skill primitive control process. This process executes every single skill primitive sent by the user application program, which contains a kind of skill primitive net. As already mentioned in chapter 1.1 on page 1, the long-term aim is to generate the user application including the skill primitive net automatically. After an exit condition becomes true, the application process is acknowledged. Details about interprocess communication are discussed within this part of the document. Obviously, the structure of the software architecture shown in figure 5.1 is not very modular; e.g. the trajectory planner as well as the force controller could each run as a separate process to keep the system low-maintenance. But because of runtime problems (see chapter 7 on page 89), these modules have been merged. A suggestion for a more modular future architecture is illustrated in 8.4 on page 97. The source code of the user application and the skill primitive controller consists of several files, as shown in table 5.1.

File                           User application   Skill primitive controller
SPController.cpp                                             x
skillprim.h                            x                     x
skillprim.cpp                          x                     x
SPControllerAccessories.h                                    x
SPControllerAccessories.cpp                                  x
SPControllerTrajectory.h                                     x
SPControllerTrajectory.cpp                                   x

Table 5.1: Source files required for the user application and for the skill primitive controller
SPController.cpp

This file contains the whole skill primitive control program including the hybrid controller. The functionality is presented in detail in section 5.4. All subroutines are moved out to the file SPControllerAccessories.cpp with the header file SPControllerAccessories.h.

skillprim.h and skillprim.cpp

The class skillprim represents the user interface. The complete functionality of the skill primitive controller can be addressed via this class. Section 5.5 delivers an exact description of these files.

SPControllerAccessories.h and SPControllerAccessories.cpp

All subfunctions of the skill primitive controller are embedded within these files.

SPControllerTrajectory.cpp and SPControllerTrajectory.h

These files include the on-line trajectory planner that was described above. In general, only two functions are visible to the outside: the routine for the trajectory planner and the one for speed-controlled DOFs.

Following the characterization of the force/torque sensor driver, the introduced files and their interrelationships are described in the subsequent sections.
5.3 Driver of the JR3 Force/Torque Sensor

A first version of the driver for the force/torque sensor was developed in [KROEGER02]. In this version all important functions are implemented, but the user is only able to set up a task frame w.r.t. the sensor frame on the sensor surface that is translatory displaced. This translatory displacement by a vector b = (b_x, b_y, b_z)^T obeys the following rules:

F_Trans_x = F_x        τ_Trans_x = b_z · F_y − b_y · F_z + τ_x
F_Trans_y = F_y        τ_Trans_y = b_x · F_z − b_z · F_x + τ_y
F_Trans_z = F_z        τ_Trans_z = b_y · F_x − b_x · F_y + τ_z

or, in vector notation,

F_Trans = F        τ_Trans = F × b + τ
F_x, F_y, F_z, τ_x, τ_y, and τ_z are the original values from the sensor driver, whereas F_Trans and τ_Trans contain the transformed ones. According to chapter 2, the description of the task frame has to be determined by a vector k, but k is given w.r.t. the robot's hand frame, while the transformation here is determined w.r.t. the sensor frame. Furthermore, the orientation of the task frame cannot be set. To enable the specification of any desired task frame, the sensor driver must be extended by an orientation transformation. Twelve different conventions can be found in the literature; the most common ones are the Euler angles

Euler(φ, θ, ψ) = RotZ(φ) · RotY(θ) · RotZ(ψ)

and the roll-pitch-yaw angles (RPY angles)

RPY(ϕ_z, ϕ_y, ϕ_x) = RotZ(ϕ_z) · RotY(ϕ_y) · RotX(ϕ_x).

These are two possibilities to determine the orientation of the task frame. The advantage of the Euler angles, namely that the same algorithm can be used for the forward and for the backward transformation, is outweighed by the RPY angles, which are used in the existing force control concept (see 4.4.1 on page 36) and are more intuitive to handle. The derivation of the force/torque transformation from one frame into another, i.e. the transformation from the sensor frame into a user-specified task frame, is illustrated in the following. To simplify this derivation, we start with a trivial rotation in one degree
of freedom, which is applied to the force values of the first (translatory) transformation. Most of the accepted literature uses the nomenclature ϕ, θ, and ψ for RPY angles, but to prevent any confusion, the angles are named β_x, β_y, and β_z within this document.

F_RotZ_x = F_Trans_x · cos(β_z) + F_Trans_y · sin(β_z)
F_RotZ_y = F_Trans_y · cos(β_z) − F_Trans_x · sin(β_z)
F_RotZ_z = F_Trans_z

F_RotZ = rotZ(β_z) · F_Trans
These results are reused for a rotatory transformation about the y axis, whose results are in turn transformed by a rotation about the x axis:

F_RotY = rotY(β_y) · F_RotZ
F_RotX = rotX(β_x) · F_RotY
As a final result for the transformed forces F' we obtain

F' = | c(βz)c(βy)                        s(βz)c(βy)                        −s(βy)      |
     | c(βz)s(βy)s(βx) − s(βz)c(βx)      c(βz)c(βx) + s(βz)s(βy)s(βx)      c(βy)s(βx)  | · F_Trans
     | c(βz)s(βy)c(βx) + s(βz)s(βx)      s(βz)s(βy)c(βx) − c(βz)s(βx)      c(βy)c(βx)  |

where the cosine function is abbreviated by "c", the sine function by "s". The transformed torques τ' behave analogously:

τ' = | c(βz)c(βy)                        s(βz)c(βy)                        −s(βy)      |
     | c(βz)s(βy)s(βx) − s(βz)c(βx)      c(βz)c(βx) + s(βz)s(βy)s(βx)      c(βy)s(βx)  | · τ_Trans
     | c(βz)s(βy)c(βx) + s(βz)s(βx)      s(βz)s(βy)c(βx) − c(βz)s(βx)      c(βy)c(βx)  |
The rotatory transformation matrix above equals the inverse of the rotation matrix rpy(β_z, β_y, β_x). Hence:

F' = rpy(β_z, β_y, β_x)⁻¹ · F
τ' = rpy(β_z, β_y, β_x)⁻¹ · (F × b + τ)          (Forward Transformation)

To give the user influence on the original values of the sensor, e.g. to reset all transformed values, an inverse transformation is necessary, i.e. the same rotatory transformations are applied, but in reverse order.
F_Trans = | c(βy)c(βz)   s(βx)s(βy)c(βz) − c(βx)s(βz)   c(βx)s(βy)c(βz) + s(βx)s(βz) |
          | c(βy)s(βz)   s(βx)s(βy)s(βz) + c(βx)c(βz)   c(βx)s(βy)s(βz) − s(βx)c(βz) | · F'
          | −s(βy)       s(βx)c(βy)                      c(βx)c(βy)                   |

τ_Trans = | c(βy)c(βz)   s(βx)s(βy)c(βz) − c(βx)s(βz)   c(βx)s(βy)c(βz) + s(βx)s(βz) |
          | c(βy)s(βz)   s(βx)s(βy)s(βz) + c(βx)c(βz)   c(βx)s(βy)s(βz) − s(βx)c(βz) | · τ'
          | −s(βy)       s(βx)c(βy)                      c(βx)c(βy)                   |
Lastly, the inverse translatory transformation must be applied to obtain the values in the sensor frame:

F_x = F_Trans_x        τ_x = b_y · F_Trans_z − b_z · F_Trans_y + τ_Trans_x
F_y = F_Trans_y        τ_y = b_z · F_Trans_x − b_x · F_Trans_z + τ_Trans_y
F_z = F_Trans_z        τ_z = b_x · F_Trans_y − b_y · F_Trans_x + τ_Trans_z

or, in vector notation,

F = F_Trans        τ = b × F_Trans + τ_Trans

And summarized:

F = rpy(β_z, β_y, β_x) · F'
τ = b × (rpy(β_z, β_y, β_x) · F') + rpy(β_z, β_y, β_x) · τ'          (Backward Transformation)
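The forward chain, i.e. the lever-arm torque shift followed by the inverse RPY rotation, might be sketched as follows. The array-based interface and all names are assumptions of this sketch and not the driver's real API:

```cpp
#include <cmath>

// Sketch of the forward force/torque transformation from the sensor
// frame into a task frame displaced by b and rotated by the RPY angles
// (betaZ, betaY, betaX). Rt is the transposed (inverse) rpy matrix.
void TransformFT(const double F[3], const double tau[3], const double b[3],
                 double betaX, double betaY, double betaZ,
                 double Fout[3], double tauOut[3])
{
    // translatory part: tau_trans = F x b + tau
    double tauTrans[3] = {
        b[2] * F[1] - b[1] * F[2] + tau[0],
        b[0] * F[2] - b[2] * F[0] + tau[1],
        b[1] * F[0] - b[0] * F[1] + tau[2]
    };

    double cx = std::cos(betaX), sx = std::sin(betaX);
    double cy = std::cos(betaY), sy = std::sin(betaY);
    double cz = std::cos(betaZ), sz = std::sin(betaZ);

    // rpy(betaZ, betaY, betaX)^T, i.e. the inverse rotation
    double Rt[3][3] = {
        { cz * cy,                sz * cy,                -sy     },
        { cz * sy * sx - sz * cx, cz * cx + sz * sy * sx, cy * sx },
        { cz * sy * cx + sz * sx, sz * sy * cx - cz * sx, cy * cx }
    };

    for (int i = 0; i < 3; ++i)
    {
        Fout[i]   = Rt[i][0] * F[0] + Rt[i][1] * F[1] + Rt[i][2] * F[2];
        tauOut[i] = Rt[i][0] * tauTrans[0] + Rt[i][1] * tauTrans[1]
                  + Rt[i][2] * tauTrans[2];
    }
}
```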
Since the skill primitive control program uses the transformation

T = Trans(p_x, p_y, p_z) · RPY(ϕ_z, ϕ_y, ϕ_x),

the transformation parameters b = (b_x, b_y, b_z)^T and β = (β_x, β_y, β_z)^T can be applied for the force/torque transformation as well. Figure 5.2 shows some example code. JR3 provides different kinds of force/torque sensors; among others, they also offer twelve-DOF sensors that measure forces, torques, and six degrees of acceleration. For easy handling, the driver can be used either for force/torque measurements (as seen in figure 5.2) or for acceleration measurements. The constructor of the class isaFTSensor has to be called with the argument FORCE for force/torque measurement or with the argument ACCELERATION to interrogate accelerations. The units are Newton and Newton meter, or g (= 9.81 m/s²) and rad/s², respectively. The SetTrans method sets up the task frame w.r.t. the sensor frame, which lies in the center of the sensor surface, as shown in figure 5.5 on page 59.

JR3 supports two kinds of controller cards: ISA and PCI. Drivers for both are provided. The ISA driver requires the files isaftsensor.cpp and isaftsensor.h, whereas the files pciftsensor.cpp and pciftsensor.h are used by the PCI driver.
...
#include <isaftsensor.h>

int main()
{
    unsigned int i;
    char c = 0;
    isaFTSensor fts(FT_FORCE);   // initialize the sensor; choose between a force/torque
                                 // sensor (FORCE) or an acceleration sensor (ACCELERATION)
    switch (fts.State)           // check if the sensor is connected correctly
    {
        case FT_NO_SENSOR:
            printf("JR3 PC-Card found, but no sensor available!\n");
            return(-1);
        case FT_NO_PC_CARD:
            printf("No JR3 PC-Card found!\n");
            return(-1);
        default:
            break;
    }
    fts.SetUnits(FT_INTERNATIONAL);       // set up metric units (N and Nm)
    fts.ResetOffset();                    // reset all values w.r.t. the task frame
    fts.SetTrans(30, 40, 50, 0, 0, -45);  // transformation from the sensor frame to the
                                          // task frame; in mm and degrees (roll-pitch-yaw)
    fts.SetFilter(2);                     // enable filter 2 (125 Hz)
    printf("\n\nPress 'r' to reset values and 'Escape' to end. Any key to continue...\n\n");
    while (!kbhit());
    getch();
    do
    {
        fts.Measure();                    // interrogate sensor values
        for (i = 0; i < 6; i++)
            printf(" %d: %8.3f ", i, fts.FTValue[i]);
        printf("|\n");
        delay(200);
        if (kbhit())                      // if a key was pressed
            c = getch();
        if ((c == 'r') || (c == 'R'))
        {
            fts.ResetOffset();            // set the offset values such that all
            c = 0;                        // values in the task frame are zero
            printf("Reset\n");
        }
    } while (c != 27);
    return(0);
}
Figure 5.2: Part of an example application for the force/torque sensor
5.4 The Skill Primitive Controller
5.4.1 Overview
Figure 5.3: Structure chart of the SPController.cpp file
This section delivers a detailed description of the skill primitive controller as implemented for this work, including the routines contained in the accessory file SPControllerAccessories.cpp. A complete printout of the source code is omitted: the code is so extensive and interlinked that a discussion of every detail would lead to a much too voluminous documentation, and the source code itself would exceed the scope of a usual printout. A first complete but very simplified overview is delivered by figure 5.3. All single items are discussed below.
Process                       Start command
MiRPA                         ObSer -f InitFileName
Position controller           r2_regler -f InitFileName
Skill primitive controller    spc -f InitFileName
Table 5.2: Executable files to start each process
5.4.2 Start-Up
Regarding the implementation conventions ruling during this work, each process can be started as presented in table 5.2. Since the object server is also able to start processes, the position controller is usually started by a message that is sent to MiRPA. Once MiRPA is running, the user application can be started additionally. This process sends a message ("StartSPCtrl") to the object server, which executes the skill primitive controller. This file can be started with several options:
-f InitFileName If the option -f is used without InitFileName, the default initialization file "spccontroller.init" is taken; if InitFileName is specified, that file is used instead. Without the -f option, no file is loaded for initialization. More about the initialization process can be found in section 5.4, where the contents of an initialization file are discussed.
-o Enables writing all events into an output file, which is named within the initialization file. All incoming and outgoing messages, setpoint data, exit conditions, initial parameters, unsolvable inverse kinematic configurations, etc. can be taken from this file as support when tracking down bugs.
-v Creates a file, whose name is also determined by the initialization file, that contains several control data, i.e. positions, forces/torques, and velocities, which are calculated or measured, respectively. This file can be imported directly into Microsoft Excel and into Origin by OriginLab. The current implementation logs the most important force, position, and velocity data in joint space and w.r.t. the world frame. Improvements of this feature are proposed in 8.5 on page 98.
-t This option enables writing the third output file, in which all calculations of the trajectory planning process are documented. Usually, this option is not required anymore, since this file was only intended as a debugging aid.
Before any output data is written to its destination, the QNX system collects it in a buffer. Whenever this buffer is flushed, the system needs a certain share of the whole system performance for this action. In consequence, the system slows down dramatically if there is too much data to write to any file output. The general output file that documents several events does not grow too large, but the other two do, especially when applying high control rates. To prevent system crashes, the cycle time should be increased when writing the value and/or the trajectory planning file. Common cycle times are discussed in 7 on page 89.
5.4.3 Initialization
The initialization file of MiRPA contains an entry that enables the object server to start the skill primitive controller when the message "StartSPCtrl" is received. Once this message has been sent, the user application has to await the initialization phase of the skill primitive controller, which is discussed below. After the variable declaration, the controller has to be fed with initial data. To ensure a running system, the file SPControllerAccessories.h contains all default values for the skill primitive controller, but it is highly recommended to use an own initialization file. It can be passed via the parameter

spc -f InitFileName

when starting the control program. The default file "spcontroller.init", whose content is described in the following, is completely printed on page 49.
The Initialization file
ForceControlParameters The first section contains the force control parameters P, I, and D. P is given in mm/N, whereas the I time and the D time are given in nanoseconds.

TorqueControlParameters Analogous to the force control parameters, the torque control parameters are described here. The P value's unit is ◦/Nm.
ImportantValues Here, several important values are named.
CycleTime determines the cycle time of the hybrid controller in nanoseconds.
BufferTime All values which are determined by the controller (positions, velocities, accelerations, and forces/torques) are stored in a huge buffer. This buffer is used to calculate average values that may be required for any stop condition. The length of this buffer is limited, since a static data structure is used for this feature (a dynamic one would cost too much performance for the allocation process of the operating system), but the actual length (in ms) can be determined by the user. E.g., assuming a cycle time of 4 milliseconds, a buffer time of 2000 milliseconds requires 500 values of each magnitude.

DefaultTimeOut As described in 5.4.6 on page 60, there are three different stop conditions. Amongst other magnitudes, the default exit condition includes a standard timeout value that is determined here. I.e. if a skill primitive has not finished after this time in milliseconds, the skill primitive ends itself.
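The buffer length arithmetic above can be sketched as a fixed-size ring buffer. The following class is a hypothetical illustration (the class and member names are not taken from SPController.cpp), assuming one such buffer per stored magnitude:

```cpp
#include <cassert>
#include <cstddef>

// Fixed-capacity ring buffer for one control magnitude. The capacity is
// computed once from BufferTime and CycleTime (both in ms), mirroring the
// example in the text: 2000 ms / 4 ms = 500 stored values.
class ValueBuffer {
public:
    ValueBuffer(double bufferTimeMs, double cycleTimeMs)
        : capacity_(static_cast<std::size_t>(bufferTimeMs / cycleTimeMs)),
          data_(new double[capacity_]()), head_(0), count_(0) {}
    ~ValueBuffer() { delete[] data_; }

    std::size_t capacity() const { return capacity_; }

    void push(double v) {                  // overwrite the oldest value
        data_[head_] = v;
        head_ = (head_ + 1) % capacity_;
        if (count_ < capacity_) ++count_;
    }

    // Average over the last 'intervalMs' milliseconds; the interval must not
    // exceed the buffer time, which is why the GetBufferTime message exists.
    double average(double intervalMs, double cycleTimeMs) const {
        std::size_t n = static_cast<std::size_t>(intervalMs / cycleTimeMs);
        if (n > count_) n = count_;
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += data_[(head_ + capacity_ - 1 - i) % capacity_];
        return (n > 0) ? sum / n : 0.0;
    }

private:
    std::size_t capacity_;
    double *data_;
    std::size_t head_, count_;
};
```

With the values from the example, `ValueBuffer b(2000.0, 4.0)` allocates exactly 500 entries.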
[ForceControlParameters]
P=0.00005
I=4000000.0
D=200000000.0
[TorqueControlParameters]
P=0.0058
I=10000000.0
D=80000000.0
[ImportantValues]
CycleTime=8000000.0
BufferTime=2000.0
DefaultTimeOut=30000.0
RobotEnable=1
FTSensorEnable=1
[DH-Parameters]
a0=0.0
a1=350.0
a2=0.0
a3=0.0
a4=0.0
a5=0.0
d0=630.0
d1=0.0
d2=0.0
d3=350.0
d4=0.0
d5=70.0
alpha0=90.0
alpha1=0.0
alpha2=-90.0
alpha3=90.0
alpha4=-90.0
alpha5=0.0
theta0=0.0
theta1=0.0
theta2=0.0
theta3=0.0
theta4=0.0
theta5=0.0
[MinJointValues]
J_Min0=-66010
J_Min1=137670
J_Min2=10905
J_Min3=20010
J_Min4=18500
J_Min5=100000
[MaxJointValues]
J_Max0=66010
J_Max1=-21080
J_Max2=-44505
J_Max3=-20010
J_Max4=-18500
J_Max5=-100000
[SensorFrame]
s0=0.0
s1=0.0
s2=50.0
s3=0.0
s4=0.0
s5=45.0
[RobotBaseFrame]
r0=600.0
r1=800.0
r2=31.5
r3=0.0
r4=0.0
r5=0.0
[MaxJointSpeeds]
J_MaxSpeed0=300.0
J_MaxSpeed1=180.0
J_MaxSpeed2=640.0
J_MaxSpeed3=960.0
J_MaxSpeed4=600.0
J_MaxSpeed5=800.0
[MaxJointAccelerations]
J_MaxAccel0=1060.0
J_MaxAccel1=820.0
J_MaxAccel2=2430.0
J_MaxAccel3=3440.0
J_MaxAccel4=3320.0
J_MaxAccel5=5340.0
[MaxCartSpeeds]
MaxCartSpeedTx=800.0
MaxCartSpeedTy=800.0
MaxCartSpeedTz=800.0
MaxCartSpeedRx=800.0
MaxCartSpeedRy=800.0
MaxCartSpeedRz=800.0
[MaxCartAccelerations]
MaxCartAccelTx=1600.0
MaxCartAccelTy=1600.0
MaxCartAccelTz=1600.0
MaxCartAccelRx=1600.0
MaxCartAccelRy=1600.0
MaxCartAccelRz=1600.0
[MaximumForces]
MaxForceX=200.0
MaxForceY=200.0
MaxForceZ=400.0
MaxTorqueX=12.0
MaxTorqueY=12.0
MaxTorqueZ=12.0
[MinimumForceThresholds]
MinThresh0=2.0
MinThresh1=2.0
MinThresh2=2.0
MinThresh3=0.15
MinThresh4=0.15
MinThresh5=0.15
[EpsilonEnvironments]
EpsilonXT=0.01
EpsilonVT=0.01
EpsilonAT=0.01
EpsilonXR=0.01
EpsilonVR=0.01
EpsilonAR=0.01
EpsilonF=0.5
EpsilonFD=0.01
EpsilonT=0.05
EpsilonTD=0.001
[FileNames]
OutputFile=spc-output.dat
ValueFile=spc-values.xls
TrajectoryFile=spc-tra.dat
[Output]
Normal=0
Values=0
Trajectory=0
Figure 5.4: Example for an initialization file
RobotEnable To provide the possibility of programming the robot off-line, e.g. on a single QNX PC without any robot hardware extensions, this option can be set. The robot is disabled by setting this value to zero.

FTSensorEnable Similar to the last parameter, this one can disable the force/torque sensor. Hence, the user is able to program without any reception cards for the force/torque sensor. ATTENTION: this option has to be set to 1 (sensor enabled) if the "RobotEnable" option is set to 1 and the robot task requires force/torque values. Otherwise parts of the manipulator system might be damaged or destroyed.
DH-Parameters These are the Denavit-Hartenberg parameters of the utilized robot. The values for ai and di are given in millimeters, while the angles αi and θi are given in degrees.
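Assuming the classic Denavit-Hartenberg convention T_i = Rot_z(θ_i) · Trans_z(d_i) · Trans_x(a_i) · Rot_x(α_i) (the exact convention used by the kinematic model is not restated here), one link transform can be computed from a row of the [DH-Parameters] section roughly as follows; the function name is hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Classic Denavit-Hartenberg link transform, written into a 4x4 row-major
// matrix. a and d are given in mm, the angles in degrees, matching the units
// of the [DH-Parameters] section.
void dhTransform(double a, double d, double alphaDeg, double thetaDeg,
                 double T[4][4]) {
    const double PI = 3.14159265358979323846;
    const double deg2rad = PI / 180.0;
    double ct = std::cos(thetaDeg * deg2rad), st = std::sin(thetaDeg * deg2rad);
    double ca = std::cos(alphaDeg * deg2rad), sa = std::sin(alphaDeg * deg2rad);
    double M[4][4] = {
        {  ct, -st * ca,  st * sa, a * ct },
        {  st,  ct * ca, -ct * sa, a * st },
        { 0.0,       sa,       ca,      d },
        { 0.0,      0.0,      0.0,    1.0 }
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            T[r][c] = M[r][c];
}
```

Multiplying the six link transforms yields the forward kinematics used for the Cartesian hand position.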
MinJointValues Here, the minimum joint values are specified in increments. Since there is a (historical) bug regarding the expression of the joint angles, the signs may be confusing. 8.5 proposes some improvements for this interface.

MaxJointValues Analogous to the minimum joint values, the maximum ones are determined in increments.
SensorFrame The position and the orientation of the force/torque sensor mounted on
the robot’s hand are named. To determine the transformation frame from the hand
frame to the sensor frame, the standard transformation from a six-tuple position
vector is applied, i.e.
T = Trans(sx, sy, sz) · RotRPY(σz, σy, σx),
where sx, sy, and sz specify the translatory displacement of the sensor frame w.r.t.
the hand frame and σx, σy, and σz the rotatory one in RPY angles.
RobotBaseFrame The same transformation as used for the sensor frame is applied for the
robot base frame w.r.t. the world frame. An illustrated representation of all frames
can be seen on page 59.
MaxJointSpeeds The maximum joint speeds are taken from the robot specifications and
are given in ◦/s.
MaxJointAccelerations These values for the maximum joint accelerations, given in ◦/s2, are also taken from the specifications.

MaxCartSpeeds Due to the robot dynamics, the maximum Cartesian speed depends on the manipulator's position, but since it would take too much system performance to calculate the maximum speed every cycle, these velocity limits are determined statically in mm/s or ◦/s, respectively.
MaxCartAccelerations The maximum Cartesian accelerations are applied in the same
manner as the maximum Cartesian velocities are. Units are mm/s2 or ◦/s2, respec-
tively.
MaximumForces Depending on the utilized force/torque sensor, these values specify the maximum forces and torques that the sensor can sustain without any loss of accuracy caused by overload. E.g., the applied JR3 sensor can sustain a force of 400 N in z direction, whereas its overload limit is specified as 800 N. The default stop condition continuously checks the forces in the sensor frame, and if a certain threshold, here 100% of each maximum value, is exceeded, the skill primitive aborts. More about the default exit condition is discussed below in this section.
MinimumForceThresholds Two functions are enabled by these values. Every force/torque-controlled DOF of a skill primitive requires a force/torque setpoint and a threshold. If the force/torque exceeds this threshold, the hybrid controller switches from position to force control and vice versa. The minimum value for this threshold is determined by the values of the initialization file. The second function refers to the fulfillment of an exit condition. Right after a stop condition becomes true, the new force and position setpoints as well as a respective compliance selection matrix are set up. The reference frame is maintained; all DOFs whose measured force/torque exceeds this value are set to force control while the remaining ones keep their current position. If a skill primitive was defined in joint space, a problem occurs, because force control in joint space is impossible. The current implementation takes the current position as the new setpoint. But if the robot is in motion, the trajectory planner generates an overshoot to fulfill its acceleration and deceleration bounds. That means: if a movement is executed in joint space and a collision happens that causes forces/torques higher than the maximum allowed ones, the manipulator's target position is in fact the current one, but the movement is continued, and the force might increase so far that the sensor might be damaged or destroyed. Another solution should be found here in the future; two approaches are presented in 8.5 on page 98.
EpsilonEnvironments Because of manipulator inaccuracies and numerical discrepancies, no setpoint can ever be reached exactly, i.e. the comparison operator "=" cannot be provided. Instead of "equal", an epsilon environment is applied for this operation. Depending on the ε value, a comparison operation becomes true or not:

(a =ε b) := { 1 : (a ≥ b − ε) ∧ (a ≤ b + ε)
              0 : else
Of course, the epsilon value has to depend on the magnitudes to be compared. Hence, a different epsilon rules for every magnitude, as can be seen in table 5.3.

Name Magnitude Unit
EpsilonXT Position translatory mm
EpsilonVT Velocity translatory mm/s
EpsilonAT Acceleration translatory mm/s2
EpsilonXR Position rotatory ◦
EpsilonVR Velocity rotatory ◦/s
EpsilonAR Acceleration rotatory ◦/s2
EpsilonF Force N
EpsilonFD Time derivative force N/s
EpsilonT Torque Nm
EpsilonTD Time derivative torque Nm/s

Table 5.3: Epsilon environments and their respective units

This functionality is also needed for the skill primitive stop condition. Here, all setpoints
are compared with the current values. Once all (epsilon) comparisons are true, the
skill primitive stops. More about the generation of the skill primitive exit condition
can be found in 5.4.9.
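In code, the epsilon comparison is a one-liner; the helper below is an illustrative sketch, not the routine from SPControllerAccessories.cpp:

```cpp
#include <cassert>

// Epsilon comparison as used by the stop condition check: two values are
// considered "equal" if they differ by at most epsilon.
bool epsilonEqual(double a, double b, double epsilon) {
    return (a >= b - epsilon) && (a <= b + epsilon);
}
```

E.g. with EpsilonF = 0.5 from the example file, a measured force of 49.6 N already fulfills a comparison against a 50 N setpoint.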
FileNames As already introduced in 5.4.2, there are up to three output files. Their names are determined by these three parameters. More details are presented in 5.4.14 on page 69.

Output There are two possibilities to enable writing certain data into an output file. The first one, adding a parameter to the command line at start-up, was discussed in the section above; the second one is to set the respective parameter here to one. The latter might be helpful for a temporary generation of one of the output files, since the skill primitive control process usually gets started by MiRPA, i.e. the command line for the process is not entered manually.
Generating the Default Exit Condition
After the initialization file has been read, the output files are opened, and a header containing data of the skill primitive controller is written into each file the user has enabled. Afterwards the default exit condition is generated; its contents depend on the data read from the initialization file. To prevent any force/torque sensor damage, the force/torque values are monitored in the sensor frame and compared to the maximum allowed values. To keep the manipulator in its workspace, the default exit condition checks all joint angles. Lastly, the standard timeout is added. An example default stop condition according to the depicted initialization file from page 49 can be viewed here:
(^{sensor}F_x > 200 N) ∨ (^{sensor}F_y > 200 N) ∨ (^{sensor}F_z > 400 N) ∨
(^{sensor}τ_x > 12 Nm) ∨ (^{sensor}τ_y > 12 Nm) ∨ (^{sensor}τ_z > 12 Nm) ∨
(^{joint}p_1 < −165.025◦) ∨ (^{joint}p_2 < −206.504◦) ∨ (^{joint}p_3 < −58.419◦) ∨
(^{joint}p_4 < −200.100◦) ∨ (^{joint}p_5 < −115.625◦) ∨ (^{joint}p_6 < −833.333◦) ∨
(^{joint}p_1 > 165.025◦) ∨ (^{joint}p_2 > 31.620◦) ∨ (^{joint}p_3 > 238.415◦) ∨
(^{joint}p_4 > 200.100◦) ∨ (^{joint}p_5 > 115.625◦) ∨ (^{joint}p_6 > 833.333◦) ∨
(t > 30000 ms)
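How such a string could be assembled from the initialization values can be sketched as follows. This is a hypothetical illustration using the exit condition notation of section 5.4.6 (sensor frame force/torque elements "SF0".."SF5" without averaging, the timeout element "O", and OR conjunctions in reverse Polish notation); only the force/torque part and the timeout are generated here, and joint limits would be appended analogously:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical sketch: assemble the default exit condition as an OR-conjunct
// string from the configured limits.
std::string buildDefaultExitCondition(const double maxForce[3],
                                      const double maxTorque[3],
                                      double timeoutMs) {
    std::ostringstream s;
    int terms = 0;
    for (int i = 0; i < 3; ++i) {   // sensor-frame forces: DOF chars 0..2
        s << "SF" << i << "N0;" << maxForce[i] << ";>;";
        ++terms;
    }
    for (int i = 0; i < 3; ++i) {   // sensor-frame torques: DOF chars 3..5
        s << "SF" << (i + 3) << "N0;" << maxTorque[i] << ";>;";
        ++terms;
    }
    s << "O" << timeoutMs << ";";   // standard timeout in ms
    ++terms;
    for (int i = 0; i < terms - 1; ++i)  // OR-conjunct all comparisons (RPN)
        s << "|;";
    return s.str();
}
```

For the example file, the force part of the result contains elements such as "SF2N0;400;>;" and "O30000;".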
Registration of messages
After the skill primitive controller is connected to MiRPA, the following messages are
registered:
EndFCtrl ends the skill primitive controller, i.e. the manipulator stops abruptly and remains in its current position.

SetFPV The parameters of this message contain a single skill primitive. The processing after the reception of this message is discussed in 5.4.9 on page 65.
GetFPValues The user application can request any control values from the skill primitive controller. This REQUEST message is accompanied by a parameter which describes the needed values. Further details about the handling of this message are given in 5.4.11 on page 68.
FTSReset This GENERAL message without any parameter resets the values of the
force/torque sensor. Before initiating any force control procedures, this has to be
done by the user application.
GetDefaultExitConditionString If a skill primitive ends, the user application must be informed which part of the whole exit condition became true. Since the user application only knows its own user stop condition and the skill primitive exit condition (all setpoints), it must be able to interrogate the default exit condition. This parameterless REQUEST message is answered with the default exit condition as attached parameter. More about the answer that is sent to the user application when a skill primitive ends can be found in 5.4.6 on page 60.
GetBufferTime The buffer time of the hybrid controller (given by the initialization file) specifies the past time interval in which all control values are stored. The user application must know this time because if an exit condition contains the calculation of an average value over a certain time interval, this interval must not be greater than the buffer time. Otherwise the average value computation would be erroneous. The user application sends this REQUEST message without any parameter, while the answer of the skill primitive controller contains the buffer time.
SetCtrlPara This GENERAL message is a vestige of the first force controller introduced in [KROEGER02]. The user application is able to change the force/torque control parameters. The usage of this message is only recommended for deliberate changes of the force/torque controller. In general, the force/torque control parameters are set up for a very high environmental stiffness. When utilizing materials with a low stiffness, e.g. foam, the controller needs a certain time to reach the setpoint. For such a case, the parameters might be changed to get a faster controller, or the force/torque threshold that enables force/torque control might be increased.
The handling of all messages, except SetCtrlPara, is embedded in the class "skillprim", which is introduced in 5.5 on page 71. Thanks to this class, the user is not concerned with any sending or receiving of messages.
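The relation between registered message names and their handling routines can be pictured as a dispatch table. The sketch below is purely illustrative; it does not reproduce the MiRPA API, and all handler names are made up:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative dispatch table from registered message names to handlers.
// The real controller receives these messages via MiRPA; the handlers here
// are empty placeholders.
typedef int (*MessageHandler)(const std::string &params);

int handleEndFCtrl(const std::string &) { return 0; }
int handleSetFPV(const std::string &)   { return 0; }
int handleFTSReset(const std::string &) { return 0; }

int dispatch(const std::map<std::string, MessageHandler> &table,
             const std::string &name, const std::string &params) {
    std::map<std::string, MessageHandler>::const_iterator it = table.find(name);
    if (it == table.end())
        return -1;               // unknown message name
    return it->second(params);   // invoke the registered handler
}
```

Registering a message then amounts to one map insertion per name listed above.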
Starting the Robot
According to the cycle time given by the initialization file, a timer that is attached to a proxy is set up. Afterwards, the robot start command is sent to the position controller and the user has to activate the robot. The user application then receives a message named "SPControllerReady"; like the "StartSPCtrl" message, these messages are processed by the "skillprim" class.
5.4.4 The Hybrid Controller
Once the robot is started, the skill primitive controller waits for the reception of a message. A message can be a trigger signal from the proxy or one of the registered messages presented in the last section. The handling of these messages is described in sections 5.4.9 to 5.4.13. This section refers to the hybrid controller itself.

First of all, all respective control values are refreshed, i.e. the position, velocity, and acceleration values from the last control cycle are stored so that the respective variables can be set to the values of the current cycle. To refresh the force/torque values, the sensor is interrogated. The values from the driver are given w.r.t. the task frame, i.e. the force values have to be transformed into the hand and into the sensor frame, which is done by applying the transformation introduced in 5.3 on page 42. For any future projects with hybrid control in a task frame given w.r.t. the world frame, the forces have to be transformed into the respective frame, too. More information about this can be found in 8.1 on page 93. Depending on which reference frame is applied, a new position (i.e. a Cartesian position and a joint position) is computed.
World Frame
Here, only position and speed control are enabled. First, a difference frame from the world frame translatorily displaced into the current hand frame to the goal hand position is calculated:
^{world*}T_{<hand>} = Trans(^{world}p_x^{<hand>}, ^{world}p_y^{<hand>}, ^{world}p_z^{<hand>}) · Trans(^{world}p_x^{hand}, ^{world}p_y^{hand}, ^{world}p_z^{hand})^{-1}
· RPY(^{world}ϕ_z^{<hand>}, ^{world}ϕ_y^{<hand>}, ^{world}ϕ_x^{<hand>}) · RPY(^{world}ϕ_z^{hand}, ^{world}ϕ_y^{hand}, ^{world}ϕ_x^{hand})^{-1}
where the index "world*" represents the world frame translatorily displaced into the hand frame. The respective position vector ^{world*}p⃗^{<hand>} is calculated subsequently and is fed to the trajectory planner. Depending on the compliance selection matrix (position or speed control), either a value from the speed trajectory planner ^{zero}_{velocity}p⃗^{next} or the value from the position path planner ^{zero}_{position}p⃗^{next} is taken to calculate a position difference vector ^{hand}p⃗^{hand}, where one component i is determined by the compliance selection matrices ^{position}S and ^{velocity}S:

^{hand}p_i^{hand} = { ^{zero}_{velocity}p_i^{next} : ^{velocity}S_{ii} = 1
                      ^{zero}_{position}p_i^{next} : ^{position}S_{ii} = 1 ,
I.e. ^{hand}p⃗^{hand} contains the hand position difference from the last cycle's position to the one for this cycle. The calculation of ^{zero}_{velocity}p⃗^{next} and ^{zero}_{position}p⃗^{next} is discussed in 4.2 and 4.3 on page 28 et seqq. The resulting vector can be applied to generate the new transformation frame ^{world}T_{hand}:

^{world}T_{hand} = Trans(^{hand}p_x^{hand}, ^{hand}p_y^{hand}, ^{hand}p_z^{hand}) · Trans(^{world}p_x^{hand}, ^{world}p_y^{hand}, ^{world}p_z^{hand})
· RPY(^{hand}ϕ_z^{hand}, ^{hand}ϕ_y^{hand}, ^{hand}ϕ_x^{hand}) · RPY(^{world}ϕ_z^{hand}, ^{world}ϕ_y^{hand}, ^{world}ϕ_x^{hand}).
The resulting joint position is calculated by applying the inverse kinematic model. Here, the new target frame must be given w.r.t. the robot base:

^{robot}T_{hand} = ^{world}T_{robot}^{-1} · ^{world}T_{hand}
Task Frame
As for the hand and the sensor frame, position, velocity, and force control are possible. The force/torque controller engages if the given force/torque threshold value is exceeded in the correct direction. If, for example, a skill primitive is supposed to enable force control at a threshold value of 10 Newton, the setpoint is -18 Newton, and the sensor delivers a value of 12 Newton, the respective entry in the contact matrix C remains at zero. Only if the force gets lower than -10 Newton does force control become enabled. This way, the robot moves position-controlled away from its environment and the force decreases very fast; otherwise, the force controller would already be enabled for force values greater than 10 Newton, although contact must be found in the opposite, correct direction. To prevent an acceleration to the given approach speed when the force/torque setpoint is zero, the contact matrix entry is also set to one for a setpoint of zero Newton or Newton meters, respectively. I.e. for the contact matrix C:
C_{ii} = { 1 : (|^{task}F_i| > F_i^{threshold}) ∧ ((sign(F_i^{setpoint}) = sign(^{task}F_i)) ∨ (F_i^{setpoint} = 0))
           0 : else
To calculate the position for the current cycle, the on-line trajectory planner is fed with the vector ^{task}p⃗^{<task>}, which is calculated from the frame

^{task}T_{<task>} = ^{world}T_{task}^{-1} · ^{world}T_{<task>} .

As the control method depends on the compliance selection matrices, the resulting position vector for the new position w.r.t. the old one, ^{task}p⃗^{task}, can be computed as

^{task}p_i^{task} = { ^{zero}_{position}p_i^{next} : ^{position}S_{ii} = 1
                      ^{zero}_{velocity}p_i^{next} : (^{velocity}S_{ii} = 1) ∨ ((C_{ii} = 0) ∧ (^{force}S_{ii} = 1))
                      ^{zero}_{force}p_i^{next}    : (C_{ii} = 1) ∧ (^{force}S_{ii} = 1) ,
where ^{zero}_{force}p⃗^{next} is the resulting position difference of the force controller from [KROEGER02]. Our aim is still to calculate the new hand frame position w.r.t. the robot base frame in order to apply the inverse kinematic model. Hence,

^{robot}T_{hand} = ^{world}T_{robot}^{-1} · ^{world}T_{task} · Trans(^{task}p_x^{task}, ^{task}p_y^{task}, ^{task}p_z^{task}) · RPY(^{task}ϕ_z^{task}, ^{task}ϕ_y^{task}, ^{task}ϕ_x^{task}) · ^{hand}T_{task}^{-1} .
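The per-DOF decision logic above, i.e. the contact matrix entry and the subsequent selection of the position increment, can be sketched as follows (function and parameter names are hypothetical, and the increments dPos/dVel/dForce stand for the outputs of the three planners):

```cpp
#include <cassert>
#include <cmath>

// Contact matrix entry for one DOF: force control engages only if the
// measured force exceeds the threshold in the direction of the setpoint,
// or if the setpoint is zero.
int contactEntry(double measuredF, double setpointF, double thresholdF) {
    bool exceeded = std::fabs(measuredF) > thresholdF;
    bool rightDir = (setpointF == 0.0) ||
                    ((setpointF > 0.0) == (measuredF > 0.0));
    return (exceeded && rightDir) ? 1 : 0;
}

// Choose the position increment for one DOF according to the compliance
// selection matrices (diagonal entries posS/velS/forceS) and the contact
// matrix entry C.
double selectIncrement(int C, int posS, int velS, int forceS,
                       double dPos, double dVel, double dForce) {
    if (posS == 1)                            return dPos;   // position control
    if (velS == 1 || (forceS == 1 && C == 0)) return dVel;   // velocity control
    if (forceS == 1 && C == 1)                return dForce; // force control
    return 0.0;  // no control mode selected for this DOF
}
```

The first test case below reproduces the numerical example from the text: setpoint -18 N, threshold 10 N, measured +12 N leaves C at zero.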
Hand Frame
The procedure for the hand frame and the sensor frame is analogous to that of the task frame. The only change is that all calculations refer to the hand or sensor frame, respectively. I.e. the contact matrix C for the hand frame is calculated as

C_{ii} = { 1 : (|^{hand}F_i| > F_i^{threshold}) ∧ ((sign(F_i^{setpoint}) = sign(^{hand}F_i)) ∨ (F_i^{setpoint} = 0 N))
           0 : else .
The distance to go to reach the goal hand position can be described as

^{hand}T_{<hand>} = ^{world}T_{hand}^{-1} · ^{world}T_{<hand>} ,

so that the position difference vector from the last cycle to the current one is

^{hand}p_i^{hand} = { ^{zero}_{position}p_i^{next} : ^{position}S_{ii} = 1
                      ^{zero}_{velocity}p_i^{next} : (^{velocity}S_{ii} = 1) ∨ ((C_{ii} = 0) ∧ (^{force}S_{ii} = 1))
                      ^{zero}_{force}p_i^{next}    : (C_{ii} = 1) ∧ (^{force}S_{ii} = 1) ,

and the new hand position frame can be computed:

^{robot}T_{hand} = ^{world}T_{robot}^{-1} · ^{world}T_{hand} · Trans(^{hand}p_x^{hand}, ^{hand}p_y^{hand}, ^{hand}p_z^{hand}) · RPY(^{hand}ϕ_z^{hand}, ^{hand}ϕ_y^{hand}, ^{hand}ϕ_x^{hand}).
Sensor Frame
Here, the same procedure is applied to calculate the contact matrix C, the position difference frame to the target sensor frame ^{sensor}T_{<sensor>}, the position difference vector ^{sensor}p⃗^{sensor}, and the new hand position frame:

C_{ii} = { 1 : (|^{sensor}F_i| > F_i^{threshold}) ∧ ((sign(F_i^{setpoint}) = sign(^{sensor}F_i)) ∨ (F_i^{setpoint} = 0 N))
           0 : else

^{sensor}T_{<sensor>} = ^{world}T_{sensor}^{-1} · ^{world}T_{<sensor>}

^{sensor}p_i^{sensor} = { ^{zero}_{position}p_i^{next} : ^{position}S_{ii} = 1
                          ^{zero}_{velocity}p_i^{next} : (^{velocity}S_{ii} = 1) ∨ ((C_{ii} = 0) ∧ (^{force}S_{ii} = 1))
                          ^{zero}_{force}p_i^{next}    : (C_{ii} = 1) ∧ (^{force}S_{ii} = 1)

^{robot}T_{hand} = ^{world}T_{robot}^{-1} · ^{world}T_{sensor} · Trans(^{sensor}p_x^{sensor}, ^{sensor}p_y^{sensor}, ^{sensor}p_z^{sensor}) · RPY(^{sensor}ϕ_z^{sensor}, ^{sensor}ϕ_y^{sensor}, ^{sensor}ϕ_x^{sensor}) · ^{hand}T_{sensor}^{-1}
Joint Space
Joint space control takes an exceptional position within this concept. As already discussed in 4.4.2 on page 37, Cartesian interpolated movements alone do not suffice to provide a universal trajectory planning system. Hence, position (and velocity) control in joint space has also been implemented. The handling in joint space is much easier, of course: only the current position, the current speed, the target position, the maximum speed, the maximum acceleration, and the maximum deceleration per DOF are transmitted to the trajectory planner, which computes the next desired position directly. To obtain the corresponding Cartesian position, the kinematic forward transformation is applied.
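A strongly simplified single-DOF step of this idea, i.e. accelerating toward the target while respecting speed, acceleration, and deceleration bounds, might look as follows. This is only a sketch under the stated assumptions; the actual on-line trajectory planner of chapter 4 is considerably more elaborate:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// One cycle of a simplified single-DOF step: accelerate toward the target
// without exceeding vMax/aMax and without overshooting once the remaining
// distance no longer allows braking with dMax. Returns the next position;
// vel is updated in place.
double nextJointPosition(double pos, double &vel, double target,
                         double vMax, double aMax, double dMax, double dt) {
    double dist = target - pos;
    double dir = (dist >= 0.0) ? 1.0 : -1.0;
    // highest speed from which braking within |dist| is still possible
    double vBrake = std::sqrt(2.0 * dMax * std::fabs(dist));
    double vDesired = dir * std::min(vMax, vBrake);
    double dv = vDesired - vel;
    double dvMax = aMax * dt;                 // acceleration bound per cycle
    if (std::fabs(dv) > dvMax)
        dv = (dv > 0.0) ? dvMax : -dvMax;
    vel += dv;
    return pos + vel * dt;
}
```

Calling this once per control cycle per joint yields a velocity-ramped approach to the target position.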
5.4.5 About Transformations
No matter which frame is used as reference frame, after the hybrid controller has calculated the new Cartesian hand position ^{robot}T_{hand} and the respective joint values, all values have to be transformed into all possible frames, in normal mode as well as in mixed mode. "All values" means all position, velocity, and acceleration values. Where the frames are located is shown on page 59. Regarding figure 2.1 on page 10, the new frames can be calculated easily:

^{world}T_{hand} = ^{world}T_{robot} · ^{robot}T_{hand}
^{world}T_{task} = ^{world}T_{hand} · ^{hand}T_{task}
^{world}T_{sensor} = ^{world}T_{hand} · ^{hand}T_{sensor}
A complete overview of all position, velocity, and acceleration transformations would make for a very voluminous and complex section, so all respective transformations can be found in appendix A.1 on page 103. Here, just the general principle of how to calculate velocity and acceleration values is explained: the respective frame from the last cycle and the one from the current control cycle are taken to compute a difference frame, as exemplarily shown for the task frame speed in normal mode (the index "last" denotes the value from the previous control cycle):

^{task}T_{task} = ^{world}T_{task,last}^{-1} · ^{world}T_{task}    (normal mode)

The translatory fraction of this frame and the RPY angles build the respective position vector ^{task}p⃗^{task}, which is divided by the cycle time to obtain the velocity vector ^{task}v⃗^{task}, i.e. the task frame speed w.r.t. the task frame (normal mode):

^{task}v⃗^{task} = ^{task}p⃗^{task} / t_{cycle}    (normal mode)

The speed also has to be calculated w.r.t. the world frame, i.e. in mixed mode.
Figure 5.5: Top left: world frame, top right: robot base frame, bottom left: hand frame, bottom right: sensor frame
^{world}T_{task*} = ^{world}T_{task} · ^{world}T_{task,last}^{-1}    (mixed mode)

^{world}T_{task*} represents the task frame displacement w.r.t. the world frame (the index "last" again denotes the value from the previous control cycle). The corresponding position vector ^{world}p⃗^{task*} divided by the control cycle time results in the task frame velocity in mixed mode:

^{world}v⃗^{task} = ^{world}p⃗^{task*} / t_{cycle}    (mixed mode)

Two successive velocities, i.e. one from the last cycle and one from the current cycle, can be used to calculate the corresponding acceleration value in normal mode as well as in mixed mode:

^{task}a⃗^{task} = (^{task}v⃗^{task} − ^{task}v⃗_{last}^{task}) / t_{cycle}    (normal mode)

^{world}a⃗^{task} = (^{world}v⃗^{task} − ^{world}v⃗_{last}^{task}) / t_{cycle}    (mixed mode)
As already mentioned, the complete overview is given in appendix A.1 on page 103.
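For a single scalar component, the finite-difference scheme described above can be sketched as follows (the struct is purely illustrative):

```cpp
#include <cassert>
#include <cmath>

// Finite-difference differentiation as used for velocities and
// accelerations: the position difference between two successive cycles
// divided by the cycle time yields the velocity, and two successive
// velocities yield the acceleration.
struct Differentiator {
    double lastPos, lastVel;
    bool first;
    Differentiator() : lastPos(0.0), lastVel(0.0), first(true) {}

    // pos: current position component, tCycle: cycle time in seconds
    void step(double pos, double tCycle, double &vel, double &acc) {
        if (first) { lastPos = pos; lastVel = 0.0; first = false; }
        vel = (pos - lastPos) / tCycle;
        acc = (vel - lastVel) / tCycle;
        lastPos = pos;   // remember values for the next control cycle
        lastVel = vel;
    }
};
```

With a 4 ms cycle, a constant position increment of 0.4 mm per cycle yields a velocity of 100 mm/s and, once settled, zero acceleration.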
5.4.6 Checking Stop Conditions
After all position, velocity, and acceleration values are computed, the check of the stop conditions starts. Section 3.4 on page 21 has already introduced the functionality of stop conditions. All three exit conditions, i.e. the default stop condition (see 5.4.3 on page 52), the skill primitive exit condition (5.4.9 on page 65), and the user exit condition, are OR-conjunct. Each one is represented by a string that has to comply with a certain notation, which is explained in the following. In particular, the user has to set up the exit condition according to these demands. Each comparison contains a value from the hybrid controller, whose computation is described in the section above, an operator, and a real number as comparison value. To nest several comparisons, reverse Polish notation is used. A semicolon separates the elements within the exit condition string. It does not matter whether upper or lower case letters are used. First, the specification of a value, including the optional computation of its absolute value or of an average over a certain time interval, is discussed. The first character of such an element determines the reference frame:
Reference Character
Task frame T
World frame W
Hand frame H
Sensor frame S
Joint space J
The second character chooses a magnitude within this frame, either in mixed mode or in
normal mode:
Magnitude Normal mode Mixed mode
Position X Y
Velocity V W
Acceleration A B
Force/torque F F
Time derivative force/torque D D
Assuming that the robot's joints are numbered from one to six, the respective DOF is determined as follows:
Cartesian space Joint Space Character
Translatory x 1 0
Translatory y 2 1
Translatory z 3 2
Rotatory x 4 3
Rotatory y 5 4
Rotatory z 6 5
The fourth character determines the function to be applied to the value described by the first three characters:

Function                              Character
absolute value |a|                    A
average value of a                    V
absolute value of the average of a    B
average of the absolute value |a|     W
no function                           N
From the fifth character on, a floating point number given in the format

[+|−] a [.b]

follows. The sign is optional, as are the decimal places b and the decimal point, i.e. at least one character is necessary for a. This number determines the time interval over which the average value is calculated. If no average calculation is required (fourth character "N" or "A"), this value is redundant. The maximum string length of a real number is determined in the file "SPControllerAccessories.h" and is set to 25 by default. For a better understanding, some examples are given:
"TY2N0"     →  z position of the task frame w.r.t. the world frame (mixed mode)
"SF0V100"   →  force F_x in the sensor frame, averaged over 100 ms
"TX2N0"     →  z position of the task frame w.r.t. the task frame (normal mode)
"HF5W50.0"  →  |torque τ_z in the hand frame|, averaged over 50 ms
"TX1A0"     →  |y position of the task frame w.r.t. the task frame|
"TF2B12.34" →  |force F_z in the task frame|, averaged over 12.34 ms
"WV4N15.16" →  angular velocity ω_y of the hand frame w.r.t. the world frame
"JX5N0"     →  position of joint 6
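Such an element string can be decoded mechanically. The following sketch illustrates the five-part format; the struct and function names are hypothetical and not taken from the actual implementation:

```cpp
#include <cassert>
#include <cctype>
#include <cstdlib>
#include <string>

// Decoded form of one exit-condition element such as "TF2B12.34".
// Names are illustrative only.
struct ConditionElement {
    char frame;      // 'T', 'W', 'H', 'S' or 'J'
    char magnitude;  // 'X','Y','V','W','A','B','F' or 'D'
    int  dof;        // 0..5
    char function;   // 'A','V','B','W' or 'N'
    double interval; // averaging interval in ms (redundant for 'N' and 'A')
};

// Parse one element string; returns false on malformed input.
bool ParseElement(const std::string &s, ConditionElement &e) {
    if (s.size() < 5) return false;
    e.frame     = std::toupper(s[0]);   // case does not matter
    e.magnitude = std::toupper(s[1]);
    e.dof       = s[2] - '0';
    e.function  = std::toupper(s[3]);
    e.interval  = std::atof(s.c_str() + 4);  // "[+|-]a[.b]" number
    if (std::string("TWHSJ").find(e.frame) == std::string::npos) return false;
    return e.dof >= 0 && e.dof <= 5;
}
```

For instance, ParseElement("TF2B12.34", e) yields the task frame, the force magnitude, translatory z, the averaged-absolute function, and a 12.34 ms interval.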
The representation of the timeout value constitutes an exception: the character "O"
followed by a real number, whose format equals the one above, determines the user
timeout value in milliseconds. The timeout check always refers to the beginning of a
skill primitive. If the value is greater than the one from the default exit condition, the
user timeout value declaration is redundant. Beside the values from the hybrid controller,
the following operators need to be specified:
Operator Character(s)
> ’>’
< ’<’
≥ ’>=’ or ’=>’
≤ ’<=’ or ’=<’
= ’=’ or ’==’
The length of the real number that acts as comparison value to the value from the
controller is also limited to 25 characters. Its format is identical to the one from the
time interval value needed for average calculations. The results of comparisons can be
concatenated by AND and OR conjunctions, which can be specified as:
Conjunction   Character(s)
AND           '&' or '&&'
OR            '|' or '||'

As already mentioned, a semicolon is used to separate elements within an exit condition.
With that, all elements occurring in exit conditions have been introduced. Finally, for a
better understanding, the string for the example from page 22 is given:
( ( avg_20ms(task F_z) < -50 N )
  ∨ ( ( hand p^hand_x > 40 mm ) ∧ ( |world v^task_x| > 75 mm/s ) )
  ∨ ( t ≥ 5000 ms ) )

"TF2V20;-50;<;HX0N0;40;>;TY0A0;75;>;&;O5000;|;|;"
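The reverse Polish structure of such a condition string can be evaluated with a simple boolean stack. The following sketch assumes a caller-supplied lookup function for the controller values and well-formed input; all names are illustrative, not the thesis implementation:

```cpp
#include <cassert>
#include <cctype>
#include <cstdlib>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

// Evaluate an exit-condition string in reverse Polish notation.
// 'lookup' maps a value element (e.g. "TF2V20") to the current controller
// value; 'elapsedMs' is the time since the skill primitive started.
bool EvaluateExitCondition(const std::string &cond,
                           const std::function<double(const std::string&)> &lookup,
                           double elapsedMs) {
    std::vector<std::string> tok;
    std::stringstream ss(cond);
    std::string t;
    while (std::getline(ss, t, ';'))
        if (!t.empty()) tok.push_back(t);     // split at the semicolons

    std::vector<bool> stack;
    for (size_t i = 0; i < tok.size(); ++i) {
        const std::string &s = tok[i];
        if (s == "&" || s == "&&") {                   // AND conjunction
            bool b = stack.back(); stack.pop_back();
            bool a = stack.back(); stack.pop_back();
            stack.push_back(a && b);
        } else if (s == "|" || s == "||") {            // OR conjunction
            bool b = stack.back(); stack.pop_back();
            bool a = stack.back(); stack.pop_back();
            stack.push_back(a || b);
        } else if (std::toupper(s[0]) == 'O') {        // user timeout element
            stack.push_back(elapsedMs >= std::atof(s.c_str() + 1));
        } else {                                       // value;number;operator
            double lhs = lookup(s);
            double rhs = std::atof(tok[i + 1].c_str());
            const std::string &op = tok[i + 2];
            i += 2;
            bool r = false;
            if (op == "<")  r = lhs < rhs;
            else if (op == ">") r = lhs > rhs;
            else if (op == "<=" || op == "=<") r = lhs <= rhs;
            else if (op == ">=" || op == "=>") r = lhs >= rhs;
            else if (op == "=" || op == "==")  r = lhs == rhs;
            stack.push_back(r);
        }
    }
    return !stack.empty() && stack.back();
}
```

Applied to the example string above, a force reading below -50 N makes the whole condition true regardless of the other comparisons.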
Due to the average calculation, two problems arise: the first one concerns any reset of
force/torque values, the second one any task frame change.
The force/torque values measured before a sensor reset cannot be used to compute an
average value, since they are not valid anymore. Since force control is possible w.r.t. the
hand frame, the sensor frame, and the task frame, all three frames are affected. The same
problem appears if the task frame changes. All values are transferred into the task frame,
and after a change another transformation is applied, i.e. the values calculated by the
old transformation are invalid as well. One possibility would be to transform all past
buffer values to the new force/torque values or task frame values, respectively. But since
this procedure would violate all real-time demands, it cannot be applied. Another solution
is to store the point of time at which the value reset or the task frame change was executed
and to inhibit any comparisons concerning these values. This solution is implemented.
The average calculation engages as soon as the elapsed time since the reset/change is
greater than the time interval for calculation. To let the user know at which point
of time the average calculation of the affected values becomes enabled at the latest, the
user application can request the buffer time (see 5.4.12 on page 69).
5.4.7 Answering the User Application
If one of the three stop conditions becomes true, the skill primitive controller generates
transition setpoints and sends a message named "SkillPrimResult" to the user applica-
tion. The setpoint generation, which is applied if not all setpoints have been reached
(i.e. if the skill primitive exit condition has not become true), is described in 3.4 on
page 25. During the check of the default and the user stop condition, every comparison
result as well as all conjunctions are stored in a result string. Regarding the example user
exit condition from above: if the hand frame moves more than 40 mm in x direction and
the absolute value of the task frame speed exceeds 75 mm/s, the string looks like

"011&0||" .

This way the user application is able to determine which part of the user exit condition
became true. If the system exit condition became true, the user application can also
detect which part was fulfilled, since it can interrogate the default exit condition string
(see 5.4.12 on page 69). Beside this information, the results of the user and the default
exit condition are sent as integer values (i.e. '0' or '1'). If the skill primitive exit
condition becomes true, i.e. all setpoints are reached, the user application is also
acknowledged, but in this case the result values of the user and the system stop condition
are both zero. Hence, the user application can easily determine which stop condition
was fulfilled. In any case, the setpoints for the hybrid controller are transmitted to
the user application, i.e. if the skill primitive exit condition became true, the old
setpoints of the skill primitive remain. For any error case, a string containing an error
message is also sent to the user application. In the current implementation, this parameter
only contains data if the formulation of the user exit condition was erroneous. As a result,
the "SkillPrimResult" message, whose type is GENERAL, contains the following parameters:
• User stop condition result string
• Default stop condition result string
• Error data string
• User stop condition result value
• Default stop condition result value
• New compliance selection matrix
• New position setpoint
• New force setpoint
Based on this data, the user application can evaluate which skill primitive to send next.
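The listed parameters map naturally onto a plain record. The following sketch uses assumed field names; the actual message is assembled and transported via MiRPA:

```cpp
#include <cassert>
#include <string>

// Assumed layout of the "SkillPrimResult" message parameters;
// field names are illustrative.
struct SkillPrimResult {
    std::string userStopConditionResult;     // e.g. "011&0||"
    std::string defaultStopConditionResult;
    std::string errorData;                   // non-empty only for erroneous
                                             // user exit conditions
    int  userStopConditionTrue;              // 0 or 1
    int  defaultStopConditionTrue;           // 0 or 1
    short complianceSelection[6];            // new compliance selection matrix
    double positionSetpoint[6];
    double forceSetpoint[6];
};

// Both result values being zero means the skill primitive exit condition
// (all setpoints reached) terminated the primitive.
inline bool AllSetpointsReached(const SkillPrimResult &r) {
    return r.userStopConditionTrue == 0 && r.defaultStopConditionTrue == 0;
}
```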
5.4.8 Error Handling
Default Exit Condition Becomes True
During the execution of a skill primitive, several errors might occur. In general, errors
are supposed to be detected by the default exit condition. Here, the controller generates
new setpoints in the reference frame the last skill primitive was sent in (see 3.4 on page
25). These setpoints are applied to prevent any damage. Based on the answer sent to the
user application, the user application can initiate damage-preventive steps. But if any
movement in joint space or w.r.t. the world frame is applied, force control is inhibited.
If the force or the torque reaches the sensor's limits during these movements, the new
setpoints are position based and might let the force/torque increase even further. Hence,
the user application must react immediately; otherwise the manipulator including the
force/torque sensor might be damaged. For this case, the implemented concept does
not suit that well. A suggestion for future work to bypass this problem is given in 8.5 on
page 98.
Error at the Inverse Kinematics
Beside the above mentioned errors, a problem with the inverse kinematics can arise if
the robot needs to reconfigure itself (see 4.4.2 on page 37) or has reached the workspace
limit. If such a case occurs, the robot is stopped jerkily and remains in this position
for security and stability reasons. Unfortunately, the current implementation
does not send any acknowledge message to the user application. This is a second item to
be discussed in 8.5 on page 98. After the immediate stop, the speed values are, of course,
set to zero. Sometimes the robot can accelerate again, since the steps are very small
at the beginning of the new acceleration. Depending on the problem with the inverse
kinematics, the robot can sometimes leave a singularity, but usually the skill primitive
does not stop until the timeout value is exceeded.
Invalid Skill Primitive Parameter
Of course, the user can set up erroneous skill primitives. E.g. a desired position might
be out of the workspace, force control might be applied in joint space or w.r.t. the world
frame, some DOFs might be given w.r.t. one frame while others might be given w.r.t.
another one, etc. The skill primitive controller keeps the old setpoints in such a case, to
guarantee stability. Old setpoints means: the old setpoints from the last skill primitive, if
the skill primitive exit condition became true, or the self-generated transition setpoints,
if the user or the system exit condition became true.
Erroneous Stop Condition String
Naturally, a human programmer might make a mistake while setting up the user stop
condition string. This is not recognized straight after the reception of the skill primitive,
but within the first control cycle after the reception of the manipulation primitive. In this
case, new setpoints are generated as presented in 3.4 on page 25 and the user application
is acknowledged immediately.
Error Messages
This documentation is not supposed to cover all possible error cases. The skill primitive
controller writes all occurring errors to the error output stream of the QNX operating
system, so that the user can identify the respective error. An overview of all error
messages is given in appendix B on page 109.
5.4.9 Reception of a Skill Primitive
To send a skill primitive to the skill primitive controller, the user application has to set up
a message called "SetFPV" (Set Force/torque Position Values). This message contains 13
parameters: the twelve ones from 3.2 on page 14 plus a floating point value specifying the
jerk. The jerk is currently not considered within the trajectory planning concept; further
details are presented in 8.2 on page 94. After the reception of a skill primitive, several
checks are applied to ensure that the skill primitive parameters are valid. Depending on
the transmitted compliance selection matrix S, the individual selection matrices positionS,
velocityS, and forceS are generated.
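This splitting of S can be sketched as classifying each diagonal entry by the controller it selects. The constant names follow the setpoint specification of the class "skillprim"; their numeric encoding here is an assumption for illustration only:

```cpp
#include <cassert>

// Assumed numeric encoding of the compliance selection values
// (the actual values are defined by the implementation).
enum {
    SP_POS = 0, SP_POS_JOINT, SP_POS_WORLD, SP_POS_TASK, SP_POS_SENSOR,
    SP_POS_HAND, SP_POSM, SP_POSM_JOINT, SP_POSM_WORLD, SP_POSM_TASK,
    SP_POSM_SENSOR, SP_POSM_HAND, SP_POS_HOLD, SP_SPEED, SP_FORCE
};

// Derive the diagonal 0/1 selection matrices positionS, velocityS and
// forceS from the transmitted compliance selection matrix S.
void SplitSelectionMatrix(const short S[6],
                          short positionS[6], short velocityS[6], short forceS[6]) {
    for (int i = 0; i < 6; ++i) {
        // All position specification variants select the position controller.
        positionS[i] = (S[i] >= SP_POS && S[i] <= SP_POS_HOLD) ? 1 : 0;
        velocityS[i] = (S[i] == SP_SPEED) ? 1 : 0;
        forceS[i]    = (S[i] == SP_FORCE) ? 1 : 0;
    }
}
```

Each DOF thus ends up with exactly one of the three controllers selected, which is what the hybrid controller expects.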
Task Frame Change
Another problem arises if the task frame gets changed. Here, all values (position, velocity,
acceleration, and force) have to be transformed into the new task frame immediately to
ensure a continuous motion. In particular, this is important if the task frame changes
during a movement. For this transformation, all values are first transformed back into the
hand frame, and a second transformation determines all values for the new task frame.
Past values, which might be needed for average calculations, are not transformed
(see 5.4.6 on page 62).
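For a free vector such as a velocity, this two-step transformation can be sketched with 3x3 rotation matrices. This is an illustration under simplifying assumptions; an actual implementation would work with the full homogeneous transformations:

```cpp
#include <cassert>
#include <cmath>

struct Mat3 { double m[3][3]; };
struct Vec3 { double v[3]; };

// r * v
Vec3 Rotate(const Mat3 &r, const Vec3 &v) {
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.v[i] += r.m[i][j] * v.v[j];
    return out;
}

// r^T * v : rotate a vector with the inverse (transpose) of r.
Vec3 RotateInverse(const Mat3 &r, const Vec3 &v) {
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.v[i] += r.m[j][i] * v.v[j];
    return out;
}

// Transform a free vector given in the old task frame into the new task
// frame: first back into the hand frame, then into the new task frame.
// handR_old / handR_new are the rotations of the old/new task frame w.r.t.
// the hand frame (assumed available from the kinematic model).
Vec3 OldTaskToNewTask(const Mat3 &handR_old, const Mat3 &handR_new,
                      const Vec3 &oldV) {
    Vec3 inHand = Rotate(handR_old, oldV);    // old task -> hand
    return RotateInverse(handR_new, inHand);  // hand -> new task
}
```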
Skill Primitive Execution Time
Since the reception of a skill primitive always marks the beginning of a new one, the skill
primitive execution time is reset and all positions are stored in mixed mode and in normal
mode. This time represents the comparison operand for any timeout value within exit
conditions.
Compute the Position Setpoint
Afterwards, the position setpoint is interpreted. As already introduced in 3.3 on page 18,
the position setpoint can be determined w.r.t. several coordinate systems to comply with
any exit condition. Here, a respective transformation is applied to enable a movement
w.r.t. one defined center of compliance. Since the variety of position specifications is
rather large (see 3.3 on page 18), all transformations are summarized in appendix
A.2 on page 107.
Generation of the Skill Primitive Stop Condition
After the position setpoint and the compliance selection matrix are calculated, the skill
primitive stop condition can be set up. The specification of the default stop condition
and the user one has already been presented. According to the same scheme, figure 5.6 shows
the flow chart to generate the skill primitive exit condition. First, stop conditions for all
six DOFs are concatenated, i.e. depending on the compliance selection matrices positionS,
velocityS, and forceS, the hold time, and the respective setpoints, the first part is set up.
Of course, all comparisons are applied w.r.t. the reference frame of the skill primitive.
Since there are always two solutions for the RPY angles, these have to be added as well
and OR-conjoined with the first solution. If the manipulation primitive has been set
up in joint space, this second part is not applied. The third part of the skill primitive
exit condition concerns only position controlled DOFs. The precondition for position
controlled DOFs to have reached their setpoint is that the velocity is zero and the current
position equals the target one. But if the velocity is unequal to zero, the final state is
not reached; hence, the velocities of position controlled DOFs are compared to zero. As a
result, such an exit condition could look like

"TF0V100;0;=;TF1V100;0;=;TX2V100;-100;=;TX3V100;0;=;TX4V100;0;=;TX5V100;0;=;&;&;
TX3V100;180;=;TX4V100;180;=;TX5V100;-180;=;&;&;|;&;&;&;
TV2N0;0;=;&;TV3N0;0;=;&;TV4N0;0;=;&;TV5N0;0;=;&;" .
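The third part of this string, the velocity-zero comparisons for all position controlled DOFs, can be generated as follows. This is an illustrative sketch of one branch of the generation scheme, not the full flow chart:

```cpp
#include <cassert>
#include <string>

// Generate the third part of the skill primitive exit condition: for every
// position controlled DOF the respective velocity (normal mode, character
// 'V') is compared to zero and AND-conjoined, following the element format
// of section 5.4.6. The frame character is the skill primitive's reference.
std::string VelocityZeroPart(char frameChar, const short positionS[6]) {
    std::string cond;
    for (int dof = 0; dof < 6; ++dof) {
        if (!positionS[dof]) continue;
        cond += frameChar;
        cond += 'V';                         // velocity, normal mode
        cond += static_cast<char>('0' + dof);
        cond += "N0;0;=;";                   // no averaging, compare to zero
        cond += "&;";                        // AND-conjoin with prior parts
    }
    return cond;
}
```

With the task frame as reference and DOFs 2 to 5 position controlled, this reproduces the tail of the example string above.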
Figure 5.6: Flow chart to generate the skill primitive exit condition
Written out:

  ( avg_100ms(task F_x) =ε 0 N ) ∧ ( avg_100ms(task F_y) =ε 0 N )
∧ ( avg_100ms(task p^task_z) =ε -100 mm )
∧ ( ( ( avg_100ms(task φ^task_x) =ε 0° ) ∧ ( avg_100ms(task φ^task_y) =ε 0° )
      ∧ ( avg_100ms(task φ^task_z) =ε 0° ) )
    ∨ ( ( avg_100ms(task φ^task_x) =ε 180° ) ∧ ( avg_100ms(task φ^task_y) =ε 180° )
      ∧ ( avg_100ms(task φ^task_z) =ε -180° ) ) )
∧ ( task v^task_z =ε 0 mm/s ) ∧ ( task ω^task_x =ε 0°/s )
∧ ( task ω^task_y =ε 0°/s ) ∧ ( task ω^task_z =ε 0°/s )

where avg_100ms denotes the average over 100 ms and =ε equality within the tolerance ε.
Store Positions
Since the exit condition might contain comparisons to any position at the beginning of the
skill primitive execution, all positions (task frame w.r.t. world frame, hand frame w.r.t.
world frame, sensor frame w.r.t. world frame, and world frame translationally displaced into
the hand frame) have to be stored. By notation, these positions are referred to as

worldT_task , worldT_hand , worldT_sensor and worldT_world* .
Check if Parameters are in Range
Regarding the initialization file (5.4.3 on page 48), all parameters have to be within a
certain range. If any parameter is out of this range, its value is set to the maximum allowed
one.
5.4.10 Resetting Force/Torque Values
Prior to any set of skill primitives using force/torque control, the force values have to be
reset to create a reference point; otherwise the setpoint would be meaningless, since there
is no point of origin. By sending the GENERAL message "FTSReset" (Force/Torque
Sensor Reset), all force/torque values in the task frame are reset to zero. As a consequence,
the force/torque values in the sensor frame as well as the ones in the hand frame have to
be reset too. As a second consequence, the old force/torque difference values (setpoint values
minus current values) have to be recalculated to ensure a continuous force/torque control
behavior.
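The reset can be sketched as an offset update together with a recalculation of the control difference. This is a simplification to a single axis with illustrative names:

```cpp
#include <cassert>

// Force/torque reset as an offset: after "FTSReset" the raw sensor value
// at reset time becomes the new zero reference, and the controller's
// difference value (setpoint minus current) follows accordingly.
struct ForceChannel {
    double offset = 0.0;
    double setpoint = 0.0;

    // Value reported to the controller, relative to the reference point.
    double Current(double raw) const { return raw - offset; }

    // Reset: the current raw reading becomes the zero reference, so the
    // control difference stays consistent across the reset.
    void Reset(double raw) { offset = raw; }

    // Control difference used by the force controller.
    double Difference(double raw) const { return setpoint - Current(raw); }
};
```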
5.4.11 Request any Control Values
To enable the user application to request any magnitudes from the hybrid controller,
two solutions come into question: the first one is to send the values as answer parameters
to the user application when a skill primitive ends; the second one lets the user application
request individual values at any time. With the first solution, the answer
message would become too voluminous (240 values, 8 bytes each), which would slow down
the system and would be quite inconvenient for the user application programmer, since
he might choose the wrong values to determine the next skill primitive of the net. Another
point concerning the first solution is that real-time demands might be violated. Hence,
the second solution has been implemented. The user application sends a REQUEST message
named "GetFPValues" (Get Force/torque Position Values), to which a string that
specifies the requested values is attached. The answer always consists of six floating point
values, which are taken from the hybrid controller. The value determination string
corresponds to the format from 5.4.6 on page 60, with the exception that the DOF
character is dropped. Here are some examples:
"TYN0"     →  position vector of the task frame w.r.t. the world frame (mixed mode)
"SFV100"   →  force/torque vector in the sensor frame, averaged over 100 ms
"TXN0"     →  position vector of the task frame w.r.t. the task frame (normal mode)
"HFW50.0"  →  absolute values of the force/torque vector in the hand frame, averaged over 50 ms
"TXA0"     →  absolute values of the position vector of the task frame w.r.t. the task frame
"TFB12.34" →  absolute values of the force/torque vector in the task frame, averaged over 12.34 ms
"WVN15.16" →  velocity vector of the hand frame w.r.t. the world frame
"JXN0"     →  joint position vector
After the attached string is analyzed, the respective vector is sent as a parameter of an
ANSWER message to the user application.
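Building such request strings can be wrapped in a small helper. This is a hypothetical convenience function, not part of the original interface:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Build the value determination string for "GetFPValues" requests: the
// same element format as in the exit conditions, but without the DOF
// character. The decimal part is optional per the number format.
std::string MakeValueRequest(char frame, char magnitude, char function,
                             double intervalMs) {
    std::string s;
    s += frame;
    s += magnitude;
    s += function;
    char buf[32];
    if (intervalMs == static_cast<long>(intervalMs))
        std::snprintf(buf, sizeof(buf), "%ld", static_cast<long>(intervalMs));
    else
        std::snprintf(buf, sizeof(buf), "%g", intervalMs);
    s += buf;
    return s;
}
```

For example, MakeValueRequest('S', 'F', 'V', 100) produces the request string "SFV100" from the examples above.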
5.4.12 Request Buffer Time and Default Exit Condition
The sections above assumed that the user application knows the buffer time for
all hybrid control values as well as the system stop condition string. Since these values
depend on the initialization file (5.4.3 on page 48), the user application has to be able to
request them. The buffer time is mainly required to limit the value for any average
calculation. Besides, the user also knows the latest point of time after a task frame
change or a force/torque value reset at which all values are definitely provided again. This
time can be interrogated by the message "GetBufferTime", whose type is REQUEST.
The ANSWER message contains the time value as a floating point number. Regarding
the default exit condition, the user application always receives the result of the user exit
condition as well as the one of the default exit condition. To interpret the default
exit condition's result, the condition must be known by the user application. Hence,
the user application can (initially) request the string of the system exit condition by
the REQUEST message "GetDefaultExitConditionString". The ANSWER message
carries a string parameter containing the default stop condition.
5.4.13 Ending the Skill Primitive Controller

The user process can end the skill primitive controller by sending a GENERAL message
called "EndSPCtrl" (End Skill Primitive Controller). If this message is received, the
skill primitive controller ends immediately, i.e. no more control cycles are executed. The
joint position controller ends right after the trigger signal is turned off. All possibly
opened files are closed. The user application should only send this message if the robot
is already standing still.
5.4.14 The Output Files
Three different output files can be created for debugging, setpoint and/or threshold seek-
ing, off-line control analysis, etc. These files, namely the event output file, the value
output file, and the trajectory output file, are introduced in this section.
The General Output File
This event output file documents all events concerning the skill primitive controller,
starting with initial data about the record time and date. All values taken from the
initialization file are stored. Afterwards, all received messages, all sent messages, and all
occurred errors are documented. If a skill primitive is received, its parameters are adapted
for the hybrid controller. This log file contains both the original data from the user
application and the adapted values. If any problem with the inverse kinematic model
arises, the respective frames are recorded. The very first lines of such a file are shown
exemplarily in appendix C.1 on page 118.
The Value Output File
Whereas the general output file usually remains small, because data is added only at
certain events, this value output file might grow very fast, since values from the hybrid
controller are written every cycle. For this reason, it is highly recommended to
increase the cycle time for the generation of this file. Otherwise real-time demands might
not be fulfilled and the QNX system might hang up. This file is supposed to be analyzed
by software such as Microsoft Excel or Origin by OriginLab. In this first version of the skill
primitive controller, only some important values are written into this file:
• Reference frame
• positionS
• forceS
• velocityS
• taskC
• taskF (current force/torque vector in the task frame)
• F_setpoint - taskF (force/torque difference)
• F_setpoint (force/torque setpoint)
• F_threshold (force/torque threshold)
• worldp_hand (hand frame position w.r.t. the world frame)
• worldv_task (task frame velocity w.r.t. the world frame)
• jointp (joint positions)
• jointv (joint velocities)
• Position setpoint
• Velocity setpoint
Of course, this provides only a very inflexible interface to analyze the robot's behavior.
Improvements are given in 8.5 on page 98. A sample output file is illustrated in appendix
C.2 on page 120.
The Trajectory Planner Output File
The third and least important output file logs all calculations of the trajectory planner.
As recommended for the value output file, the cycle time should be raised here
as well.
int StartSkillPrimController(connection *MiRPAPtr);
int EndSkillPrimController();
int SendSkillPrim();
int ResetFTSValues();
int GetCurrentFPValues(double *FPValues,
                       const char RequestedValuesString[MAX_VALUE_LEN+3+1]);
double GetSingleValue(const char RequestedValuesString[MAX_VALUE_LEN+3+1],
                      const short Number);
int GetCurrentTaskPos(double TP[6]);
int GetCurrentWorldPos(double WP[6]);
int GetCurrentTaskFor(double TF[6]);
int GetCurrentWorldSpd(double WS[6]);
int GetCurrentJointPos(double JP[6]);
double GetCurrentForceTX();
double GetCurrentForceTY();
double GetCurrentForceTZ();
double GetCurrentForceRX();
double GetCurrentForceRY();
double GetCurrentForceRZ();
void SetReferenceTask();
void SetReferenceWorld();
void SetReferenceHand();
void SetReferenceSensor();
void SetReferenceJoint();
void SetExitCondition(const char EC[EXIT_CONDITION_LENGTH]);
void SetJerk(const double J);
void SetHoldTime(const double T);
void SetReference(const short R);
void SetPosition(const double P[6]);
void SetPosition(const double P0, const double P1, const double P2,
                 const double P3, const double P4, const double P5);
void SetSMatrix(const short S);
void SetSMatrix(const short S[6]);
void SetSMatrix(const short S0, const short S1, const short S2,
                const short S3, const short S4, const short S5);
void SetThreshold(const double T[6]);
void SetThreshold(const double T0, const double T1, const double T2,
                  const double T3, const double T4, const double T5);
void SetForce(const double F[6]);
void SetForce(const double F0, const double F1, const double F2,
              const double F3, const double F4, const double F5);
...
Figure 5.7: Methods of the class ”skillprim” (1)
5.5 The class "skillprim"

This class represents the actual user interface; it is applied within the user application. All
details from the above sections are hidden by this class.

...
void SetDeceleration(const double D);
void SetDeceleration(const double D[6]);
void SetDeceleration(const double D0, const double D1, const double D2,
                     const double D3, const double D4, const double D5);
void SetAcceleration(const double A);
void SetAcceleration(const double A[6]);
void SetAcceleration(const double A0, const double A1, const double A2,
                     const double A3, const double A4, const double A5);
void SetSpeed(const double S);
void SetSpeed(const double S[6]);
void SetSpeed(const double S0, const double S1, const double S2,
              const double S3, const double S4, const double S5);
void SetTaskFrame(const double TF[6]);
void SetTaskFrame(const double TF0, const double TF1, const double TF2,
                  const double TF3, const double TF4, const double TF5);
short GetReference();
short IsDefaultExitConditionTrue();
short IsUserExitConditionTrue();
double GetBufferTime();
int GetDefaultEC();

Figure 5.8: Methods of the class "skillprim" (2)

The complete message handling via MiRPA is embedded here, i.e. the user merely has to
set up the desired skill primitives (i.e. a skill primitive net) and does not have to pay
attention to any control subjects. Starting and ending the skill primitive controller,
setting up skill primitives, sending skill primitives, interrogating values, and analyzing
answers from the skill primitive controller are all supported by the "skillprim" class,
whose method prototypes are shown in figure 5.7 and figure 5.8. Of course, there are
also some attributes, but for understanding how to apply the skill primitive concept,
the attributes are not of interest. All methods are detailed in the following. To determine
a reference frame, table 5.4 shows all implemented definitions. According to 3.3 on
page 18, the setpoint of a skill primitive
Reference frame   Value
Task frame        SP_REF_TASK
World frame       SP_REF_WORLD
Hand frame        SP_REF_HAND
Sensor frame      SP_REF_SENSOR
Joint space       SP_REF_JOINT

Table 5.4: Reference frame definitions of the class "skillprim"
can be determined w.r.t. several reference frames. Hybrid control is only enabled if
all setpoints refer to the same frame. Force/torque control can only be engaged w.r.t.
the hand, sensor, or task frame. Positions can be determined in many different ways.
Table 5.5, which corresponds to the one from section 3.3, shows all supported possibilities.

Magnitude       Setpoint for                           Value
Position        reference_p^<reference>                SP_POS
                joint_p                                SP_POS_JOINT
                world_p^<hand>                         SP_POS_WORLD
                task_p^<task>                          SP_POS_TASK
                sensor_p^<sensor>                      SP_POS_SENSOR
                hand_p^<hand>                          SP_POS_HAND
                joint/world/world*_p^<reference>       SP_POSM
                joint_p                                SP_POSM_JOINT
                world*_p^<hand>                        SP_POSM_WORLD
                world_p^<task>                         SP_POSM_TASK
                world_p^<sensor>                       SP_POSM_SENSOR
                world_p^<hand>                         SP_POSM_HAND
                reference_p^reference (hold)           SP_POS_HOLD
Velocity        TF, SF, HF: reference_v^reference      SP_SPEED
                WF: world_v^hand
                JS: joint_v
Force/torque    reference_F                            SP_FORCE

WF: World frame, TF: Task frame, SF: Sensor frame, HF: Hand frame, JS: Joint space

Table 5.5: Setpoint specification of the class "skillprim"

Any superscript "<reference>" term in the table represents the current reference frame.
These values are entered in the compliance selection matrix to determine the
respective controller and the kind of position specification. For a better understanding,
an example program code is presented in section 5.6 on page 79.
StartSkillPrimController
Before any skill primitive can be executed, the skill primitive controller has to be started.
Prior to this, the user application has to be connected to MiRPA; the connection
handler is transmitted to this method. To enable communication between the skill
primitive controller and the user application, two messages have to be registered:

SPControllerReady After the skill primitive controller is started, it takes a certain
time until the robot is ready for operation. To inform the user application
about this, this GENERAL message is sent by the skill primitive controller.

SkillPrimResult When the stop condition of a skill primitive becomes true, the user
application receives a GENERAL answer message containing the stop condition's
result, possible error messages, and the new setpoints including force setpoint, po-
sition setpoint, and compliance selection matrix.

Subsequent to the registration of these messages, the message "StartSPCtrl" is sent
to MiRPA and the skill primitive control process starts. To enable the robot, the joint
position controller is started and the user has to activate the robot. Afterwards, the
skill primitive controller sends the "SPControllerReady" message, which means that the
skill primitive controller is ready to receive skill primitives. The system stop condition
string as well as the buffer time of the hybrid controller are finally interrogated by this
method. The result value of this method equals zero for a successful execution.
EndSkillPrimController
Here, the message "EndSPCtrl" is sent to the skill primitive controller, which stops the
robot jerkily, ends the joint position controller, and finally ends itself. This message should
only be transmitted if the robot is standing still.
SetReference
This method’s argument sets up the reference frame of a skill primitive. The argument
can be chosen from table 5.4. The following five methods use this functionality without
any argument.
SetReferenceTask
Sets the task frame as the reference frame.
SetReferenceWorld
If the world frame is supposed to be reference frame of a skill primitive, this method is
applied.
SetReferenceHand
”SetReferenceHand” sets the hand frame as reference frame.
SetReferenceSensor
Movements w.r.t. the sensor frame are rarely applied. This method makes the sensor
frame the reference frame.
SetReferenceJoint
Free space movements, i.e. gross motion, should be executed in joint space.
”SetReferenceJoint” sets up the joint space to move in.
SetSMatrix
Table 5.5 shows all possibilities for the values of the diagonal compliance selection matrix.
These values can be set either as one single value, which is then valid for all six DOFs, as
a six-element integer array, or as six individual values. Regarding the position setpoints,
it is important to choose the right reference system for the position determination. Of
course, the position setpoints have to be set up in correspondence with these values.
SetTaskFrame
If a skill primitive is executed w.r.t. the task frame, the task frame has to be set up, of
course. But the task frame can also be used to monitor values. The values are transferred
either via an array or as six single double-precision floating point values.
SetPosition
In correspondence with the setup of the compliance selection matrix, the position setpoint
can be arranged by this function. The setpoint can be determined either by a six-element
array or by six single double-precision floating point values.
SetSpeed
The speed can also be set up in three different ways: as a single value for all DOFs, as an
array, or as six individual values. The function of these values corresponds to the
descriptions from the skill primitive chapter.
SetAcceleration
The acceleration value for the trajectory planner is transmitted by a call of this method.
As for the velocity setpoint, three different formats are supported.
SetDeceleration
Analogous to the acceleration values, the deceleration values are set up by this function.
SetJerk
The time derivative of the acceleration might be used for future trajectory planning con-
cepts. This value is currently ignored, but to be prepared for future extensions, the
interface already provides this feature. In contrast to the positions, velocities, and
accelerations, this value is given one-dimensionally instead of six-dimensionally.
SetForce
Force and torque setpoint values are arranged by this function. The values are limited by
the maximum sensor load. If the manipulator moves in free space, the controller heads for
contact in the respective direction, i.e. for a negative force setpoint, the robot accelerates
in the positive direction. A setpoint of zero Newton does not cause any approach movement.
SetThreshold
This function transmits the thresholds for force control activation. The values are
given either as six individual values or as a six-element array. Since there are
different minimum values for these thresholds, the user should keep in mind that the
units differ for the force and the torque thresholds. Regarding the noise of the JR3
sensor, common minimum values are 2 Newton for forces and 0.15 Newton meters
for torques.
SetExitCondition
The user stop condition string of a skill primitive is arranged by this method. Currently,
there is no exit condition checker, i.e. if the exit condition is erroneous, the skill primitive
controller keeps the old setpoints from the last skill primitive. Hence, the user should
verify the exit condition string carefully. How to set up the string is described in 5.4.6 on
page 60.
SetHoldTime
If a skill primitive's execution has reached all setpoints and held them for this certain time
interval, the skill primitive exit condition is fulfilled. A good, empirically found value is
50 milliseconds.
SendSkillPrim
When all twelve (or 13, including the jerk) parameters are set up, the skill primitive can be
transmitted for execution. Therefore the message ”SetFPV” is set up and sent to the skill
primitive controller. The user should check the skill primitive’s parameters very carefully
to prevent from any damage caused by undesired movements. After the transmission,
the process remains in receive-blocked state until the skill primitive controller answers.
5 Software Architecture 77
The GENERAL message ”SkillPrimResult contains the result values of the sent skill
primitive. After the reception of this message the user can access the following strings to
analyze the skill primitive’s execution:
• DefaultExitConditionResultString contains the system stop condition result, e.g. "0010000000000000000 |||||||||||||||||||"; in this example, the force in z-direction has exceeded its maximum value.

• UserExitConditionResultString contains a result string like "011&|", which lets the user know which condition became true.

• ErrorExitConditionResultString contains an error message if the exit condition is erroneous.
The methods IsDefaultExitConditionTrue and IsUserExitConditionTrue provide an easy check of whether the default exit condition or the user exit condition has been fulfilled.
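For illustration, evaluating such a flag string can be sketched as below; the helper function is hypothetical and not part of the interface described here:

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: scans the leading flag characters of a result
// string such as "0010000000000000000 |||..." and returns the index of
// the first partial condition that became true, or -1 if none did.
int firstTrueCondition(const std::string &resultString) {
    for (std::string::size_type i = 0; i < resultString.size(); ++i) {
        if (resultString[i] == '1')
            return static_cast<int>(i);
        if (resultString[i] != '0')
            break;  // flag part ends, e.g. at the operator characters
    }
    return -1;  // no partial condition fired
}
```

Applied to the example above, the helper would report index 2, i.e. the third partial condition.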
ResetFTSValues
This method sends the "FTSReset" message to reset the force/torque values in the task frame. To generate a point of reference for the force controller, this should be done prior to any sequence of skill primitives using force control.
GetCurrentFPValues
As described in 5.4.11, values can be interrogated from the skill primitive controller. The values are always delivered as a six-tuple. The first argument of this method receives the requested values, i.e. a pointer to a double-precision floating point array of six elements is required. The string that specifies the desired values is transmitted as the second argument. The methods "GetCurrentTaskPos", "GetCurrentWorldPos", "GetCurrentTaskFor", "GetCurrentWorldSpd", "GetCurrentJointPos", and "GetCurrentForceTX" through "GetCurrentForceRZ" also use this function to interrogate values, i.e. they are not strictly necessary, but they make the chore much simpler.
GetSingleValue
In contrast to "GetCurrentFPValues", where six values are requested, this method interrogates only a single value. The value is specified in the same manner as the six-tuple, but additionally the DOF (0..5) has to be given to select the single value.
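The DOF selection can be sketched as follows; this is a hypothetical standalone function, since the real method communicates with the skill primitive controller:

```cpp
#include <cassert>

// Hypothetical sketch of the single-value selection: pick one DOF
// (0..5) out of a six-tuple and reject invalid indices.
bool selectSingleValue(const double tuple[6], int dof, double *out) {
    if (dof < 0 || dof > 5)
        return false;   // invalid DOF index
    *out = tuple[dof];
    return true;
}
```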
GetCurrentTaskPos
The task frame w.r.t. the world frame, $^{world}\vec{p}_{task}$, is a frequently requested quantity. The double-precision floating point pointer refers to a six-tuple of values that contains this position.
GetCurrentWorldPos
Similar to "GetCurrentTaskPos", the hand frame position w.r.t. the world frame, $^{world}\vec{p}_{hand}$, is delivered.
GetCurrentTaskFor
The forces and torques in the task frame are also frequently interrogated; hence, this method was implemented for easy handling when requesting $^{task}\vec{F}$.
GetCurrentWorldSpd
This method delivers the current hand frame speed w.r.t. the world frame, $^{world}\vec{v}_{hand}$.
GetCurrentJointPos
If the user needs the current joint positions, this function delivers $^{joint}\vec{p}$.
GetCurrentForceTX .. GetCurrentForceRZ
Many robot tasks require the calculation of some point in space depending on the current force and torque values. For convenience, these six functions each provide a single value: $^{task}F_x$, $^{task}F_y$, $^{task}F_z$, $^{task}\tau_x$, $^{task}\tau_y$, and $^{task}\tau_z$.
GetReference
This method returns the current reference frame.
IsDefaultExitConditionTrue
If the system stop condition has become true, this method returns one; otherwise, zero is returned.
IsUserExitConditionTrue
Just as the default exit condition can be analyzed, the user exit condition of the last skill primitive can be checked by this method.
GetBufferTime
Actually, this function is not required for any further use. It is used to receive the buffer time value from the skill primitive controller, which is done right after the startup of the controller.
GetDefaultEC
To analyze the default exit condition string, the user application can request it by this method; but as with "GetBufferTime", this function is already applied during the initialization phase.
A method to set up tool commands has not been implemented yet; this interface still has to be specified in future work.
5.6 Sample Code for a Single Skill Primitive
With regard to 5.5, this section presents a very short sample program. Figures 5.9 and 5.10 show compilable code for two simple skill primitives. After the variable declaration, the MiRPA connection has to be established: we search for the object server and connect the user application via the "connection" class. The connection pointer "*OSConnect" is an argument of the "StartSkillPrimController" method, which starts the skill primitive controller. After the robot has been activated, the skill primitive controller is ready for data reception. The following lines set up the first skill primitive:
Reference: joint space

$S_{position} = (1, 1, 1, 1, 1, 1)^T$
$S_{velocity} = (0, 0, 0, 0, 0, 0)^T$
$S_{force} = (0, 0, 0, 0, 0, 0)^T$
$\vec{k} = (0\,mm, 0\,mm, 75\,mm, 0^\circ, 0^\circ, 0^\circ)^T$
$^{world}\vec{p}_{task} = (999.5\,mm, 502.5\,mm, 341.5\,mm, 0^\circ, 180^\circ, -105^\circ)^T$
$^{joint}\vec{v} = (150\,^\circ/s, 150\,^\circ/s, 150\,^\circ/s, 150\,^\circ/s, 150\,^\circ/s, 150\,^\circ/s)^T$
$^{joint}\vec{a} = (200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2)^T$
$^{joint}\vec{d} = (200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2)^T$
$t_{hold} = 100\,ms$

Stop condition: $(t > 10000\,ms)$
#include "skillprim.h"
#include <stdio.h>
#include <Message.h>
#include <connection.h>
#include <sys/proxy.h>
#include <sys/name.h>

int main(void)
{
    connection *OSConnect;  // connection to the object server
    Message msg,            // message to be sent/received to another process
            reply;          // reply message
    pid_t OS;               // pid of the object server
    skillprim SkPr;         // new force-position value given by the user

    // search object server
    if((OS = qnx_name_locate(0, "iRP/ObjectServer", 0, NULL)) == -1)
    {
        printf("Error: Object server not found\n");
        return(-1);
    }

    // establish object server connection
    OSConnect = new connection(connection::CON_IPC, OS);

    SkPr.StartSkillPrimController(OSConnect);

    printf("Default exit condition: \"%s\"\n", SkPr.DefaultExitConditionString);

    SkPr.SetReference(SP_REF_JOINT);
    SkPr.SetTaskFrame(0.0, 0.0, 75.0, 0.0, 0.0, 0.0);
    SkPr.SetPosition(999.5, 502.5, 341.5, 0.0, 180.0, -105.0);
    SkPr.SetSpeed(150.0, 150.0, 150.0, 150.0, 150.0, 150.0);
    SkPr.SetAcceleration(200.0, 200.0, 200.0, 200.0, 200.0, 200.0);
    SkPr.SetDeceleration(100.0, 100.0, 100.0, 100.0, 100.0, 100.0);
    SkPr.SetSMatrix(SP_POSM_TASK);
    SkPr.SetForce(0.0, 0.0, 0.0, 0.0, 0.0, 0.0);
    SkPr.SetThreshold(2.0, 2.0, 2.0, 0.15, 0.15, 0.15);
    SkPr.SetHoldTime(100.0);
    SkPr.SetExitCondition("O10000.0");

    SkPr.SendSkillPrim();

    // sample code continues in figure 5.10
Figure 5.9: Sample code for a simple user application (1)
    SkPr.ResetFTSValues();

    SkPr.SetReference(SP_REF_TASK);
    SkPr.SetTaskFrame(0.0, 0.0, 75.0, 0.0, 0.0, 0.0);
    SkPr.SetPosition(0.0, 0.0, 0.0, 0.0, 0.0, 0.0);
    SkPr.SetSpeed(150.0, 150.0, 15.0, 150.0, 150.0, 150.0);
    SkPr.SetAcceleration(200.0, 200.0, 200.0, 200.0, 200.0, 200.0);
    SkPr.SetDeceleration(100.0, 100.0, 100.0, 100.0, 100.0, 100.0);
    SkPr.SetSMatrix(SP_POS, SP_POS, SP_FORCE, SP_POS, SP_POS, SP_POS);
    SkPr.SetForce(0.0, 0.0, -20.0, 0.0, 0.0, 0.0);
    SkPr.SetThreshold(2.0, 2.0, 12.0, 0.15, 0.15, 0.15);
    SkPr.SetHoldTime(100.0);
    SkPr.SetExitCondition("TX2N0;100;>;O25000.0;|;");

    SkPr.SendSkillPrim();

    //...

    SkPr.EndSkillPrimController();

    // log off from object server
    msg.NewMsg("logoff");
    msg.SetMsgID(Message::CONTROL);
    reply.SetMsgID(Message::INVALID);
    msg.MSend(OSConnect, &reply);

    if(reply.GetID() != Message::MSGOK)
    {
        printf("Can not log off from object server.\n");
        return(-1);
    }

    return(0);
}
Figure 5.10: Sample code for a simple user application (2)
This joint space movement drives the manipulator to the Cartesian position determined by $^{world}\vec{p}_{task}$. After this first skill primitive has been executed, the force/torque values in the task frame are reset, and a second skill primitive is set up and sent:
Reference: task frame

$S_{position} = (1, 1, 0, 1, 1, 1)^T$
$S_{velocity} = (0, 0, 1, 0, 0, 0)^T$
$S_{force} = (0, 0, 0, 0, 0, 0)^T$
$\vec{k} = (0\,mm, 0\,mm, 75\,mm, 0^\circ, 0^\circ, 0^\circ)^T$
$^{task}\vec{p}_{task} = (0\,mm, 0\,mm, 0\,mm, 0^\circ, 0^\circ, 0^\circ)^T$
$^{task}\vec{v}_{task} = (150\,mm/s, 150\,mm/s, 15\,mm/s, 150\,^\circ/s, 150\,^\circ/s, 150\,^\circ/s)^T$
$^{task}\vec{a}_{task} = (200\,mm/s^2, 200\,mm/s^2, 200\,mm/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2)^T$
$^{task}\vec{d}_{task} = (200\,mm/s^2, 200\,mm/s^2, 200\,mm/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2, 200\,^\circ/s^2)^T$
$^{task}\vec{F} = (0\,N, 0\,N, -20\,N, 0\,Nm, 0\,Nm, 0\,Nm)^T$
$^{task}\vec{r} = (2\,N, 2\,N, 12\,N, 0.15\,Nm, 0.15\,Nm, 0.15\,Nm)^T$
$t_{hold} = 100\,ms$

Stop condition: $\left( (t > 25000\,ms) \vee (^{task}p_{task,z} > 100\,mm) \right)$
Here, the task frame position w.r.t. the task frame at the beginning of the skill primitive is kept. To head for contact, the manipulator accelerates at 200 mm/s² in the positive z-direction (task frame) up to a speed of 15 mm/s. As soon as the threshold of 12 N is exceeded, the force controller is activated and establishes a force of -20 N. Several other skill primitives might follow this one. When the whole robot task execution is finished, the skill primitive controller terminates and the user application logs off from the object server.
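The threshold mechanism of this second skill primitive can be sketched as follows; the enum and function are illustrative assumptions, not the controller's actual interface:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: a force-selected DOF approaches with the desired
// speed until the measured force exceeds the threshold; from then on,
// the force controller stays in charge.
enum ControlMode { VELOCITY_APPROACH, FORCE_CONTROL };

ControlMode selectMode(double measuredForceN, double thresholdN, bool forceControlActive) {
    if (forceControlActive || std::fabs(measuredForceN) > thresholdN)
        return FORCE_CONTROL;      // latched once the threshold is crossed
    return VELOCITY_APPROACH;
}
```

With the values above, a measured force of -5 N still yields the velocity approach, whereas -13 N exceeds the 12 N threshold and hands control to the force controller.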
6 Experimental Results
6.1 Trajectory Planning
[Two plots over t/s for joints 1 to 6: joint rates in °/s (top) and joint positions in ° (bottom)]
Figure 6.1: Experimental results of the six DOF trajectory planner
The trajectory planner described in chapter 4 on page 27 is applied in joint space as well as in Cartesian space. Figure 6.1 shows a joint space movement of the six-joint manipulator manutec r2. The top part represents the joint rates, whereas the bottom one shows the course of the joint positions. After 0.5 seconds, the first joint starts accelerating at 300 °/s². After one second has elapsed, a new setpoint is set for all six DOFs. The acceleration value for all DOFs again amounts to 300 °/s², whereas the deceleration value amounts to only 100 °/s². The first DOF has to decelerate, while all other DOFs accelerate to their recalculated velocities. One after another, the DOFs start decelerating to reach their respective target positions. Shortly after 4.5 seconds, the procedure is completed.
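The behavior shown in figure 6.1 corresponds to a second order profile; one velocity sample step of such a planner can be sketched as follows (a simplified single-DOF sketch, not the implemented planner):

```cpp
#include <cassert>

// Simplified single-DOF sketch of one sample step of a second order
// (trapezoidal) velocity profile: ramp towards the target velocity with
// the given acceleration or deceleration, without overshooting it.
double nextVelocity(double v, double vTarget, double accel, double decel, double dt) {
    if (v < vTarget) {
        v += accel * dt;            // accelerate towards the new setpoint
        if (v > vTarget) v = vTarget;
    } else if (v > vTarget) {
        v -= decel * dt;            // decelerate towards the new setpoint
        if (v < vTarget) v = vTarget;
    }
    return v;
}
```

Integrating the returned velocity over the cycle time yields the next position sample; the asymmetric acceleration and deceleration values of the experiment (300 °/s² versus 100 °/s²) map directly onto the two branches.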
6.2 Bayonet Insertion
[Two plots over t/s with phases a-f marked: torque Tz in Nm (top) and force Fz in N (bottom)]
Figure 6.2: Force $^{task}F_z$ and torque $^{task}\tau_z$ while inserting the bulb into the bayonet socket
To verify the functionality of the hybrid controller, some example robot tasks have been implemented. The first one screws a light bulb, e.g. from a motor vehicle, into a bayonet socket. The complete procedure is illustrated by the photo sequence of figure 6.3. The relevant values occurring during the execution of the example skill primitive net, $F_z$ and
Figure 6.3: Picture sequence while inserting a bulb into a bayonet socket
$T_z$, are illustrated in figure 6.2. During the first phase (a), the robot has already gripped the light bulb and heads for contact with the socket. The z-axis of the task frame coincides with the center axis of the light bulb. At the transition from phase (a) to phase (b), contact is detected, since the force $F_z$ has reached the stop condition's value of -20 N. The desired contact force of the following skill primitive is set to -25 N, and the robot starts to rotate the light bulb on the socket surface. The noise of $T_z$ in phase (b) is caused by the unevenness of the socket surface. When the bayonet pins reach the correct position at the transition from (b) to (c), the light bulb moves into the socket's hole, which is recognizable by the force converging to zero. The increasing force at the end of phase (c) is caused by the socket spring, which is completely compressed at the changeover to (d), where the second rotation about the socket's axis starts. At the end of phase (d), the torque $T_z$ increases, i.e. the light bulb pins have reached their stop; subsequently, the device is pushed upwards in phase (e) until the force $F_z$ reaches a positive threshold value. The light bulb insertion is then completed, and the gripper opens in phase (f).
6.3 Object Placing
As a second example, the placing of a polyhedral object onto a plane surface was chosen, since here the translational and rotational DOFs are not independent. The relevant quantities $F_z$, $T_x$, and $T_y$ are shown in figure 6.4. Starting with phase (a), the robot heads for contact with the surface. The z-axis of the task frame equals the direction of motion towards the surface. The increasing force at the transition from (a) to (b) indicates contact. A very short test movement is executed to establish a force with a moment arm orthogonal to the surface. Depending on the measured torques and forces, the point of contact is calculated and set as the task frame at the beginning of phase (c), i.e. the leap to zero in the torque diagram is not caused by a motion, but by a translational displacement of the task frame. Subsequently, the object rotates about the x- and y-axes of the task frame in phase (c) until a torque threshold about one of these axes is exceeded. The contact edge is calculated and the task frame is rotated about its z-axis, such that either the x-axis or the y-axis coincides with the contact edge. This axis is used as the rotation axis in phase (d). After a torque threshold is exceeded, the object plane and the surface plane are coplanar and the final state is reached in phase (e). Four pictures of the sequence of this robot task are shown in figure 6.5.
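The contact point computation mentioned above can be sketched from the wrench relation τ = r × F: the point on the line of action of F closest to the measurement origin is r = (F × τ)/|F|². The code below is an illustrative sketch, not the thesis implementation:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Cross product a x b.
Vec3 cross(const Vec3 &a, const Vec3 &b) {
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

// With tau = r x F, the point on the line of action of F closest to the
// measurement origin is (F x tau) / |F|^2.
Vec3 contactPoint(const Vec3 &F, const Vec3 &tau) {
    double f2 = F.x * F.x + F.y * F.y + F.z * F.z;
    Vec3 n = cross(F, tau);
    Vec3 r = { n.x / f2, n.y / f2, n.z / f2 };
    return r;
}
```

Note that only the component of the contact point perpendicular to the force direction can be recovered from a single wrench; the position along the line of action remains undetermined, which is why the short test movement described above is useful.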
[Two plots over t/s with phases a-e marked: torques Tx and Ty in Nm (top) and force Fz in N (bottom)]
Figure 6.4: Force $^{task}F_z$ and torques $^{task}\tau_x$, $^{task}\tau_y$ during the block placing procedure
Figure 6.5: Picture sequence while placing an aluminium block onto a solid table
7 Real Time Problems
[Two plots of the force Fz in N over t/ms, each containing one curve per approach velocity from 2.5 mm/s to 30 mm/s in steps of 2.5 mm/s]
Figure 7.1: Contact force experiments with different sample times; top diagram: 1 kHz; bottom diagram: 4 kHz
Our general aim is to obtain high control rates for the hybrid controller. The joint position controller runs at the highest priority (29), followed by the skill primitive controller (25). Figure 7.1 illustrates the results of a contact force experiment, which indicate that high control rates are definitely necessary for excellent manipulator behavior. The figure shows an array of curves representing the forces in z-direction. The manipulator's tool consists of an electrically conducting cone tip. Within the robot's workspace,
[Bar chart of the run time portions latency, control cycle, transformations, exit condition check, and send position for six configurations: (a) P4 2.4 GHz, 4 ms cycle time, with position controller; (b) P4 2.4 GHz, 8 ms, with position controller; (c) P4 2.4 GHz, 4 ms, without position controller; (d) P4 2.4 GHz, 8 ms, without position controller; (e) P3 800 MHz, 4 ms, without position controller; (f) P3 800 MHz, 8 ms, without position controller]
Figure 7.2: Average run times in milliseconds of one control cycle; different processor platforms (a-d ←→ e-f); with and without position controller (a-b ←→ c-f)
a table with an electrically isolated, conducting surface is installed. The manipulator moves perpendicularly towards the table, guarded only by the joint position controller, which receives one sample point each cycle. As soon as the tip of the manipulator makes electric contact with the table, the robot is supposed to move back at maximum speed. The top part of figure 7.1 shows the occurring forces in z-direction for different approach velocities; here, the electric contact is interrogated, and the next position setpoint transmitted, at a frequency of 1 kHz. In comparison, the bottom part illustrates the results of the same experiment configuration, but with the interrogation frequency increased to 4 kHz. As can clearly be seen, the higher the control frequency, the better the control behavior. To guarantee real-time behavior, the cycle time of the skill primitive controller had to be increased by more than a factor of two (in comparison to [KROEGER02]). To find out which part of the controller takes the longest execution time, measurements have been arranged; they are shown in figure 7.2. With regard to figure 5.3 on page 46, the hybrid controller is subdivided into four parts:
1. Calculation of the new desired position (control cycle)
2. Value transformations into all desired frames (transformations)
3. Check of the exit condition (exit condition check)
4. Sending the new position setpoint to the joint position controller (send position)
The times required for each of these four parts, plus a latency time, are used to generate figure 7.2. All values are averages over 6500 cycles, and to ensure that no delay is caused by hard disk accesses, all measurements were written into memory and stored to disk afterwards. The figure compares three different aspects:
• Cycle time (4 ms ←→ 8 ms)
• Pentium III CPU, 800 MHz ←→ Pentium IV CPU, 2.4 GHz
• with joint position controller ←→ without joint position controller
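The measurement strategy described above — collect in memory, evaluate afterwards — can be sketched as follows; the class and its names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: per-cycle run times are recorded into a
// preallocated in-memory buffer, so no disk access disturbs the
// real-time loop; averaging and file output happen after the run.
class CycleTimeLog {
public:
    explicit CycleTimeLog(std::size_t expectedCycles) {
        samples.reserve(expectedCycles);   // preallocate, no later growth
    }
    void record(double milliseconds) {
        samples.push_back(milliseconds);   // O(1), no I/O in the loop
    }
    double average() const {
        if (samples.empty()) return 0.0;
        double sum = 0.0;
        for (std::size_t i = 0; i < samples.size(); ++i)
            sum += samples[i];
        return sum / static_cast<double>(samples.size());
    }
private:
    std::vector<double> samples;
};
```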
To prevent any influence of the joint position controller, the values of (c) to (f) were measured without it; hence, the fourth part (send position) is left out there. (a) and (b) are influenced by the joint position controller, which runs at a higher priority than the skill primitive controller; as can be seen, its influence is not very large. The measurements (a) to (d) were taken on a Pentium IV system at 2.4 GHz, whereas the remaining ones, (e) and (f), come from a Pentium III platform at 800 MHz. The Pentium III system takes almost twice the execution time of the Pentium IV platform. The bars (a), (c), and (e) were measured with a control cycle time of four milliseconds; the other ones stem from measurements with eight milliseconds. The major finding of this test series is that the transformations into all occurring frames take the longest time within one control cycle. To compensate for this in the future, another memory model or a smaller data structure for the hybrid controller might help.
Another real-time problem occurs when the skill primitive controller receives a new skill primitive: the hybrid controller might not be able to start as soon as the trigger message has been received, since the verification of the skill primitive parameters takes a certain time. To redress this, a different software structure would be advantageous. Figure 8.5 on page 98 suggests a software architecture in which the calculation of the control parameters from the received skill primitive and the hybrid controller itself are separated. The contents of this illustration are discussed in 8.4 on page 97.

A similar problem would occur if an off-line trajectory planner were used. A first solution calculated the whole trajectory for a complete skill primitive execution, but since this calculation took two to three milliseconds, the real-time demands could, of course, not be met.
The last item concerning real-time problems relates to the generation of output files. Writing the general output file does not consume much system performance, hence the cycle time need not be adapted; the other two files, however, might become very large. They can slow down the system to the point that the cycle time has to be increased by two to three milliseconds per file. The lowest cycle times that have been tested were two milliseconds (Pentium IV system) and four milliseconds (Pentium III system). If one of the large output files is written, the cycle time should be in the range of eight milliseconds.
8 Suggestions for Future Work
8.1 About Force Control w.r.t. a Task Frame Given w.r.t. the World Frame
[Sketch of a manipulator with the frames WF (world frame), RF (robot base frame), HF (hand frame), SF (sensor frame), and TF (task frame w.r.t. the WF)]
Figure 8.1: Torque control in a task frame w.r.t. the world frame is contradictory
Force control in world space still needs discussion. Of course, the force/torque values can be transformed from the sensor frame to the world frame or to any task frame given w.r.t. the world frame. For this, only the transformation parameters have to be updated every cycle, since the kinematic transformation from the robot base frame to the robot hand frame changes permanently. To realize such a concept, the task frame must be fixed w.r.t. the world frame, and all setpoints as well as the compliance selection matrix must be interpreted w.r.t. this frame. But it makes no sense to interpret the values of the force/torque sensor, which is mounted on the robot's hand, as values w.r.t. a task frame that is determined w.r.t. the world frame. Contradictions appear, since the effect of a robot motion is not coherent, i.e. there is no correlation between a robot motion and the measured torque values. Such a correlation can only be achieved when the task frame is given w.r.t. the hand frame and the point of contact, i.e. the task frame, is calculated. Figure 8.1 shows an example: here, the torque about the z-axis of the task frame (w.r.t. the world frame) is supposed to be torque controlled, but to do so we would have to control the force in y-direction of the sensor frame. Hence, torque control in a task frame given w.r.t. the world frame cannot be realized.
8.2 Trajectory Planning
Chapter 4 presents the implemented trajectory planning concept. As mentioned, it is just a second order path planner. To achieve better trajectories, a higher order trajectory planner is desirable; hence, a third order concept should be embedded in the future. The only disadvantage of such a concept is that the complexity and, accordingly, the sample point calculation time increase. Another expansion would be to implement other kinds of trajectory planning systems, e.g. a special Cartesian trajectory planner that applies one constant rotation axis. For this concept, the rotation speed can only be given for the resulting axis, not separately for each rotatory DOF. Another option might be a circular trajectory planner, but there a path needs other specifications: the velocity of circulation and a via point. The realization of such a concept still needs discussion.

To adapt the DOFs that do not take the longest execution time for a desired movement, several possibilities exist. The current concept determines a new maximum speed, but instead, a new maximum acceleration could also be calculated. With regard to a higher order system, the jerk might be adapted for this case as well.
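The current synchronization scheme — a reduced maximum speed for the non-limiting DOFs — can be sketched as follows, with acceleration phases ignored for brevity; the function is an illustrative assumption:

```cpp
#include <cassert>

// Illustrative sketch: the DOF with the longest travel time dictates the
// motion time, and every other DOF gets a reduced maximum speed so that
// all DOFs reach their targets simultaneously. Acceleration phases are
// neglected here.
void synchronizeSpeeds(const double dist[6], const double vMax[6], double vSync[6]) {
    double tMax = 0.0;
    for (int i = 0; i < 6; ++i) {
        double t = (vMax[i] > 0.0) ? dist[i] / vMax[i] : 0.0;
        if (t > tMax) tMax = t;
    }
    for (int i = 0; i < 6; ++i)
        vSync[i] = (tMax > 0.0) ? dist[i] / tMax : 0.0;
}
```

Adapting the acceleration (or, for a higher order system, the jerk) instead would follow the same pattern, with the limiting DOF's motion time as the common constraint.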
8.3 Skill Primitives
[Diagram of the frames World, Robot Base, Hand, Task, Sensor (1)..(n) mounted at the hand, and Sensor (n+1)..(n+m) plus a task frame given w.r.t. the world; transformations: worldTrobot, robotThand (forward kinematic transformation), handTsensor(1)..handTsensor(n), handTtask, worldTtask, and worldTsensor(n+1)..worldTsensor(n+m); fixed and user transformations are distinguished]
Figure 8.2: Suggestion for a general frame assignment
This work constitutes only a very first approach to the skill primitive concept. As a matter of course, the concept still needs discussion. This section presents some items for future improvements.
Frame Assignments
Chapter 3 defines a single skill primitive. One general aim is to enable all imaginable manipulator movements, but this requires a more flexible concept for setpoints and exit conditions. In the current implementation, a setpoint is always a fixed number; to obtain a more flexible system, a setpoint might also be a function of time or a function of any control values. The same should hold for exit conditions. Currently, a stop condition only compares a fixed number to a control value, but for a universal concept, the user must be enabled to set up functions of control values, i.e. to compare two control values or even two functions of control values. Another aspect concerns figure 8.2, where several frames are illustrated. Currently, a task frame can only be given w.r.t. the robot's hand frame; n sensors might be mounted on the robot's hand. To execute movements w.r.t. any frame given w.r.t. the world frame, the possibility of determining a task frame w.r.t. the world frame must be provided. Of course, several sensors, i.e. m sensors, might also be mounted within the robot's work cell. Their respective positions are determined by sensor frames w.r.t. the world frame.
Stop Conditions: Position Interpretation
[Sketch: six numbered poses (1-6) of a two-dimensional task frame (x, y, z axes) sliding along a surface over a distance of 80 mm]

Figure 8.3: Example of a force controlled task frame movement in two-dimensional space; setpoints: $F_x = 0\,N$, $F_y = -5\,N$, and $\tau_z = 0\,Nm$; stop condition: $(^{task}p_{task,y} > 80\,mm)$; the stop condition rules w.r.t. the task frame at the beginning of the skill primitive execution
There are several possibilities to interpret position values in exit conditions. The overline notation, e.g. $^{task}\bar{p}_{task}$, denotes the distance covered since the beginning of the skill primitive execution, as shown in figure 3.4 on page 23. This interpretation matches the application of position setpoints; hence, it should not be changed. Contrary to this, a position based exit condition can also be interpreted as a frame w.r.t. the respective frame at the beginning of the skill primitive execution. Figure 8.3 shows the difference; to highlight this significant difference, figure 8.4 has been set up.
[Two sketches in the x-y plane with a grid from 0 to 5: a task frame starts at "Start" and follows a circular path; left: a fixed dashed line marks the limiting x position of the initial task frame; right: the distance covered along the moving x-axis is marked]

Figure 8.4: Skill primitive: movement in free space w.r.t. the task frame, positive signed velocity setpoint in x-direction, negative signed velocity setpoint in $\varphi_z$-direction, exit condition: $(^{task}p_{task,x} > 5)$; left part: the exit condition rules w.r.t. the task frame at the beginning of the skill primitive execution; right part: the exit condition rules w.r.t. the covered distances
We consider a planar free-space movement w.r.t. the task frame: the x-direction is assigned a positive velocity setpoint, the rotatory z-direction is supposed to rotate with a negative signed angular speed, and the y-direction shall keep its position. The exit condition $(^{task}p_{task,x} > 5)$ can rule in two different ways. On the left-hand side, the exit condition's position is measured w.r.t. the task frame at the beginning of the skill primitive execution, $^{world}\vec{p}_{task}$, i.e. the skill primitive would end when the task frame exceeds the world x position of 5.5 (dashed line). Since the skill primitive's movement results in a circular trajectory, this exit condition will never be fulfilled. Strictly speaking, a new nomenclature is needed for this description of exit conditions. The right-hand side shows the behavior according to the current implementation: if the task frame has covered a distance of 5 in x-direction, the skill primitive ends. Which of these is smarter remains to be discussed; maybe both solutions should be supported by the system.
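The covered-distance interpretation of the current implementation can be sketched as follows; this is an illustrative accumulator, not the controller code, and it assumes the per-cycle displacement is already expressed along the instantaneous task frame axis:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: accumulate the per-cycle displacement along the
// moving task frame axis; the exit condition compares this accumulated
// covered distance, not a fixed coordinate, so it can fire even on a
// curved path.
class CoveredDistance {
public:
    CoveredDistance() : total(0.0) {}
    void addCycleDisplacement(double delta) { total += std::fabs(delta); }
    bool exceeds(double limit) const { return total > limit; }
private:
    double total;
};
```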
Velocity Control and Skill Primitive Exit Condition
To prevent undesired manipulator movements, the skill primitive exit condition has been introduced. Otherwise, the user could set up a skill primitive without any user exit condition, or with wrong exit conditions, so that the skill primitive would only end when the system exit condition becomes true. The skill primitive exit condition makes sense in any case, because the robot would maintain the setpoints anyway.

A problem occurs when the user sets up a velocity controlled DOF: the skill primitive exit condition becomes true as soon as the desired speed is reached. The user application is acknowledged, but if the next skill primitive is not transmitted immediately, the manipulator holds the desired speed until the system exit condition becomes true. This is undesired behavior, which might occur if the user application, i.e. the skill primitive net, is erroneous. To prevent this, the velocity control mode might be abolished in the future: for any speed controlled movement, the user can choose force control with a defined approach speed instead. The same behavior is achieved, but the skill primitive stop condition does not become true when the desired speed is reached. Which of these two solutions is the better way remains another point of discussion.
8.4 Software Architecture
The major target is to develop a low-maintenance system. The most important step towards this goal is to implement a very modular system, which is significantly facilitated by MiRPA. To provide a multi-sensor system with exchangeable sensors and controllers, figure 8.5 proposes a further software architecture, which goes along with figure 8.2 on page 94. The starting point is the robot task process, which contains the skill primitive net. The skill primitive is sent to the skill primitive execution process, which calculates parameters for the hybrid controller, changes the respective values in the buffer of the hybrid controller, and sends the calculated parameters to the hybrid controller. The latter interrogates position values from the trajectory planner and/or the controllers. Each sensor mounted on the robot's hand (1 to n) and each sensor mounted within the work cell (n+1 to n+m) sends its data to a respective controller, which computes a position difference for the hybrid controller. After the hybrid controller has received all values, the new desired position is calculated, the exit conditions are checked, and the inverse kinematic model is applied to determine the new joint configuration, which is finally sent to the joint position controller communicating with the robot driver. If the exit condition becomes true within the hybrid controller, the skill primitive execution process is acknowledged, computes new setpoints, and sends an answer to the user application, which can use this answer to determine the next manipulation primitive. To comply with real-time demands, the processes need different priorities. The following list starts with the highest priority: robot driver, joint position controller, hybrid controller,
[Diagram: processes communicating via MiRPA. The robot task process sends a skill primitive and receives an answer. The skill primitive execution process receives the skill primitive, calculates and sets parameters, collects answers, and sends the answer. The hybrid controller requests values (1)..(n+m), requests positions from the on-line trajectory planner via a shared buffer, computes the new position, checks the stop conditions, applies the inverse kinematic model, sends the new joint position, and sends an answer if an exit condition is true. Controllers (1)..(n+m) with their sensor drivers (1)..(n+m), the position controller, and the robot driver connect to the robot and sensors]
Figure 8.5: Suggestion for a future software architecture
trajectory planner and controllers 1 to n+m, skill primitive controller, user application. Probably, MiRPA would be overloaded by this architecture; therefore, a distributed MiRPA is desirable. The joint position controller, the hybrid controller, and the single controllers might run on one machine, while the skill primitive controller and the user application run on another system. To obtain such a system, a lot of development work has to be done in the future.
8.5 Skill Primitive Controller
The previous section discussed changes to the general software architecture. Here, we want to point out some possible improvements of the skill primitive controller itself.
Communication to the Joint Position Controller
The current joint position controller receives the joint values in increments. This makes for a very inflexible system: if the same controller architecture is applied to another manipulator, the manipulation controller has to be adapted because of the new conversion from increments to degrees. A better solution would be to use degrees for the joint position transmission. Consequently, the minimum and maximum joint values of a manipulator should also be given in degrees.
Value Output File
To document values of the hybrid controller, the current implementation writes the list from page 70 into an output file. However, users often need more than these few values, or values that have been processed by some function. To support this, a further interface might be created that delivers any desired values from the hybrid controller: either the initialization file is expanded by this feature, the command line used to start the skill primitive controller supports an additional attribute, or a new message, whose parameters determine the values to be stored, is implemented.
Movements in Joint Space and w.r.t. the World Frame
A significant problem for movements in joint space and movements w.r.t. the world frame arises when a skill primitive is terminated by the default exit condition due to a force. In this case, the manipulator has made undesired contact with its environment. For any movement w.r.t. the hand, sensor, or task frame, the force controller engages in this case, stability is maintained, and damage is prevented. But since the reference frame does not change in the current implementation, the skill primitive controller would merely set the current position as the new desired one; because of the subsequent deceleration process, the manipulator, the force/torque sensor, and the environment might still be damaged. To prevent this, another solution has to be found in the future, e.g. switching the reference frame immediately to the sensor frame, i.e. enabling force control. Even if force control w.r.t. the world frame were possible (compare section 8.1 on page 93), the torque would degenerate.
Error Messages from the Skill Primitive Controller to the User Application
The current implementation supports only one single error message in response to a received skill primitive: the exit condition string is erroneous. This could be extended in the future, so that the user application is notified about several events, e.g. problems with the inverse kinematic model, parameter limit violations, control parameter violations, etc.
Setting Up a New Task Frame
Regarding control engineering, it is impossible to change the task frame ad hoc, since there is always a jump from the old task frame to the new one. To remedy this, a trajectory planner for task frame changes might be considered: the task frame (not the manipulator) would be accelerated up to a certain speed and decelerated to reach its final position. This way, smooth transitions concerning force control are enabled when the task frame changes. Unfortunately, this smooth change comes along with new difficulties: moving the task frame to its new position takes time, and during this time the manipulator's behavior as defined by the skill primitive must be calculated in a sophisticated way, because the reference frame (i.e. the task frame) changes permanently.
9 Conclusion
This documentation gives only a conceptual overview; the major part, the implementation of the skill primitive controller, can be found on the enclosed CD, whose contents are listed in appendix D on page 121. This work constitutes a first approach to trying out the skill primitive concept. There is still an enormous need for discussion, as can be seen in chapter 8, which proposes possible future improvements. But besides all these unsatisfying points, there are many successful results.
Coming back to the target pointed out in the introductory chapter, this concept is still far away from a universal sensor interface system. Interfaces to distance controllers and computer vision systems have not been specified yet. Even the feedback from the skill primitive controller to the user application containing the skill primitive net needs further development; only a rudimentary solution has been embedded here. The interface to automatically generated skill primitive nets is still at a very early stage and needs a more detailed specification.
Velocity control is one of the major discussion points. It could actually be replaced completely by force control with a defined approach speed; especially with regard to sensor damage, force control has advantages over velocity control. Considering the skill primitive exit condition, the setpoint would then not count as reached when the desired speed is reached; here, the user exit condition can rule. Another control engineering point is that the dynamic behavior of the hybrid controller has not been analyzed yet.
Many literature sources use the term "CoC" instead of "task frame", but this formulation is quite unclear and, above all, no magic bullet. It is a simplifying but confusing expression, especially when the CoC is given w.r.t. the world frame. Apparently, task frame and CoC are used as synonyms, but a task frame is always given w.r.t. another frame, which can be expressed clearly by ${}^{hand}T_{task}$, for example. The expression "task frame" is much more appropriate.
Despite all these awkward circumstances, many brilliant features have been developed during this work. There is indeed no universal programming interface to assembly planning systems yet, but a robot programming language with excellent functionality is ready for use.
A robot programmer can set up movements in several spaces, and the manipulator can reach each point of its workspace in any desired configuration, because joint space movements are possible as well as Cartesian ones; both can be commanded through one single interface. Velocities can be set up with defined acceleration and deceleration values. The on-line trajectory planning system delivers outstanding results in joint space as well as in Cartesian space.
Regarding the hybrid controller, a simple but very powerful solution has been created. Force control, velocity control, and position control are enabled w.r.t. any task frame given w.r.t. the robot's hand frame. The results of hybrid control in six-dimensional space are first-rate. Due to the contact matrix C, stable behavior can be guaranteed.
The modular software architecture provides a well-suited and easily expandable process environment. Of course, MiRPA cannot be used to close every control cycle, but this powerful middleware makes it very simple to implement new processes or even to split existing ones without much effort; e.g., the skill primitive controller could be divided into two or more modules.
The class "skillprim" represents an easy-to-learn programming environment. All control engineering details are embedded in the hybrid controller, i.e. the programmer does not have to care about this area; only setpoints and thresholds have to be determined by the program developer.
The two examples, object placing and bayonet insertion, prove that the concept works well. The implementation of these two robot tasks took only a few hours. Several other examples will follow in the future to corroborate this implementation. The placing of a disoriented polyhedral aluminium block into a right-angled corner was also performed, but its description has been omitted here.
One last point about the skill primitive concept is that there are countless possibilities to realize a corresponding control architecture for this idea. So finally, let us close this thesis with an inspiring quote from Henry Louis Mencken: "There is always an easy solution to every human problem — neat, plausible, and wrong."
A Transformations
A.1 Calculation of Position, Velocity and Acceleration Values
In the following, quantities taken from the previous control cycle are marked with a prime, e.g. ${}^{world}T'_{sensor}$.

${}^{world}\vec{p}_{sensor}$, ${}^{world}\vec{p}_{task}$, ${}^{world}\vec{p}_{hand}$:
$$ {}^{world}T_{sensor} \Longrightarrow {}^{world}\vec{p}_{sensor}, \qquad {}^{world}T_{task} \Longrightarrow {}^{world}\vec{p}_{task}, \qquad {}^{world}T_{hand} \Longrightarrow {}^{world}\vec{p}_{hand} $$

${}^{sensor}\vec{p}_{sensor}$, ${}^{task}\vec{p}_{task}$, ${}^{hand}\vec{p}_{hand}$:
$$ {}^{sensor}\vec{p}_{sensor} = {}^{sensor}\vec{p}\,'_{sensor} + {}^{sensor'}\vec{p}_{sensor}, \qquad {}^{task}\vec{p}_{task} = {}^{task}\vec{p}\,'_{task} + {}^{task'}\vec{p}_{task}, \qquad {}^{hand}\vec{p}_{hand} = {}^{hand}\vec{p}\,'_{hand} + {}^{hand'}\vec{p}_{hand} $$

${}^{world}\vec{v}_{sensor}$:
$$ RPY\!\left({}^{world}\varphi^{sensor*}_{z},\,{}^{world}\varphi^{sensor*}_{y},\,{}^{world}\varphi^{sensor*}_{x}\right) = Trans\!\left({}^{world}p^{sensor}_{x},\,{}^{world}p^{sensor}_{y},\,{}^{world}p^{sensor}_{z}\right)^{-1} \cdot {}^{world}T_{sensor} \cdot \left( Trans\!\left({}^{world}p'^{sensor}_{x},\,{}^{world}p'^{sensor}_{y},\,{}^{world}p'^{sensor}_{z}\right)^{-1} \cdot {}^{world}T'_{sensor} \right)^{-1} $$
$$ {}^{world}\vec{v}_{sensor} = \frac{1}{t_{cycle}} \cdot \begin{pmatrix} {}^{world}p^{sensor}_{x} - {}^{world}p'^{sensor}_{x} \\ {}^{world}p^{sensor}_{y} - {}^{world}p'^{sensor}_{y} \\ {}^{world}p^{sensor}_{z} - {}^{world}p'^{sensor}_{z} \\ {}^{world}\varphi^{sensor*}_{x} \\ {}^{world}\varphi^{sensor*}_{y} \\ {}^{world}\varphi^{sensor*}_{z} \end{pmatrix} $$

${}^{world}\vec{v}_{task}$:
$$ RPY\!\left({}^{world}\varphi^{task*}_{z},\,{}^{world}\varphi^{task*}_{y},\,{}^{world}\varphi^{task*}_{x}\right) = Trans\!\left({}^{world}p^{task}_{x},\,{}^{world}p^{task}_{y},\,{}^{world}p^{task}_{z}\right)^{-1} \cdot {}^{world}T_{task} \cdot \left( Trans\!\left({}^{world}p'^{task}_{x},\,{}^{world}p'^{task}_{y},\,{}^{world}p'^{task}_{z}\right)^{-1} \cdot {}^{world}T'_{task} \right)^{-1} $$
$$ {}^{world}\vec{v}_{task} = \frac{1}{t_{cycle}} \cdot \begin{pmatrix} {}^{world}p^{task}_{x} - {}^{world}p'^{task}_{x} \\ {}^{world}p^{task}_{y} - {}^{world}p'^{task}_{y} \\ {}^{world}p^{task}_{z} - {}^{world}p'^{task}_{z} \\ {}^{world}\varphi^{task*}_{x} \\ {}^{world}\varphi^{task*}_{y} \\ {}^{world}\varphi^{task*}_{z} \end{pmatrix} $$

${}^{world}\vec{v}_{hand}$:
$$ RPY\!\left({}^{world}\varphi^{hand*}_{z},\,{}^{world}\varphi^{hand*}_{y},\,{}^{world}\varphi^{hand*}_{x}\right) = Trans\!\left({}^{world}p^{hand}_{x},\,{}^{world}p^{hand}_{y},\,{}^{world}p^{hand}_{z}\right)^{-1} \cdot {}^{world}T_{hand} \cdot \left( Trans\!\left({}^{world}p'^{hand}_{x},\,{}^{world}p'^{hand}_{y},\,{}^{world}p'^{hand}_{z}\right)^{-1} \cdot {}^{world}T'_{hand} \right)^{-1} $$
$$ {}^{world}\vec{v}_{hand} = \frac{1}{t_{cycle}} \cdot \begin{pmatrix} {}^{world}p^{hand}_{x} - {}^{world}p'^{hand}_{x} \\ {}^{world}p^{hand}_{y} - {}^{world}p'^{hand}_{y} \\ {}^{world}p^{hand}_{z} - {}^{world}p'^{hand}_{z} \\ {}^{world}\varphi^{hand*}_{x} \\ {}^{world}\varphi^{hand*}_{y} \\ {}^{world}\varphi^{hand*}_{z} \end{pmatrix} $$

${}^{sensor}\vec{v}_{sensor}$:
$$ {}^{sensor'}T_{sensor} = {}^{world}T'^{-1}_{sensor} \cdot {}^{world}T_{sensor} \Longrightarrow {}^{sensor'}\vec{p}_{sensor}, \qquad {}^{sensor}\vec{v}_{sensor} = \frac{{}^{sensor'}\vec{p}_{sensor}}{t_{cycle}} $$

${}^{task}\vec{v}_{task}$:
$$ {}^{task'}T_{task} = {}^{world}T'^{-1}_{task} \cdot {}^{world}T_{task} \Longrightarrow {}^{task'}\vec{p}_{task}, \qquad {}^{task}\vec{v}_{task} = \frac{{}^{task'}\vec{p}_{task}}{t_{cycle}} $$

${}^{hand}\vec{v}_{hand}$:
$$ {}^{hand'}T_{hand} = {}^{world}T'^{-1}_{hand} \cdot {}^{world}T_{hand} \Longrightarrow {}^{hand'}\vec{p}_{hand}, \qquad {}^{hand}\vec{v}_{hand} = \frac{{}^{hand'}\vec{p}_{hand}}{t_{cycle}} $$

${}^{world}\vec{a}_{sensor}$, ${}^{world}\vec{a}_{task}$, ${}^{world}\vec{a}_{hand}$:
$$ {}^{world}\vec{a}_{sensor} = \frac{{}^{world}\vec{v}_{sensor} - {}^{world}\vec{v}\,'_{sensor}}{t_{cycle}}, \qquad {}^{world}\vec{a}_{task} = \frac{{}^{world}\vec{v}_{task} - {}^{world}\vec{v}\,'_{task}}{t_{cycle}}, \qquad {}^{world}\vec{a}_{hand} = \frac{{}^{world}\vec{v}_{hand} - {}^{world}\vec{v}\,'_{hand}}{t_{cycle}} $$

${}^{sensor}\vec{a}_{sensor}$, ${}^{task}\vec{a}_{task}$, ${}^{hand}\vec{a}_{hand}$:
$$ {}^{sensor}\vec{a}_{sensor} = \frac{{}^{sensor}\vec{v}_{sensor} - {}^{sensor}\vec{v}\,'_{sensor}}{t_{cycle}}, \qquad {}^{task}\vec{a}_{task} = \frac{{}^{task}\vec{v}_{task} - {}^{task}\vec{v}\,'_{task}}{t_{cycle}}, \qquad {}^{hand}\vec{a}_{hand} = \frac{{}^{hand}\vec{v}_{hand} - {}^{hand}\vec{v}\,'_{hand}}{t_{cycle}} $$
Regarding figure 8.3 on page 95, the positions ${}^{sensor}\vec{p}_{sensor}$, ${}^{task}\vec{p}_{task}$, ${}^{hand}\vec{p}_{hand}$, and ${}^{world*}\vec{p}_{hand}$ would be calculated as follows. The position ${}^{world*}\vec{p}_{hand}$ cannot be calculated in a reasonable way if we consider that figure 3.4 on page 23 rules.
Here, the primed quantities denote the reference pose of the respective frame according to figure 8.3.

${}^{sensor}\vec{p}_{sensor}$:
$$ {}^{sensor'}T_{sensor} = \left( Trans\!\left({}^{world}p'^{sensor}_{x},\,{}^{world}p'^{sensor}_{y},\,{}^{world}p'^{sensor}_{z}\right) \cdot RPY\!\left({}^{world}\varphi'^{sensor}_{z},\,{}^{world}\varphi'^{sensor}_{y},\,{}^{world}\varphi'^{sensor}_{x}\right) \right)^{-1} \cdot {}^{world}T_{sensor} \Longrightarrow {}^{sensor}\vec{p}_{sensor} $$

${}^{task}\vec{p}_{task}$:
$$ {}^{task'}T_{task} = \left( Trans\!\left({}^{world}p'^{task}_{x},\,{}^{world}p'^{task}_{y},\,{}^{world}p'^{task}_{z}\right) \cdot RPY\!\left({}^{world}\varphi'^{task}_{z},\,{}^{world}\varphi'^{task}_{y},\,{}^{world}\varphi'^{task}_{x}\right) \right)^{-1} \cdot {}^{world}T_{task} \Longrightarrow {}^{task}\vec{p}_{task} $$

${}^{hand}\vec{p}_{hand}$:
$$ {}^{hand'}T_{hand} = \left( Trans\!\left({}^{world}p'^{hand}_{x},\,{}^{world}p'^{hand}_{y},\,{}^{world}p'^{hand}_{z}\right) \cdot RPY\!\left({}^{world}\varphi'^{hand}_{z},\,{}^{world}\varphi'^{hand}_{y},\,{}^{world}\varphi'^{hand}_{x}\right) \right)^{-1} \cdot {}^{world}T_{hand} \Longrightarrow {}^{hand}\vec{p}_{hand} $$

${}^{world*}\vec{p}_{hand}$:
$$ {}^{world*}T_{hand} = Trans\!\left({}^{world}p^{hand}_{x},\,{}^{world}p^{hand}_{y},\,{}^{world}p^{hand}_{z}\right) \cdot Trans\!\left({}^{world}p'^{hand}_{x},\,{}^{world}p'^{hand}_{y},\,{}^{world}p'^{hand}_{z}\right)^{-1} \cdot RPY\!\left({}^{world}\varphi^{hand}_{z},\,{}^{world}\varphi^{hand}_{y},\,{}^{world}\varphi^{hand}_{x}\right) \cdot RPY\!\left({}^{world}\varphi'^{hand}_{z},\,{}^{world}\varphi'^{hand}_{y},\,{}^{world}\varphi'^{hand}_{x}\right)^{-1} \Longrightarrow {}^{world*}\vec{p}_{hand} $$
A.2 Computation of a New Position Setpoint
Desired positions are marked by angle brackets; e.g., ${}^{world}T_{<hand>}$ represents the desired hand frame w.r.t. the world frame.

${}^{task}\vec{p}_{<task>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = {}^{world}T_{task} \cdot Trans\!\left({}^{task}p^{<task>}_{x},\,{}^{task}p^{<task>}_{y},\,{}^{task}p^{<task>}_{z}\right) \cdot RPY\!\left({}^{task}\varphi^{<task>}_{z},\,{}^{task}\varphi^{<task>}_{y},\,{}^{task}\varphi^{<task>}_{x}\right) \cdot \left({}^{hand}T_{task}\right)^{-1} $$

${}^{world}\vec{p}_{<task>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = Trans\!\left({}^{world}p^{<task>}_{x},\,{}^{world}p^{<task>}_{y},\,{}^{world}p^{<task>}_{z}\right) \cdot RPY\!\left({}^{world}\varphi^{<task>}_{z},\,{}^{world}\varphi^{<task>}_{y},\,{}^{world}\varphi^{<task>}_{x}\right) \cdot \left({}^{hand}T_{task}\right)^{-1} $$

${}^{sensor}\vec{p}_{<sensor>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = {}^{world}T_{sensor} \cdot Trans\!\left({}^{sensor}p^{<sensor>}_{x},\,{}^{sensor}p^{<sensor>}_{y},\,{}^{sensor}p^{<sensor>}_{z}\right) \cdot RPY\!\left({}^{sensor}\varphi^{<sensor>}_{z},\,{}^{sensor}\varphi^{<sensor>}_{y},\,{}^{sensor}\varphi^{<sensor>}_{x}\right) \cdot \left({}^{hand}T_{sensor}\right)^{-1} $$

${}^{world}\vec{p}_{<sensor>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = Trans\!\left({}^{world}p^{<sensor>}_{x},\,{}^{world}p^{<sensor>}_{y},\,{}^{world}p^{<sensor>}_{z}\right) \cdot RPY\!\left({}^{world}\varphi^{<sensor>}_{z},\,{}^{world}\varphi^{<sensor>}_{y},\,{}^{world}\varphi^{<sensor>}_{x}\right) \cdot \left({}^{hand}T_{sensor}\right)^{-1} $$

${}^{hand}\vec{p}_{<hand>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = {}^{world}T_{hand} \cdot Trans\!\left({}^{hand}p^{<hand>}_{x},\,{}^{hand}p^{<hand>}_{y},\,{}^{hand}p^{<hand>}_{z}\right) \cdot RPY\!\left({}^{hand}\varphi^{<hand>}_{z},\,{}^{hand}\varphi^{<hand>}_{y},\,{}^{hand}\varphi^{<hand>}_{x}\right) $$

${}^{world}\vec{p}_{<hand>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = Trans\!\left({}^{world}p^{<hand>}_{x},\,{}^{world}p^{<hand>}_{y},\,{}^{world}p^{<hand>}_{z}\right) \cdot RPY\!\left({}^{world}\varphi^{<hand>}_{z},\,{}^{world}\varphi^{<hand>}_{y},\,{}^{world}\varphi^{<hand>}_{x}\right) $$

${}^{world*}\vec{p}_{<hand>} \Longrightarrow {}^{world}T_{<hand>}$:
$$ {}^{world}T_{<hand>} = Trans\!\left({}^{world*}p^{<hand>}_{x},\,{}^{world*}p^{<hand>}_{y},\,{}^{world*}p^{<hand>}_{z}\right) \cdot Trans\!\left({}^{world}p^{hand}_{x},\,{}^{world}p^{hand}_{y},\,{}^{world}p^{hand}_{z}\right) \cdot RPY\!\left({}^{world*}\varphi^{<hand>}_{z},\,{}^{world*}\varphi^{<hand>}_{y},\,{}^{world*}\varphi^{<hand>}_{x}\right) \cdot RPY\!\left({}^{world}\varphi^{hand}_{z},\,{}^{world}\varphi^{hand}_{y},\,{}^{world}\varphi^{hand}_{x}\right) $$

The desired positions for the sensor and task frames can be calculated by combining the respective equation from above with the corresponding one from below:
$$ {}^{world}T_{<sensor>} = {}^{world}T_{<hand>} \cdot {}^{hand}T_{sensor} $$
$$ {}^{world}T_{<task>} = {}^{world}T_{<hand>} \cdot {}^{hand}T_{task} $$
B Error Messages
001 JR3 PC-Card found, but no sensor available!
There is no sensor connected to the JR3 receiver card (no matter whether ISA or PCI). Check whether the sensor is connected correctly.
002 No JR3 PC-Card found!
The current version of the skill primitive controller only supports JR3 force/torque
sensors. Regarding to this error message, there is no JR3 receiver card installed.
003 Error while opening output file FileName.
The controller could not open the file FileName, which is one of the three output
files. Check if this file is not already opened or read/write protected.
004 Buffer length Length1 can’t be larger than Length2, i.e. the buffer time must be lower than Time. Both are set to the respective maximum value.
The buffer time, which is taken from the initialization file, exceeds the maximum value, which is determined by the maximum buffer length multiplied by the control cycle time. To enable normal operation, the buffer time is set to its maximum value, Time. The default maximum and minimum values for the buffer length are defined in the file ”SPControllerAccessories.cpp”.
005 Buffer length Length1 can’t be lower than Length2, i.e. the buffer time must be higher than Time. Both are set to the respective minimum value.
The buffer time, which is taken from the initialization file, falls below the minimum value, which is determined by the minimum buffer length multiplied by the control cycle time. To enable normal operation, the buffer time is set to its minimum value, Time. The default maximum and minimum values for the buffer length are defined in the file ”SPControllerAccessories.cpp”.
006 Time out value must be larger than cycle time. It is set to the cycle
time value!
The timeout value for the system stop condition given by the initialization file can
not be lower than the cycle time, of course.
007 An error occurred during the generation of the default exit condition.
Values of the init file might be wrong!
The system stop condition could not be generated. Check the initialization file.
008 Object server not found!
If the skill primitive controller is started manually, i.e. it has not been started by MiRPA, the object server may not have been started yet. The skill primitive controller could not log on to MiRPA. Start MiRPA!
009 Error while sending the "getmsglist" message
The skill primitive controller could log on to MiRPA, but the message ”getmsglist”
could not be sent correctly. Check MiRPA messages.
010 Can not start position controller.
For some reason, the skill primitive controller could not start the joint position controller. Check whether it is already running or whether the initialization file of MiRPA contains an entry to start the controller.
011 Error while sending "AktPos".
The REQUEST message ”AktPos” could not be sent correctly to the joint position
controller. Check MiRPA and the position controller.
012 Message "SetPos" is not released.
If the joint position controller is already running, the skill primitive controller checks whether another process uses the ”SetPos” message. If another process is using this message, it is not released. End the process using ”SetPos” and release the message.
013 Error at registration of the "EndSPCtrl" message
The message ”EndSPCtrl” could not be registered at MiRPA. Check MiRPA.
014 Error while adding parameter to message "SetFPV": ParameterName
The parameter ParameterName could not be added to the ”SetFPV” message.
015 Error at registration of the "SetFPV" message
The message ”SetFPV” could not be registered at MiRPA. Check MiRPA.
016 Error while adding parameter to message "GetFPValues": ParameterName
The parameter ParameterName could not be added to the ”GetFPValues” message.
017 Error at registration of the "GetFPValues" message
The message ”GetFPValues” could not be registered at MiRPA. Check MiRPA.
018 Error at registration of the "FTSReset" message
The message ”FTSReset” could not be registered at MiRPA. Check MiRPA.
019 Error at registration of the "GetDefaultExitConditionString" message
The message ”GetDefaultExitConditionString” could not be registered at MiRPA.
Check MiRPA.
020 Error at registration of the "GetBufferTime" message
The message ”GetBufferTime” could not be registered at MiRPA. Check MiRPA.
021 Error while adding parameter to message "SetCtrlPara"
A parameter could not be added to the ”SetCtrlPara” message.
022 Error at registration of the "SetCtrlPara" message
The message ”SetCtrlPara” could not be registered at MiRPA. Check MiRPA.
023 Error while sending "AktPos"
This error message equals error message 011.
024 Unable to attach proxy.
The proxy could not be attached to the triggering timer.
025 Unable to create timer.
The triggering timer could not be created.
026 Unable to detach proxy.
During the shutdown of the skill primitive controller, the proxy could not be detached
from the timer.
027 Can not start robot.
The message ”STARTROBOT” has been sent to the joint position controller, but the
reply was invalid. Check joint position controller.
028 No reply to the REQUEST "NSTR"
To get acknowledged by the joint position controller, the REQUEST message ”NSTR” has been sent, but no reply has been received.
029 Robot start position is not in work space!
The manipulator’s position prior to the start of the robot is not in the workspace. Move the robot manually into the workspace and try again; don’t neglect calibration! The second solution is to program joint space movements to bring the manipulator back into the workspace; this way, referencing is not necessary.
030 User application does not respond to message "SPControllerReady".
When the initialization procedure of the skill primitive controller is finished, the user
application gets acknowledged, but the user application does not respond. Check
user application.
031 Error at inverse kinematics!!! Robot can not move.
If the robot is out of the workspace or cannot reach the next desired position without using a new configuration, this message is generated. There is no universal solution for this problem; try to repeat the same procedure at another position or use another manipulator configuration. If this message arises, it usually arises every cycle from then on, but to save time, this error message is printed only once. If the kinematic problem is remedied, an all-clear message appears and the hybrid controller continues.
032 Current robot position is not in work space!
The manipulator has left its workspace. Move the manipulator back into the workspace. (Refer to error message 029.)
033 A realtime clock error occurred!
For some reason, the real-time clock of the QNX system could not be read.
034 Stack overflow during verification of the exit condition: ExitCondition-
String
The stop condition ExitConditionString has generated a stack whose height exceeds the maximum value. The default maximum value is defined in SPControllerAccessories.h and is currently set to 1000. Check the user exit condition.
035 An error (between char. No1 and char. No2) occurred during the verification
of the exit condition: ExitConditionString
The user stop condition ExitConditionString is erroneous between character No1
and No2. Check user exit condition.
036 Stack underflow during verification of the exit condition: ExitCondition-
String
The user stop condition ExitConditionString generated a stack underflow. Check
user exit condition.
037 An error occurred during the verification of the exit condition. The expression
is incomplete: ExitConditionString
The user exit condition ExitConditionString is incomplete. Check user exit condi-
tion.
038 Error while sending message "SkillPrimResult". Controller ends!
The skill primitive controller tried to send the GENERAL message ”SkillPrimResult”, which contains the exit condition results and the new setpoints, to the user application, but an error occurred for some reason. As a result, the user application cannot be acknowledged, i.e. there will not be any further skill primitive to execute, and the skill primitive controller ends to guarantee stability.
039 Error while sending "SetPos"
The GENERAL message ”SetPos” contains the new joint position for the joint
position controller. During the transmission of this message, an error occurred. To
guarantee stability, the skill primitive controller ends. Check joint position controller.
040 NEW FPV MESSAGE WAS NOT RECEIVED CORRECTLY!!!
The GENERAL ”SetFPV” message was not received correctly, i.e. parameters are
incorrect or incomplete. The skill primitive controller ends.
041 The hold time of a skill primitive must be positive! It is set to zero.
The user application has transmitted a negative hold time value, thold. To go on with normal operation, the value is set to zero, i.e. as soon as all setpoints are reached for a single control cycle, the skill primitive stops and the user application gets acknowledged. Check the user application.
042 The hold time of skill primitive can’t be higher than the maximum buffer
time (MaximumBufferTime ms)! It is set to this maximum value!
The hold time for a skill primitive given by the user application is greater than the
maximum buffer time of MaximumBufferTime. For a continuous operation it is set
to the maximum time, which is determined by the initialization file. Check user
application.
043 Force control in world space is impossible! Value for DOF DOF is set
to Value
The sent skill primitive demands force control in world space. This feature is not
supported. The respective DOF DOF is set to its current position in world space
Value. Check user application.
044 Desired robot position is not in work space
The user application has set up a skill primitive, whose position setpoint is not in
work space. Check user application.
045 World system, determination of a position is not possible by a mixture
of values given in world space and values given w.r.t. another inertial
system! Position setpoint is set to current position.
The user has set up a position in world space by a mixture of positions w.r.t. several
reference frames. To bypass this contradiction, the current position is kept. Check
user application.
046 Translatory speed setpoint (dof: DOF) is out of range and is set to Ve-
locity mm/s.
The sent skill primitive’s translatory velocity setpoint for DOF DOF violates the
maximum value of Velocity. The maximum speed value is set to this value.
047 Translatory acceleration (dof: DOF) is out of range and is set to Ac-
celeration mm/s2.
The sent skill primitive’s translatory acceleration setpoint for DOF DOF violates
the maximum value of Acceleration. The maximum acceleration value is set to this
value.
048 Task system, determination of a position is not possible by a mixture of values given in world space and values given w.r.t. another inertial system! Position setpoint is set to current position.
The user has set up a position w.r.t. the task frame by a mixture of positions w.r.t. several reference frames. To bypass this contradiction, the current position is kept. Check user application.
049 Hand system, determination of a position is not possible by a mixture of
values given in world space and values given w.r.t. another inertial system!
Position setpoint is set to current position.
The user has set up a position w.r.t. the hand frame by a mixture of positions w.r.t.
several reference frames. To bypass this contradiction, the current position is kept.
Check user application.
050 Sensor system, determination of a position is not possible by a mixture
of values given in world space and values given w.r.t. another inertial
system! Position setpoint is set to current position.
The user has set up a position w.r.t. the sensor frame by a mixture of positions
w.r.t. several reference frames. To bypass this contradiction, the current position is
kept. Check user application.
051 Force setpoint (dof: DOF) is out of range and is set to Force N.
The sent skill primitive’s force setpoint is higher than the maximum allowed value.
To prevent any force/torque sensor damage and to enable continuous operation, the
setpoint is corrected to the maximum value of Force. Check user application.
052 Force threshold (dof: DOF) is out of range and is set to Threshold N.
The sent skill primitive’s force threshold value is higher than the maximum allowed
value. To prevent any force/torque sensor damage and to enable continuous oper-
ation, the setpoint is corrected to the maximum value of Threshold. Check user
application.
053 Force control in joint space is impossible! Value for DOF DOF is set
to Angle deg.
The sent skill primitive’s compliance selection matrix commands force control in
DOF DOF, but since Cartesian force control is impossible in joint space, the respec-
tive DOF DOF is supposed the keep its position at Angle.
054 Fatal beer-crate error, stop reading.
Whoever reads these lines first receives a crate of beer! Please consult the program author.
055 Joint system, determination of a position is not possible by a mixture of values given in joint space and values given w.r.t. another inertial system! Position setpoint is set to current position.
The user has set up a position in joint space by a mixture of positions w.r.t. several reference frames. To bypass this contradiction, the current position is kept. Check user application.
056 Joint speed DOF setpoint is out of range and is set to Velocity deg/s.
The sent skill primitive rules in joint space, but the speed for DOF exceeds the max-
imum allowed one. To continue operation, this speed is set to Velocity.
057 Joint acceleration DOF is out of range and is set to Acceleration deg/s2.
The sent skill primitive rules in joint space, but the acceleration for DOF exceeds
the maximum allowed one. To continue operation, this acceleration value is set to
Acceleration.
058 An error occurred because of the hold time (Time1 ms) for the current skill
primitive. The time is set to the cycle time (Time2 ms).
The skill primitive’s hold time thold could not be converted into a string variable. It might be (much) too high. Otherwise, please consult the program author.
059 Incredible error! Can’t convert cycle time. Controller will end!!!
Following error 058, the skill primitive’s cycle time could not be converted into a string variable. Please consult the program author.
060 Incredible error! Can’t convert position value. Controller will end!!!
The skill primitive’s position setpoint value could not be converted into a string vari-
able. Please consult program author.
061 Incredible error! Can’t convert speed. Controller will end!!!
The skill primitive’s velocity setpoint value could not be converted into a string
variable. Please consult program author.
062 Incredible error! Can’t convert force/torque. Controller will end!!!
The skill primitive’s force/torque setpoint value could not be converted into a string
variable. Please consult program author.
063 An error occurred during the transmission of the requested values: Re-
questedValuesString
The user application requested values from the skill primitive controller, but the
string RequestedValuesString that specifies the respective values was erroneous.
Check user application.
064 The requested values are not valid: RequestedValuesString
The user application requested force/torque values in joint space w.r.t. the world
frame from the skill primitive controller. This feature is not supported. Check user
application.
065 Error while sending message "GetFPValues". Controller ends!
The skill primitive controller tried to send the ANSWER message ”GetFPValues”, but the user application does not respond. Check MiRPA and the user application. For safety reasons, the skill primitive controller ends.
066 Error while sending message "GetDefaultExitConditionString". Controller ends!
The skill primitive controller tried to send the ANSWER message ”GetDefaultExitConditionString”, but the user application does not respond. Check MiRPA and the user application. For safety reasons, the skill primitive controller ends.
067 Error while sending message "GetBufferTime". Controller ends!
The skill primitive controller tried to send the ANSWER message ”GetBufferTime”, but the user application does not respond. Check MiRPA and the user application. For safety reasons, the skill primitive controller ends.
068 Can not end position controller.
The skill primitive controller tried to send the ANSWER message ”ENDEREGLER” to end the joint position controller, but the position control process does not respond. Check MiRPA and the joint position controller. For safety reasons, the skill primitive controller ends.
069 Unable to delete timer.
Before the skill primitive controller ends itself the triggering timer has to be deleted.
This did not work correctly.
070 Can not log off from object server.
For some reason, the skill primitive controller cannot log off from MiRPA.
Check MiRPA, consult program author.
071 Rotatory speed setpoint (dof: DOF) is out of range and is set to Ve-
locity deg./s.
The sent skill primitive’s rotatory velocity setpoint for DOF DOF violates the max-
imum value of Velocity. The maximum speed value is set to this value.
072 Rotatory acceleration (dof: DOF) is out of range and is set to Accel-
eration deg./s2.
The sent skill primitive’s rotatory acceleration setpoint for DOF DOF violates the
maximum value of Acceleration. The maximum acceleration value is set to this
value.
073 Translatory deceleration (dof: DOF) is out of range and is set to De-
celeration mm/s2.
The sent skill primitive’s rotatory deceleration setpoint for DOF DOF violates the
maximum value of Deceleration. The maximum deceleration value is set to this
value.
074 Rotatory deceleration (dof: DOF) is out of range and is set to Decel-
eration deg./s2.
The sent skill primitive’s rotatory deceleration setpoint for DOF DOF violates the
maximum value of Deceleration. The maximum deceleration value is set to this
value.
075 Joint deceleration DOF is out of range and is set to JointAcceleration deg/s2.
The sent skill primitive’s joint deceleration setpoint for DOF DOF violates the max-
imum value of JointAcceleration. The maximum deceleration value is set to this
value.
C Sample Output Files
C.1 General Output File
Force controller output file: /home/tkr/cpp/spc-output.dat
This file contains all important events done by the force controller.
tkr 2002
Fri Nov 8 10:00:18 2002
Time: 11
Cycle time : 8000000 [ns]
Buffer time : 2000.000 [ms]
Buffer length : 250
Force control parameters:
Force control P : 0.000050000 [mm/N]
Force control TI: 4000000.000000000 [ns]
Force control TD: 200000000.000000000 [ns]
Torque control P : 0.005800000 [deg/Nm]
Torque control TI: 10000000.000000000 [ns]
Torque control TD: 80000000.000000000 [ns]
Epsilon XT : 0.010000000 [mm]
Epsilon XR : 0.010000000 [deg]
Epsilon VT : 0.010000000 [mm/s]
Epsilon VR : 0.010000000 [deg/s]
Epsilon AT : 0.010000000 [mm/s2]
Epsilon AR : 0.010000000 [deg/s2]
Epsilon F : 0.500000000 [N]
Epsilon T : 0.050000000 [Nm]
Epsilon FD : 0.010000000 [N/s]
Epsilon TD : 0.001000000 [Nm/s]
Minimum joint values [deg] : -165.025 -206.504 -58.419 -200.100 -115.625 -833.333
Maximum joint values [deg] : 165.025 31.620 238.415 200.100 115.625 833.333
Maximum joint speeds [deg/s] : 300.000 180.000 640.000 960.000 600.000 800.000
Maximum joint accelerations [deg/s2] : 1060.000 820.000 2430.000 3440.000 3320.000 5340.000
Maximum Cart. speeds [mm/s] : 800.000 800.000 800.000 800.000 800.000 800.000
Maximum Cart. accelerations [mm/s2] : 1600.000 1600.000 1600.000 1600.000 1600.000 1600.000
Maximum forces/torques [N|Nm] : 200.000 200.000 400.000 12.000 12.000 12.000
Default speeds [mm/s|deg/s] : 200.000 200.000 200.000 200.000 200.000 200.000
Default accelerations [mm/s2|deg/s2]: 400.000 400.000 400.000 400.000 400.000 400.000
Default decelerations [mm/s2|deg/s2]: 400.000 400.000 400.000 400.000 400.000 400.000
Default force thresholds [N|Nm] : 2.000 2.000 2.000 0.150 0.150 0.150
DH-Parameters a [mm] : 0.000 350.000 0.000 0.000 0.000 0.000
DH-Parameters d [mm] : 630.000 0.000 0.000 350.000 0.000 70.000
DH-Parameters alpha [deg] : 90.000 0.000 -90.000 90.000 -90.000 0.000
DH-Parameters theta [deg] : 0.000 0.000 0.000 0.000 0.000 0.000
Sensor frame [mm|deg] : 0.000 0.000 50.000 0.000 0.000 45.000
Robot base frame [mm|deg] : 600.000 800.000 31.500 0.000 0.000 0.000
Robot is enabled.
Default exit condition string: JX0N0;-165.0250000000000;<;JX1N0;-206.5039674801626;<;|;JX2N0;-
58.41859966786309;<;|;JX3N0;-200.1000000000000;<;|;JX4N0;-115.6250000000000;<;|;JX5N0;-
833.3333333333333;<;|;JX0N0;165.0250000000000;>;|;JX1N0;31.61984190079050;>;|;JX2N0;238.4153854395457;>;|;JX3N0;
200.1000000000000;>;|;JX4N0;115.6250000000000;>;|;JX5N0;833.3333333333333;>;|;SF0A0;200.0;>;|;SF1A0;200.0;>;|;SF
2A0;400.0;>;|;SF3A0;12.0;>;|;SF4A0;12.0;>;|;SF5A0;12.0;>;|;O30000.0;|;
Control parameters:
Fa1: 0.000000
Fb0: 0.001300
Fb1: -0.002450
Fb2: 0.001250
Ta1: 0.000000
Tb0: 0.063800
Tb1: -0.117160
Tb2: 0.058000
Start time: 39672
**************************************************************************************************************
Time: 39872 Message received: GetBufferTime ID:1
Buffer time was sent.
**************************************************************************************************************
Time: 39892 Message received: GetDefaultExitConditionString ID:1
Default exit condition string was sent.
**************************************************************************************************************
Time: 39913 Message received: FTSReset ID:0
f-t sensor values reset
**************************************************************************************************************
Time: 39916 Message received: SetFPV ID:0
Original values:
Position(abs) : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Force : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Speed : 150.000 | 150.000 | 150.000 | 150.000 | 150.000 | 150.000
SMatrix : Pos | Pos | Pos | Pos | Pos | Pos
Threshold : 2.000 | 2.000 | 2.000 | 0.150 | 0.150 | 0.150
TaskFrame : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Acceleration : 200.000 | 200.000 | 200.000 | 200.000 | 200.000 | 200.000
Deceleration : 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000
Jerk : 0.000
Hold time : 100.000
Reference : "Joint"
User exit condition : "O25000.0"
Received new force-position setpoint.
Message was received correctly.
Transformed values:
Position(abs) : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Force : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Speed : 150.000 | 150.000 | 150.000 | 150.000 | 150.000 | 150.000
SMatrix : Pos | Pos | Pos | Pos | Pos | Pos
SelectionPos : 1 | 1 | 1 | 1 | 1 | 1
SelectionFor : 0 | 0 | 0 | 0 | 0 | 0
SelectionSpd : 0 | 0 | 0 | 0 | 0 | 0
Threshold : 2.000 | 2.000 | 2.000 | 0.150 | 0.150 | 0.150
TaskFrame : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Acceleration : 200.000 | 200.000 | 200.000 | 200.000 | 200.000 | 200.000
Deceleration : 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000
Jerk : 0.000
Hold time : 100.000
Reference : "Joint"
User exit condition : "O25000.0"
This exit condition :
"JX0V100.0;0.00000000000000000;=;JX1V100.0;0.00000000000000000;=;JX2V100.0;0.00000000000000000;=;JX3V100.0;0.000
00000000000000;=;JX4V100.0;0.00000000000000000;=;JX5V100.0;0.00000000000000000;=;&;&;&;&;&;JV0N0;0;=;&;JV1N0;0;=
;&;JV2N0;0;=;&;JV3N0;0;=;&;JV4N0;0;=;&;JV5N0;0;=;&;"
Skill primitive execution finished, all setpoints reached.
User application acknowledged, Time: 39981
***************************************************************************************************************
Time: 39990 Message received: SetFPV ID:0
Original values:
Position(abs) : 997.500 | 653.000 | 319.500 | 0.000 | 180.000 | 0.000
Force : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Speed : 150.000 | 150.000 | 150.000 | 150.000 | 150.000 | 150.000
SMatrix : PosWorld | PosWorld | PosWorld | PosWorld | PosWorld | PosWorld
Threshold : 2.000 | 2.000 | 2.000 | 0.150 | 0.150 | 0.150
TaskFrame : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Acceleration : 200.000 | 200.000 | 200.000 | 200.000 | 200.000 | 200.000
Deceleration : 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000
Jerk : 0.000
Hold time : 100.000
Reference : "Joint"
User exit condition : "O25000.0"
Received new force-position setpoint.
Message was received correctly.
Transformed values:
Position(abs) : -20.295 | -11.302 | 177.989 | 0.000 | 13.314 | -20.295
Force : 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
Speed : 150.000 | 150.000 | 150.000 | 150.000 | 150.000 | 150.000
SMatrix : Pos | Pos | Pos | Pos | Pos | Pos
SelectionPos : 1 | 1 | 1 | 1 | 1 | 1
SelectionFor : 0 | 0 | 0 | 0 | 0 | 0
SelectionSpd : 0 | 0 | 0 | 0 | 0 | 0
...
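The exit condition strings in the listing above follow a postfix layout: a magnitude identifier (e.g. "JX0N0") and a constant are followed by a comparison operator, and "&"/"|" combine the resulting boolean values. A minimal evaluator for that layout could look as follows; the function and the caller-supplied lookup table are hypothetical, and timeout terms such as "O25000.0" are omitted from this sketch:

```cpp
#include <cassert>
#include <cctype>
#include <map>
#include <sstream>
#include <stack>
#include <string>

// Hedged sketch: evaluates a semicolon-separated postfix exit condition
// such as "JX0N0;-165.0;<;JX1N0;31.6;>;|;". Magnitude identifiers are
// resolved via the 'values' table; '<', '>', '=' compare the magnitude
// with the preceding constant; '&' and '|' combine boolean results.
// This mirrors the token layout shown above; the real grammar may differ.
bool evalExitCondition(const std::string &cond,
                       const std::map<std::string, double> &values)
{
    std::stack<bool> results;  // intermediate boolean results
    std::string id;            // pending magnitude identifier
    double constant = 0.0;     // pending comparison constant
    std::istringstream in(cond);
    std::string tok;
    while (std::getline(in, tok, ';'))
    {
        if (tok == "&" || tok == "|")
        {
            bool b = results.top(); results.pop();
            bool a = results.top(); results.pop();
            results.push(tok == "&" ? (a && b) : (a || b));
        }
        else if (tok == "<" || tok == ">" || tok == "=")
        {
            double v = values.at(id);
            if (tok == "<")      results.push(v < constant);
            else if (tok == ">") results.push(v > constant);
            else                 results.push(v == constant);
        }
        else if (!tok.empty() &&
                 (std::isdigit((unsigned char)tok[0]) || tok[0] == '-'))
        {
            constant = std::stod(tok);  // comparison constant
        }
        else if (!tok.empty())
        {
            id = tok;                   // magnitude identifier
        }
    }
    return results.top();
}
```

With joint 0 at -170 deg., the term "JX0N0;-165.0;<;" from the default exit condition string would evaluate to true, signaling that the joint limit condition fired.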
C.2 Value Output File
Force controller output file: /home/tkr/cpp/spc-values.xls
This file contains all important values of the force controller.
tkr 2002
Wed Nov 13 16:17:06 2002
Time: 16
Cycle time : 8000000 ns
SelectionPos : Selection vector for position controlled DOFs
SelectionFor : Selection vector for force controlled DOFs
SelectionSpd : Selection vector for speed controlled DOFs
Contact : Contact vector
F & T : Current forces and torques measured in the task frame
Diff_F & Diff_T : Current force and torque differences
Des_F & Des_T... : Current forces and torques desired by the user
Thres_F & Thres_T : Current force and torque thresholds specified by the user
X & Xr : Current Cartesian position in mm and degrees
V & Vr : Current Cartesian speed in mm/s and degrees/s
VTask & VTaskr : Current Cartesian speed w.r.t. the task frame in mm/s and degrees/s
J0 - J5 : Current joint position in degrees
VJ0 - VJ5 : Current joint speeds in degrees/s
DesPos0 - DesPos5 : Desired position (joint values in deg. or Cartesian values)
DesSpd0 - DesSpd5 : Desired speed (joint speeds in deg./s or Cartesian speeds)
TasSpd0 - TasSpd5 : Current task speed in mm/s and degrees/s
This file is importable to Microsoft Excel.
Reference Time SelectionPos SelectionFor SelectionSpd Contact Fx Fy Fz ...
Joint 523 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 0,041 -0,176 0,024 ...
Joint 526 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,071 0,071 -0,118 ...
Joint 534 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,388 -0,182 0,402 ...
Joint 541 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 0,044 -0,054 0,52 ...
Joint 549 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,014 -0,307 0,449 ...
Joint 557 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,107 0,034 -0,26 ...
Joint 565 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,372 0,009 0,26 ...
Joint 573 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,038 -0,386 0,094 ...
Joint 581 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,001 0,042 0 ...
Joint 589 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 0,128 -0,449 0,354 ...
Joint 597 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,141 0,131 -0,661 ...
Joint 605 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,383 -0,166 0,496 ...
Joint 613 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 0,041 -0,176 0,307 ...
Joint 621 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,043 0,022 -0,354 ...
Joint 629 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 0,001 -0,271 0,52 ...
Joint 637 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,114 0,093 -0,165 ...
Joint 645 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,167 -0,32 0,142 ...
Joint 653 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,033 0,054 0,142 ...
Joint 661 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,303 0,034 -0,047 ...
Joint 669 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,035 0,128 -0,236 ...
Joint 677 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,428 -0,09 0,354 ...
Joint 685 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 0,167 -0,312 0,331 ...
Joint 693 1-1-1-1-1-1 0-0-0-0-0-0 0-0-0-0-0-0 0-0-0-0-0-0 -0,156 0,094 -0,732 ...
... ... ... ... ... ... ... ... ... ...
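One practical detail when importing this file elsewhere: the numeric columns use a decimal comma ("0,041"), which the default C locale of standard C++ parsing rejects. A hypothetical helper (not part of the original sources) that converts such a field could look like this:

```cpp
#include <cassert>
#include <string>

// Illustrative helper: replace the German decimal comma with a point
// so that std::stod can parse a field from the value output file.
double parseCommaDecimal(std::string field)
{
    const std::string::size_type pos = field.find(',');
    if (pos != std::string::npos)
        field[pos] = '.';  // "0,041" becomes "0.041"
    return std::stod(field);
}
```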
D Contents of the Enclosed CD
Note: important folders contain a readme file with further information
List of Figures
1.1 Simplified overview: from CAD data to robot task execution . . 2
1.2 Robot control architecture . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Force controller as suggested in [KROEGER02] . . . . . . . . . . 5
1.4 Proposition for a hybrid control architecture . . . . . . . . . . . . 7
1.5 Trajectory planning with a defined jerk, i.e. the time dependent
position behavior is a third order polynomial . . . . . . . . . . . . 8
2.1 Homogeneous coordinate transformations . . . . . . . . . . . . . . 10
3.1 A skill primitive net for a light bulb to be inserted into a bayonet
socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Two-dimensional example to bring the hand frame into a desired
target position A, B, or C . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 About skill primitive parameters for movements in free space and
different reference frames . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4 Example for a force controlled task frame movement . . . . . . . 23
3.5 About exit conditions that contain velocity comparisons w.r.t. the
world frame or w.r.t. the task (hand, or sensor) frame . . . . . . 24
4.1 Different execution times for each DOF during trajectory planning 27
4.2 Scheme to calculate the execution time for a single DOF . . . . . 29
4.3 Flow chart to determine the time and phase for one DOF . . . . 31
4.4 How to apply the next acceleration or deceleration step for a
single DOF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.5 Flow chart of the velocity calculation for the next control cycle 35
4.6 Cartesian path problem type I: intermediate points unreachable 38
4.7 Cartesian path problem type II: high joint rates near singularity 38
4.8 Cartesian path problem type III: start and goal reachable in
different solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.1 Four processes to execute a robot task . . . . . . . . . . . . . . . . 40
5.2 Part of an example application for the force/torque sensor . . . 45
5.3 Structure chart of the SPController.cpp file . . . . . . . . . . . . . 46
5.4 Example for an initialization file . . . . . . . . . . . . . . . . . . . . 49
5.5 Determination of world, robot base, hand, and sensor frame . . 59
5.6 Flow chart to generate the skill primitive exit condition . . . . . 67
5.7 Methods of the class ”skillprim” (1) . . . . . . . . . . . . . . . . . 71
5.8 Methods of the class ”skillprim” (2) . . . . . . . . . . . . . . . . . 72
5.9 Sample code for a simple user application (1) . . . . . . . . . . . . 80
5.10 Sample code for a simple user application (2) . . . . . . . . . . . . 81
6.1 Experimental results of the six DOF trajectory planner . . . . . 83
6.2 Forces and torques while inserting the bulb into the bayonet socket 84
6.3 Picture sequence while inserting a bulb into a bayonet socket . . 85
6.4 Force and torque diagrams during the block placing procedure . 87
6.5 Picture sequence while placing an aluminium block onto a solid
table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
7.1 Contact force experiments with different sample times . . . . . . 89
7.2 Average run times of one control cycle . . . . . . . . . . . . . . . . 90
8.1 Torque control in a task frame w.r.t. the world frame . . . . . . 93
8.2 Suggestion for a general frame assignment . . . . . . . . . . . . . . 94
8.3 Example for a force controlled task frame movement (different
stop condition interpretation) . . . . . . . . . . . . . . . . . . . . . . 95
8.4 Exit condition w.r.t. a frame at the beginning of execution vs.
exit condition w.r.t. a variable frame . . . . . . . . . . . . . . . . . 96
8.5 Suggestion for a future software architecture . . . . . . . . . . . . 98
List of Tables
2.1 Used symbols and units . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1 About setpoint designations . . . . . . . . . . . . . . . . . . . . . . . 18
3.2 Description of reference frames and target positions . . . . . . . . 20
3.3 Elements of one comparison term within an exit condition . . . 22
3.4 Possible combinations to specify a comparison term of an exit
condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Reference frames in normal and in mixed mode . . . . . . . . . . 23
3.6 Magnitudes provided by the hybrid controller . . . . . . . . . . . 24
5.1 Source files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.2 Executable files to start each process . . . . . . . . . . . . . . . . . 47
5.3 Epsilon environments and their respective units . . . . . . . . . . 52
5.4 Reference frame definitions from the class ”skillprim” . . . . . . 72
5.5 Setpoint specification of the class ”skillprim” . . . . . . . . . . . 73
Bibliography
[BORCHARD01] Borchard, Michael: Kommunikationsmodell einer PC-basierten
Robotersteuerung. Institut für Robotik und Prozessinformatik, TU Braunschweig,
2001
[BREYMANN96] Breymann, Ulrich: C++ Eine Einführung. Carl Hanser Verlag,
1996
[CRAIG89] Craig, John J.: Introduction to Robotics: Mechanics and Control.
Addison-Wesley Publishing Company, 1989
[HANNA01] Hanna, Jörg: Entwicklung der Betriebssoftware und Regelung einer
PC-basierten Robotersteuerung. Institut für Robotik und Prozessinformatik,
TU Braunschweig, 2001
[HO90] Ho, C. Y. and Sriwattanathamma, Jen: Robot Kinematics: Symbolic
Automation and Numerical Synthesis. Ablex Publishing Corporation, 1990
[HOFFMANN00] Hoffmann, Andreas: Modellierung, Regelung und Trajektorienplanung
des Experimentalroboters EDDA in Matlab/Simulink. Institut für Robotik und
Prozessinformatik, TU Braunschweig, 2000
[KERNIGHAM90] Kernighan, Brian W. and Ritchie, Dennis M.: Programmieren
in C. Carl Hanser Verlag, 1990
[KROEGER02] Kröger, Torsten: Embedding Force Control in a Robot Control
System. Institut für Robotik und Prozessinformatik, TU Braunschweig, 2002
[LUTZ95] Lutz, Holger and Wendt, Wolfgang: Taschenbuch der Regelungstech-
nik. Verlag Harri Deutsch, 1995
[MASON83] Mason, Matthew T.: Compliant Motion. In ”Robot Motion”, MIT Press,
pp. 305-322, 1983
[MCKERROW91] McKerrow, Phillip John: Introduction to Robotics. Addison
Wesley, 1991
[OLOMSKI89] Olomski, Jürgen: Bahnplanung und Bahnführung mit
Industrierobotern. Verlag Vieweg, 1989
[OTTEN02] Otten, Niels: Rapid Control Prototyping mit einer auf der Middleware
MiRPA basierenden Robotersteuerung. Institut für Robotik und Prozessinformatik,
TU Braunschweig, 2002
[RATKE00] Ratke, Holger: Entwicklung der WindowsNT-Benutzeroberfläche einer
PC-basierten Robotersteuerung. Institut für Robotik und Prozessinformatik,
TU Braunschweig, 2000
[SCHROEDER99] Schröder, André: Entwicklung einer Anpaßbaugruppe und
Regelung für eine PC-basierte Robotersteuerung. Institut für Robotik und
Prozessinformatik, TU Braunschweig, 1999
[SCHUTTER88-1] Schutter, J. De and Brussel, H. Van: Compliant Motion I.
A Formalism for Specifying Compliant Motion Tasks. The International Journal of
Robotics Research, Vol.7, No. 4, pp. 3-17, 1988
[SCHUTTER88-2] Schutter, J. De and Brussel, H. Van: Compliant Motion II. A
Control Approach Based on External Control Loops. The International Journal of
Robotics Research, Vol.7, No. 4, pp. 18-33, 1988
[SIEGERT96] Siegert, Hans-Jürgen and Bocionek, Siegfried: Robotik:
Programmierung intelligenter Roboter. Springer Verlag, 1996
[WAHL02] Wahl, Friedrich M. and Thomas, Ulrike: Robot Programming - From
Simple Moves to Complex Robot Tasks. Proceedings of the First International
Colloquium of the Collaborative Research Centre 562, Braunschweig, May 29-30,
2002. Fortschritte in der Robotik, Band 7, Shaker, Aachen, 2002, pp. 245-259
[WATCOM96] Watcom Software Systems Ltd.: C Library Reference, QNX soft-
ware systems, 1996