Feed-Forward Neural Networks: Mathematical Models Based on Neural Networks
Dieter Kilsch
eh. Technische Hochschule Bingen, FB 2 • Technik, Informatik und Wirtschaft; GSE and APL-Germany
Böblingen
November 28th, 2016
My first „date“ with APL: 1984
My first APL-conference: St. Petersburg 1992
Analysis, Modelling and Solutions My Vision of Mobility
1 Analysis, Modelling and Solutions
2 Neurons and Neural Networks
3 Accident Severity
4 Comfort in Cabriolet: Active Torsion Damping
5 Further Examples and Conclusion
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 4 / 48
Convenience by Autonomous Vehicles
2007: Vision
© Die Zeit, 14.6.2007
The car manages itself, the driver's mind is free to enjoy life.
today: still a vision?
Vision: Requirements
© ADAC, July 2006
The car senses and controls.
Autonomous Cars Come True
"First Autonomous Car on Public Roads"
TU Braunschweig, https://www.tu-braunschweig.de/presse/medien/presseinformationen?year=2010&pinr=133
The car drives, the driver enjoys . . .
Leonie, 8.10.2010: World's first automated driving in real city traffic. Research vehicle "Leonie" drives automatically on the Braunschweig city ring road. World premiere in Braunschweig: today, for the first time, a vehicle is driving automatically in everyday city traffic. Within the research project "Stadtpilot", Technische Universität Braunschweig has developed, at its competence centre, the Niedersächsisches Forschungszentrum Fahrzeugtechnik, a research vehicle that automatically drives a predefined route in regular traffic.
Autonomous Cars Come True
Self-Driving Car Test: Steve Mahan (youtube)
Autopilots, 4.10.2012: In California, the word "ghost driver" is currently taking on a whole new meaning: since last week, self-driving cars have been allowed to cruise the highways. Californians should therefore not be surprised to soon find a car without a driver next to them; the driver is probably sitting in the back seat watching television while the autopilot steers. The new system comes from Google, which is no longer content with merely filming roads. The vehicles have already covered 300,000 miles without an accident, the internet company announced.
Christina Kyriasoglou, © Die Zeit
The car steers, the human thinks, ...
Autonomous Cars Come True
Freightliner Inspiration Truck Unveiled at Hoover Dam LAS VEGAS, 5-5-2015, DTNA
First Licensed Autonomous Commercial Truck to Drive on U.S. Public Highway. In a spectacular evening ceremony at Hoover Dam, Daimler Trucks North America (DTNA) unveiled the Freightliner Inspiration Truck to several hundred international news media, trucking industry analysts and officials.
Daimler Inspiration Truck: So fährt der Robo-Truck von Mercedes (youtube)
The Freightliner Inspiration Truck is the first licensed autonomous commercial truck to operate on an open public highway in the United States. Developed by engineers at DTNA, it promises to unlock autonomous vehicle advancements that reduce accidents, improve fuel consumption, cut highway congestion, and safeguard the environment.
http://www.freightlinerinspiration.com/
http://www.freightlinerinspiration.com/newsroom/press/inspiration-truck-unveiled/
https://www.youtube.com/watch?v=mRkOGU3Gz9Y
https://www.youtube.com/watch?v=LL4dbq-n8Pg
https://www.youtube.com/watch?v=LJz4Ms_5AXE
Analysis, Modelling and Solutions Objectives
How to Solve a Problem?
Algorithm
Intuitively build a model. Deduce a numerical algorithm.
Put it into a program. Use it respecting the preconditions.
Expert System
Intuitively build a model. Formulate rules.
Apply the rules.
May solve related problems.
Neural Network
Intuitively build a model. Needs sampling points. Generalizes based on sampling data.
Applies to related problems.
Physical Performance: An Engine on a Test Bench
load, throttle valve, ignition angle, dwell angle, mixture, voltage, temperature of engine, air and oil
→ rotational speed, consumption, temperature and amount of emission
Targets
Create the optimal engine characteristic map, ... also regarding the start situation.
Reduce test bench time.
Mathematical Model of an Engine
Mathematical model: abstraction. Look at the engine as a function. Assumes functional dependencies (one-to-one).
Models with artificial neural networks (KNN). Artificial neural networks should learn to "behave" like an engine. The knowledge must come from (measured) data.
Application: Reduce Test Bench Time
Optimization of characteristic maps using artificial neural networks
Fill the neural network with the "knowledge" of several engines: measured data from the test bench. New engine: extend the knowledge base with a few data points from the test bench. Optimize the characteristic map using the trained neural network.
R. Stricker, BMW AG, 1996
Curve Fitting (Least Square Method, Regression)
“Learning” only in the last layer
Curve Fitting (Least Sq. M., Regression): Examples
Polynomial Fitting
f(x) = Σ_{k=0}^{n} a_k x^k
Fourier Fitting
f(x) = a_0 + Σ_{k=1}^{n} ( a_k cos(ω_k x) + b_k sin(ω_k x) )
Stress-Strain-Diagram BMW 1996
f(x) = a_{-1}/(x + 0.1) + a_0 + a_1 x + a_2 x^2 + a_3 x^3
Least-squares fitting provides: a_{-1} = 2.2677; a_0 = 297.9072; a_1 = 71.4932; a_2 = −33.7959; a_3 = 5.5016.
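A fit of this kind is an ordinary least-squares solve over a design matrix of basis functions. A minimal sketch (the sample data below are made up for illustration, not the BMW measurements):

```python
import numpy as np

# Hypothetical sample data standing in for the measured stress-strain curve.
x = np.linspace(0.0, 4.0, 50)
y = 2.0 / (x + 0.1) + 300.0 + 70.0 * x - 30.0 * x**2 + 5.0 * x**3

# Design matrix with the basis functions 1/(x+0.1), 1, x, x^2, x^3.
A = np.column_stack([1.0 / (x + 0.1), np.ones_like(x), x, x**2, x**3])

# The least-squares solution minimises ||A @ a - y||^2.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # recovers [2.0, 300.0, 70.0, -30.0, 5.0]
```

The same call fits the polynomial and Fourier variants; only the columns of the design matrix change.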
Neurons and Neural Networks NeuroScience
1 Analysis, Modelling and Solutions
2 Neurons and Neural Networks: NeuroScience; Artificial Neuron, Linear Separation; Neural Network Learning
3 Accident Severity
4 Comfort in Cabriolet: Active Torsion Damping
5 Further Examples and Conclusion
Brain — Computer: a Comparison
Brain and standard computers both perform highly, but at different tasks:
Brain
Highly parallel
Fault tolerant
Pattern recognition
Generalization
Self-organizing
ca. 10^11 neurons, reducing to 10^7
Every neuron connects to ca. 10^4 other neurons.
Computer
Precise
Faultless storing
Fast algorithmic calculations
von Neumann architecture
Nearly stand alone
The Biological Neuron
Principles of Operation
1 Impulse through the axon.
2 Synapses collect the impulse (chemical reaction).
3 Dendrites transmit it.
4 Nucleus gets the impulse (electrical impulse).
5 Overall impulse: excitation of the neuron.
6 Threshold reached: the neuron sends an impulse.
Learning: synapses and dendrites enhance their connection.
Neurons and Neural Networks Artificial Neuron, Linear Separation
The Artificial Neuron
1 input (vector): e = (e_1, ..., e_n); a constant input −1 is used for the threshold
2 weights and threshold: w = (w_1, ..., w_n) and θ
3 net value (propagation): net = ⟨e, w⟩ − θ = Σ_{i=1}^{n} e_i w_i − θ
4 activation (primitive function), activity: a = a(⟨e, w⟩ − θ)
5 output function
6 output: o = o(a(⟨w, e⟩ − θ))
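The six steps above fit in a few lines of code. A minimal sketch; the weights, threshold and tanh activation below are illustrative choices, not values from the talk:

```python
import math

def neuron(e, w, theta, activation=math.tanh):
    """Single artificial neuron: o = a(<e, w> - theta)."""
    net = sum(ei * wi for ei, wi in zip(e, w)) - theta  # propagation value
    return activation(net)

# A neuron acting as a soft AND of two inputs (hypothetical weights/threshold).
print(neuron([1.0, 1.0], [1.0, 1.0], 1.5) > 0)    # True: both inputs active
print(neuron([1.0, -1.0], [1.0, 1.0], 1.5) > 0)   # False: only one active
```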
Sigmoidal Activation
a(x) = tanh(gx): ℝ → (−1, 1)
Derivative: a′(x) = g(1 − tanh(gx)²) = g(1 − a(x)²); a′(0) = g
Alternative activations:
1 a(x) = 1/(1 + e^{−gx}): ℝ → (0, 1), with a′(x) = g(1 − a(x)) · a(x); a′(0) = g/4
2 Piecewise parabola, easy implementation in hardware (Carmen Stumm)
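A quick numerical check of both activations and their derivatives (a small sketch; the gain g = 2 is an arbitrary example value):

```python
import math

def tanh_act(x, g=1.0):
    """a(x) = tanh(g x), mapping R onto (-1, 1)."""
    return math.tanh(g * x)

def tanh_act_deriv(x, g=1.0):
    """a'(x) = g * (1 - a(x)^2); in particular a'(0) = g."""
    return g * (1.0 - math.tanh(g * x) ** 2)

def logistic(x, g=1.0):
    """a(x) = 1 / (1 + e^(-g x)), mapping R onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-g * x))

def logistic_deriv(x, g=1.0):
    """a'(x) = g * (1 - a(x)) * a(x); in particular a'(0) = g / 4."""
    a = logistic(x, g)
    return g * (1.0 - a) * a

# Slopes at the origin match the formulas: g for tanh, g/4 for the logistic.
print(tanh_act_deriv(0.0, g=2.0))   # 2.0
print(logistic_deriv(0.0, g=2.0))   # 0.5
```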
Bipolar and Binary Threshold Function
a(x) = sign(x): ℝ → {−1, 1}
Activity: a(x) = sign(⟨e, w⟩ − θ) = 2(⟨e, w⟩ − θ ≥ 0) − 1
Alternative activity: a(x) = (x ≥ 0) = (1 + sign(x))/2: ℝ → {0, 1}
Linear Separation: Threshold neurons linearly separate the input data; logically combined threshold neurons define a simplex.
Linearly separable Sets
Hidden neurons separate linearly
[Figure: three threshold neurons, each defined by a weight vector and threshold, separate the plane along the lines x = −1.5, y = −1.5 and x + y = 4.5.]
The output neuron gathers these results using the logical OR function. A positive answer (o = 1) signals that the element belongs to the outer region (positive region).
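The OR-combination of threshold neurons can be sketched as follows; the three weight vectors and thresholds are illustrative values matching the three separating lines:

```python
def threshold_neuron(e, w, theta):
    """Bipolar threshold neuron: +1 if <e, w> >= theta, else -1."""
    net = sum(ei * wi for ei, wi in zip(e, w)) - theta
    return 1 if net >= 0 else -1

def outer_region(x, y):
    """OR-combination of three separating neurons (hypothetical weights
    and thresholds for the lines x = -1.5, y = -1.5 and x + y = 4.5)."""
    h1 = threshold_neuron([x, y], [-1, 0], 1.5)   # fires left of x = -1.5
    h2 = threshold_neuron([x, y], [0, -1], 1.5)   # fires below y = -1.5
    h3 = threshold_neuron([x, y], [1, 1], 4.5)    # fires above x + y = 4.5
    return 1 if max(h1, h2, h3) == 1 else -1      # logical OR of the hidden layer

print(outer_region(0, 0))   # -1: inside the simplex
print(outer_region(5, 5))   # 1: in the outer (positive) region
```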
Neurons and Neural Networks Neural Network Learning
Multi-layered Feed Forward Networks
Feed forward network with topology 3-4-4-2
Learning: change the weights and thresholds until the result satisfies.
[APL function Bpforw (forward propagation): transform the input, then loop over the layers, computing each layer's output as 1 ÷ (1 + e^−net) from the weights (bpgw) and biases (bpbi).]
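The forward pass computed by Bpforw can be sketched in Python/NumPy (logistic activation as above; the 3-4-2 topology and zero weights below are only a demonstration):

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass through a feed-forward network with logistic
    activation a(net) = 1 / (1 + e^(-net)); returns all layer outputs."""
    outputs = [np.asarray(x, dtype=float)]
    for W, b in zip(weights, biases):
        net = W @ outputs[-1] - b                    # propagation value minus threshold
        outputs.append(1.0 / (1.0 + np.exp(-net)))   # layer activation
    return outputs

# Demonstration on a 3-4-2 network with all-zero weights: a(0) = 0.5 everywhere.
weights = [np.zeros((4, 3)), np.zeros((2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(forward([1.0, -1.0, 0.5], weights, biases)[-1])  # [0.5 0.5]
```

Keeping all per-layer outputs, as Bpforw does, is what later makes back propagation possible.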
Topology of a Feed-forward Network
Theorem (Kolmogorov, 1957)
Every vector-valued function f: [0, 1]^n → ℝ^m can be written as a 3-layer feed-forward network with n input neurons, 2n + 1 hidden neurons and m output neurons. The activation functions depend on f and n.
Remark
1 The proof shows the existence in a non-constructive way.
2 It does not give the activation functions.
3 The theorem has no direct practical impact.
Addendum
For a continuous function f: [−1, 1]^n → [−1, 1] there are functions g and g_i (i = 1, ..., 2n + 1) of one argument and constants λ_j (j = 1, ..., n) with
f(x_1, ..., x_n) = Σ_{i=1}^{2n+1} g( Σ_{j=1}^{n} λ_j g_i(x_j) ).
Theorem (Approximation by Networks)
Every function can be approximated by networks with one hidden layer.
Multi-layered Feed Forward Networks
Input layer
Continuous input: linear transformation into [−1, 1]
Discrete input: one neuron per value, transformed onto {−1, 1}
Multi-layer network
Output layer using a tangential activity function
Target activities should be equally distributed in the interval [−0.6, 0.6]!
The inverse of the output function could be:
f: [m, M] → [−0.6, 0.6], x ↦ −0.6 + 1.2 ((x − m)/(M − m))^s; s > 0
Output layer using a logarithmic (log-sigmoid) activity function
Target activities should be equally distributed in the interval [0.2, 0.8]!
The inverse of the output function could be:
f: [m, M] → [0.2, 0.8], x ↦ 0.2 + 0.6 ((x − m)/(M − m))^s; s > 0
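Both target scalings are instances of one map; a small sketch (the function and parameter names are mine):

```python
def scale_target(x, m, M, lo, hi, s=1.0):
    """Map a target value from [m, M] into the activation interval [lo, hi]:
    x -> lo + (hi - lo) * ((x - m) / (M - m))**s, with shaping exponent s > 0."""
    return lo + (hi - lo) * ((x - m) / (M - m)) ** s

print(scale_target(50.0, 0.0, 100.0, -0.6, 0.6))  # 0.0  (tanh-style range)
print(scale_target(50.0, 0.0, 100.0, 0.2, 0.8))   # 0.5  (log-sigmoid-style range)
```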
Learning in multi-layered networks
Target
Change the weights and thresholds in such a way that the errors on the training data become small.
Calculations
error: E(w) = (1/2) Σ_{i=1}^{n} ‖z_i − o_i(w)‖²
gradient: grad_w E(w) = (∂E(w)/∂w_1, ∂E(w)/∂w_2, ..., ∂E(w)/∂w_N)
Delta Rule (Gradient Descent)
Δw^(t) = −σ grad_w E(w); w^(t) = w^(t−1) + Δw^(t) + μ Δw^(t−1)
σ decreasing, e.g. from 0.9 to 0.1; μ increasing, e.g. μ = 1 − σ.
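The update rule above can be sketched as follows; the toy error function E(w) = w²/2 and the fixed σ, μ are illustrative only (the talk varies them over training):

```python
import numpy as np

def delta_rule_step(w, grad, prev_dw, sigma, mu):
    """One delta-rule update with momentum:
    dw(t) = -sigma * grad E(w);  w(t) = w(t-1) + dw(t) + mu * dw(t-1)."""
    dw = -sigma * grad
    return w + dw + mu * prev_dw, dw

# Minimise the toy error E(w) = w^2 / 2 (so grad E(w) = w), starting at w = 4.
w, prev_dw = np.array([4.0]), np.array([0.0])
sigma, mu = 0.9, 0.1
for _ in range(50):
    w, prev_dw = delta_rule_step(w, w, prev_dw, sigma, mu)
print(abs(w[0]) < 1e-6)  # True: converged to the minimum at 0
```

The momentum term μ Δw^(t−1) smooths the trajectory, which is why σ and μ are traded off against each other during training.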
Error Back Propagation
Step-by-step error back propagation using the net error δ_i:
δ_i := ∂E/∂n_i = (∂E/∂n_{i+1}) · (∂n_{i+1}/∂o_i) · (∂o_i/∂n_i) = δ_{i+1} · W_{i+1} · A(n_i), where A(n_i) collects the activation derivatives of layer i
∂E/∂W_{i,rs} = (∂E/∂n_i) · (∂n_i/∂W_{i,rs}) = δ_i · o_{i−1,s} e_r = δ_{i,r} o_{i−1,s}
[APL function Bpback (error back propagation): compute the net error of the output layer, propagate it back over all layers while accumulating the weight gradient per layer, then update the weights (bpgw) and biases (bpbi) using the learning rate and momentum.]
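A compact Python sketch of the same back-propagation step (squared error, logistic activation; thresholds are omitted for brevity, and the topology, random weights and sample are illustrative):

```python
import numpy as np

def forward(x, weights):
    """Logistic forward pass; returns all per-layer outputs."""
    acts = [np.asarray(x, dtype=float)]
    for W in weights:
        acts.append(1.0 / (1.0 + np.exp(-(W @ acts[-1]))))
    return acts

def backward(acts, weights, target, lr=0.5):
    """One back-propagation step for E = (1/2)||z - o||^2."""
    o = acts[-1]
    delta = (o - target) * o * (1.0 - o)     # net error of the output layer
    new_weights = list(weights)
    for i in reversed(range(len(weights))):
        grad = np.outer(delta, acts[i])      # dE/dW_{i,rs} = delta_r * o_{i-1,s}
        if i > 0:                            # propagate the net error one layer back
            delta = (weights[i].T @ delta) * acts[i] * (1.0 - acts[i])
        new_weights[i] = weights[i] - lr * grad
    return new_weights

# Training reduces the error on a single hypothetical sample.
rng = np.random.default_rng(0)
W = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
x, z = np.array([1.0, -1.0, 0.5]), np.array([0.8, 0.2])
e0 = np.sum((z - forward(x, W)[-1]) ** 2)
for _ in range(100):
    W = backward(forward(x, W), W, z)
e1 = np.sum((z - forward(x, W)[-1]) ** 2)
print(e1 < e0)  # True: the error decreased
```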
Learning in multi-layered networks
Levenberg-Marquardt Method
E(w) = (1/2) ⟨f(w), f(w)⟩ with f(w) = z − o(w)
0 = E′(w) = grad E(w) = f′ᵀ(w) f(w)
E″(w) = (f′ᵀ(w) f(w))′ = f″ᵀ(w) f(w) + f′ᵀ(w) f′(w) ≈ f′ᵀ(w) f′(w) for f″ᵀ(w) f(w) small
Newton step: w_{k+1} = w_k − E″(w)⁻¹ E′(w), i.e. Δw = −(f′ᵀ(w) f′(w))⁻¹ f′ᵀ(w) f(w)
System of linear equations to be solved:
f′ᵀ(w) f′(w) Δw = −f′ᵀ(w) f(w)
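The step at the heart of the method is a single linear solve. A sketch with a linear toy residual f(w) = Aw − b, for which one such step reaches the least-squares optimum (A and b are made up; the damping term that gives Levenberg-Marquardt its robustness is left out):

```python
import numpy as np

def gauss_newton_step(w, f_val, J):
    """Solve f'^T f' * dw = -f'^T f for dw and update w.
    (Levenberg-Marquardt would add a damping term lambda*I on the left.)"""
    dw = np.linalg.solve(J.T @ J, -(J.T @ f_val))
    return w + dw

# Toy residual f(w) = A w - b with Jacobian f'(w) = A.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
w = np.zeros(2)
w = gauss_newton_step(w, A @ w - b, A)
print(w)  # [1. 1.]: the least-squares solution of A w = b
```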
Evaluation of a trained network

Error in training data: maximal error, mean error, standard deviation

Insider: Evaluation of forecasts

Error in testing data (20%-40% of the available data): maximal and mean error, standard deviation

Auto-correlation

[Diagram: two neural networks (KNN) with signals e, f_o, f_e]
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 31 / 48
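The listed error measures are straightforward to compute. A small sketch (the `evaluate` helper is illustrative, not from the talk) comparing a network output o against targets z:

```python
import numpy as np

def evaluate(o, z):
    """Error statistics between network output o and target z."""
    err = o - z
    return {
        "max_error": float(np.max(np.abs(err))),    # maximal error
        "mean_error": float(np.mean(np.abs(err))),  # mean absolute error
        "std": float(np.std(err)),                  # standard deviation
        "correlation": float(np.corrcoef(o, z)[0, 1]),
    }

z = np.array([1.0, 2.0, 3.0, 4.0])   # targets
o = np.array([1.1, 1.9, 3.2, 3.8])   # network outputs
stats = evaluate(o, z)
print(round(stats["max_error"], 3))  # 0.2
```

The same statistics would be computed separately on the training and the testing split, so the two can be compared as the slide suggests.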
Accident Severity Prediction of Accident Severity
1 Analysis, Modelling and Solutions
2 Neurons and Neural Networks
3 Accident Severity
Prediction of Accident Severity
Learning Strategy
4 Comfort in Cabriolet: Active Torsion Damping
5 Further Examples and Conclusion
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 32 / 48
Accident Severity Prediction of Accident Severity
Accident Severity with A. Kuhn, J. Urbahn, BMW AG, 2000
t0: decision to fire airbag ...

tZ: ignition of airbag (t1 − tZ ≈ 30 ms)

t1: driver starts forward displacement

t2: acceleration decreases

Targets
1 predict the severity of the accident
2 help deciding which action is to be taken
3 protect the passengers as well as possible
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 33 / 48
Accident Severity Prediction of Accident Severity
Targets of the Project
Accident severity: possible parameters
1 (mean) velocity of passengers (time, forward displacement)
2 mean acceleration of passengers

Data base
Data from parameter variations with the Monte Carlo method:
1 variation of relevant parameters and testing mode
2 FEM simulations using PamCrash
3 150-300 data sets for each of the 14 models
Data from some real crash tests

Made possible by more computer power!
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 34 / 48
Accident Severity Prediction of Accident Severity
Using the Power of Neural Networks
3- or 4-layer networks

Input: accelerations, velocities, displacements; maximal and mean values

Output
1 velocity
2 mean acceleration
(impact on passengers)

Learning:
activation function: tangential, piecewise parabola
learning method: gradient descent, Levenberg-Marquardt
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 35 / 48
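A forward pass through such a network takes only a few lines. The sketch below is illustrative: the 4-15-8-1 topology is one of those used in the project, but the random initial weights and the reading of "tangential" as the hyperbolic tangent are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes for a 4-15-8-1 network (one of the usable topologies).
sizes = [4, 15, 8, 1]
weights = [rng.normal(0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(0, 0.5, m) for m in sizes[1:]]

def forward(x):
    """Forward pass: tanh activations in the hidden layers, linear output."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)
    return weights[-1] @ a + biases[-1]

x = np.array([0.1, -0.3, 0.7, 0.2])   # e.g. mean accelerations and velocities
y = forward(x)
print(y.shape)  # (1,)
```

Training then adjusts `weights` and `biases` so that the single output approximates the accident-severity parameter (e.g. the mean acceleration).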
Accident Severity Learning Strategy
Training the Networks: Learning Strategy
random choice of 60% learning, 40% testing data
stop training when the error in the testing data increases

Mean learning error

[Figure: mean learning error over 81 epochs; performance 0.00631592, goal 1e-5; training (blue), goal (black), validation (green), test (red)]

Weights in the first layer

[Figure: weight matrix of the first layer, neurons 1-4 vs. inputs 0-7]
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 36 / 48
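The learning strategy (60/40 split, stop when the testing error starts to rise) can be sketched as follows. The error curve and the patience of 3 epochs are purely illustrative assumptions, not values from the project:

```python
import numpy as np

rng = np.random.default_rng(1)

# 60% learning / 40% testing split of the available data, chosen at random.
data = rng.normal(size=(100, 5))
idx = rng.permutation(len(data))
n_learn = int(0.6 * len(data))
learn, test = data[idx[:n_learn]], data[idx[n_learn:]]

# Stand-in testing-error curve: it falls at first and then rises again
# (overfitting). In reality this would be measured on `test` each epoch.
test_errors = [1.0 / (1 + e) + 0.002 * max(0, e - 20) for e in range(100)]

best, best_epoch, patience, bad = float("inf"), 0, 3, 0
for epoch, err in enumerate(test_errors):
    if err < best:
        best, best_epoch, bad = err, epoch, 0
    else:
        bad += 1
        if bad >= patience:   # stop: testing error keeps increasing
            break
print(best_epoch)
```

The weights from `best_epoch` would be kept; everything after that point only fits noise in the learning data.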
Accident Severity Learning Strategy
Optimization
Statistics on the number of neurons (1 - 2 hidden layers)
[Figure: standard deviations µ(σ(error)) ± σ(σ(error)) vs. number of weights, for learning and testing data]

[Figure: correlations µ(cor(a,z)) ± σ(cor(a,z)) vs. number of weights, for learning and testing data]

Graphs:
σ(σ(o_i − z_i))
correlation

Expectation:
σ(σ) gets smaller up to saturation.
Error in learning data is only a bit better than in testing data.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 37 / 48
Accident Severity Learning Strategy
Results
Models
1 FEM simulation data gives a good data base.
2 Usable topologies, e.g. 4-15-8-1, 4-33-1
3 Usable parameter: mean acceleration
4 Usable input: mean accelerations and velocities

The σ²-method allows:
choosing a network of an appropriate size,
judging the quality of the data.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 38 / 48
Comfort in Cabriolet: Active Torsion Damping Disturbance of Comfort
1 Analysis, Modelling and Solutions
2 Neurons and Neural Networks
3 Accident Severity
4 Comfort in Cabriolet: Active Torsion Damping
Disturbance of Comfort
Active Torsion Damping using Neural Networks
5 Further Examples and Conclusion
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 39 / 48
Comfort in Cabriolet: Active Torsion Damping Disturbance of Comfort
Active Damping of Torsion with Ch. Hornung, G. Pflanz, BMW AG, 2005
Problem of a cabriolet: lack of torsional stiffness Mx/dy

Sedan (Limousine): 100%, cabriolet: 7.3%
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 40 / 48
Comfort in Cabriolet: Active Torsion Damping Disturbance of Comfort
Origin of Unwanted Vibration
Car excitation is mainly caused by wheel resonance,

⇒ the excitation is transmitted by joints through the axles and spring struts,

⇒ the vibration is perceived by the passengers.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 41 / 48
Comfort in Cabriolet: Active Torsion Damping Active Torsion Damping using Neural networks
Active Damping: Actuators produce counter-displacement
Sensors and actuators
Sensors detect a disturbance,
actuators produce an opposite displacement,

⇒ no displacement at the windscreen panel.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 42 / 48
Comfort in Cabriolet: Active Torsion Damping Active Torsion Damping using Neural networks
Training and Results

Models
One or all velocities
Time series up to 500 ms
Combination of accelerations

Training of the neural networks
At least 40% of the data for testing
Gradient descent, Levenberg-Marquardt
Termination: errors in the testing data increase

Good results
Time series of ca. 200 ms,
2 and 4 input signals,
both training methods.
Small networks ⇒ strongly linear behaviour of car body and actuator.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 43 / 48
Comfort in Cabriolet: Active Torsion Damping Active Torsion Damping using Neural networks
Validation
Validation
Integration of the trained network into a Simulink model.
The neural network gives slightly better results than a linear controller.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 44 / 48
Further Examples and Conclusion Further Example
1 Analysis, Modelling and Solutions
2 Neurons and Neural Networks
3 Accident Severity
4 Comfort in Cabriolet: Active Torsion Damping
5 Further Examples and Conclusion
Further Example
Conclusion
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 45 / 48
Further Examples and Conclusion Further Example

Pattern Recognition

Controlling vehicles and robots
A neural network controls a vehicle or robot; it is trained „on the job“.

Insolvency detection
Based on annual reports, a forecast of the risk of insolvency is given.

Potential termination
Based on power consumption and client data, companies that might terminate a contract are identified and get a discount.

Forecast of share value
Forecast based on previous share values and economic data of the company.

Chemical reactivity
Prediction of the reactivity of a bond from its quantitative properties.

Origin of olive oil
The concentration of acids determines the region of origin of Italian olive oil.

Olfactometer
Micro-crystal system with six different piezo-electric crystal sensors; a neural network learns to recognize flavours.

Structure of a protein
Inference from the primary structure of a protein to its secondary spatial structure.

Power consumption
Prediction of the power consumption of companies from one year to the next.

Neural stethoscope
A neural network interprets the sound coming through a stethoscope and provides a diagnosis of a heart problem.

Braking torque
Determining the braking torque from hydraulic pressure and velocity.

Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 46 / 48
Further Examples and Conclusion Conclusion
Conclusion
Neural networks are able to
learn and store the know-how of a system,
map functional dependencies,

using a smooth, balancing interpolation between sampling points.
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 47 / 48
Further Examples and Conclusion Conclusion
Thank you for listening to my talk!
Dieter Kilsch (eh. TH Bingen) Feed-Forward Neural Networks 28.11.2016 48 / 48