16.4 NEURAL NETWORKS



16.4.1 Neural Network Calculation of Inverse Kinematics

Objectives: To give insight into the neural network solution of the inverse kinematics problem.


16.4.1.1 - Inverse Kinematics


Forward kinematics for a 3-link manipulator
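
As a point of reference, the forward kinematics can be written out directly. The sketch below assumes a planar 3-link arm with link lengths l1, l2, l3 and joint angles theta1, theta2, theta3, and uses Python with numpy; the names and conventions are illustrative assumptions, not taken from the original figure.

import numpy as np

def forward_kinematics(theta, lengths):
    """End-effector pose (x, y, phi) of an assumed planar 3-link arm."""
    t1, t2, t3 = theta
    l1, l2, l3 = lengths
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2) + l3 * np.cos(t1 + t2 + t3)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2) + l3 * np.sin(t1 + t2 + t3)
    phi = t1 + t2 + t3              # end-effector orientation
    return np.array([x, y, phi])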



Inverse kinematics for a 3-link manipulator
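
Continuing the sketch above (same assumed arm and conventions), a closed-form inverse solution can be obtained by stepping back along the last link to the wrist point and then solving the remaining two-link problem; the elbow_up flag chooses between the two possible elbow configurations.

def inverse_kinematics(pose, lengths, elbow_up=True):
    """Closed-form joint angles for the assumed planar 3-link arm."""
    x, y, phi = pose
    l1, l2, l3 = lengths
    # Wrist position: step back along the last link.
    xw = x - l3 * np.cos(phi)
    yw = y - l3 * np.sin(phi)
    # Two-link solution for the first two joints.
    c2 = (xw**2 + yw**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("pose is outside the reachable workspace")
    t2 = np.arccos(c2) if elbow_up else -np.arccos(c2)
    t1 = np.arctan2(yw, xw) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
    t3 = phi - t1 - t2
    return np.array([t1, t2, t3])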




Inverse kinematics techniques are,
- explicit, with an exact solution (as for the 3-link manipulator)
- iterative, for use when an infinite number of solutions exist (a sketch of this approach follows the list)
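
A minimal sketch of the iterative approach, again for the assumed planar arm: only the (x, y) position is constrained here, so the arm is redundant and the Jacobian pseudo-inverse iteration converges to one of the infinitely many solutions, the one nearest the starting guess.

def iterative_ik(target_xy, theta, lengths, tol=1e-6, max_iter=200):
    """Jacobian pseudo-inverse iteration toward a target (x, y) position."""
    theta = np.array(theta, dtype=float)
    l1, l2, l3 = lengths
    for _ in range(max_iter):
        t1, t2, t3 = theta
        s1, s12, s123 = np.sin(t1), np.sin(t1 + t2), np.sin(t1 + t2 + t3)
        c1, c12, c123 = np.cos(t1), np.cos(t1 + t2), np.cos(t1 + t2 + t3)
        xy = np.array([l1 * c1 + l2 * c12 + l3 * c123,
                       l1 * s1 + l2 * s12 + l3 * s123])
        error = np.asarray(target_xy) - xy
        if np.linalg.norm(error) < tol:
            break
        # 2x3 position Jacobian of the planar arm.
        J = np.array([[-l1 * s1 - l2 * s12 - l3 * s123, -l2 * s12 - l3 * s123, -l3 * s123],
                      [ l1 * c1 + l2 * c12 + l3 * c123,  l2 * c12 + l3 * c123,  l3 * c123]])
        theta += np.linalg.pinv(J) @ error
    return theta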

Problems that occur when doing inverse kinematics with these methods are,
- both methods require a computer capable of mathematical calculations
- the methods do not adapt to compensate for damage, calibration errors, wear, etc.
- solutions may be slow, especially for iterative solutions
- solutions are valid only for a specific robot

Advantages of using these methods are,
- both solutions will yield exact answers
- properties of both of these methods are well known


16.4.1.2 - Feed Forward Neural Networks

A feed-forward neural network with a sigmoidal activation function was used.

The backpropagation learning technique was used for training.
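
A minimal numpy sketch of such a network is given below: one fully connected hidden layer of sigmoid units, a bias input on every neuron, and plain gradient-descent backpropagation on a mean-squared-error cost. The linear output layer, the weight initialisation, and the learning rate are assumptions made for illustration; the original notes do not specify them.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FeedForwardNet:
    """Fully connected network with one sigmoidal hidden layer (a sketch)."""

    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W1 = rng.normal(0.0, 0.5, (n_in + 1, n_hidden))   # +1 row for the bias input
        self.W2 = rng.normal(0.0, 0.5, (n_hidden + 1, n_out))

    def _forward(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])           # append bias
        H = sigmoid(Xb @ self.W1)
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        return Xb, H, Hb, Hb @ self.W2

    def predict(self, X):
        return self._forward(X)[3]

    def train_step(self, X, Y, lr=0.05):
        """One backpropagation step on the mean squared error."""
        Xb, H, Hb, Y_hat = self._forward(X)
        err = Y_hat - Y
        dW2 = Hb.T @ err / len(X)
        # Propagate the error back through the sigmoid hidden layer.
        dH = (err @ self.W2[:-1].T) * H * (1.0 - H)
        dW1 = Xb.T @ dH / len(X)
        self.W2 -= lr * dW2
        self.W1 -= lr * dW1
        return float(np.mean(err ** 2))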

Disadvantages of these networks are,
- unpredictable errors occur in the solution
- discontinuous problem spaces cause problems for the networks
- training may be very slow
- these networks are not well understood
- neuro computers are not commonly available

Advantages of these networks are,
- the architecture is fault tolerant
- faster calculations
- can be adjusted for changes in the robot configuration
- the controllers are not specific to a single robot


16.4.1.3 - The Neural Network Setup

The figure below shows how the neural network was configured to solve the problem.




The first neural network estimates the proper inverse kinematics. This estimate will contain a small error, so a second network is used to estimate the errors. Additional correction networks can be added after this.
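
Assuming the inputs are the desired end-effector pose and the outputs are the three joint angles, the cascade can be sketched with the FeedForwardNet class from the earlier sketch (the layer sizes are again only illustrative):

main_net = FeedForwardNet(n_in=3, n_hidden=20, n_out=3)        # pose -> joint angles
correction_net = FeedForwardNet(n_in=3, n_hidden=20, n_out=3)  # pose -> angle errors

def cascade_predict(poses):
    # The correction network's output is added to the first estimate.
    return main_net.predict(poses) + correction_net.predict(poses)

# main_net is trained first on (pose, angle) pairs; correction_net is
# then trained on the residuals, angles - main_net.predict(poses).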





The networks were generally connected with
- 10, 20 or 40 neurons in the hidden layer
- a bias input was connected to each neuron
- the layers were all fully connected
- there were runs with both one and two hidden layers


16.4.1.4 - The Training Set







The problem is reduced by using either the left or right arm configuration, and the solution is also constrained to elbow-up or elbow-down.

Discontinuities were avoided by not training the neural network in the region above the origin. The elbow-straight configuration is also a minor singularity problem.

Training points were evenly distributed throughout the robot workspace.

Only a quarter of the robot workspace was used because of the robot's symmetry.

The general protocol for training was,
- apply the desired position to the input, and train for the desired joint angles.
- when accuracy was high enough, the first correction network was trained on the errors between the first network's outputs and the desired values; additional correction networks were also trained in some cases
- the error was measured as the RMS of the differences between the network outputs and the desired joint angles (a rough sketch of this protocol is given after the list)
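
Putting the earlier sketches together (forward_kinematics and FeedForwardNet), the protocol can be roughly approximated as below. The random sampling of elbow-up configurations over roughly a quarter of the workspace, the unit link lengths, and the amount of training are all assumptions; the original work used an even grid of workspace points.

lengths = (1.0, 1.0, 1.0)
rng = np.random.default_rng(1)

# Sample elbow-up configurations (theta2 > 0, kept away from the
# straight-elbow singularity) covering roughly a quarter of the workspace.
n = 2000
angles = np.column_stack([rng.uniform(0.0, np.pi / 2, n),          # theta1
                          rng.uniform(0.1, np.pi - 0.1, n),        # theta2
                          rng.uniform(-np.pi / 2, np.pi / 2, n)])  # theta3
poses = np.array([forward_kinematics(t, lengths) for t in angles])

net = FeedForwardNet(n_in=3, n_hidden=20, n_out=3)
for epoch in range(20000):
    net.train_step(poses, angles, lr=0.05)      # desired pose in, joint angles out

rms = np.sqrt(np.mean((net.predict(poses) - angles) ** 2))
print(f"training RMS error: {np.degrees(rms):.2f} degrees")

# A correction network (see the cascade sketch above) would then be
# trained on the residuals, angles - net.predict(poses).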





A list of results is provided below,





The results in the table were obtained for a variety of network configurations.

A visual picture of the network configurations is shown below, and on subsequent pages. These are based on a set of test points that lie in a plane of the workspace.



********** Add figure of network point locations, and test conditions


As seen in the experimental results, distortions occur near the origin and at the edges of the workspace, as would be expected from the singularities found there.

The errors also increased near the training boundaries.


********* Add in more of the results figures


16.4.1.5 - Results

The mathematical singularities caused by Cartesian coordinates, and the ±180 degree singularity, could be eliminated by selecting another set of coordinates for the workspace and the arm.

The best results were about 1 degree RMS.




