Applying Neural Networks: A Practical Guide
Morgan Kaufmann, 1996 - 303 pages

"Only a few years ago, neural networks were touted as the solution to all our worries. Artificial Intelligence the easy way! Of course, nothing is ever that easy, and it was not long before people realised that a lot of care and expertise are required to prevent a neural-network-based project going irretrievably wrong. This book presents the practical solutions required to apply neural networks successfully. Neural networks have become an invaluable tool to both academia and industry. Whether they are applied to business forecasting, machine health monitoring, process control or laboratory data analysis, researchers and managers alike are discovering the modelling power of neural networks. This book is not another selection of papers, but a fully integrated, structured approach to the study and application of neural networks. It will be of interest to the industrial reader who wishes to benefit from the advantages neural networks provide, and to the student or academic researcher who, rather than wishing to study neural networks for their own sake, wishes to learn how to use them. This book is Neural Networks Made Easy, showing you how to plan, run and succeed with a neural-network-based project. By taking the most popular type of neural network - the Multi-Layer Perceptron - and presenting every step in its development, every decision to be made and every problem to be overcome (plus its solution!), it guides the reader to a successful project conclusion. Each chapter presents a stage in network development with an easy-to-follow discussion, a how-to-do-it reference section and a set of worked examples. The second half of the book is devoted to an in-depth study of a number of successful neural network applications in fields such as signal processing, financial prediction, business decision support, and process monitoring and control. The book also comes complete with a disk of C and C++ programs to help you on your way. Everything you need, in fact, to put neural networks into practice." --Page 4 of cover.
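To give a flavour of the Multi-Layer Perceptron the blurb refers to, here is a minimal sketch of an MLP forward pass in C (the same language as the book's accompanying disk). It is illustrative only: the layer sizes, weight values and logistic sigmoid activation are assumptions chosen for demonstration, not the code or network shipped with the book.

/* Minimal sketch of an MLP forward pass: one hidden layer,
 * logistic sigmoid activations. All weights and sizes here
 * are hypothetical; in practice they would be learned by
 * back-propagation on a training set. Compile with -lm. */
#include <stdio.h>
#include <math.h>

#define N_IN  2
#define N_HID 2
#define N_OUT 1

/* Logistic sigmoid activation, as used in classic MLPs. */
static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    /* Hypothetical weights and biases. */
    double w_hid[N_HID][N_IN]  = { { 0.5, -0.4 }, { 0.9, 0.1 } };
    double b_hid[N_HID]        = { 0.1, -0.2 };
    double w_out[N_OUT][N_HID] = { { 1.2, -0.8 } };
    double b_out[N_OUT]        = { 0.05 };

    double input[N_IN] = { 0.3, 0.7 };
    double hidden[N_HID], output[N_OUT];

    /* Hidden layer: weighted sum of inputs, then sigmoid. */
    for (int j = 0; j < N_HID; j++) {
        double sum = b_hid[j];
        for (int i = 0; i < N_IN; i++)
            sum += w_hid[j][i] * input[i];
        hidden[j] = sigmoid(sum);
    }

    /* Output layer: weighted sum of hidden activations, then sigmoid. */
    for (int k = 0; k < N_OUT; k++) {
        double sum = b_out[k];
        for (int j = 0; j < N_HID; j++)
            sum += w_out[k][j] * hidden[j];
        output[k] = sigmoid(sum);
    }

    printf("network output: %f\n", output[0]);
    return 0;
}

Every step hinted at in this sketch - choosing the layer sizes, encoding the inputs, training the weights and analysing the outputs - corresponds to a chapter in the contents below.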
Contents

Introduction | 11
1.2 How does neural computing differ from traditional programming? | 11
1.3 How are neural networks built? | 11
1.4 How do neural networks learn? | 11
1.5 What do I need to build an MLP? | 14
1.7 The generalisation-accuracy trade-off | 15
1.8 Implementation details | 17
1.9 Activation and learning equations | 19
Modelling a pendulum | 20
Data encoding and re-coding | 23
2.2 Data type classification | 25
2.3 Initial statistical calculations | 26
2.4 Dimensionality reduction | 27
2.5 Scaling a data set | 31
2.6 Neural encoding methods | 34
2.7 Temporal data | 43
2.8 When to carry out re-coding | 46
2.9 Implementation details | 47
Building a network | 51
3.3 Training neural networks | 65
3.4 Implementation details | 70
Time-varying systems | 77
4.2 Neural networks for predicting or classifying time series | 82
4.3 Choosing the best method for the task | 84
4.4 Predicting more than one step into the future | 90
4.5 Learning separate paths through state space | 91
4.6 Recurrent networks as models of finite state automata | 98
4.7 Summary of temporal neural networks | 103
Data collection and validation | 105
5.2 Building the training and test sets | 108
5.3 Data quality | 113
5.4 Calculating entropy values for a data set | 126
5.5 Using a forward-inverse model to solve ill-posed problems | 128
Output and error analysis | 133
6.2 What do the errors mean? | 134
6.3 Error bars and confidence limits | 135
6.4 Methods for visualising errors | 143
6.5 Novelty detection | 145
6.6 Implementation details | 148
6.7 A simple two-class example | 154
A mail shot targeting example | 155
6.9 Autoassociative network novelty detection | 160
Network use and analysis | 165
7.3 Traversing a network | 175
7.4 Summary | 176
7.5 Calculating the derivatives | 177
a worked example | 179
Managing a neural network based project | 183
8.2 Development platform | 186
8.3 Project personnel | 187
8.4 Project costs | 188
8.5 The benefits of neural computing | 189
8.7 Alternatives to a neural computing approach | 190
8.9 Project documentation | 193
8.10 System maintenance | 194
Review of neural applications | 195
Introduction to Part II | 197
Neural networks and signal processing | 199
9.3 Preprocessing techniques for visual processing | 202
9.4 Neural filters in the Fourier and temporal domains | 206
9.5 Speech recognition | 210
9.6 Production quality control | 214
9.7 An artistic style classifier | 217
9.8 Fingerprint analysis | 220
Financial and Business Modelling | 221
10.3 Financial time series prediction | 223
10.4 Review of published findings | 226
10.5 Conclusion | 235
Industrial process modelling | 237
predicting driver alertness | 245
11.4 Training the neural networks | 251
11.5 Robot control by reinforcement learning | 255
11.6 Summary | 261
Conclusions | 263
Using the accompanying software | 267
13.2 Neural network code | 268
13.3 Data preparation routines | 274
Glossary | 281
| 291
| 301