Advances in Kernel Methods: Support Vector Learning

MIT Press, 1999 - 376 pages
The Support Vector Machine is a powerful new learning algorithm for solving a variety of learning and function estimation problems, such as pattern recognition, regression estimation, and operator inversion. The impetus for this collection was a workshop on Support Vector Machines held at the 1997 NIPS conference. The contributors, both university researchers and engineers developing applications for the corporate world, form a Who's Who of this exciting new area.

Contributors: Peter Bartlett, Kristin P. Bennett, Christopher J. C. Burges, Nello Cristianini, Alex Gammerman, Federico Girosi, Simon Haykin, Thorsten Joachims, Linda Kaufman, Jens Kohlmorgen, Ulrich Kressel, Davide Mattera, Klaus-Robert Müller, Manfred Opper, Edgar E. Osuna, John C. Platt, Gunnar Rätsch, Bernhard Schölkopf, John Shawe-Taylor, Alexander J. Smola, Mark O. Stitson, Vladimir Vapnik, Volodya Vovk, Grace Wahba, Chris Watkins, Jason Weston, Robert C. Williamson.
 

What people are saying

User Review

Some websites in this book are very helpful.

Popular passages

Page 368 - Golowich, and A. Smola. Support Vector Method for Function Approximation, Regression Estimation and Signal Processing.
Page 360 - "Application of the Karhunen-Loève procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
Page 47 - Let T be a set of real valued functions. We say that a set of points X is γ-shattered by T relative to r...
Page 10 - This way, the influence of the individual patterns (which could be outliers) gets limited. As above, the solution takes the form (36).
Page 354 - Proc. of the 4th Midwest Artificial Intelligence and Cognitive Science Society Conference, pages 97-101, 1992.
Page 355 - P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In J. Shavlik, editor, Machine Learning Proceedings of the Fifteenth International Conference (ICML '98), pages 82-90.
Page 169 - The SVM approach transforms data into a feature space F that usually has a huge dimension. It is interesting to note that SVM generalization depends on the geometrical characteristics of the training data, not on the dimensions of the input space [21,22].
Page 4 - To prevent $-\alpha_i \,(y_i \cdot ((\mathbf{w} \cdot \mathbf{x}_i) + b) - 1)$ from becoming arbitrarily large, the change in $\mathbf{w}$ and $b$ will ensure that, provided the problem is separable, the constraint will eventually be satisfied. Similarly, one can understand that for all constraints which are not precisely met as equalities, i.e., for which...
Page 10 - SVs $x_i$ with $\alpha_i < C$, the slack variable $\xi_i$ is zero (this again follows from the Karush-Kuhn-Tucker complementarity conditions), and hence $\sum_{j=1}^{m} y_j \alpha_j \, k(x_i, x_j) + b = y_i$.
Page 2 - The best-known capacity concept of VC theory is the VC dimension, defined as the largest number h of points that can be separated in all possible ways using functions of the given class.
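The Page 169 passage above appeals to the kernel trick: the feature space F can have a huge dimension because it is never constructed explicitly. As a minimal illustration that is not taken from the book (the function names, test points, and the choice of a degree-2 polynomial kernel are assumptions of this sketch), the kernel value (x · y)^2 on R^2 equals an ordinary inner product of explicit monomial features:

```python
# Illustrative sketch (not from the book): for the degree-2 polynomial kernel
# k(x, y) = (x . y)^2 on R^2, the kernel value equals an inner product of
# explicit degree-2 monomial features, so the feature space F never has to be
# built explicitly.
import numpy as np

def phi(x):
    """Explicit degree-2 monomial feature map for x in R^2."""
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2.0) * x1 * x2, x2 * x2])

def k(x, y):
    """Homogeneous polynomial kernel of degree 2."""
    return float(np.dot(x, y)) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(np.dot(phi(x), phi(y)))  # 1.0 -- inner product in the 3-dimensional feature space
print(k(x, y))                 # 1.0 -- same value, computed without ever forming phi
```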
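The Page 10 passage quotes the Karush-Kuhn-Tucker condition that fixes the bias b: for any support vector x_i with 0 < alpha_i < C the slack variable is zero, so sum_j y_j alpha_j k(x_i, x_j) + b = y_i. Below is a minimal sketch of how that condition is commonly used to recover b once the dual problem has been solved; the function and variable names are assumptions of this example, not the book's code:

```python
# Minimal sketch (names assumed for illustration): recover the SVM bias b from
# the KKT condition quoted above, averaging over all margin support vectors
# (those with 0 < alpha_i < C, whose slack variables are zero).
import numpy as np

def bias_from_margin_svs(alpha, y, K, C, tol=1e-8):
    """alpha: (m,) dual coefficients, y: (m,) labels in {-1, +1},
    K: (m, m) kernel matrix with K[i, j] = k(x_i, x_j), C: soft-margin penalty."""
    margin = (alpha > tol) & (alpha < C - tol)
    # For each margin SV i:  b = y_i - sum_j y_j alpha_j k(x_i, x_j).
    # Averaging over all margin SVs is a common way to reduce numerical noise.
    decision_without_b = K[margin] @ (alpha * y)
    return float(np.mean(y[margin] - decision_without_b))
```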
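To make the Page 2 definition of the VC dimension concrete, a standard worked example (an illustration added here, not a quotation from the book) is the class of linear classifiers on the plane:

```latex
% Worked example (illustrative): VC dimension of linear classifiers on R^2.
\[
  f_{\mathbf{w},b}(\mathbf{x}) \;=\; \operatorname{sgn}(\mathbf{w}\cdot\mathbf{x} + b),
  \qquad \mathbf{x} \in \mathbb{R}^2 .
\]
% Three points in general position can be labeled in all 2^3 = 8 ways by a
% suitable half-plane, but no four points can be (the XOR labeling of a
% square's corners is not linearly separable), so h = 3.  More generally,
% h = d + 1 for this class on R^d.
```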

About the author (1999)

Alexander J. Smola is Senior Principal Researcher and Machine Learning Program Leader at National ICT Australia/Australian National University, Canberra.

Bernhard Schölkopf is Director at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He is coauthor of Learning with Kernels (2002) and is a coeditor of Advances in Kernel Methods: Support Vector Learning (1998), Advances in Large-Margin Classifiers (2000), and Kernel Methods in Computational Biology (2004), all published by the MIT Press.
