Advances in Kernel Methods: Support Vector Learning
MIT Press, 1999 - 376 pages
The Support Vector Machine is a powerful new learning algorithm for solving a variety of learning and function estimation problems, such as pattern recognition, regression estimation, and operator inversion. The impetus for this collection was a workshop on Support Vector Machines held at the 1997 NIPS conference. The contributors, both university researchers and engineers developing applications for the corporate world, form a Who's Who of this exciting new area.
Contributors: Peter Bartlett, Kristin P. Bennett, Christopher J. C. Burges, Nello Cristianini, Alex Gammerman, Federico Girosi, Simon Haykin, Thorsten Joachims, Linda Kaufman, Jens Kohlmorgen, Ulrich Kressel, Davide Mattera, Klaus-Robert Müller, Manfred Opper, Edgar E. Osuna, John C. Platt, Gunnar Rätsch, Bernhard Schölkopf, John Shawe-Taylor, Alexander J. Smola, Mark O. Stitson, Vladimir Vapnik, Volodya Vovk, Grace Wahba, Chris Watkins, Jason Weston, Robert C. Williamson.
Page 368 - V. Vapnik, S. Golowich, and A. Smola. Support Vector Method for Function Approximation, Regression Estimation and Signal Processing.
Page 47 - Let T be a set of real-valued functions. We say that a set of points X is γ-shattered by T relative to r...
Page 10 - This way, the influence of the individual patterns (which could be outliers) gets limited. As above, the solution takes the form (36).
Page 354 - Proc. of the 4th Midwest Artificial Intelligence and Cognitive Science Society Conference, pages 97-101, 1992.
Page 355 - P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In J. Shavlik, editor, Machine Learning: Proceedings of the Fifteenth International Conference (ICML '98), pages 82-90.
Page 169 - The SVM approach transforms data into a feature space F that usually has a huge dimension. It is interesting to note that SVM generalization depends on the geometrical characteristics of the training data, not on the dimensions of the input space [21,22].
Page 4 - To prevent −α_i (y_i ((w · x_i) + b) − 1) from becoming arbitrarily large, the change in w and b will ensure that, provided the problem is separable, the constraint will eventually be satisfied. Similarly, one can understand that for all constraints which are not precisely met as equalities, i.e., for which...
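The term quoted in this page 4 passage is one summand of the primal Lagrangian for the separable maximum-margin problem. As a point of reference, the standard textbook form (written here in the usual notation, not necessarily with the book's own equation numbers) is:

    L(\mathbf{w}, b, \boldsymbol{\alpha})
        = \tfrac{1}{2}\,\lVert \mathbf{w} \rVert^{2}
        - \sum_{i=1}^{m} \alpha_i \bigl( y_i\,(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \bigr),
        \qquad \alpha_i \ge 0 .

The Lagrangian is minimized over w and b and maximized over the multipliers α_i; a violated constraint would make its term negative and let the corresponding α_i grow without bound, which is exactly the mechanism the passage describes.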
Page 10 - SVs x_i with α_i < C, the slack variable ξ_i is zero (this again follows from the Karush-Kuhn-Tucker complementarity conditions), and hence Σ_{j=1}^{m} y_j α_j k(x_i, x_j) + b = y_i.
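This page 10 passage gives the standard recipe for recovering the bias b once the dual problem has been solved: take any support vector whose multiplier lies strictly inside the box constraint and solve the displayed equation for b. A minimal sketch in Python/NumPy, assuming a solved dual with multipliers alpha, labels y, Gram matrix K, and box constant C (these names are illustrative, not taken from the book):

    import numpy as np

    def recover_bias(alpha, y, K, C, tol=1e-8):
        """Recover the SVM bias b from a solved dual problem.

        Uses the KKT complementarity condition quoted above: for a
        support vector x_i with 0 < alpha_i < C the slack is zero, so
        sum_j y_j alpha_j k(x_i, x_j) + b = y_i.

        alpha : (m,) dual multipliers
        y     : (m,) labels in {-1, +1}
        K     : (m, m) kernel (Gram) matrix, K[i, j] = k(x_i, x_j)
        C     : box constraint of the soft-margin formulation
        """
        # "Unbounded" support vectors: multipliers strictly inside (0, C).
        on_margin = (alpha > tol) & (alpha < C - tol)
        if not np.any(on_margin):
            raise ValueError("no support vector with 0 < alpha_i < C")
        # b = y_i - sum_j y_j alpha_j k(x_i, x_j) for each such vector.
        decision_wo_bias = K[on_margin] @ (alpha * y)
        return float(np.mean(y[on_margin] - decision_wo_bias))

Averaging over all unbounded support vectors, as done here, is a common way to damp numerical error; in exact arithmetic any single one would suffice.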