Software

Indefinite Core Vector Machine (iCVM): iCVM in Matlab
Citation: Frank-Michael Schleif et al.: Indefinite Core Vector Machine. Pattern Recognition 71: 187-195 (2017)
The archive contains Matlab code and a simple test script implementing the iCVM. It includes cross-validation code demonstrating the out-of-sample extension and Nystroem code to scale it to larger problems (a generic sketch of the Nystroem idea follows this entry). See the Readme and the comments in the code for further details. Any comments, suggestions, or bug reports are welcome.
An Armadillo/C++ library is provided here: iCVM in C++/armadillo. (Note: the Nystroem implementation in this code is still experimental - use the Matlab code in case of problems.) The library also contains an Interior-Point solver and an Orthogonal Matching Pursuit (OMP) solver, both implemented in Armadillo. (Also available at mloss.)
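For orientation, here is a minimal self-contained sketch of the Nystroem idea used for the scaling (a generic Matlab illustration only; the actual function names and interface of the archive differ, see its Readme):

% Generic Nystroem sketch: approximate an N x N RBF kernel matrix
% from m landmark points (illustration only, not the iCVM interface).
X = randn(1000, 10);                            % example vectorial data
m = 50;                                         % number of landmarks
idx = randperm(size(X, 1), m);                  % random landmark selection
d2 = @(A, B) max(sum(A.^2, 2) + sum(B.^2, 2)' - 2 * (A * B'), 0);
rbf = @(A, B) exp(-d2(A, B));                   % simple RBF kernel
Knm = rbf(X, X(idx, :));                        % N x m cross-kernel
Kmm = rbf(X(idx, :), X(idx, :));                % m x m landmark kernel
Kapprox = Knm * pinv(Kmm) * Knm';               % rank-m approximation of K

Only Knm and Kmm need to be stored, which is what makes the approximation attractive for larger problems.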

Probabilistic Classification Vector Machine (PCVM): PCVM in C++
Citation: Huanhuan Chen, Peter Tino, Xin Yao: Probabilistic Classification Vector Machines. IEEE Transactions on Neural Networks 20(6): 901-914 (2009)
The archive contains a library and test program implementing the PCVM in C++ using Armadillo and Boost, with multi-core support. It includes cross-validation and out-of-sample (test point) extensions and accepts normalized vectorial data or normalized kernel matrices as input (a minimal normalization sketch is given below). See the Readme for further details. The implementation has since been extended to support the Nystroem technique discussed in one of my recent papers (see the publication list). If you use the code, please give credit to this webpage or the appropriate papers. Any comments, suggestions, or bug reports are welcome.
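As a minimal sketch of what a normalized kernel matrix looks like (my reading of the input requirement; the Readme documents the exact expected format):

% Generic kernel normalization sketch (illustration only).
X = randn(200, 5);                 % example data, rows are samples
K = X * X';                        % linear kernel matrix
d = sqrt(diag(K));
Knorm = K ./ (d * d');             % cosine normalization, diag(Knorm) == 1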

Compilations and sources for Windows 8.1

Here you can also download pre-compiled Windows 8.1 64-bit binaries:

PCVM Windows binaries (including libraries)

If you are unhappy with the Windows 8.1 64-bit binaries, you can try to compile the code yourself with the Visual Studio solution files: PCVM Visual Studio files (you may need to update the folder settings a bit).
The necessary Windows Boost headers and pre-compiled binaries are available at:
Precompiled boost - I used Boost 1.58
The Armadillo libraries + headers + precompiled LAPACK/BLAS for 64-bit Windows can be found here: Armadillo 64bit Windows + sources. I have not yet checked whether the code under Windows really uses multiple cores.

Nystroem-approximated indefinite Kernel Fisher Discriminant (iKFD): Ny-iKFD implementation according to the SIMBAD 2015 paper
Citation: Frank-Michael Schleif, Andrej Gisbrecht, Peter Tino: Large scale Indefinite Kernel Fisher Discriminant. Similarity-Based Pattern Recognition - Third International Workshop, SIMBAD 2015, Copenhagen, Denmark, October 12-14, 2015. Proceedings (to appear)
The implementation is written in Matlab with two demo scripts that show usage. Please cite the above-mentioned paper if you use the code.

Simple GLVQ Matlab implementation (covering GLVQ, RGLVQ, GMLVQ and LiRaM): Simple GLVQ (and hopefully clean and transparent)
Citation: Kerstin Bunte, Petra Schneider, Barbara Hammer, Frank-Michael Schleif, Thomas Villmann, Michael Biehl: Limited Rank Matrix Learning, discriminative dimension reduction and visualization. Neural Networks 26: 159-173 (2012)
The implementation is written to be as clean and easy to follow as possible and should scale to datasets of moderate complexity. The other toolbox (see below) permits more parameter settings but is less compositional and sometimes a bit too complicated for a start into the topic.

Matrix relevance learning code (co-authored): Matrix and Relevance LVQ toolbox
Citation: Michael Biehl, Kerstin Bunte, Frank-Michael Schleif, Petra Schneider, Thomas Villmann: Large margin linear discriminative visualization by Matrix Relevance Learning. IJCNN 2012: 1-8
The toolbox has a mainly academic focus; e.g., it does not scale to large datasets but shows prototypical implementations of the algorithms.

cBMDS data embedding toolbox (co-authored): cBMDS available at mloss.org.
Citation: Marc Strickert, Kerstin Bunte, Frank-Michael Schleif, Eyke Hüllermeier: Correlation-based embedding of pairwise score data. Neurocomputing 141: 97-109 (2014)
The article above provides some more advanced concepts, but cBMDS is a good starting point. The aim is to embed a given data relationship matrix into a low-dimensional Euclidean space such that the point distances / distance ranks correlate best with the original input relationships (a small generic illustration follows below). Input relationships may be given as (sparse) (asymmetric) distance, dissimilarity, or (negative!) score matrices. Input-output relations are modeled as low-conditioned. (Weighted) Pearson and soft Spearman rank correlation, and unweighted soft Kendall correlation are supported as correlation measures for input/output object neighborhood relationships.
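To make the embedding aim concrete, here is a small generic sketch (using classical MDS as a simple stand-in; cBMDS itself optimizes the correlation measures named above rather than the MDS objective):

% Embed a dissimilarity matrix into 2D Euclidean space
% (classical MDS as a stand-in; cBMDS optimizes correlation instead).
X = randn(100, 5);                                 % example data
D2 = max(sum(X.^2, 2) + sum(X.^2, 2)' - 2 * (X * X'), 0);  % squared dissimilarities
n = size(D2, 1);
J = eye(n) - ones(n) / n;                          % centering matrix
B = -0.5 * J * D2 * J;                             % double-centered Gram matrix
[V, E] = eig((B + B') / 2);                        % symmetric eigendecomposition
[e, ord] = sort(diag(E), 'descend');
Y = V(:, ord(1:2)) * diag(sqrt(max(e(1:2), 0)));   % 2D embedding coordinates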

Relational Generalized Learning Vector Quantization: relational_glvq.tgz
Citation: Andrej Gisbrecht, Bassam Mokbel, Frank-Michael Schleif, Xibin Zhu, Barbara Hammer: Linear Time Relational Prototype Based Learning. Int. J. Neural Syst. 22(5) (2012)
This code was part of a best-paper contribution by B. Mokbel et al. at ESANN 2014.

Fast Soft Competitive Learning: Fast Soft Competitive Learning (contains batch relational neural gas)
Citation: Frank-Michael Schleif, Xibin Zhu, Andrej Gisbrecht, Barbara Hammer: Fast approximated relational and kernel clustering. ICPR 2012: 1229-1232
This code is a basis for: Frank-Michael Schleif: Discriminative Fast Soft Competitive Learning. ICANN 2014: 81-88

Core Soft Competitive Learning: core_scl.tgz
Citation: Frank-Michael Schleif, Xibin Zhu, Barbara Hammer: Soft Competitive Learning for Large Data Sets. ADBIS Workshops 2012: 141-151

Kernelized Generalized Learning Vector Quantization: kernel_glvq.tgz
Citation: Frank-Michael Schleif, Thomas Villmann, Barbara Hammer, Petra Schneider: Efficient Kernelized Prototype Based Classification. Int. J. Neural Syst. 21(6): 443-457 (2011)

Core Vector Machine (CVM) code of I. Tsang and J. Kwok in Matlab: Core Vector Machine for Matlab
Citation: just cite the paper by Tsang and Kwok and add a reference to this webpage for the source.
Here you can also use a precalculated kernel matrix as input, including very large kernel matrices or approximated versions (e.g. a Nystroem approximation); see Example.m and look also into the README. A generic illustration of building such a kernel matrix is given below.
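As a generic illustration of building such a precalculated kernel matrix (the actual calling convention is the one shown in Example.m):

% Generic construction of a precalculated RBF kernel matrix
% (see Example.m for how the CVM code expects it to be passed).
X = randn(500, 4);                              % example data
D2 = max(sum(X.^2, 2) + sum(X.^2, 2)' - 2 * (X * X'), 0);   % squared distances
sigma2 = median(D2(D2 > 0));                    % median heuristic bandwidth
K = exp(-D2 / (2 * sigma2));                    % N x N RBF kernel matrix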

Liblinear code of Chih-Jen Lin - adapted to compile and run a bit more smoothly under Linux and Matlab - as a shared library: Liblinear for Linux
Citation: just cite the paper by Chih-Jen Lin and add a reference to this webpage for the source.