glvq.GlvqModel

class glvq.GlvqModel(prototypes_per_class=1, initial_prototypes=None, max_iter=2500, gtol=1e-05, display=False, random_state=None)[source]

Generalized Learning Vector Quantization

Parameters:

prototypes_per_class : int or list of int, optional (default=1)

Number of prototypes per class. Use a list to specify a different number of prototypes for each class.

initial_prototypes : array-like, shape = [n_prototypes, n_features + 1], optional

Prototypes to start with. If not given, the prototypes are initialized near the class means. The class label must be placed as the last entry of each prototype.

max_iter : int, optional (default=2500)

The maximum number of iterations.

gtol : float, optional (default=1e-5)

The gradient norm must be less than gtol before l-bfgs-b terminates successfully.

display : boolean, optional (default=False)

Print information about the l-bfgs-b steps.

random_state : int, RandomState instance or None, optional

If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

Attributes

w_ (array-like, shape = [n_prototypes, n_features]) Prototype vectors, where n_prototypes is the number of prototypes and n_features is the number of features
c_w_ (array-like, shape = [n_prototypes]) Prototype classes
classes_ (array-like, shape = [n_classes]) Array containing the unique class labels.
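
A minimal usage sketch, assuming the class is imported from the glvq package named above; the toy data and parameter values are made up for illustration:

>>> import numpy as np
>>> from glvq import GlvqModel
>>> # two well-separated classes in two dimensions (toy data)
>>> X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
>>> y = np.array([0, 0, 1, 1])
>>> model = GlvqModel(prototypes_per_class=1, random_state=0)
>>> model = model.fit(X, y)
>>> model.w_.shape       # (n_prototypes, n_features)
(2, 2)
>>> model.c_w_.shape     # one class label per prototype
(2,)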

Methods

fit(x, y) Fit the GLVQ model to the given training data and parameters using l-bfgs-b.
get_params([deep]) Get parameters for this estimator.
predict(x) Predict class membership index for each input sample.
score(X, y[, sample_weight]) Returns the mean accuracy on the given test data and labels.
set_params(**params) Set the parameters of this estimator.
__init__(prototypes_per_class=1, initial_prototypes=None, max_iter=2500, gtol=1e-05, display=False, random_state=None)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

fit(x, y)[source]

Fit the GLVQ model to the given training data and parameters using l-bfgs-b.

Parameters:

x : array-like, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array, shape = [n_samples]

Target values (class labels).

Returns:

self
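
A sketch of fitting with user-supplied starting prototypes; the values in init below are arbitrary, and the class label occupies the last column of each row as described under initial_prototypes. Since fit returns self, construction and fitting can be chained:

>>> import numpy as np
>>> from glvq import GlvqModel
>>> X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
>>> y = np.array([0, 0, 1, 1])
>>> # one starting prototype per class: [feature_1, feature_2, class_label]
>>> init = np.array([[0.1, 0.05, 0], [0.95, 1.05, 1]])
>>> model = GlvqModel(initial_prototypes=init).fit(X, y)
>>> model.w_.shape       # the class-label column is not part of w_
(2, 2)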

get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any

Parameter names mapped to their values.
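
For example, a sketch assuming the standard scikit-learn behaviour of returning the constructor arguments documented above:

>>> from glvq import GlvqModel
>>> model = GlvqModel(max_iter=500, gtol=1e-4)
>>> params = model.get_params()
>>> params['max_iter'], params['gtol']
(500, 0.0001)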

predict(x)[source]

Predict class membership index for each input sample.

This function performs classification on an array of test vectors x.

Parameters:

x : array-like, shape = [n_samples, n_features]

Samples to classify.

Returns:

C : array, shape = (n_samples,)

The predicted class label for each input sample.
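
A sketch of classifying new samples with a fitted model (toy data as in the class-level example above):

>>> import numpy as np
>>> from glvq import GlvqModel
>>> X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
>>> y = np.array([0, 0, 1, 1])
>>> model = GlvqModel(random_state=0).fit(X, y)
>>> pred = model.predict(np.array([[0.1, 0.0], [1.0, 1.0]]))
>>> pred.shape           # one predicted label per input sample
(2,)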

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)

True labels for X.

sample_weight : array-like, shape = [n_samples], optional

Sample weights.

Returns:

score : float

Mean accuracy of self.predict(X) wrt. y.
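
A sketch of scoring; the training data is reused here only for brevity, and the returned value is the fraction of correctly classified samples:

>>> import numpy as np
>>> from glvq import GlvqModel
>>> X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
>>> y = np.array([0, 0, 1, 1])
>>> model = GlvqModel(random_state=0).fit(X, y)
>>> acc = model.score(X, y)   # mean accuracy of model.predict(X) wrt. y
>>> 0.0 <= acc <= 1.0
True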

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:

self
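
A sketch of the nested <component>__<parameter> form, assuming the estimator is placed inside a standard scikit-learn Pipeline:

>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from glvq import GlvqModel
>>> pipe = Pipeline([('scale', StandardScaler()), ('glvq', GlvqModel())])
>>> pipe = pipe.set_params(glvq__max_iter=1000)
>>> pipe.get_params()['glvq__max_iter']
1000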