topo.learningfn

Module

A family of function objects for changing a set of weights over time.

Learning functions come in two varieties: LearningFn and CFPLearningFn. A LearningFn (e.g. Hebbian) applies to a single set of weights, typically those of one ConnectionField. To apply learning to an entire CFProjection, a LearningFn can be plugged into CFPLF_Plugin. CFPLF_Plugin is one example of a CFPLearningFn, a function that operates on the entire projection at once. Some optimizations and algorithms can only be applied at the level of the full projection, so there are other CFPLearningFns beyond CFPLF_Plugin.

Any new learning functions added to this directory will automatically become available for any model.
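The two levels described above can be sketched in plain NumPy. This is a hypothetical illustration of the plugin pattern, not Topographica's actual API; the function names and signatures below are invented for clarity.

```python
import numpy as np

# Hypothetical sketch of the two levels described above; the function
# names and signatures here are illustrative, not Topographica's API.

def hebbian(input_activity, unit_activity, weights, rate):
    # A single-weight-set learning rule, in the role of a LearningFn:
    # update one ConnectionField's weights in place.
    weights += rate * unit_activity * input_activity

def apply_to_projection(single_cf_fn, cfs, unit_activities, rate):
    # A projection-level wrapper, in the role of CFPLF_Plugin: call the
    # single-CF learning function once for every ConnectionField.
    for cf, v in zip(cfs, unit_activities):
        single_cf_fn(cf["input_activity"], v, cf["weights"], rate)

# Two 2x2 ConnectionFields, one per target unit
cfs = [{"input_activity": np.ones((2, 2)), "weights": np.zeros((2, 2))}
       for _ in range(2)]
apply_to_projection(hebbian, cfs, unit_activities=[1.0, 0.5], rate=0.1)
```

A projection-level CFPLearningFn is free to replace this per-CF loop with a vectorized or otherwise optimized implementation, which is why some algorithms exist only at that level.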

class topo.learningfn.IdentityLF(**params)

Bases: topo.base.functionfamily.LearningFn

Identity function; does not modify the weights.

For speed, calls to this function object are sometimes optimized away entirely. To keep that optimization valid, it is not permissible to derive other classes from this object, modify it to have different behavior, or add side effects of any kind.

class topo.learningfn.BCMFixed(**params)[source]

Bases: topo.base.functionfamily.LearningFn

Bienenstock, Cooper, and Munro (1982) learning rule with a fixed threshold.

(See Dayan and Abbott, 2001, equation 8.12.) In the BCM rule, weights change only when there is both pre- and post-synaptic activity. The full BCM rule requires a sliding threshold (see CFPBCM); this fixed-threshold version is simpler and easier to analyze.

Requires some form of output_fn normalization for stability.

param Number unit_threshold (allow_None=False, bounds=(0, None), constant=False, default=0.5, inclusive_bounds=(True, True), instantiate=False, pickle_default_value=True, precedence=None, readonly=False, time_dependent=True, time_fn=<Time Time00001>)
Threshold between LTD and LTP.
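A minimal NumPy sketch of the fixed-threshold update, assuming the standard form of Dayan and Abbott equation 8.12 (the function name and signature are illustrative, not Topographica's API):

```python
import numpy as np

def bcm_fixed(input_activity, unit_activity, weights, rate,
              unit_threshold=0.5):
    # Fixed-threshold BCM update (Dayan & Abbott eq. 8.12):
    #   dw = rate * u * v * (v - theta)
    # LTP when postsynaptic activity v exceeds theta, LTD when below it.
    weights += (rate * input_activity * unit_activity
                * (unit_activity - unit_threshold))

w = np.zeros(3)
u = np.ones(3)
bcm_fixed(u, 1.0, w, rate=0.1)    # v = 1.0 > 0.5: potentiation
high = w.copy()
bcm_fixed(u, 0.25, w, rate=0.1)   # v = 0.25 < 0.5: depression
```

Note that the update is zero whenever either activity is zero, matching the requirement of joint pre- and post-synaptic activity.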
class topo.learningfn.Covariance(**params)[source]

Bases: topo.base.functionfamily.LearningFn

Covariance learning rule supporting either input or unit thresholds.

As presented by Dayan and Abbott (2001), covariance rules allow either potentiation or depression of the same synapse, depending on an activity level. By default, this implementation follows Dayan and Abbott equation 8.8, with the unit_threshold determining the level of postsynaptic activity (activity of the target unit), below which LTD (depression) will occur.

To use an input threshold as in Dayan and Abbott equation 8.9 instead, set unit_threshold to zero and input_threshold to some positive value. When both thresholds are zero, this rule degenerates to the standard Hebbian rule.

Requires some form of output_fn normalization for stability.

param Number input_threshold (allow_None=False, bounds=(0, None), constant=False, default=0.0, inclusive_bounds=(True, True), instantiate=False, pickle_default_value=True, precedence=None, readonly=False, time_dependent=True, time_fn=<Time Time00001>)
Threshold between LTD and LTP, applied to the input activity.
param Number unit_threshold (allow_None=False, bounds=(0, None), constant=False, default=0.5, inclusive_bounds=(True, True), instantiate=False, pickle_default_value=True, precedence=None, readonly=False, time_dependent=True, time_fn=<Time Time00001>)
Threshold between LTD and LTP, applied to the activity of this unit.
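The description above can be captured with a single product-of-differences update that reduces to equation 8.8 when input_threshold is zero and to equation 8.9 when unit_threshold is zero. This combined form is an assumption drawn from that description, and the signature is illustrative, not Topographica's API:

```python
import numpy as np

def covariance(input_activity, unit_activity, weights, rate,
               unit_threshold=0.5, input_threshold=0.0):
    # Assumed combined form of the covariance rule:
    #   dw = rate * (v - unit_threshold) * (u - input_threshold)
    # input_threshold = 0 gives Dayan & Abbott eq. 8.8;
    # unit_threshold = 0 gives eq. 8.9;
    # both zero degenerates to the plain Hebbian rule.
    weights += (rate * (unit_activity - unit_threshold)
                * (input_activity - input_threshold))

w = np.zeros(2)
# Postsynaptic activity v = 0.25 is below unit_threshold = 0.5,
# so the active inputs are depressed (LTD), not potentiated.
covariance(np.ones(2), 0.25, w, rate=0.1)
```

Unlike the plain Hebbian rule, the same synapse can thus be either strengthened or weakened, depending on which side of the threshold the activity falls.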
class topo.learningfn.LearningFn(**params)

Bases: param.parameterized.Parameterized

Abstract base class for learning functions that plug into CFPLF_Plugin.

class topo.learningfn.Oja(**params)[source]

Bases: topo.base.functionfamily.LearningFn

Oja’s rule (Oja, 1982; Dayan and Abbott, 2001, equation 8.16.)

Hebbian rule with soft multiplicative normalization, tending the weights toward a constant sum-squared value over time. Thus this function does not normally need a separate output_fn for normalization.

param Number alpha (allow_None=False, bounds=(0, None), constant=False, default=0.1, inclusive_bounds=(True, True), instantiate=False, pickle_default_value=True, precedence=None, readonly=False, time_dependent=True, time_fn=<Time Time00001>)
Relative strength of the multiplicative decay term that normalizes the weights.
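A minimal NumPy sketch of Oja's rule as given by Dayan and Abbott equation 8.16 (illustrative standalone function, not Topographica's API):

```python
import numpy as np

def oja(input_activity, unit_activity, weights, rate, alpha=0.1):
    # Oja's rule (Dayan & Abbott eq. 8.16): a Hebbian term plus a
    # multiplicative decay that softly normalizes the weights:
    #   dw = rate * (v*u - alpha * v^2 * w)
    weights += rate * (unit_activity * input_activity
                       - alpha * unit_activity**2 * weights)

# Repeated updates drive the weights toward a fixed value instead of
# letting them grow without bound as plain Hebbian learning would.
w = np.zeros(4)
u = np.ones(4)
for _ in range(2000):
    oja(u, 1.0, w, rate=0.05, alpha=0.1)
```

With the activities held constant as here, the update vanishes at w = u / (alpha * v), illustrating the built-in soft normalization.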

class topo.learningfn.Hebbian(**params)

Bases: topo.base.functionfamily.LearningFn

Basic Hebbian rule; Dayan and Abbott, 2001, equation 8.3.

Increases each weight in proportion to the product of this neuron’s activity and the input activity.

Requires some form of output_fn normalization for stability.
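A minimal NumPy sketch of the basic Hebbian update (illustrative standalone function, not Topographica's API):

```python
import numpy as np

def hebbian(input_activity, unit_activity, weights, rate):
    # Basic Hebbian rule (Dayan & Abbott eq. 8.3):
    #   dw = rate * v * u
    # Each weight grows with the product of postsynaptic (v)
    # and presynaptic (u) activity.
    weights += rate * unit_activity * input_activity

w = np.zeros(3)
u = np.array([1.0, 0.5, 0.0])
hebbian(u, 1.0, w, rate=0.1)
# Every update is non-negative, so without a separate normalization
# step the weights grow without bound; hence the output_fn requirement.
```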

class topo.learningfn.CPCA(**params)[source]

Bases: topo.base.functionfamily.LearningFn

CPCA (Conditional Principal Component Analysis) rule.

(See O’Reilly and Munakata, Computational Explorations in Cognitive Neuroscience, 2000, equation 4.12.)

Increases each weight in proportion to the product of this neuron’s activity and the difference between the input activity and the current weight.

Has built-in normalization, and so does not require output_fn normalization for stability. Intended to be a more biologically plausible version of the Oja rule.

Submitted by Veldri Kurniawan and Lewis Ng.
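A minimal NumPy sketch of the CPCA update as given by O’Reilly and Munakata equation 4.12 (illustrative standalone function, not Topographica's API):

```python
import numpy as np

def cpca(input_activity, unit_activity, weights, rate):
    # CPCA rule (O'Reilly & Munakata eq. 4.12):
    #   dw = rate * v * (u - w)
    # Subtracting the current weight keeps each weight bounded and
    # moves it toward the input activity whenever the unit is active,
    # so no separate output_fn normalization is needed.
    weights += rate * unit_activity * (input_activity - weights)

w = np.zeros(3)
u = np.array([1.0, 0.5, 0.0])
for _ in range(200):
    cpca(u, 1.0, w, rate=0.1)
# The weights converge toward the input pattern seen while the unit
# is active, rather than growing without bound.
```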