## Bruno Breitmeyer and Haluk Ogmen

Print publication date: 2006

Print ISBN-13: 9780198530671

Published to Oxford Scholarship Online: April 2010

DOI: 10.1093/acprof:oso/9780198530671.001.0001


# Appendix A Some mathematical aspects of masking models

Publisher: Oxford University Press

# A.1. Multiplicative (shunting) and additive equations

In Chapter 5, we introduced two ‘generic’ formalisms that have been used extensively in neural modeling (e.g. Grossberg 1988; Koch and Segev 1989). The first equation has the form of the Hodgkin–Huxley equation and is written as

(A1)
The physiological interpretation of the variables and parameters is given in Chapter 5. In particular, $g_d$ and $g_h$ represent variable (active) conductances through which input signals modulate the membrane potential $V_m$. In this sense, this is an ‘active’ model of a neuron. Because of the multiplicative interactions between the input and the membrane potential, the model is also known as the multiplicative or shunting model (e.g. Grossberg 1988). A simpler version of this model is the ‘additive’ model
(A2)
where conductances are fixed and the input modulates the membrane potential via additive currents.
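The contrast between the two formalisms can be illustrated numerically. The sketch below uses Grossberg's (1988) generic shunting form $dV/dt = -AV + (B - V)E - (V + D)I$ and the corresponding additive form $dV/dt = -AV + E - I$; the parameter names $A$, $B$, $D$ and all numerical values are illustrative choices, not the book's equations (A1) and (A2).

```python
# Euler-integration sketch contrasting the shunting (multiplicative)
# and additive neuron equations. The generic shunting form
#   dV/dt = -A*V + (B - V)*E - (V + D)*I
# follows Grossberg (1988); parameter names A, B, D and all values
# here are illustrative, not the book's equations (A1)-(A2).

def simulate(excit, inhib, model="shunting", A=1.0, B=1.0, D=1.0,
             dt=0.001, steps=10_000):
    """Integrate the membrane potential with constant inputs."""
    V = 0.0
    for _ in range(steps):
        if model == "shunting":
            # Inputs act through conductances that multiply (B - V) and
            # (V + D), so V is automatically confined to [-D, B].
            dV = -A * V + (B - V) * excit - (V + D) * inhib
        else:
            # Additive model: inputs inject currents directly.
            dV = -A * V + excit - inhib
        V += dt * dV
    return V

# Strong excitation: the shunting potential saturates below B = 1,
# while the additive potential grows in proportion to the input.
v_shunt = simulate(50.0, 0.0, "shunting")   # -> about 0.98
v_add = simulate(50.0, 0.0, "additive")     # -> about 50
```

The saturation of the shunting potential inside $[-D, B]$, versus the unbounded linear growth of the additive potential, is the key behavioral difference between the two models.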

The BCS model (Chapter 4, section 4.7) and the RECOD model (Chapter 5) build their dynamic network representations using equations (A1) and (A2). We will show below that the equations used in Bridgeman’s and Weisstein’s models are of the additive type.

(p.302) Consider first the equations of Bridgeman’s Hartline–Ratliff inhibitory network:

(A3)
where $r_i(t)$ is the firing rate of neuron $i$ at time $t$, $e_i(t)$ is the excitatory input to the $i$th neuron, $w_{j,i}$ is the synaptic weight for the connection from the $j$th neuron to the $i$th neuron, $r_{j,i}^{0}$ is the firing threshold for the connection between the $j$th and the $i$th neuron, and $n$ is the number of neurons in the network. Equation (A3) is a difference equation, whereas equation (A2) is a differential equation. The two can be compared by approximating the derivative with a backward-difference formula. Following the transformations used by Grossberg (1988), let
(A4)
Equation (A3) can be written as
(A5)
Using equation (A5), we obtain
(A6)
which has the same form as equation (A2) with $A = 1$ and with $e_i(t)$ and $\sum_{j=1}^{n} w_{j,i}\left[r_j(t-|i-j|)-r_{j,i}^{0}\right]$ corresponding to the excitatory and inhibitory inputs, respectively. Note that the state variables in the two equations correspond to different physiological variables (membrane potential in equation (A2) and firing rate in equation (A5)): the Hartline–Ratliff model applies thresholds directly to firing rates, whereas network formulations of equations (A1) and (A2) apply thresholds to membrane potentials.
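A minimal discrete-time simulation makes the difference-equation character of equation (A3) concrete. In the sketch below the weights $w_{j,i}$, thresholds $r_{j,i}^0$, delays $|i-j|$, and inputs are illustrative choices, and the half-wave rectifications are one common convention for this model rather than the book's exact formulation.

```python
# Discrete-time sketch of a Hartline-Ratliff inhibitory network in the
# spirit of equation (A3): each rate equals its excitatory input minus
# delayed, thresholded inhibition from the other neurons. The weights
# w[j][i], thresholds r0[j][i], and inputs are illustrative, and the
# half-wave rectifications are one common convention for this model.

n, T = 5, 80
w = [[0.0 if i == j else 0.1 for i in range(n)] for j in range(n)]
r0 = [[1.0] * n for _ in range(n)]    # firing thresholds r0[j][i]
e = [5.0] * n                         # constant excitatory inputs

r = [[0.0] * n for _ in range(T)]     # r[t][i]
for t in range(1, T):
    for i in range(n):
        inhib = 0.0
        for j in range(n):
            delay = abs(i - j)        # conduction delay |i - j|
            if j != i and t - delay >= 0:
                # inhibition acts only above the pairwise threshold
                inhib += w[j][i] * max(0.0, r[t - delay][j] - r0[j][i])
        r[t][i] = max(0.0, e[i] - inhib)

# With these symmetric parameters every rate settles near the fixed
# point r* = 5 - 0.4*(r* - 1), i.e. r* = 27/7.
print(r[-1])
```

Because the total inhibitory gain is well below one, the delayed iteration contracts to the same steady state that the backward-difference rewriting (A5)–(A6) would predict.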

As discussed in Chapter 4, the building blocks of Weisstein’s model are the two-factor Rashevsky–Landahl equations

(A7)
(A8)
(A9)
where $\varepsilon_j$ and $j_j$ are the excitatory and inhibitory factors, respectively, to the $j$th neuron. These factors can be interpreted and modeled as excitatory and inhibitory neurotransmitters. Alternatively, we can introduce an additional ‘interneuron’, as shown in Figure A1, to express this model as a small circuit of two neurons (Öğmen 1993, Appendix B).

Fig. A.1 Two-neuron additive-model equivalent of the two-factor Rashevsky–Landahl model. (Reproduced from Öğmen 1993)

Let the two neurons in Figure A1 obey the additive equations

(A10)
(A11)
with the output of $x_1$ given by
(A12)
This output corresponds to the output of the two-factor neuron with the following parameter identifications: $\alpha = \alpha_j$, $\beta = A_jB_j$, $\gamma = a_j - b_j$, $\delta = b_j$, $\eta = B_j$, and $\Gamma = h_j$.
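The qualitative behavior of a two-factor unit can be sketched in code. The sketch below uses a common textbook form of the Rashevsky–Landahl dynamics, which may differ in detail from equations (A7)–(A9): each factor is driven by the input with its own gain and decay rate, and the unit responds when the excitatory factor exceeds the inhibitory factor by a threshold $h$. All parameter values are illustrative.

```python
# Sketch of a two-factor (Rashevsky-Landahl) unit in a common textbook
# form, which may differ in detail from equations (A7)-(A9): the
# excitatory factor eps and the inhibitory factor jin are each driven
# by the input with their own gain and decay rate, and the unit fires
# when eps - jin exceeds the threshold h. Parameters are illustrative.

def two_factor(inputs, A=2.0, a=5.0, B=1.0, b=1.0, h=0.05, dt=0.001):
    eps = jin = 0.0
    out = []
    for x in inputs:
        eps += dt * (A * x - a * eps)   # fast excitatory factor
        jin += dt * (B * x - b * jin)   # slow inhibitory factor
        out.append(max(0.0, eps - jin - h))
    return out

# Sustained step input: the response is transient because the slower
# inhibitory factor eventually overtakes the faster excitatory one.
resp = two_factor([1.0] * 3000)
peak = max(resp)        # brief suprathreshold response early on
tail = resp[-1]         # 0.0 once inhibition dominates
```

The transient-then-silent response to a sustained input is the signature of the two-factor arrangement that the two-neuron additive circuit of Figure A1 reproduces.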

# A.2. The approach of Anbar and Anbar

Anbar and Anbar (1982) proposed a mathematical model for masking based on three assumptions: a step visual response function (VRF) with exponential decay, temporal integration, and lateral inhibition.

## A.2.1. Step visual response function

The VRF $v(t)$ for an input of intensity $I$ applied at time $t = 0$ and of duration $t_0$ is assumed to have the following form:

(A13)
where $\beta$ is a constant and $\alpha$ is a power function of the input, i.e. $\alpha = kI^{\gamma}$, where $k$ and $\gamma$ are two constants. Thus the response is assumed to increase stepwise to its level while the stimulus is on and to decay exponentially after the stimulus is turned off.

Using equations (A1) and (A2), we also obtain an exponential rise and decay in the response. Anbar and Anbar use the step increase as a simplification and assume specific power-law relations for the response magnitude and decay rate.
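The step VRF just described is straightforward to state in code; the constants $k$, $\gamma$, and $\beta$ below are arbitrary illustrative values.

```python
import math

# The step visual response function described by equation (A13): the
# response rises stepwise to alpha = k * I**gamma while the stimulus
# is on (0 <= t <= t0) and decays exponentially at rate beta after
# offset. The constants k, gamma, beta are illustrative values.

def vrf(t, I, t0, k=1.0, gamma=0.5, beta=2.0):
    alpha = k * I ** gamma              # response level, a power of I
    if t < 0:
        return 0.0
    if t <= t0:
        return alpha                    # stepwise plateau while on
    return alpha * math.exp(-beta * (t - t0))   # exponential decay

print(vrf(0.05, 4.0, 0.1))    # stimulus on: 2.0
print(vrf(0.6, 4.0, 0.1))     # 0.5 s after offset: 2*exp(-1) ~ 0.736
```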

## A.2.2. Temporal integration

Temporal integration is used as a linking assumption for perceived brightness $V(I)$:

(A14)
A temporal integration assumption is also used in other models (e.g. Weisstein’s model and the RECOD model) to link neural activities to perceived brightness.
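For the step-plus-decay VRF, the temporal-integration linking assumption reduces to a simple closed form, $\alpha t_0 + \alpha/\beta$, which a direct Riemann sum confirms. The constants are the same illustrative values as above.

```python
import math

# The linking assumption of equation (A14): perceived brightness is
# the time integral of the VRF. For the step-plus-decay VRF this
# integral has the closed form alpha*t0 + alpha/beta, which the
# Riemann sum below confirms numerically. Constants are illustrative.

alpha, beta, t0 = 2.0, 2.0, 0.1
dt = 1e-4
V, t = 0.0, 0.0
while t < 10.0:     # 10 s is effectively infinite for this decay rate
    v = alpha if t <= t0 else alpha * math.exp(-beta * (t - t0))
    V += v * dt
    t += dt

closed_form = alpha * t0 + alpha / beta
print(V, closed_form)    # both close to 1.2
```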

## A.2.3. Lateral inhibition

Finally, Anbar and Anbar assume that masking results from a type of simultaneous brightness contrast effected by lateral inhibition, such that the weaker of the two stimuli is suppressed by the stronger. This suppression is assumed to be a step decrease by an amount proportional to the $p$th power of the ratio between the two VRFs at the onset of the masking stimulus. For example, assume that the target and the mask have intensities $I_T$ and $I_M$, respectively, with $I_T < I_M$. Assume also that the mask is applied while the target is on. By equation (A13), when the target is on, the VRF for the target is $kI_T^{\gamma}$. Similarly, the VRF for the mask is $kI_M^{\gamma}$. Since the mask is stronger than the target, the VRF for the target will be suppressed by the ratio

(A15)
reaching stepwise the level
(A16)
However, if the mask is applied after the offset of the target, at $t = \tau > t_0$, the VRF will be suppressed by the ratio
(A17)
reaching stepwise the level
(A18)

Inspection of these expressions shows how a U-shaped backward masking function is obtained. Consider first equation (A17). For a fixed mask intensity, larger values of τ lead to smaller values of this ratio. Since the VRF is multiplied by this ratio, smaller values of the ratio imply stronger suppression of activity. Thus, from the perspective of the ratio (A17), masking becomes more effective as the ISI increases. On the other hand, inspection of (A18) shows that the actual suppression of the VRF is given by the product of this ratio and an exponentially decaying VRF. Because larger values of the ISI correspond to smaller values of the exponentially decaying VRF, from the perspective of the ongoing VRF masking becomes less and less effective as the ISI increases. Putting the two opposing tendencies together, we obtain a U-shaped function. It should be noted that a secondary effect of the drop in activity is the concomitant change in the rate of subsequent decay. However, Francis (2000) showed that a U-shaped function can be obtained without the change in decay rate, although joint changes in amplitude and decay rate produce a U-shaped masking function that is quantitatively different from that obtained solely by a change in amplitude.
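The argument can be checked with a toy computation. In the sketch below, the target's integrated response is split into the part accrued before the mask arrives and the part after, with the post-mask portion multiplied by a suppression ratio of the form $(v_T/v_M)^p$; following the amplitude-only case noted for Francis (2000), the decay rate is held fixed. All parameter values, including the exponent $p$, are illustrative rather than fitted.

```python
import math

# Toy demonstration that the two opposing tendencies in (A17)-(A18)
# yield a U-shaped masking function. The target VRF decays as
# a*exp(-beta*isi) after offset; at mask onset the remaining response
# is multiplied by the ratio (target_vrf / mask_vrf)**p, which shrinks
# as the ISI grows. The decay rate is held fixed (the amplitude-only
# case noted for Francis 2000). All parameter values are illustrative.

a, vm, beta, p, t0 = 1.0, 1.0 / 0.9, 1.0, 2, 0.1

def visibility(isi):
    """Integrated target response when the mask arrives isi after offset."""
    vrf_at_mask = a * math.exp(-beta * isi)          # decayed target VRF
    ratio = (vrf_at_mask / vm) ** p                  # suppression ratio
    pre = a * t0 + (a / beta) * (1 - math.exp(-beta * isi))  # before mask
    post = ratio * (a / beta) * math.exp(-beta * isi)        # after mask
    return pre + post

isis = [i * 0.05 for i in range(1, 60)]
vals = [visibility(x) for x in isis]
dip = isis[vals.index(min(vals))]    # visibility is lowest at an
print(dip)                           # intermediate ISI, not at 0 or infinity
```

The minimum of the visibility curve falls at an intermediate ISI: at short ISIs the suppression ratio is mild, at long ISIs little response remains to suppress, and the product of the two tendencies produces the U shape.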