Input-output relations in biological systems: measurement, information and the Hill equation

Abstract

Biological systems produce outputs in response to variable inputs. Input-output relations tend to follow a few regular patterns. For example, many chemical processes follow the S-shaped Hill equation relation between input concentrations and output concentrations. That Hill equation pattern contradicts the fundamental Michaelis-Menten theory of enzyme kinetics. I use the discrepancy between the expected Michaelis-Menten process of enzyme kinetics and the widely observed Hill equation pattern of biological systems to explore the general properties of biological input-output relations. I start with the various processes that could explain the discrepancy between basic chemistry and biological pattern. I then expand the analysis to consider broader aspects that shape biological input-output relations. Key aspects include the input-output processing by component subsystems and how those components combine to determine the system’s overall input-output relations. That aggregate structure often imposes strong regularity on underlying disorder. Aggregation imposes order by dissipating information as it flows through the components of a system. The dissipation of information may be evaluated by the analysis of measurement and precision, explaining why certain common scaling patterns arise so frequently in input-output relations. I discuss how aggregation, measurement and scale provide a framework for understanding the relations between pattern and process. The regularity imposed by those broader structural aspects sets the contours of variation in biology. Thus, biological design will also tend to follow those contours. Natural selection may act primarily to modulate system properties within those broad constraints.

Reviewers

This article was reviewed by Eugene Koonin, Georg Luebeck and Sergei Maslov.

Open peer review

Reviewed by Eugene Koonin, Georg Luebeck and Sergei Maslov. For the full reviews, please go to the Reviewers’ comments section.

Introduction

Cellular receptors and sensory systems measure input signals. Responses flow through a series of downstream processes. Final output expresses physiological or behavioral phenotype in response to the initial inputs. A system’s overall input-output pattern summarizes its biological characteristics.

Each processing step in a cascade may ultimately be composed of individual chemical reactions. Each reaction is itself an input-output subsystem. The input signal arises from the extrinsic spatial and temporal fluctuations of chemical concentrations. The output follows from the chemical transformations of the reaction that alter concentrations. The overall input-output pattern of the system develops from the signal processing of the component subsystems and the aggregate architecture of the components that form the broader system.

Many fundamental questions in biology come down to understanding these input-output relations. Some systems are broadly sensitive, changing outputs moderately over a wide range of inputs. Other systems are ultrasensitive or bistable, changing very rapidly from low to high output across a narrow range of inputs [1]. The Hill equation describes these commonly observed input-output patterns, capturing the essence of how changing inputs alter system response [2].

I start with two key questions. How does the commonly observed ultrasensitive response emerge, given that classical chemical kinetics does not naturally lead to that pattern? Why does the very simple Hill equation match so well to the range of observed input-output relations? To answer those questions, I emphasize the general processes that shape input-output relations. Three aspects seem particularly important: aggregation, measurement, and scale.

Aggregation combines lower-level processes to produce the overall input-output pattern of a system. Aggregation often transforms numerous distinct and sometimes disordered lower-level fluctuations into highly regular overall pattern [3]. One must understand those regularities in order to analyze the relations between pattern and process. Aggregate regularity also imposes constraints on how natural selection shapes biological design [4].

Measurement describes the information captured from inputs and transmitted through outputs. How sensitive are outputs to a change in inputs? The overall pattern of sensitivity affects the information lost during measurement and the information that remains invariant between input and output. Patterns of sensitivity that may seem puzzling or may appear to be specific to particular mechanisms often become simple to understand when one learns to read the invariant aspects of information and measurement. Measurement also provides a basis for understanding scale [5].

Scale influences the relations between input and output [6]. Large input typically saturates a system, causing output to become insensitive to further increases in input. That saturating decline in sensitivity often leads to logarithmic scaling. Small input often saturates in the other direction, such that output changes slowly and often logarithmically in response to further declines in input. The Hill equation description of input-output patterns is simply an expression of logarithmic saturation at high and low inputs, with an increased linear sensitivity at intermediate input levels.

High input saturates output because maximum output is intrinsically limited. By contrast, the commonly observed logarithmic saturation at low input intensity remains a puzzle. The difficulty arises because typical theoretical understanding of chemical kinetics predicts a strong and nearly linear output sensitivity at low input concentrations of a signal [7]. That theoretical linear sensitivity of chemical kinetics at low input contradicts the widely observed pattern of weak logarithmic sensitivity at low input.

I describe the puzzle of chemical kinetics in the next section to set the basis for a broader analysis of input-output relations. I then connect the input-output relations of chemical kinetics to universal aspects of aggregation, measurement, and scale. Those universal properties of input-output systems combine with specific biological mechanisms to determine how biological systems respond to inputs. Along the way, I consider possible resolutions to the puzzle of chemical kinetics and to a variety of other widely observed but unexplained regularities in input-output patterns. Finally, I discuss the ways in which regularities of input-output relations shape many key aspects of biological design.

Review

The puzzle of chemical kinetics

Classical Michaelis-Menten kinetics for chemical reactions lead to a saturating relationship between an input signal and an output response [7]. The particular puzzle arises at very low input, for which Michaelis-Menten kinetics predict a nearly linear output response to tiny changes in input. That sensitivity at low input means that chemical reactions would have nearly infinite measurement precision with respect to tiny fluctuations of input concentration. Idealized chemical reactions do have that infinite precision, and observations may follow that pattern if nearly ideal conditions are established in laboratory studies. By contrast, the actual input-output relations of chemical reactions and more complex biological signals often depart from Michaelis-Menten kinetics.

Many studies have analyzed the contrast between Michaelis-Menten kinetics and the observed input-output relations of chemical reactions [2]. I will discuss some of the prior studies in a later section. However, before considering those prior studies, it is useful to have a clearer sense of the initial puzzle and of alternative ways in which to frame the problem.

Example of Michaelis-Menten kinetics

I illustrate Michaelis-Menten input-output relations with a particular example, in which higher input concentration of a signal increases the transformation of an inactive molecule to an active state. Various formulations of Michaelis-Menten kinetics emphasize different aspects of reactions [7]. But those different formulations all have the same essential mass action property that assumes spatially independent concentrations of reactants. Spatially independent concentrations can be multiplied to calculate the spatial proximity between reactants at any point in time.

In my example, a signal, S, changes an inactive reactant, R, to an active output, A, in the reaction

$$\mathrm{S} + \mathrm{R} \xrightarrow{\;g\;} \mathrm{S} + \mathrm{A},$$

where the rate of reaction, g, can be thought of as the signal gain. In this reaction alone, if S>0, all of the reactant, R, will eventually be transformed into the active form, A. (I use roman typeface for the distinct reactant species and italic typeface for concentrations of those reactants.) However, I am particularly interested in the relation between the input signal concentration, S, and the output signal concentration, A. Thus, I also include a back reaction, in which the active form, A, spontaneously transforms back to the inactive form, R, expressed as

$$\mathrm{A} \xrightarrow{\;\delta\;} \mathrm{R}.$$

The reaction kinetics follow

$$\dot{A} = gS(N - A) - \delta A, \tag{1}$$

in which the overdot denotes the derivative with respect to time, and N=R+A is the total concentration of inactive plus active reactant molecules. We find the equilibrium concentration of the output signal, A*, as a function of the input signal, S, by solving Ȧ = 0, which yields

$$A^{*} = N\,\frac{S}{m + S}, \tag{2}$$

in which m=δ/g is the rate of the back reaction relative to the forward reaction. Note that S/(m+S) is the equilibrium fraction of the initially inactive reactant that is transformed into the active state. At S=m, the input signal transforms one-half of the inactive reactant into the active state.

Figure 1 shows the consequence of this type of Michaelis-Menten kinetics for the relation between the input signal and the output signal. At low input signal intensity, S→0, the output is strongly (linearly) sensitive to changes in input, with the output changing in proportion to S. At high signal intensity, the output is weakly (logarithmically) sensitive to changes in input, with the output changing in proportion to log(S). The output saturates at A*→N as the input increases.

Figure 1. Michaelis-Menten signal transmission. The reaction dynamics transform the concentration of the input signal, S, into the equilibrium output signal, A*, as given by Eq. (2). Half-maximal output occurs at input S=m. The total reactant available to be transformed is N.
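As a numerical check on this equilibrium, one can integrate Eq. (1) to steady state and compare against Eq. (2). The following Python sketch does that (parameter values are illustrative, not taken from the article); it also prints the low-input linear approximation A* ≈ (N/m)S.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters, not from the article
g, delta, N = 1.0, 0.5, 1.0
m = delta / g  # half-maximal input, m = delta/g

def a_dot(t, A, S):
    # Eq. (1): dA/dt = g*S*(N - A) - delta*A
    return [g * S * (N - A[0]) - delta * A[0]]

for S in [0.001, 0.01, 0.1, 1.0, 10.0]:
    A_sim = solve_ivp(a_dot, [0, 200], [0.0], args=(S,)).y[0, -1]
    print(f"S={S:7.3f}  simulated={A_sim:.4f}  "
          f"Eq.(2)={N * S / (m + S):.4f}  linear={N * S / m:.4f}")
```

At small S the simulated equilibrium, the Eq. (2) prediction, and the linear approximation agree, showing the strong linear sensitivity at low input that is central to the puzzle discussed below.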

The Hill equation and observed input-output patterns

Observed input-output patterns often differ from the simple Michaelis-Menten pattern in Figure 1. In particular, output is often only weakly sensitive to changes in the input signal at low input intensity. Weak sensitivity at low input values often means that output changes in proportion to log(S) for small S values, rather than the linear relation between input and output at small S values described by Michaelis-Menten kinetics.

The Hill equation preserves the overall Michaelis-Menten pattern but alters the sensitivity at low inputs to be logarithmic rather than linear. Remarkably, the curve shapes of most biochemical reactions and more general biological input-output relations fit reasonably well to the Hill equation

$$\hat{y} = b\,\frac{\hat{x}^{k}}{m^{k} + \hat{x}^{k}} \tag{3}$$

or to minor variants of this equation (Table 1). The input intensity is x̂, the measured output is ŷ, half-maximal response occurs at x̂ = m, the shape of the response is determined by the Hill coefficient, k, and the response saturates asymptotically at b for increasing levels of input.

Table 1 Conceptual foundations

We can simplify the expression by using the substitutions y = ŷ/b, in which y is the fraction of the maximal response, and x = x̂/m, in which x is the ratio of the input to the value that gives half of the maximal response. The resulting equivalent expression is

$$y = \frac{x^{k}}{1 + x^{k}}. \tag{4}$$

Figure 2 shows the input-output relations for different values of the Hill coefficient, k. For k=1, the curve matches the Michaelis-Menten pattern in Figure 1. An increase in k narrows the input range over which the output responds rapidly (sensitively). For larger values of k, the rapid switch from low to high output response is often called a bistable response, because the output state of the system switches in a nearly binary way between low output, or “OFF”, and high output, or “ON”. A bistable switching response is effectively a biological transistor that forms a component of a biological circuit [13]. Bistability is sometimes called ultrasensitivity, because of the high sensitivity of the response to small changes in inputs when measured over the responsive range [14].

Figure 2. Hill equation signal transmission. The input signal, x, leads to the output, y, as given by Eq. (4). The curves of increasing slope correspond to k=1,2,4,8.

In the k=1 case of Michaelis-Menten kinetics, the output response is linearly sensitive to very small changes at very low input signals. Such extreme sensitivity means essentially infinite measurement precision at tiny input levels, which seems unlikely for realistic biological systems. As k increases, sensitivity at low input becomes more like a threshold response, such that a minimal input is needed to stimulate significant change in output. Increasing k causes sensitivity to become logarithmic at low input. That low-input sensitivity pattern can be seen more clearly by plotting the input level on a logarithmic scale, as in Figure 3.
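The threshold effect can be made concrete by inverting Eq. (4): the input needed to reach a fraction y of maximal output is x = (y/(1−y))^(1/k). A short sketch, echoing the 1% output level (y=0.01) used in Figure 3:

```python
def hill(x, k):
    # Eq. (4): normalized Hill equation
    return x**k / (1.0 + x**k)

def input_for_output(y, k):
    # Inversion of Eq. (4): x = (y / (1 - y))**(1/k)
    return (y / (1.0 - y)) ** (1.0 / k)

for k in [1, 2, 4, 8]:
    x = input_for_output(0.01, k)
    print(f"k={k}: input for 1% of maximal output = {x:.4f} "
          f"(check: y = {hill(x, k):.4f})")
```

The required input rises from about 0.01 at k=1 to above 0.5 at k=8, which is the sharpening threshold visible on the logarithmic axis of Figure 3.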

Figure 3. An increasing Hill coefficient, k, causes logarithmic sensitivity to low input signals. At k=1 (left curve), the sensitivity is linear with a steady increase in output even at very low input levels, implying infinite precision. As k increases, sensitivity at low input declines, and the required threshold input level becomes higher and sharper to induce an output response of 1% of the maximum (y=0.01). The curves of increasing slope correspond to k=1,2,4,8 in Eq. (4), with logarithmic scaling of the input x plotted here.

Alternative mechanisms for simple chemical reactions

My goal is to understand the general properties of input-output relations in biological systems. To develop that general understanding, it is useful to continue with study of the fundamental input-output relations of simple chemical reactions. Presumably, most input-output relations of systems can ultimately be decomposed into simple component chemical reactions. Later, I will consider how the combination of such components influences overall system response.

Numerous studies of chemical kinetics report Hill coefficients k>1 rather than the expected Michaelis-Menten pattern k=1. Resolution of that puzzling discrepancy is the first step toward deeper understanding of input-output patterns (Table 2). Zhang et al. [2] review six specific mechanisms that may cause k>1. In this section, I briefly summarize several of those mechanisms. See Zhang et al. [2] for references.

Table 2 Literature related to the Hill equation

Direct multiplication of signal input concentration

Transforming a single molecule to an active state may require simultaneous binding by multiple input signal molecules. If two signal molecules, S, must bind to a single inactive reactant, R, to form a three molecule complex before transformation of R to the active state, A, then we can express the reaction as

$$\mathrm{S} + \mathrm{S} + \mathrm{R} \xrightarrow{\;g\;} \mathrm{SSR} \longrightarrow \mathrm{S} + \mathrm{S} + \mathrm{A},$$

which by mass action kinetics leads to the rate of change in A as

$$\dot{A} = g S^{2}(N - A) - \delta A,$$

in which N=R+A is the total concentration of the inactive plus active reactant molecules, and the back reaction A → R occurs at rate δ. The equilibrium input-output relation is

$$A^{*} = N\,\frac{S^{2}}{m^{2} + S^{2}},$$

which is a Hill equation with k=2. The reaction stoichiometry, with two signal molecules combining in the reaction, causes the reaction rate to depend multiplicatively on signal input concentration. Other simple schemes also lead to a multiplicative effect of signal molecule concentration on the rate of reaction. For example, the signal may increase the rates of two sequential steps in a pathway, causing a multiplication of the signal concentration in the overall rate through the multiple steps. Certain types of positive feedback can also amplify the input signal multiplicatively.
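To confirm this stoichiometric result numerically, one can integrate the kinetics above to equilibrium and compare with the k=2 Hill form. A minimal sketch with illustrative parameters:

```python
from scipy.integrate import solve_ivp

# Illustrative parameters, not from the article
g, delta, N = 1.0, 1.0, 1.0
m = (delta / g) ** 0.5  # half-maximal input: m**2 = delta/g

def a_dot(t, A, S):
    # dA/dt = g*S**2*(N - A) - delta*A: two signal molecules must bind
    return [g * S**2 * (N - A[0]) - delta * A[0]]

for S in [0.1, 0.5, 1.0, 2.0]:
    A_sim = solve_ivp(a_dot, [0, 500], [0.0], args=(S,)).y[0, -1]
    A_hill = N * S**2 / (m**2 + S**2)  # Hill equation with k=2
    print(f"S={S:4.1f}  simulated={A_sim:.4f}  Hill k=2: {A_hill:.4f}")
```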

Saturation and loss of information in multistep reaction cascades

The previous section discussed mechanisms that multiply the signal input concentration to increase the Hill coefficient. Multiplicative interactions lead to logarithmic scaling. The Hill equation with k>1 expresses logarithmic scaling of output at high and low input levels. I will return to this general issue of logarithmic scaling later. The point here is that multiplication is one sufficient way to achieve logarithmic scaling. But multiplication is not necessary. Other nonmultiplicative mechanisms that lead to logarithmic scaling can also match closely to the Hill equation pattern. This section discusses two examples covered by Zhang et al. [2].

Repressor of weak input signals

The key puzzle of the Hill equation concerns how to generate the logarithmic scaling pattern at low input intensity. The simplest nonmultiplicative mechanism arises from an initial reaction that inactivates the input signal molecule. That preprocessing of the signal intensity can create a filter that logarithmically reduces signals of low intensity. Suppose, for example, that the repressor can become saturated at higher input concentrations. Then the initial reaction filters out weak, low-concentration inputs but passes through higher input concentrations.

Consider a repressor, X, that can bind to the signal, S, transforming the bound complex into an inactive state, I, in the reaction

$$\mathrm{S} + \mathrm{X} \;\underset{\beta}{\overset{\gamma}{\rightleftharpoons}}\; \mathrm{I}.$$

One can think of this reaction as a preprocessing filter for the input signal. The kinetics of this input preprocessor can be expressed by focusing on the change in the concentration of the bound, inactive complex

$$\dot{I} = \gamma(S - I)(X - I) - \beta I. \tag{5}$$

The signal passed through this preprocessor is the amount of S that is not bound in I complexes, which is S* = S − I. We can equivalently write I = S − S*. The equilibrium relation between the input, S, and the output signal, S*, passed through the preprocessor can be obtained by solving İ = 0, which yields

$$S^{*}\left(X - S + S^{*}\right) - \alpha\left(S - S^{*}\right) = 0,$$

in which α=β/γ. Figure 4a shows the relation between the input signal, S, and the preprocessed output, S*. Bound inactive complexes, I, hold the signal molecule tightly and titrate it out of activity when the breaking up of complexes at rate β is slower than the formation of new complexes at rate γ, and thus α is small.
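The equilibrium condition above is a quadratic in S*, with positive root S* = [−(X − S + α) + √((X − S + α)² + 4αS)]/2. The sketch below (parameter values illustrative rather than the article's exact choices) solves for S* and feeds it into a Michaelis-Menten step as in Eqs. (1)-(2), echoing the two panels of Figure 4:

```python
import numpy as np

def free_signal(S, X, alpha):
    # Positive root of S*^2 + (X - S + alpha)*S* - alpha*S = 0,
    # the equilibrium condition for the repressor preprocessor
    b = X - S + alpha
    return (-b + np.sqrt(b**2 + 4.0 * alpha * S)) / 2.0

def output(S, X, alpha, N=1.0, m=1.0):
    # Preprocessed signal S* drives Michaelis-Menten kinetics, Eq. (2)
    s_star = free_signal(S, X, alpha)
    return N * s_star / (m + s_star)

X = 1.0  # repressor concentration (illustrative)
for alpha in [0.01, 0.1, 0.5, 1000.0]:  # titration strength, as in Figure 4
    A = [output(S, X, alpha) for S in [0.5, 1.0, 2.0, 4.0]]
    print(f"alpha={alpha:7.2f}  output at S=0.5,1,2,4: "
          + "  ".join(f"{a:.3f}" for a in A))
```

For small α, inputs below X are almost fully titrated and produce little output, while inputs above X pass through, which is the filtering behavior described in the text.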

Figure 4. Preprocessing of an input signal by a repressor reduces sensitivity of output to low input intensity signals. (a) Equilibrium concentration of the processed signal, S*, in relation to the original signal input intensity, S, obtained by solution of Eq. (5). The four curves from bottom to top show decreasing levels of signal titration by the repressor for the parameter values α=0.01,0.1,0.5,1000. The top curve alters the initial signal very little, so that S* ≈ S, showing the consequences of an unfiltered input signal. (b) The processed input signal, S*, is used as the input to a standard Michaelis-Menten reaction kinetics process in Eq. (1), leading to an equilibrium output, A*. The curves from bottom to top derive from the corresponding preprocessed input signal from the upper panel.

The preprocessed signal may be fed into a standard Michaelis-Menten type of reaction, such as the reaction in Eq. (1), with the preprocessed signal S* driving the kinetics rather than the initial input, S. The reaction chain from initial input through final output starts with an input concentration, S, of which S* passes through the repressor filter, and S* stimulates production of the active output signal concentration, A. Figure 4b shows that titration of the initial signal concentration, S, to a lower pass-through signal concentration, S*, leads to low sensitivity of the final output, A, to the initial signal input, S, as long as the signal concentration is below the amount of the repressor available for titration, X.

When this signal preprocessing mechanism occurs, the low, essentially logarithmic, sensitivity to weak input signals solves the puzzle of relating classical Michaelis-Menten chemical kinetics to the Hill equation pattern for input-output relations with k>1. The curves in Figure 4b do not exactly match the Hill equation. However, this signal preprocessing mechanism aggregated with other simple mechanisms can lead to a closer fit to the Hill equation pattern. I discuss the aggregation of different mechanisms below.

This preprocessed signal system is associated with classical chemical kinetic mechanisms, because it is the deterministic outcome of a simple and explicit mass action reaction chain. However, the reactions are not inherently multiplicative with regard to signal input intensity. Instead, preprocessing leads to an essentially logarithmic transformation of scaling and information at low input signal intensity.

This example shows that the original notion of multiplicative interactions is not a necessary condition for Hill equation scaling of input-output relations. Instead, the Hill equation pattern is simply a particular expression of logarithmic scaling of the input-output relation. Any combination of processes that leads to similar logarithmic scaling provides similar input-output relations. Thus, the Hill equation pattern does not imply any particular underlying chemical mechanism. Rather, such input-output relations are the natural consequence of the ways in which information degrades and is transformed in relation to scale when passed through reaction sequences that act as filters of the input signal.

Opposing forward and back reactions

The previous section showed how a repressor can reduce sensitivity to low intensity input signals. A similar mechanism occurs when there is a back reaction. For example, a signal may transform an inactive reactant into an active form, and a back reaction may return the active form to the inactive state. If the back reaction saturates at low signal input intensity, then a rise in the signal from a very low level will initially cause relatively little increase in the concentration of the active output, inducing weak, logarithmic sensitivity to low input signal intensity. In effect, the low input is repressed, or titrated, by the strong back reaction.

This opposition between forward and back reactions was one of the first specific mechanisms of classical chemical kinetics to produce the Hill equation pattern in the absence of direct multiplicative interactions that amplify the input signal [14]. In this section, I briefly illustrate the opposition of forward and back reactions in relation to the Hill equation pattern.

In the forward reaction, a signal, S, transforms an inactive reactant, R, into an active state, A. The back reaction is catalyzed by the molecule B, which transforms A back into R. The balancing effects of the forward and back reactions in relation to saturation depend on a more explicit expression of classical Michaelis-Menten kinetics than presented above. In particular, let the two reactions be

$$\mathrm{S} + \mathrm{R} \;\underset{\delta}{\overset{g}{\rightleftharpoons}}\; \mathrm{SR} \xrightarrow{\;\phi\;} \mathrm{S} + \mathrm{A}$$
$$\mathrm{B} + \mathrm{A} \;\underset{d}{\overset{\gamma}{\rightleftharpoons}}\; \mathrm{BA} \xrightarrow{\;\sigma\;} \mathrm{B} + \mathrm{R},$$

in which these reactions show explicitly the intermediate bound complexes, SR and BA. The rate of change in the output signal, A ̇ , when the dynamics follow classical equilibrium Michaelis-Menten reaction kinetics, is

$$\dot{A} = \phi S_{0}\,\frac{R}{m + R} - \sigma B_{0}\,\frac{A}{\mu + A}, \tag{6}$$

in which S0 includes the concentrations of both free signal, S, and bound signal, SR. Similarly, B0 includes the concentrations of both free catalyst, B, and bound catalyst, BA. The half-maximal reaction rates are set by m=δ/g and μ=d/γ. The degree of saturation depends on the total amount of reactant available, N=R+A, relative to the concentrations that give the half-maximal reaction rates, m and μ.

When the input signal, S0, is small, the back reaction dominates, potentially saturating the forward rate as R becomes large. Figure 5 shows that the level of saturation sets the input-output pattern, with greater saturation increasing the Hill coefficient, k.
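The equilibrium in Figure 5 can be reproduced by root-finding on Eq. (6). The sketch below sets φ=σ=B0=1 (so that Ŝ=S0) and m=μ=1 as in the figure; it shows how a larger reactant pool N sharpens the transition near Ŝ=1:

```python
from scipy.optimize import brentq

def equilibrium_A(S0, N, phi=1.0, sigma=1.0, B0=1.0, m=1.0, mu=1.0):
    # Solve A-dot = 0 in Eq. (6), with R = N - A
    def a_dot(A):
        R = N - A
        return phi * S0 * R / (m + R) - sigma * B0 * A / (mu + A)
    return brentq(a_dot, 1e-12, N - 1e-12)

# Normalized signal S-hat = phi*S0/(sigma*B0) equals S0 here
for N in [1.0, 10.0, 100.0]:
    out = [equilibrium_A(S0, N) / N for S0 in [0.8, 0.9, 1.0, 1.1, 1.2]]
    print(f"N={N:5.0f}  A*/N near S-hat=1: "
          + "  ".join(f"{y:.3f}" for y in out))
```

For N=100 the relative output jumps from well below one-half to near saturation across a narrow window around Ŝ=1, the switch-like pattern shown in Figure 5.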

Figure 5. Balance between forward and back reactions leads to a high Hill coefficient when the reactions are saturated. The equilibrium output signal, A*, is obtained by solving Ȧ = 0 in Eq. (6) as a function of the input signal, S0. The signal is given as Ŝ = φS0/(σB0). The total amount of reactant is N=R+A. The half-maximal concentrations are set to m=μ=1. The three curves illustrate the solutions for N=1,10,100, with increasing Hill coefficients for higher N values and greater reaction saturation levels.

Alternative perspectives on input-output relations

In the following sections, I discuss alternative mechanisms that generate Hill equation patterns. Before discussing those alternative mechanisms, it is helpful to summarize the broader context of how biochemical and cellular input-output relations have been studied.

Explicit chemical reaction mechanisms

The prior sections linked simple and explicit chemical mechanisms to particular Hill equation patterns of inputs and outputs. Each mechanism provided a distinct way in which to increase the Hill coefficient above one. Many key reviews and textbooks in biochemistry and systems biology emphasize that higher Hill coefficients and increased input-output sensitivity arise from these simple and explicit deterministic mechanisms of chemical reactions [2, 7, 20]. The idea is that a specific pattern must be generated by one of a few well-defined and explicit alternative mechanisms.

Explicit chemical reaction mechanisms discussed earlier include: binding of multiple signal molecules to stimulate each reaction; repressors of weak input signals; and opposing forward and back reactions near saturation. Each of these mechanisms could, in principle, be isolated from a particular system, analyzed directly, and linked quantitatively to the specific input-output pattern of a system. Decomposition to elemental chemical kinetics and direct quantitative analysis would link observed pattern to an explicit mechanistic process.

The Hill equation solely as a description of observed pattern

In the literature, the Hill equation is also used when building models of how system outputs may react to various inputs (Table 2). The models often study how combinations of components lead to the overall input-output pattern of a system. To analyze such models, one must make assumptions about the input-output relations of the individual components. Typically, a Hill equation is used to describe the components’ input-output functions. That description does not carry any mechanistic implication. One simply needs an input-output function to build the model or to describe the component properties. The Hill equation is invoked because, for whatever reason, most observed input-output functions follow that pattern.

System-level mechanisms and departures from mass action

Another line of study focuses on system properties rather than the input-output patterns of individual components. In those studies, the Hill equation pattern of sensitivity does not arise from a particular chemical mechanism in a particular reaction. Instead, sensitivity primarily arises from the aggregate consequences of the system (Table 2). In one example, numerous reactions in a cascade combine to generate Hill-like sensitivity [39]. The sensitivity derives primarily from the haphazard combination of different scalings in the distinct reactions, rather than a particular chemical process.

Alternatively, some studies assume that chemical kinetics depart from the classical mass action assumption (Table 2). If input signal molecules tend, over the course of a reaction, to become spatially isolated from the reactant molecules on which they act, then such spatial processes often create a Hill-like input-output pattern by nonlinearly altering the output sensitivity to changes in inputs. I consider such spatial processes as an aggregate system property rather than a specific chemical mechanism, because many different spatial mechanisms can restrict the aggregate movement of molecules. The aggregate spatial processes of the overall system determine the departures from mass action and the potential Hill-like sensitivity consequences, rather than the particular physical mechanisms that alter spatial interactions.

These system-level explanations based on reaction cascades and spatially induced departures from mass action have the potential benefit of applying widely. Yet each particular system-level explanation is itself a particular mechanism, although at a higher level than the earlier biochemical mechanisms. In any actual case, the higher system-level mechanism may or may not apply, just as each explicit chemical mechanism will sometimes apply to a particular case and sometimes not.

A broader perspective

As we accumulate more and more alternative mechanisms that fit the basic input-output pattern, we may ask whether we are converging on a full explanation or missing something deeper. Is there a different way to view the problem that would unite the disparate perspectives, without losing the real insights provided in each case?

I think there is a different, more general perspective (Table 1). At this point, I have given just enough background to sketch that broader perspective, which I do in the remainder of this section. However, it is too soon to go all the way. After giving a hint here about the final view, I develop further topics in the following sections and then return to a full analysis of the broader ways in which to think about input-output relations.

The Hill equation with k>1 describes weak, logarithmic sensitivity at low input and high input levels, with strong and essentially linear sensitivity through an intermediate range. Why should this log-linear-log pattern be so common? The broader perspective on this problem arises from the following points.

First, the common patterns of nature are exactly those patterns consistent with the greatest number of alternative underlying processes [3, 40]. If many different processes lead to the same outcome, then that outcome will be common and will lack a strong connection to any particular mechanism. In any explicit case, there may be a simple and clear mechanism. But the next case, with the same pattern, is likely to be mechanistically distinct.

Second, measurement and information transmission unite the disparate mechanisms. The Hill equation with k>1 describes a log-linear-log measurement scale [41, 42]. The questions become: Why do biological systems, even at the lowest chemical levels of analysis, often follow this measurement scaling? How does chemistry translate into the transmission and loss of information in relation to scale? Why does a universal pattern of information and measurement scale arise across such a wide range of underlying mechanisms?

Third, this broader perspective alters the ways in which one should analyze biological input-output systems. In any particular case, specific mechanisms remain interesting and important. However, the relations between different cases and the overall interpretation of pattern must be understood within the broader framing.

With regard to biological design, natural selection works on the available patterns of variation. Because certain input-output relations tend to arise, natural selection works on variations around those natural contours of input-output patterns. Those natural contours of pattern and variation are set by the universal properties of information transmission and measurement scale. That constraint on variation likely influences the kinds of designs created by natural selection. To understand why certain designs arise and others do not, we must understand how information transmission and measurement scale set the underlying patterns of variation.

I return to these points later. For now, it is useful to keep in mind these preliminary suggestions about how the various pieces will eventually come together.

Aggregation

Most biological input-output relations arise through a series of reactions. Initial reactions transform the input signal into various intermediate signals, which themselves form inputs for further reactions. The final output arises only after multiple internal transformations of the initial signal. We may think of the overall input-output relation as the aggregate consequence of multiple reaction components.

A linear reaction cascade forms a simple type of system. Kholodenko et al. [39] emphasized that a cascade tends to multiply the sensitivities of each step to determine the overall sensitivity of the system. Figure 6 illustrates how the input-output relations of individual reactions combine to determine the system-level pattern.

Figure 6. Signal processing cascade increases the Hill coefficient. The parameters for each reaction were chosen randomly from a beta distribution, denoted as a random variable z ∼ B(α,β), which yields values in the range [0,1]. The parameters m=100z and g=5+10z were chosen randomly and independently for each reaction from a beta distribution with α=β=3. The parameter k for each reaction was obtained randomly as 1+z, yielding a range of coefficients 1≤k≤2. (a) In three separate trials, different combinations of (α,β) were used for the beta distribution that generated the Hill coefficient, k: in the first, shown as the left distribution, (α,β)=(1,6); in the second, shown in the middle, (α,β)=(4,4); in the third, on the right, (α,β)=(6,2). The plot shows the peak heights normalized for each curve to be the same to aid visual comparison. (b) The input-output relation over the full cascade. The curves from left to right correspond to the distributions for k from left to right in the prior panel. The input scale is normalized so that the maximum input value for each curve coincides at 80% of the maximum output that could be obtained at infinite input. The observed output curves have more strongly reduced sensitivity at low input than at high input compared with the Hill equation, but nonetheless match reasonably well. The best-fit Hill equation for the three curves has a Hill coefficient of, from left to right, k=1.7,2.2,2.8. The average Hill coefficient for each reaction in a cascade is, from left to right, k=1.14,1.5,1.75. Each curve shows a single particular realization of the randomly chosen reaction parameters from the underlying distributions.

To generate Figure 6, I calculated how a cascade of 12 reactions processes the initial input into the final output. Each reaction follows a Hill equation input-output relation given by Eq. (4) with a half-maximal response at m and a Hill coefficient of k. The output for each reaction is multiplied by a gain, g. The parameters for each reaction were chosen randomly, as shown in Figure 6a and described in the caption.
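A minimal sketch of this construction follows; the random seed and the choice of the middle distribution, (α,β)=(4,4), for the Hill coefficients are arbitrary, and the curve fitting reported in the caption is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_on(lo, hi, a, b, size):
    # z ~ B(a, b) on [0,1], rescaled to the interval [lo, hi]
    return lo + (hi - lo) * rng.beta(a, b, size)

n = 12                             # reactions in the cascade
m = beta_on(0.0, 100.0, 3, 3, n)   # half-maximal inputs, m = 100*z
g = beta_on(5.0, 15.0, 3, 3, n)    # gains, g = 5 + 10*z
k = beta_on(1.0, 2.0, 4, 4, n)     # Hill coefficients, 1 <= k <= 2

def cascade(x):
    # Each reaction follows Eq. (4) with gain g: output g*x**k/(m**k + x**k)
    for mi, gi, ki in zip(m, g, k):
        x = gi * x**ki / (mi**ki + x**ki)
    return x

inputs = np.logspace(-2, 3, 6)
for x0 in inputs:
    print(f"input={x0:10.2f}  output={cascade(x0):.6f}")
```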

Figure 6 shows that a cascade significantly increases the Hill coefficient of the overall system above the average coefficient of each reaction, and often far above the maximum coefficient for any single reaction. Intuitively, the key effect at low signal input arises because any reaction that has low sensitivity at low input reduces the signal intensity passed on, and such reductions at low input intensity multiply over the cascade, yielding very low sensitivity to low signal input. Note in each curve that an input signal significantly above zero is needed to raise the output signal above zero. That lower tail illustrates the loss of signal information at low signal intensity.

This analysis shows that weak logarithmic sensitivity at low signal input, associated with large Hill coefficients, can arise by aggregation of many reactions. Thus, aggregation may be a partial solution to the overall puzzle of log-linear-log sensitivity in input-output relations.

Aggregation by itself leaves open the question of how variations in sensitivity arise in the individual reactions. Classical Michaelis-Menten reactions have linear sensitivity at low signal input, with a Hill coefficient of k=1. A purely Michaelis-Menten cascade with k=1 at each step retains linear sensitivity at low signal input. A Michaelis-Menten cascade would not have the weak sensitivity at low input shown in Figure 6b.
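That retained linearity is easy to check: composing k=1 stages of the form gx/(m+x) gives a slope of (g/m)^n at the origin, so the output-to-input ratio approaches a constant at low input rather than vanishing toward a threshold. A minimal sketch with illustrative parameters:

```python
def mm_cascade(x, n=12, g=1.5, m=1.0):
    # Michaelis-Menten (k=1) input-output at every step
    for _ in range(n):
        x = g * x / (m + x)
    return x

# The output/input ratio stays near (g/m)**n = 1.5**12 at low input,
# so the overall cascade remains linearly sensitive to weak signals
for x in [1e-6, 1e-5, 1e-4]:
    print(f"x={x:.0e}  output/input={mm_cascade(x) / x:.1f}")
```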

How does a Hill coefficient k>1 arise in the individual steps of the cascade? The power of aggregation to induce pattern means that it does not matter how such variations in sensitivity arise. However, it is useful to consider some examples to gain an idea of the kinds of processes that may be involved beyond the deterministic cases given earlier.

Signal noise versus measurement noise

Two different kinds of noise can influence the input-output relations of a system. First, the initial input signal may be noisy, making it difficult for the system to discriminate between low input signals and background stochastic fluctuations in signal intensity [43]. The classical signal-to-noise ratio problem expresses this difficulty by analyzing the ways in which background noise in the input can mask small changes in the average input intensity. When the signal is weak relative to background noise, a system may be relatively insensitive to small increases in average input at low input intensity.

Second, for a given input intensity, the system may experience noise in the detection of the signal level or in the transmission of the signal through the internal processes that determine the final output. Stochasticity in signal detection and transmission determines the measurement noise intrinsic to the system. The ratio of measurement noise to signal intensity will often be greater at low signal input intensity, because there is relatively more noise in the detection and transmission of weak signals.

In this section, I consider how signal noise and measurement noise influence Michaelis-Menten processes. The issue concerns how much these types of noise may weaken sensitivity to low intensity signals. A weakening of sensitivity to low input distorts the input-output relation of a Michaelis-Menten process in a way that leads to a Hill equation type of response with k>1.

In terms of measurement, Michaelis-Menten processes follow a linear-log scaling, in which sensitivity remains linear and highly precise at very low signal input intensity, and grades slowly into a logarithmic scaling with saturation. By contrast, as the Hill coefficient, k, rises above one, measurement precision transforms into a log-linear-log scale, with weaker logarithmic sensitivity to signal changes at low input intensity. Thus, the problem here concerns how signal noise or measurement noise may weaken input-output sensitivity at low input intensity.

Input signal noise may not alter Michaelis-Menten sensitivity

Consider the simplified Michaelis-Menten type of dynamics given in Eq. (1), repeated here for convenience

$$\dot{A} = gS(R - A) - \delta A,$$

where A is the output signal, S is the input signal driving the reaction, R is the reactant transformed by the input, g is the rate of the transforming reaction, which acts as a sort of signal gain, and δ is the rate at which the active signal output decays to the inactive reactant form. Thus far, I have been analyzing this type of problem by assuming that the input signal, S, is a constant for any particular reaction, and then varying S to analyze the input-output relation, given at equilibrium by Michaelis-Menten saturation

$$A^{*} = g\,\frac{S}{m + S}, \tag{7}$$

where m=δ/g. When input signal intensity is weak, such that m ≫ S, then A* ≈ gS/m, which implies that output is linearly related to input.

Suppose that S is in fact a noisy input signal subject to random fluctuations. How do the fluctuations affect the input-output relation for inputs of low average intensity? Although the dynamics can filter noise in various ways, it often turns out that the linear input-output relation continues to hold such that, for low average input intensity, the average output is proportional to the average input, Ā ∝ S̄. Thus, signal noise does not change the fact that the system effectively measures the average input intensity linearly and with essentially infinite precision, even at an extremely low signal to noise ratio.
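For the static equilibrium of Eq. (7), this proportionality is easy to demonstrate by Monte Carlo: averaging A* = gS/(m+S) over strongly fluctuating inputs with a small mean still returns Ā ≈ (g/m)S̄. The sketch below averages the equilibrium rather than the full dynamics and uses lognormal fluctuations purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
g = m = 1.0

def mean_output(S_mean, cv=2.0, n=200_000):
    # Lognormal input with mean S_mean and coefficient of variation cv,
    # so the noise is much larger than the mean signal
    sigma2 = np.log(1.0 + cv**2)
    S = rng.lognormal(np.log(S_mean) - sigma2 / 2.0, np.sqrt(sigma2), n)
    return np.mean(g * S / (m + S))  # average of the Eq. (7) equilibrium

for S_mean in [1e-4, 1e-3, 1e-2]:
    out = mean_output(S_mean)
    print(f"mean input={S_mean:.0e}  mean output={out:.2e}  "
          f"ratio={out / S_mean:.3f}")
```

The ratio stays near g/m = 1 as the mean input declines, so heavy input noise leaves the linear relation between average input and average output intact.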

The high precision of classical chemical kinetics arises because the mass action assumption implies that infinitesimal changes in input concentration are instantly translated into a linear change in the rate of collisions between potential reactants. The puzzle of Michaelis-Menten kinetics is that mass action implies high precision and linear scaling at low input intensity, whereas both intuition and observation suggest low precision and logarithmic scaling at low input intensity. Input signal noise by itself typically does not alter the high precision and linear scaling of mass action kinetics.

Although the simplest Michaelis-Menten dynamics retain linearity and essentially infinite precision at low input, it remains unclear how the input-output relations of complex aggregate systems respond to the signal to noise ratio of the input. Feedback loops and reaction cascades strongly influence the ways in which fluctuations are filtered between input and output. However, classical analyses of signal processing tend to focus on the filtering properties of systems only in relation to fluctuations of input about a fixed mean value. By contrast, the key biological problem is how input fluctuations alter the relation between the average input intensity and the average output intensity. That key problem requires one to study the synergistic interactions between changes in average input and patterns of fluctuations about the average.

For noisy input signals, what are the universal characteristics of system structure and signal processing that alter the relations between average input and average output? That remains an open question.

Noise in signal detection and transmission reduces measurement precision and sensitivity at low signal input

The previous section considered how stochastic fluctuations of inputs may affect the average output. Simple mass action kinetics may lead to infinite precision at low input intensity with a linear scaling between average input and average output, independently of fluctuations in noisy inputs. This section considers the problem of noise from a different perspective, in which the fluctuations arise internally to the system and alter measurement precision and signal transmission.

I illustrate the key issues with a simple model. I assume that, in a reaction cascade with deterministic dynamics, each reaction leads to the Michaelis-Menten type of equilibrium input-output given in Eq. (7). To study how stochastic fluctuations within the system affect input-output relations, I assume that each reaction has a certain probability of failing to transmit its input. In other words, for each reaction, the output follows the equilibrium input-output relation with probability 1-p, and with probability p, the output is zero.

From the standard equilibrium in Eq. (7), we simplify the notation by using y ∝ A* for output, and scale the input such that x = S/m. The probability that the output is not zero is 1−p, thus the expected output is

$$y = g\,\frac{x}{1 + x}\,(1 - p). \tag{8}$$

Let the probability of failure be p = a e^(−bx). Note that as input signal intensity, x, rises, the probability of failure declines. As the signal becomes very small, the probability of reaction failure approaches a, from the range 0 ≤ a ≤ 1.

Figure 7 shows that stochastic failure of signal transmission reduces relative sensitivity to low input signals when a signal is passed through a reaction cascade. The longer the cascade of reactions, the more the overall input-output relation follows an approximate log-linear-log pattern with an increasing Hill coefficient, k. Similarly, Figure 8 shows that an increasing failure rate per reaction reduces sensitivity to low input signals and makes the overall input-output relation more switch-like.
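A sketch of that calculation follows, composing the expected output of Eq. (8) through the cascade with the parameter values used for Figure 7 (the n=8 case shown here):

```python
import numpy as np

def expected_cascade_output(x, n=8, g=1.5, a=0.3, b=10.0):
    # Each stage follows Eq. (8): y = g*x/(1+x) * (1-p), with failure
    # probability p = a*exp(-b*x) that declines as input intensity rises
    for _ in range(n):
        p = a * np.exp(-b * x)
        x = g * x / (1.0 + x) * (1.0 - p)
    return x

inputs = np.logspace(-3, 1, 9)
for x0, y in zip(inputs, expected_cascade_output(inputs)):
    print(f"input={x0:8.4f}  expected output={y:.5f}")
```

Compared with a failure-free cascade, low-intensity inputs are strongly suppressed while the saturated output at high input is barely affected, which is the switch-like distortion shown in Figures 7 and 8.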

Figure 7. Stochastic failure of signal transmission reduces the relative sensitivity to low intensity input signals. The lower (blue) lines show the probability p = a e^(−bx) that an input signal fails to produce an output. The upper (red) lines show the expected equilibrium output for Michaelis-Menten type dynamics corrected for a probability p that the output is zero. Each panel (a–d) shows a cascade of n reactions, in which the output of each reaction forms the input for the next reaction, given an initial signal input of x for the first reaction. Each reaction follows Eq. (8). The number of reactions in the cascade increases from the left to the right panel as n=1,2,4,8. The other parameters for Eq. (8) are the gain per reaction, g=1.5, the maximum probability of reaction failure as the input declines to very low intensity, a=0.3, and the rate at which increasing signal intensity reduces reaction failure, b=10. The final output signal is normalized to 0.8 of the maximum output produced as the input becomes very large.

Figure 8. Greater failure rates for reactions reduce sensitivity to low input and increase the Hill coefficient, k. The curves arise from the same analysis as Figure 7, in which the curves from left to right are associated with an increase in the maximum failure rate as a=0.2,0.4,0.6. The curves in this figure have n=8 reactions in the cascade, a gain of g=1.5, and a decline in failure with increasing input, b=10. The scale for the input signal is normalized so that each curve has a final output of 0.85 at a normalized input of one.

Implications for system design

An input-output response with a high Hill coefficient, k, leads to switch-like function (Figure 3). By contrast, classical Michaelis-Menten kinetics lead to k=1, in which output increases linearly with small changes in weak input signals, effectively the opposite of a switch. Many analyses of system design focus on this distinction. The argument is that switch-like function will often be a favored feature of design, allowing a system to change sharply between states in response to external changes [1]. Because the intrinsic dynamics of chemistry are thought not to have a switch-like function, the classical puzzle is how system design overcomes chemical kinetics to achieve switching function.

This section on stochastic signal failure presents an alternative view. Sloppy components with a tendency to fail often lead to switch-like function. Thus, when switching behavior is a favored phenotype, it may be sufficient to use a haphazardly constructed pathway of signal transmission coupled with weakly regulated reactions in each step. Switching, rather than being a highly designed feature that demands a specific mechanistic explanation, may instead be the likely outcome of erratic biological signal processing.

This tendency for aggregate systems to have a switching pattern does not mean that natural selection has no role and that system design is random. Instead, the correct view may be that aggregate signal processing and inherent stochasticity set the contours of variation on which natural selection and system design work. In particular, the key design features may have to do with modulating the degree of sloppiness or stochasticity. The distribution of gain coefficients in each reaction and the overall pattern of stochasticity in the aggregate may also be key loci of design.

My argument is that systems may be highly designed, but the nature of that design can only be understood within the context of the natural patterns of variation. The intrinsic contours of variation are the heart of the matter. I will discuss that issue again later. For now, I will continue to explore the processes that influence the nature of variation in system input-output patterns.

Spatial correlations and departures from mass action

Chemical reactions require molecules to come near each other spatially. The overall reaction depends on the processes that determine spatial proximity and the processes that determine reaction rate given spatial proximity. Roughly speaking, we can think of the spatial aspects in terms of movement or diffusion, and the transformation given spatial proximity in terms of a reaction coefficient.

Classical chemical kinetics typically assumes that diffusion rates are relatively high, so that spatial proximity of molecules depends only on the average concentration over distances much greater than the proximity required for reaction. Kinetics are therefore limited by reaction rate given spatial proximity rather than by diffusion rate. In contrast with classical chemical kinetics, much evidence suggests that biological molecules often diffuse relatively slowly, causing biological reactions sometimes to be diffusion limited (Table 2).

In this section, I discuss how diffusion-limited reactions can increase the Hill coefficient of chemical reactions, k>1. That conclusion means that the inevitable limitations on the movement of biological molecules may be sufficient to explain the observed patterns of sensitivity in input-output functions and departure from Michaelis-Menten patterns.

Two key points emerge. First, limited diffusion tends to cause potential reactants to become more spatially separated than expected under high diffusion and random spatial distribution. The negative spatial association between reactants arises because those potential reactants near each other tend to react, leaving the nearby spatial neighborhood with fewer potential reactants than expected under spatial uniformity. Negative spatial association of reactants reduces the rate of chemical transformation.

This reduction in transformation rate is stronger at low concentration, because low concentration is associated with a greater average spatial separation of reactants. Thus, low signal input may lead to relatively strong reductions in transformation rate caused by limited diffusion. As signal intensity and concentration rise, this spatial effect is reduced. The net consequence is a low transformation rate at low input, with rising transformation rate as input intensity increases. This process leads to the pattern characterized by higher Hill coefficients and switch-like function, in which there is low sensitivity to input at low signal intensity.

Limited diffusion within the broader context of input-output patterns leads to the second key point. I will suggest that limited diffusion is simply another way in which systems suffer reduced measurement precision and loss of information at low signal intensity. The ultimate understanding of system design and input-output function follows from understanding how to relate particular mechanisms, such as diffusion or random signal loss, to the broader problems of measurement and information. To understand those broader and more abstract concepts of measurement and information, it is necessary to work through some of the particular details by which diffusion limitation leads to loss of information.

Departure from mass action

Most analyses of chemical kinetics assume mass action. Suppose, for example, that two molecules may combine to produce a bound complex

$$\mathrm{A} + \mathrm{B} \xrightarrow{\;r\;} \mathrm{AB},$$

in which the bound complex, AB, may undergo further transformation. Mass action assumes that the rate at which AB forms is rAB, which is the product of the concentrations of A and B multiplied by a binding coefficient, r. The idea is that the number of collisions and potential binding reactions between A and B per unit of time changes linearly with the concentration of each reactant.

Each individual reaction happens at a particular location. That particular reaction perturbs the spatial association between reactants. Those reactants that were, by chance, near each other, no longer exist as free potential reactants. Thus, a reaction reduces the probability of finding potential reactants nearby, inducing a negative spatial association between potential reactants. To retain the mass action rate, diffusion must happen sufficiently fast to break down the spatial association. Fast diffusion recreates the mutually uniform spatial concentrations of the reactants required for mass action to hold.

If diffusion is sufficiently slow, the negative spatial association between reactants tends to increase over time as the reaction proceeds. That decrease in the proximity of potential reactants reduces the overall reaction rate. Diffusion-limited reactions therefore have a tendency for the reaction rate to decline below the expected mass action rate as the reaction proceeds.

That classical description of diffusion-limited reactions emphasizes the pattern of reaction rates over time. By contrast, my focus is on the relation between input and output. It seems plausible that diffusion limitation could affect the input-output pattern of a biological system. But exactly how should we connect the classical analysis of diffusion limitation for the reaction rate of simple isolated reactions to the overall input-output pattern of biological systems?

The connection between diffusion and system input-output patterns has received relatively little attention. A few isolated studies have analyzed the ways in which diffusion limitation tends to increase the Hill coefficient, supporting my main line of discussion (Table 2). However, the broad field of biochemical and cellular responses has almost entirely ignored this issue. The following sections present a simple illustration of how diffusion limitation may influence input-output patterns, and how that effect fits into the broader context of the subject.

Example of input-output pattern under limited diffusion

Limited diffusion causes spatial associations between reactants. Spatial associations invalidate mass action assumptions. To calculate reaction kinetics without mass action, one must account for spatially varying concentrations of reactants and the related spatial variations in chemical transformations. There is no simple and general way to make spatially explicit calculations. In some cases, simple approximations give a rough idea of outcome (Table 2). However, in most cases, one must study reaction kinetics by spatially explicit computer simulations. Such simulations keep track of the spatial location of each molecule, the rate at which nearby molecules react, the spatial location of the reaction products, and the stochastic movement of each molecule by diffusion.

Many computer packages have been developed to aid stochastic simulation of spatially explicit biochemical dynamics. I used the package Smoldyn [33, 44]. I focused on the ways in which limited diffusion may increase Hill coefficients. Under classical assumptions about chemical kinetics, diffusion rates tend to be sufficiently high to maintain spatial uniformity, leading to Michaelis-Menten kinetics with a Hill coefficient of k=1. With lower diffusion rates, spatial associations arise, invalidating mass action. Could such spatial associations lead to increased Hill coefficients of k>1?

Figure 9 shows clearly that increased Hill coefficients arise readily in a simple reaction scheme with limited diffusion. The particular reaction system is

$$\mathrm{S} + \mathrm{R} \xrightarrow{\;g\;} \mathrm{S} + \mathrm{A} \tag{9}$$

$$\mathrm{X} + \mathrm{A} \xrightarrow{\;\delta\;} \mathrm{X} + \mathrm{R}. \tag{10}$$

Figure 9. Limited diffusion and spatial association of reactants can increase the Hill coefficient, k. Simulations shown from the computer package Smoldyn, based on the reaction scheme in Eqs. (9,10). The concentration of the input signal, S, is the number of molecules per unit volume. The other concentrations are set to N=X=100. Diffusion rates are 10^-5 for all molecules. I ran three replicates for each input concentration, S. Each circle shows the average of the three replicates. For each panel (a–f), I fit a Hill equation curve to the observations, denoting the output as the relative saturation level, A/N = sat[S^k/(m^k + S^k)]. The fitted parameters are: k, the Hill coefficient; m, the input signal concentration that yields one-half of maximum saturation; and “sat”, the maximum saturation level at which the output is estimated to approach an asymptotic value relative to the maximum theoretical value of one, at which all N has been transformed into A. Because of limited diffusion, actual saturation can be below the theoretical maximum of one. Panels (b) and (c) are limited to output responses far below the median, because the simulations take too long to run for higher input concentrations.

Under mass action assumptions, the dynamics would be identical to Eq. (1)

$$\dot{A} = gS(N - A) - \delta X A,$$

in which N=R+A is the total concentration of inactive plus active reactant molecules and, in this case, we write the back reaction rate as δX rather than just δ as in the earlier equation. In a spatially explicit model, we must keep track of the actual spatial location of each X molecule, so we need to include the concentration X explicitly rather than fold it into a combined rate parameter. At equilibrium, the output signal intensity under mass action follows the Michaelis-Menten relation

$$A = \frac{NS}{m + S},$$

in which m = δX/g. If we let x = S/m and y = A/N, then we see that the reaction scheme here leads to an equilibrium input-output relation as in Eq. (4) that follows the Hill equation

$$y = \frac{x^k}{1 + x^k},$$

with k=1.
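As a quick numerical consistency check on that equilibrium, one can integrate the mass action dynamics and compare the steady state with NS/(m + S). A minimal sketch; the rate constants and concentrations are illustrative assumptions, not values from the simulations.

    import numpy as np
    from scipy.integrate import solve_ivp

    g, delta, N, X = 0.01, 0.5, 100.0, 100.0
    m = delta * X / g                            # half-maximal input, m = delta X / g

    for S in (0.2 * m, m, 5.0 * m):
        rhs = lambda t, A, S=S: [g * S * (N - A[0]) - delta * X * A[0]]
        sol = solve_ivp(rhs, (0.0, 50.0), [0.0], rtol=1e-8)
        print(S, sol.y[0, -1], N * S / (m + S))  # numerical vs analytical equilibrium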

I used the Smoldyn simulation package to study reaction dynamics when the mass action assumption does not hold. The simulations for this particular reaction scheme show input-output relations with k>1 when the rates of chemical transformation are limited by diffusion. Figure 9 summarizes some Smoldyn computer simulations showing k significantly greater than one for certain parameter combinations. I will not go into great detail about these computer simulations, which can be rather complicated. Instead, I will briefly summarize a few key points, because my goal here is simply to illustrate that limited diffusion can increase Hill coefficients under some reasonable conditions.
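One practical detail worth recording is how the Hill parameters reported in the Figure 9 caption can be estimated: fit A/N = sat · S^k/(m^k + S^k) by nonlinear least squares. A minimal sketch with scipy; the data points here are synthetic stand-ins, not the simulation output plotted in Figure 9.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(S, k, m, sat):
        """Relative saturation level, A/N = sat * S^k / (m^k + S^k)."""
        return sat * S**k / (m**k + S**k)

    S = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
    AN = hill(S, 2.0, 20.0, 0.8)                       # synthetic "observations"
    AN += 0.01 * np.random.default_rng(2).normal(size=S.size)

    (k, m, sat), _ = curve_fit(hill, S, AN, p0=(1.0, 10.0, 1.0))
    print(f"k = {k:.2f}, m = {m:.1f}, sat = {sat:.2f}")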

It is clear from Figure 9 that limited diffusion can raise the Hill coefficient significantly above one. What causes the rise? It must be some aspect of spatial process, because diffusion limitation primarily causes departure from mass action by violating the assumption of spatial uniformity. I am not certain which aspects of spatial process caused the departures in Figure 9. It appeared that, in certain cases, most of the transformed output molecules, A, were maintained in miniature reaction centers, which spontaneously formed and decayed.

A local reaction center arose when S and R molecules came near each other, transforming into S and A. If there was also a nearby X molecule, then X and A caused a reversion to X and R. The R molecule could react again with the original nearby S molecule, which had not moved much because of a slow diffusion rate relative to the timescale of reaction. The cycle could then repeat. If formation of reaction centers rises nonlinearly with signal concentration, then a Hill coefficient k>1 would follow.

Other spatial processes probably also had important, perhaps dominant, roles, but the miniature reaction centers were the easiest to notice. In any case, the spatial fluctuations in concentration caused a significant increase in the Hill coefficient, k, for certain parameter combinations.

Limited diffusion, measurement precision and information

Why do departures from spatial uniformity and mass action sometimes increase the Hill coefficient? Roughly speaking, one may think of the inactive reactant, R, as a device to measure the signal input concentration, S. The rate of SR binding is the informative measurement. The measurement scale is linear under spatial uniformity and mass action. The measurement precision is essentially perfect, because SR complexes form at a rate exactly linearly related to S, no matter how low the concentration S may be and for any concentration R.

Put another way, mass action implies infinite linear measurement precision, even at the tiniest signal intensities. By contrast, with limited diffusion and spatial fluctuations in concentration, measurement precision changes with the scale of the input signal intensity. For example, imagine a low concentration input signal, with only a few molecules in a local volume. An SR binding transforms R into A, reducing the local measurement capacity, because it is the R molecules that provide measurement.

With slow diffusion, each measurement alters the immediate capacity for further measurement. The increase in information from measurement is partly offset by the loss in measurement capacity. Put another way, the spatial disparity in the concentration of the measuring device R is a loss of entropy, which is a sort of gain in unrealized potential information. As unrealized potential information builds in the spatial disparity of R, the capacity for measurement and the accumulation of information about S declines, perhaps reflecting a conservation principle for total information or, equivalently, for total entropy at steady state.

At low signal concentration, each measurement reaction significantly alters the spatial distribution of molecules and the measurement capacity. As signal concentration rises, individual reactions have less overall effect on spatial disparity. Put another way, the spatial disparities increase as signal intensity declines, causing measurement to depend on scale in a manner that often leads to a logarithmic scaling. I return to the problem of logarithmic scaling below.

Shaping sensitivity and dynamic range

The previous sections considered specific mechanisms that may alter sensitivity of input-output relations in ways that lead to the log-linear-log scaling of the Hill equation. Such mechanisms include stochastic failure of signal processing in a cascade or departures from mass action. Those mechanisms may be important in many cases. However, my main argument emphasizes that the widespread occurrence of log-linear-log scaling for input-output relations must transcend any particular mechanism. Instead, general properties of system architecture, measurement and information flow most likely explain the simple regularity of input-output relations. Those general properties, which operate at the system level, tend to smooth out the inevitable departures from regularity that must occur at smaller scales.

Brief review and setup of the general problem

An increase in the Hill coefficient, k, reduces sensitivity at low and high input signal intensity (Figure 2). At those intensities, small changes in input cause little change in output. Weak sensitivity tends to be logarithmic, in the sense that output changes logarithmically with input. Logarithmic sensitivities at low and high input often cause sensitivity to be strong and nearly linear within an intermediate signal range, with a rapid rate of change in output with respect to small changes in input intensity. The intermediate interval over which high sensitivity occurs is the dynamic range. The Hill coefficient often provides a good summary of the input-output pattern and is therefore a useful method for studying sensitivity and dynamic range.
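The Hill coefficient maps directly onto the width of the dynamic range. For G(x) = x^k/(1 + x^k), output rises from 10% to 90% of maximum as x^k rises from 1/9 to 9, so the input must grow by the factor 81^(1/k). A short numerical confirmation:

    import numpy as np

    for k in (1.0, 2.0, 4.0):
        x10, x90 = (1 / 9) ** (1 / k), 9 ** (1 / k)    # inputs where G = 0.1 and 0.9
        print(k, x90 / x10, 81 ** (1 / k))             # dynamic range factor = 81^(1/k)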

The general problem of understanding biological input-output systems can be described by a simple question. What processes shape the patterns of sensitivity and dynamic range in biological systems? To analyze sensitivity and dynamic range, we must consider the architecture by which biological systems transform inputs to outputs.

Aggregation of multiple transformations

Biological systems typically process input signals through numerous transformations before producing an output signal. Thus, the overall input-output pattern arises from the aggregate of the individual transformations. Although the meaning of “output signal” depends on context, meaningful outputs typically arise from multiple transformations of the original input.

I analyzed a simple linear cascade of transformations in an earlier section. In that case, the first step in the cascade transforms the original input to an output, which in turn forms the input for the next step, and so on. If individual transformations in the cascade have Hill coefficients k>1, the cascade tends to amplify the aggregate coefficient for the overall input-output pattern of the system. Amplification occurs because weak logarithmic sensitivities at low and high inputs tend to multiply through the cascade. Multiplication of logarithmic sensitivities at the outer ranges of the signal raises the overall Hill coefficient, narrows the dynamic range, and leads to high sensitivity over intermediate inputs.
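The following sketch illustrates that amplification numerically, under assumptions chosen only for illustration: each stage applies the same Hill function with k=1.5, the output of one stage is rescaled by an assumed half-max of 0.1 before entering the next, and the aggregate coefficient is estimated from the 10%-90% input range, so that a single stage recovers its own k.

    import numpy as np

    def hill(x, k):
        return x**k / (1 + x**k)

    def effective_k(y, x):
        """Effective Hill coefficient from the 10%-90% input range of an
        increasing response: k_eff = log(81) / log(x90 / x10)."""
        x10, x90 = np.interp([0.1, 0.9], y, x)
        return np.log(81.0) / np.log(x90 / x10)

    x = np.logspace(-4, 4, 200001)
    y = x.copy()
    for stage in (1, 2, 3):
        y = hill(y / 0.1, k=1.5)            # each stage: Hill step with half-max 0.1
        print(stage, effective_k(y, x))     # aggregate coefficient rises with depth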

That amplification of Hill coefficients in cascades leads back to the puzzle I have emphasized throughout this article. For simple chemical reactions, kinetics follow the Michaelis-Menten pattern with a Hill coefficient of k=1. If classical kinetics are typical, then aggregate input-output relations should also have Hill coefficients near to one. By contrast, most observed input-output patterns have higher Hill coefficients. Thus, some aspect of the internal processing steps must depart from classical Michaelis-Menten kinetics.

There is a long history of study with regard to the mechanisms that lead individual chemical reactions to have increased Hill coefficients. In the first part of this article, I summarized three commonly cited mechanisms of chemical kinetics that could raise the Hill coefficient for individual reactions: cooperative binding, titration of a repressor, and opposing saturated forward and back reactions. Those sorts of deterministic mechanisms of chemical kinetics do raise Hill coefficients and probably occur in many cases. However, the generality of raised Hill coefficients seems to be too broad to be explained by such specific deterministic mechanisms.

Component failure

If the classical deterministic mechanisms of chemical kinetics do not sufficiently explain the generality of raised Hill coefficients, then what does explain that generality? My main argument is that input-output relations reflect underlying processes of measurement and information. The nature of measurement and information leads almost inevitably to the log-linear-log pattern of observed input-output relations. That argument is, however, rather abstract. How do we connect the abstractions of measurement and information to the actual chemical processes by which biological systems transform inputs to outputs?

To develop the connection between abstract concepts and underlying mechanisms of chemical kinetics, I presented a series of examples. I have already discussed aggregation, perhaps the most powerful and important general concept. I showed that aggregation amplifies small departures from Michaelis-Menten kinetics (k=1) into strongly log-linear-log patterns with increased k.

In my next step, I showed that when individual components of an aggregate system have Michaelis-Menten kinetics but also randomly fail to transmit signals with a certain probability, the system converges on an input-output pattern with a raised Hill coefficient. The main assumption is that failure rate increases as signal input intensity falls.

Certainly, some reactions in biological systems will tend to fail occasionally, and some of those failures will be correlated with input intensity. Thus, a small and inevitable amount of sloppiness in component performance of an aggregate system alters the nature of input-output measurement and information transmission. Because the consequence of failures tends to multiply through a cascade, logarithmic sensitivity at low signal input intensity follows inevitably.
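A sketch of that argument, with an arbitrary failure model chosen only for illustration: a single Michaelis-Menten step feeds n downstream transmissions that each succeed with probability x/(x + c), so failure becomes more likely as input falls. The expected output is the product of the step and its transmission probabilities, and its effective coefficient exceeds one.

    import numpy as np

    def effective_k(y, x):
        """Effective Hill coefficient from the 10%-90% input range."""
        x10, x90 = np.interp([0.1, 0.9], y, x)
        return np.log(81.0) / np.log(x90 / x10)

    x = np.logspace(-4, 4, 200001)
    mm = x / (1 + x)                        # Michaelis-Menten step, k = 1
    for n in (0, 1, 3):
        q = (x / (x + 0.5)) ** n            # n transmissions, each succeeds w.p. x/(x+c)
        print(n, effective_k(mm * q, x))    # n = 0 gives k = 1; failures raise k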

Rather than invoke a few specific chemical mechanisms to explain the universality of log-linear-log scaling, this view invokes the universality of aggregate processing and occasional component failures. I am not saying that component failures are necessarily the primary cause of log-linear-log scaling. Rather, I am pointing out that such universal aspects must be common and lead inevitably to certain patterns of measurement and information processing. Once one begins to view the problem in this way, other aspects begin to fall into place.

Departure from mass action

Limited rates of chemical diffusion often occur in biological systems. I showed that limited diffusion may distort classical Michaelis-Menten kinetics to raise the Hill coefficient above one. The increased Hill coefficient, and associated logarithmic sensitivity at low input, may be interpreted as reduced measurement precision for weak signals.

Regular pattern from highly disordered mechanisms

The overall conclusion is that many different mechanisms lead to the same log-linear-log scaling. In any particular case, the pattern may be shaped by the classical mechanisms of binding cooperativity, repressor titration, or opposing forward and back reactions. Or the pattern may arise from the generic processes of aggregation, component failure, or departures from mass action.

No particular mechanism necessarily associates with log-linear-log scaling. Rather, a broader view of the relations between pattern and process may help. That broader view emphasizes the underlying aspects of measurement and information common to all mechanisms. The common tendency for input-output to follow log-linear-log scaling may arise from the fact that so many different processes have the same consequences for measurement, scaling and information.

The common patterns of nature are exactly those patterns consistent with the widest, most disparate range of particular mechanisms. When great underlying disorder has, in the aggregate, a rigid common outcome, then that outcome will be widely observed, as if the outcome were a deterministic inevitability of some single underlying cause. The true underlying cause arises from generic aspects of measurement and information, not from specific chemical mechanisms.

System design

The inevitability of log-linear-log scaling from diverse underlying mechanisms suggests that the overall shape of biological input-output relations may be strongly constrained. Put another way, the range of variation is limited by the tendency to converge to log-linear-log scaling. However, within that broad class of scaling, biological systems can tune the responses in many different ways. The tuning may arise by adjusting the number of reactions in a cascade, by allowing component failure rates to increase, by using reactions significantly limited by diffusion rate, and so on.

Understanding the design of input-output relations must focus on those sorts of tunings within the broader scope of measurement and information transmission. The demonstration that a particular mechanism occurs in a particular system is always interesting and always limited in consequence. The locus of design and function is not the particular mechanism of a particular reaction, but the aggregate properties that arise through the many mechanisms that influence the tuning of the system.

Robustness

Overall input-output pattern often reflects the tight order that arises from underlying disorder. Thus, perturbations of particular mechanisms in the system may often have relatively little consequence for overall system function. That insensitivity to perturbation—or robustness—arises naturally from the structure of signal processing in biological systems.

To study robustness, it may not be sufficient to search for particular mechanisms that reduce sensitivity to perturbation. Rather, one must understand the aggregate nature of variation and function, and how that aggregate nature shapes the inherent tendency toward insensitivity in systems [3, 4, 45]. Once one understands the intrinsic properties of biological systems, then one can ask how those intrinsic properties are tuned by natural selection.

Measurement and information

Intuitively, it makes sense to consider input-output relations with respect to measurement and information. However, one may ask whether “measurement” and “information” are truly useful concepts or just vague and ultimately useless labels with respect to analyzing biological systems. Here, I make the case that deep and useful concepts underlie “measurement” and “information” in ways that inform the study of biological design (Table 1). I start by developing the abstract concepts in a more explicit way. I then connect those abstractions to the nature of biological input-output relations.

Measurement

Measurement is the assignment of a value to some underlying attribute or event. Thus, we may think of input-output relations in biology as measurement relations. At first glance, this emphasis on measurement may seem trivial. What do we gain by thinking of every chemical reaction, perception, or dose-response curve as a process of measurement?

Measurement helps to explain why certain similarities in pattern continually arise. When we observe common patterns, we are faced with a question. Do common aspects of pattern between different systems arise from universal aspects of measurement or from particular mechanisms of chemistry or perception shared by different systems?

Problems arise if we do not think about the distinction between general properties of measurement and specific mechanisms of particular chemical pathways. If we do not think about that distinction, we may try to explain what is in fact a universal attribute of measurement by searching, in each particular system, for special aspects of chemical kinetics, pathway structure or physical laws that constrain perception. In the opposite direction, we can never truly recognize the role of particular mechanisms in generating observed patterns if we do not separate out those aspects of pattern that arise from universal process.

Understanding universal aspects of pattern that arise from measurement means more than simply analyzing how observations are turned into numbers. Instead, we must recognize that the structure of each problem sets very strong constraints on numerical pattern independently of particular chemical or biological mechanisms.

Log-linear-log scales

I have mentioned that the Hill equation is simply an expression of log-linear-log scaling. The widely recognized value of the Hill equation for describing biological pattern arises from its connection to that underlying universal scale of measurement, in which small magnitudes scale logarithmically, intermediate magnitudes scale linearly, and large values scale logarithmically. Although linear and logarithmic scales are widely used and very familiar, the actual properties and meanings of such scales are rarely discussed. If we consider directly the nature of measurement scale, we can understand more deeply the relations between pattern and process.

Consider the example of measuring distance [41]. Start with a ruler that is about the length of your hand. With that ruler, you can measure the size of all the visible objects in your office. That scaling of objects in your office with the length of the ruler means that those objects have a natural linear scaling in relation to your ruler.

Now consider the distances from your office to various galaxies. Your ruler is of no use, because you cannot distinguish whether a particular galaxy moves farther away by one ruler unit. Instead, for two galaxies, you can measure the ratio of distances from your office to each galaxy. You might, for example, find that one galaxy is twice as far as another, or, in general, that a galaxy is some percentage farther away than another. Percentage changes define a ratio scale of measure, which has natural units in logarithmic measure [5]. For example, a doubling of distance always adds log(2) to the logarithm of the distance, no matter what the initial distance.

Measurement naturally grades from linear at local magnitudes to logarithmic at distant magnitudes when compared to some local reference scale. The transition between linear and logarithmic varies between problems. Measures from some phenomena remain primarily in the linear domain, such as measures of height and weight in humans. Measures for other phenomena remain primarily in the logarithmic domain, such as cosmological distances. Other phenomena scale between the linear and logarithmic domains, such as fluctuations in the price of financial assets [46] or the distribution of income and wealth [47].

Consider the opposite direction of scaling, from local magnitude to very small magnitude. Your hand-length ruler is of no value for small magnitudes, because it cannot distinguish between a distance that is a fraction 10⁻⁴ of the ruler and a distance that is 2×10⁻⁴ of the ruler. At small distances, one needs a standard unit of measure that is the same order of magnitude as the distinctions to be made. A ruler of length 10⁻⁴ distinguishes between 10⁻⁴ and 2×10⁻⁴, but does not distinguish between 10⁻⁸ and 2×10⁻⁸. At small magnitudes, ratios can potentially be distinguished, causing the unit of informative measure to change with scale. Thus, small magnitudes naturally have a logarithmic scaling.

As we change from very small to intermediate to very large, the measurement scaling naturally grades from logarithmic to linear and then again to logarithmic, a log-linear-log scaling. The locus of linearity and the meaning of very small and very large differ between problems, but the overall pattern of the scaling relations remains the same. This section analyzes that characteristic scaling in relation to the Hill equation and biological input-output patterns. I start by considering more carefully what measurement scales mean. I then connect the abstract aspects of measurement to the particular aspects of the Hill equation and to examples of particular biological mechanisms.

Invariance, the essence of explanation

We began with an observation. Many different input-output relations follow the Hill equation. We then asked: What process causes the Hill equation pattern? It turned out that many very different kinds of process lead to the same log-linear-log pattern of the Hill equation. We must change our question. What do the very different kinds of process have in common such that they generate the same overall pattern?

Consider two specific processes discussed earlier, cooperative binding and departures from mass action. Those different processes may produce Hill equation patterns with similar Hill coefficients, k. However, it is not immediately obvious why cooperative binding, departures from mass action, and many other different processes should lead to a very similar pattern.

Group together all of the disparate mechanisms that generate a common Hill equation pattern. When faced with a new mechanism, how can we tell if it belongs to the group? We might look for particular features that are common to all members of the group. However, that does not work. Various potential members might have important common features. But the attributes that they do not share might cause one potential member to have a different pattern. Common features are not sufficient.

More often, common membership arises from the features that do not matter. Think of circles. How can we describe whether a shape belongs to the circle class? We have to say what does not matter. For circles, it does not matter how much one rotates them; they always look the same. Circles are invariant to any rotation. Equivalently, circles are symmetric with regard to any rotation. Invariance and symmetry are the same thing. Subject to some constraints, if a shape is invariant to any rotation, it is a circle. If it is not invariant to all rotations, it is not a circle. The things that do not matter set the shared, invariant property of a group [48–50].

A rotation is a kind of transformation. The group is defined by the set of transformations that leave the group members unchanged, or invariant. We can alter a chemical system from cooperative binding under mass action to noncooperative binding under departure from mass action, and the log-linear-log scaling may be preserved. Such invariance arises because the different processes have an underlying symmetry with regard to the transformation of information from inputs to outputs (Table 1).

What aspects of process do not matter with respect to causing the same log-linear-log pattern of the Hill equation? How can we recognize the underlying invariance that joins together such disparate processes with respect to common pattern? The Hill equation expresses measurement scale. To answer our key questions, we must understand the meaning of measurement scale. Measurement scale itself is solely an expression of invariance. A particular measurement scale expresses what does not matter—the invariance under transformation that joins different kinds of processes to a common scaling.

Invariance and measurement

Suppose a process transforms inputs x to outputs G(x). The process may be a reading from a measurement instrument or a series of chemical transformations. Given that process, how should we define the associated measurement scale? Definitions can, of course, be made in any way. But we should aim for something with reasonable meaning.

One possible meaning for measurement is the scale that preserves information. In particular, we seek a scale on which we obtain the same information from the values of the inputs as we do from the values of the outputs. The measurement scale is the scale on which the input-output transformation does not alter the information in the signal (Table 1).

Information is, of course, often lost between input and output. But only certain kinds of information are lost. The measurement scale describes exactly what sort of information is lost during the transformation from input to output and what sort of information is retained. In other words, the measurement scale defines the invariant qualities of information that remain unchanged by the input-output process.

Different input-output processes belong to the same measurement scale when they share the same invariance that leaves particular aspects of information unchanged. For such processes, certain aspects of information remain the same whether we have access to the original inputs or the final outputs when those values are given on the associated measurement scale. By contrast, input-output processes that alter those same aspects of information when input and output values are given by a particular measurement scale do not belong to that scale.

Those abstract properties define a reasonable meaning for measurement scale. Such abstractness can be hard to parse. However, it is essential to have a clear expression of those ideas, otherwise we could never understand why so many different kinds of biological processes can have such similar input-output relations, and why other processes do not share the same relations. It is exactly those abstract informational aspects of measurement that unite cooperative binding and departures from mass action into a common group of processes that share a similar Hill equation pattern.

Measurement and information

It is useful to express the general concepts in a simple equation. I build up to that simple summary equation by starting with components of the overall concept.

Inputs are given by x. We denote a small change in input by dx. An input given on the measurement scale is T(x). The sensitivity of the measurement scale to a change in input is

$$m_x = \frac{dT(x)}{dx},$$

which is the change on the measurement scale, dT(x), with respect to a change in input, dx. That sensitivity describes the information in the measurement scale with respect to fluctuations in inputs [41, 42, 51]. We may also write

$$m_x\,dx = dT(x),$$

providing an expression for the incremental information associated with a change in the underlying input, dx. If the scale is logarithmic, T(x)= log(x), then

$$m_x\,dx = d\log(x) = \frac{dx}{x},$$

for which the sensitivity of the measurement scale declines as the input becomes large. On a purely logarithmic scale, the same increment in input, dx, provides a lot of information when x is small and little information when x is large.

Next, we express the relation that defines measurement scale. On the proper measurement scale for a particular problem, the information from input values is proportional to the information from associated output values. Put another way, the measurement scale is the transformation of values that makes information invariant to whether we use the input values or the output values. The measurement scale reflects those aspects of information that are preserved in the input-output relation, and consequently also expresses those aspects of information that are lost in the input-output relation. Although rather abstract, it is useful to complete the mathematical development before turning to some examples in the next section.

The output is G(x), and the measurement scale transforms the output by T[G(x)]. To have proportionality for the incremental information associated with a change in the underlying input, dT(x), and the incremental information associated with a change in the associated output, dT[G(x)], we have

$$dT(x) \propto dT[G(x)] \tag{11}$$

in which the relationship shows the proportionality of information associated with the sensitivity of inputs and outputs when expressed on the measurement scale. That measurement scale defines the group of input-output processes, G(x), that preserves the same invariant sensitivity and information properties on the scale T(x). In other words, all such input-output processes G(x) that are invariant to the measurement scale transformation T(x) belong to that measurement scale [41, 42, 51].

In this equation, we have inputs, x, with the information in those inputs, dT(x), on the measurement scale T, and outputs, G(x), with information in those outputs, dT[G(x)], on the measurement scale T. We may abbreviate this key equation of measurement and information as

$$dT \propto dT[G],$$

which we read as the information in inputs, dT, is proportional to the information in outputs, dT[G]. All input-output relations G(x) that satisfy this relation have the same invariant informational properties with respect to the measurement scale T.

Linear scale

This view of measurement scale means that linearity has an exact definition. Linearity requires that we obtain the same information from an increment dx on the input scale independently of whether the actual value is big or small (location), and whether we uniformly stretch or shrink all measurements by a constant amount. To express changes in location and in uniform scaling, let

$$T(x) = a + bx,$$

which changes the initial value, x, by altering the location by a and the uniform stretching or shrinking (scaling) by b. This transformation is often called the linear transformation. But why is that the essence of linearity? From the first part of Eq. (11)

$$m_x\,dx = dT(x) = b\,dx \propto dx,$$

which means that an increment in measurement provides a constant amount of information no matter what the measurement value, and that the information is uniform apart from a constant of proportionality b. Linearity means that information in measurements is independent of location and uniform scaling.

What sort of input-output relations, G(x), belong to the linear measurement scale? From the second part of Eq. (11), we have dT[G(x)] ∝ dx, which we may expand as

$$dT[G(x)] = d\left[a + bG(x)\right] = b\,dG(x) \propto dx.$$

Thus, any input-output relations such that dG(x) ∝ dx belong to the linear scale, and any input-output relations that do not satisfy that condition do not belong to the linear scale. To satisfy that condition, the input-output relation must have the form G(x) = α + βx, which is itself a linear transformation. So, only linear input-output relations attach to a linear measurement scale. If the input-output relation is not linear, then the proper measurement scale is not linear.
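That condition is easy to verify symbolically. A minimal sketch with sympy, checking that a linear input-output relation leaves information on the linear scale invariant (the ratio of input and output sensitivities is a constant):

    import sympy as sp

    x = sp.symbols("x", positive=True)
    a, b, alpha, beta = sp.symbols("a b alpha beta", positive=True)

    T = a + b * x                   # linear measurement scale
    G = alpha + beta * x            # linear input-output relation

    ratio = sp.simplify(sp.diff(T.subs(x, G), x) / sp.diff(T, x))
    print(ratio)                    # -> beta, a constant: dT[G(x)] proportional to dT(x)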

Logarithmic scale

We can run the same procedure on the logarithmic measurement scale, for which a simple form is T(x)= log(x). For this scale, dT(x)=dx/x. Thus, input-output relations belong to this logarithmic scale if

$$dT[G(x)] = d\log G(x) = \frac{dG(x)}{G(x)} \propto \frac{dx}{x}.$$

This condition requires that G(x) ∝ x^k, for which dG(x) ∝ x^{k-1} dx. The logarithmic measurement scale applies only to input-output functions that have this power-law form (Table 1). Note that the special case of k=1 leads to linear scaling, but for other k values the scale is logarithmic.
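The same symbolic check confirms the power-law condition for the logarithmic scale:

    import sympy as sp

    x, k = sp.symbols("x k", positive=True)

    T = sp.log(x)                   # logarithmic measurement scale
    G = x**k                        # power-law input-output relation

    ratio = sp.simplify(sp.diff(T.subs(x, G), x) / sp.diff(T, x))
    print(ratio)                    # -> k, a constant: d log G(x) = k d log(x)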

Linear-log and log-linear scales

The most commonly used measurement scales are linear and logarithmic. But those scales are unnatural, because the properties of measurement likely change with magnitude. As I mentioned earlier, an office ruler is fine for making linear measurements on the visible objects in your office. But if you scale up to cosmological distances or down to microscopic distances, you naturally grade from linear to logarithmic. A proper sense of measurement requires attention to the ways in which information and input-output relations change with magnitude [41, 42].

Suppose an input increment provides information as

$$m_x\,dx = \frac{dx}{1 + bx}.$$

When x is small, m_x dx ≈ dx, which is the linear measurement scale. When x is large, m_x dx ≈ dx/x, which is the logarithmic scale. The associated measurement scale is

$$T(x) \propto \log(1 + bx),$$

and the associated input-output functions satisfy G(x) ∝ (1 + bx)^k. This scale grades continuously from linear to logarithmic. The parameter b determines the relation between magnitude and the type of scaling.

The inverse scaling grades from logarithmic at small magnitudes to linear as magnitude increases, with

$$T(x) \propto x + b\log(x).$$

When x is small, the scale is logarithmic with T(x)≈b log(x). When x is large, the scale is linear with T(x)≈x.
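The same symbolic check applies to these mixed scales. For the linear-log scale, the invariance condition requires T[G(x)] to be an affine function of T(x); taking the completed form G(x) = ((1 + bx)^k − 1)/b, which is my affine completion of the stated proportionality G(x) ∝ (1 + bx)^k, the sensitivity ratio is again constant:

    import sympy as sp

    x, b, k = sp.symbols("x b k", positive=True)

    T = sp.log(1 + b * x)                 # linear-log measurement scale
    G = ((1 + b * x)**k - 1) / b          # completed form of G(x) ∝ (1 + bx)^k

    ratio = sp.simplify(sp.diff(T.subs(x, G), x) / sp.diff(T, x))
    print(ratio)                          # -> k, a constant: the scale is preserved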

Biological input-output: log-linear-log

I have emphasized that the log-linear-log scale is perhaps the most natural of all scales. Information in measurement increments tends to be logarithmic at small and large magnitudes. As one moves in either extreme direction, the unit of measure changes in proportion to magnitude to preserve consistent information. At intermediate magnitudes, changing values associate with an approximately linear measurement scale. For many biological input-output relations, that intermediate, linear zone is roughly the dynamic range.

The Hill equation description of input-output relations

$$G(x) = \frac{x^k}{1 + x^k},$$

is widely useful because it describes log-linear-log scaling in a simple form. To check for log scaling in the limits of high or low input, we use T(x) = log(x), which implies dT(x) ∝ dx/x. In our fundamental relation of measurement, we have

$$dT(x) \propto dT[G(x)] = d\log G(x) = k\left(\frac{1}{x} - \frac{x^{k-1}}{1 + x^k}\right)dx.$$

When x is small, dT(x) ∝ dx/x, the expression for input-output functions associated with the logarithmic scale. When x is large, the saturating term dominates, with −x^{k−1}dx/(1 + x^k) ≈ −dx/x, which is the expression for saturation on a logarithmic scale.

When k>1, the input-output relation scales linearly for intermediate x values. One can do various calculations to show the approximate linearity in the middle range. But the main point can be seen by simply looking at Figure 2.
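The two logarithmic regimes can also be read off from the elasticity d log G/d log x, which for the Hill equation equals k/(1 + x^k): it approaches k at small input, the log-log slope of the lower regime, and approaches zero at high input as the output saturates. A symbolic sketch:

    import sympy as sp

    x, k = sp.symbols("x k", positive=True)
    G = x**k / (1 + x**k)

    elasticity = sp.simplify(x * sp.diff(sp.log(G), x))  # d log G / d log x
    print(elasticity)        # equals k/(x**k + 1): -> k as x -> 0, -> 0 as x -> oo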

Exact linearity occurs when the second derivative of the Hill equation vanishes at

$$x = \left(\frac{k - 1}{k + 1}\right)^{1/k} \tag{12}$$

for k>1. Figure 10 shows that the locus of linearity shifts from the low side as k→1 and x→0 to the high side as k→∞ and x→1. Note that x=1 is the input at which the response is one-half of the maximum.

Figure 10

The locus of linearity, which is the value of input, x, at which the log-linear-log pattern of the Hill equation becomes exactly linear. The locus of linearity corresponds to the peak sensitivity of the input-output relation. At x=1, output is one-half of maximal response. Plot based on Eq. (12).
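As a check on Eq. (12), the second derivative of the Hill equation can be verified to vanish at x = ((k − 1)/(k + 1))^(1/k) for particular values of k:

    import sympy as sp

    x = sp.symbols("x", positive=True)
    for k in (2, 3, 5):
        G = x**k / (1 + x**k)
        x_star = sp.Rational(k - 1, k + 1) ** sp.Rational(1, k)  # Eq. (12)
        print(k, sp.simplify(sp.diff(G, x, 2).subs(x, x_star)))  # -> 0: exact linearity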

Sensitivity and information

Sensitivity is the responsiveness of output for a small change in input. For a log-linear-log pattern, the locus of linearity is often equivalent to maximum sensitivity of the output in relation to the input. The logarithmic regimes at low and high input are relatively weakly sensitive to changes in input.

The Hill equation pattern for input, x, and output, G(x), is

$$G(x) = \frac{x^k}{1 + x^k} = \frac{1}{1 + e^{-k\log(x)}}.$$

The equivalent form on the right side is the classic logistic function expressed in terms of log(x) rather than x. This logarithmic form is the log-logistic function. Note also that G(x) varies between zero and one as x increases from zero. Thus, G(x) is analogous to a cumulative distribution function (cdf) from probability theory. These mathematical analogies for input-output curves will be useful as we continue to analyze the meaning of input-output relations and why certain patterns are particularly common.

Note also that k=1 is the Michaelis-Menten pattern of chemical kinetics. This relation of the input-output curve G(x) to chemical kinetics will be important when we connect general aspects of sensitivity to the puzzles of chemical kinetics and biochemical input-output patterns.

The sensitivity is the change in output with respect to input. Thus, sensitivity is the derivative of G with respect to x, which is

$$\dot{G}(x) = \frac{k\,x^{k-1}}{(1 + x^k)^2}.$$

This expression is analogous to the log-logistic probability density function (pdf). Here, I obtained the pdf in the usual way by differentiating the cdf. Noting that the pdf is the sensitivity of the cdf to small changes in value (input), we have an analogy between the sensitivity of input-output relations and the general relation between the pdf and cdf of a probability distribution.

Maximum sensitivity is the maximum value of Ġ(x), which corresponds to the mode of the pdf. For k≤1, the maximum occurs at x=0, which means that measurement sensitivity of the input-output system is greatest when the input is extremely small. Intuitively, it seems unlikely that maximum sensitivity could be achieved when discriminating tiny input values. For k>1, the maximum value of the log-logistic pattern occurs when G̈(x)=0, which is the point at which the second derivative is zero and the input-output relation is purely linear. That maximum occurs at the point given in Eq. (12).

The analogy with probability provides a connection between input-output functions, measurement and information. A probability distribution is completely described by the information that it expresses [3, 40]. That information can be split into two parts. First, certain constraints must be met that limit the possible shapes of the distribution, such as the mean, the variance, and so on. Second, the measurement scale sets the sensitivity of the outputs in terms of randomness (entropy) and information (negative entropy) in relation to changes in observed values or inputs [41, 42].

Sensitivity, measurement and the shape of input-output patterns

The Hill equation seems almost magical in its ability to fit the input-output patterns of diverse biological processes. The magic arises from the fact that the Hill equation is a simple expression of log-linear-log scaling when the Hill coefficient is k>1. The Hill coefficient expresses the locus of linearity. As k declines toward one, the pattern becomes linear-log, with linearity at low input values grading into logarithmic as input increases. As k drops below one, the pattern becomes everywhere logarithmic, with declining sensitivity as input increases.

Sensitivity and measurement scale are the deeper underlying principles. The Hill equation is properly viewed as just a convenient mathematical form that expresses a particular pattern of sensitivity, measurement, and the informational properties of the input-output pattern. From this perspective, one may ask whether alternative input-output functions provide similar or better ways to express the underlying log-linear-log scale.

Frank & Smith [41, 42] presented the general relations between measurement scales and associated probability density function (pdf) patterns. Because a pdf is analogous to an expression of sensitivity for input-output functions, we can use their system as a basis for alternatives to the Hill equation. Perhaps the most compelling general expressions for log-linear-log scales arise from the family of beta distributions. For example, the generalized beta prime distribution can be written as

$$\dot{G}(x) \propto \left(\frac{x}{m}\right)^{\alpha}\left[1 + \left(\frac{x}{m}\right)^{k}\right]^{-\beta}. \tag{13}$$

With α=k and β=1, we obtain a typical form of the Hill equation given in Eq. (3). The additional parameters α and β provide more flexibility in expressing different logarithmic sensitivities at high versus low inputs.
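A small numerical check of that special case, assuming the reconstruction of Eq. (13) written above, with m as the half-maximal input: setting α=k and β=1 reduces the expression to the Hill equation x^k/(m^k + x^k).

    import numpy as np

    def eq13(x, m, alpha, beta, k):
        """Generalized beta prime sensitivity of Eq. (13), up to normalization."""
        return (x / m)**alpha * (1 + (x / m)**k)**(-beta)

    x = np.logspace(-2, 2, 9)
    m, k = 3.0, 2.0
    print(np.allclose(eq13(x, m, alpha=k, beta=1.0, k=k),
                      x**k / (m**k + x**k)))   # -> True: Hill equation as special case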

The theory of measurement scale and probability in Frank & Smith [41, 42] also provides a way to analyze more complex measurement and sensitivity schemes. For example, a double log scale (logarithm of a logarithm) reduces sensitivity below classical single log scaling. Such double log scales provide a way to express more extreme dissipation of signal information in a cascade at low or high input levels.

These different expressions for sensitivity have two advantages. First, they provide a broader set of empirical relations to use for fitting data. Those empirical relations derive from the underlying principles of measurement scale. Second, the different forms express hypotheses about how signal processing cascades dissipate information in signals and alter patterns of sensitivity. For example, one may predict that certain signal cascade architectures dissipate information more strongly and lead to double logarithmic scaling and loss of sensitivity at certain input levels. Further theory could help to sort out the predicted relations between signal processing architecture, the dissipation of information, and the general forms of input-output relations.

Conclusions

Nearly all aspects of biology can be reduced to inputs and outputs. A chemical reaction is the transformation of input concentrations to output concentrations. Developmental or regulatory subsystems arise from combinations of chemical reactions. Any sort of sensory measurement of environmental inputs follows from chemical output responses. The response of a honey bee colony to changes in temperature or external danger follows from perceptions of external inputs and the consequent output responses. Understanding biology mostly has to do with description of input-output patterns and understanding the processes that generate those patterns.

I focused on one simple pattern, in which outputs rise with increasing inputs. I emphasized basic chemistry for two reasons. First, essentially all complex biological processes reduce to cascades of simple chemical reactions. Understanding complex systems ultimately comes down to understanding the relation between combinations of simple reactions and the resulting patterns at the system level. Second, the chemical level presents a clear puzzle. The classical theory of chemical kinetics predicts a concave Michaelis-Menten input-output relation. By contrast, many simple chemical reactions follow an S-shaped Hill equation pattern. The input-output relations of many complex systems also tend to follow the Hill equation.

I analyzed this distinction between Michaelis-Menten kinetics and Hill equation patterns in order to illustrate the broad problems posed by input-output relations. Several conclusions follow.

First, many distinct chemical processes lead to the Hill equation pattern. The literature mostly considers those different processes as a listing of exceptions to the classical Michaelis-Menten pattern. Each observed departure from Michaelis-Menten is treated as a special case requiring an explicit mechanistic explanation chosen from the list of possibilities.

Second, I emphasized an alternative perspective. A common pattern is widespread because it is consistent with the greatest number of distinct underlying mechanisms. Thus, the Hill equation pattern may be common because there are so many different processes that lead to that outcome.

Third, because a particular common pattern associates with so many distinctive underlying processes, it is a mistake to treat each observed case of that pattern as demanding a match to a particular underlying process. Rather, one must think about the problem differently. What general properties cause the pattern to be common? What is it about all of the different processes that lead to the same outcome?

Fourth, I suggested that aggregation provides the proper framing. Roughly speaking, aggregation concerns the structure by which different components combine to produce the overall input-output relations of the system. The power of aggregation arises from the fact that great regularity of pattern often emerges from underlying disorder. Deep understanding turns on the precise relation between underlying disorder and emergent order.

Fifth, measurement in relation to the dissipation of information sets the match between underlying disorder and emergent order. The aggregate combinations of input-output processing that form the overall system pattern tend to lose information in particular ways during the multiple transformations of the initial input signal. The remaining information carried from input to output arises from aspects of precision and measurement in each processing step.

Sixth, previous work on information theory and probability shows how aggregation may influence the general form of input-output relations. In particular, certain common scaling relations tend to set the invariant information carried from inputs to outputs. Those scaling relations and aspects of measurement precision tell us how to evaluate specific mechanisms with respect to their general properties. Further work may allow us to classify apparently different processes into a few distinctive sets.

Seventh, classifying processes by their key properties may ultimately lead to a meaningful and predictive theory. By that theory, we may understand why apparently different processes share similar outcomes, and why certain overall patterns are so common. We may then predict how overall pattern may change in relation to the structural basis of aggregation in a system and the general properties of the underlying components. More theoretical work and associated empirical tests must follow up on that conjecture.

Eighth, I analyzed the example of fundamental chemical kinetics in detail. My analysis supports the general points listed here. Specific analyses of other input-output relations in terms of aggregation, measurement and scale will provide the basis for a more general theory.

Ninth, robustness means insensitivity to perturbation. Because system input-output patterns tend to arise by the regularities imposed by aggregation, systems naturally express order arising from underlying disorder in components. The order reflects broad structural aspects of the system rather than tuning of particular components. Perturbations to individual components will therefore tend to have relatively little effect on overall system performance—the essence of robustness.

Finally, natural selection and biological design may be strongly influenced by the regularity of input-output patterns. That regularity arises inevitably from aggregation and the dissipation of information. Those inevitably regular patterns set the contours that variation tends to follow. Thus, biological design will also tend to follow those contours. Natural selection may act primarily to modulate system properties within those broad constraints. How do changes in extrinsic selective pressures cause natural selection to alter overall system architecture in ways that modulate input-output patterns?

Reviewers’ comments

Reviewer’s report 1

Eugene Koonin, NCBI, NLM, NIH, United States of America

In my view, this is an important, deep analysis that continues the series of insightful studies by the author in which various aspects of the manifestation of the Maximum Entropy principle are investigated. In this particular case, biological systems are looked at from the standpoint of input-output relations, and it is shown how information dissipation caused by aggregation of signals from numerous components of biological systems leads to common patterns such as the Hill equation. The complex relationship between patterns and processes is emphasized whereby the universal patterns, such as that given by the Hill equation, are so common because highly diverse underlying processes can converge to produce these patterns. It is further emphasized that natural selection is likely to act primarily as a modulator on the regular patterns yielded by aggregation and dissipation of information. In short, an important paper that, together with the previous publications of the author, should come as an important revelation to many biologists. One only hopes at least some of them read it.

Reviewer’s report 2

Georg Luebeck, Fred Hutchinson Cancer Research Center, United States of America

The review of molecular input-output relations in biological systems by Dr. Frank, although broad and somewhat speculative, wrestles with a fundamental problem in systems biology: with what Nobel laureate Sydney Brenner referred to as “ … low input, high throughput, no output science”. The perspective Frank offers in this review is refreshing and instructive. Rather than taking the “inverse problem” Brenner alludes to by the horns, Frank dissects the characteristics of commonly seen biological input-output relations in view of measurement error, signal processing, information loss, and system aggregation. His analysis is motivated by an apparent contradiction between classical Michaelis-Menten kinetics and the S-shaped Hill (equation) response often seen in biological processes. The resolution of this apparent paradox along various lines of thought and argument, accompanied by easy-to-understand examples, is enlightening and in itself worth reading.

The review offers many interesting tidbits related to measurement, psychophysics, and information processing. However, the development of the main points feels somewhat disorganized and random in their order. For example, for the uninitiated reader, it is not immediately clear until later that the sensitivity imparted on an output signal by the input is logarithmic for the Hill equation with k>1. For an initiated reader, however, that may be entirely trivial. Still, it seems better to demonstrate such a crucial point upfront.

As it stands, the review could benefit from some shortening and additional focusing, something I trust the author will attempt. Although the main points raised by Dr. Frank are clearly presented in general, he often seems to be one or two steps ahead of the reader who struggles to make sense of formulations that do not become clear until later. For example, early on, he refers to “the haphazard combination of different scalings in the distinct reactions …” or the “transmission and loss of information in relation to scale...” without first defining what he means by ‘scale’ (or scaling) and elaborating on how measurement models—that conform to the input-output paradigm—actually reveal scaling. The example he offers later on provides some intuition, but misses the point as it fails to address the role of noise (fluctuations in input signals) in setting the scale (at low inputs), by reducing an increasing number of false positives as genuine signals may get drowned out by noise. As the author points out, there is plenty of room for advancing development and understanding. It is exactly at the intersection of optimal information control and processing that guided aggregation of molecular processes leads to emerging order and the (controlled) biological complexity we call life. Perhaps, if we understand the invariant properties of the molecular processes that participate in life, and the common biophysical principles that guide the evolution of biological systems, we may be able to get on top of the “inverse problem” that Brenner so bleakly refers to. The alternative, solving the “forward problem” by brute-force computation, is tedious and not very satisfying from an intellectual point of view.

Reviewer’s report 3

Sergei Maslov, Brookhaven National Laboratory, United States of America

This reviewer provided no comments for publication.

References

  1. Tyson JJ, Chen KC, Novak B: Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Curr Opin Cell Biol. 2003, 15: 221-231. 10.1016/S0955-0674(03)00017-6.

  2. Zhang Q, Bhattacharya S, Andersen ME: Ultrasensitive response motifs: basic amplifiers in molecular signalling networks. Open Biol. 2013, 3: 130031. 10.1098/rsob.130031.

  3. Frank SA: The common patterns of nature. J Evol Biol. 2009, 22: 1563-1585. 10.1111/j.1420-9101.2009.01775.x.

  4. Kauffman SA: The Origins of Order. 1993, Oxford: Oxford University Press

  5. Hand DJ: Measurement Theory and Practice. 2004, London: Arnold

  6. Stevens SS: On the psychophysical law. Psychol Rev. 1957, 64: 153-181.

  7. Cornish-Bowden A: Fundamentals of Enzyme Kinetics. 2012, Hoboken, NJ: Wiley-Blackwell

  8. Gescheider GA: Psychophysics: The Fundamentals. 1997, Mahwah, NJ: Lawrence Erlbaum Associates

  9. Krantz DH, Luce RD, Suppes P, Tversky A: Foundations of Measurement: Volume 1: Additive and Polynomial Representations. 2006, New York: Dover

  10. Krantz DH, Luce RD, Suppes P, Tversky A: Foundations of Measurement. Volume II: Geometrical, Threshold, and Probabilistic Representations. 2006, New York: Dover

  11. Suppes P, Krantz DH, Luce RD, Tversky A: Foundations of Measurement. Volume III: Representation, Axiomatization, and Invariance. 2006, New York: Dover

  12. Rabinovich SG: Measurement Errors and Uncertainty: Theory and Practice. 2005, New York: Springer

  13. Sarpeshkar R: Ultra Low Power Bioelectronics. 2010, Cambridge, UK: Cambridge University Press

  14. Goldbeter A, Koshland DE: An amplified sensitivity arising from covalent modification in biological systems. Proc Nat Acad Sci USA. 1981, 78: 6840-6844. 10.1073/pnas.78.11.6840.

  15. Kim SY, Ferrell JE: Substrate competition as a source of ultrasensitivity in the inactivation of Wee1. Cell. 2007, 128: 1133-1145. 10.1016/j.cell.2007.01.039.

  16. Ferrell JE: Signaling motifs and Weber’s law. Mol Cell. 2009, 36: 724-727. 10.1016/j.molcel.2009.11.032.

  17. Cohen-Saidon C, Cohen AA, Sigal A, Liron Y, Alon U: Dynamics and variability of ERK2 response to EGF in individual living cells. Mol Cell. 2009, 36: 885-893. 10.1016/j.molcel.2009.11.025.

  18. Goentoro L, Kirschner MW: Evidence that fold-change, and not absolute level, of β-catenin dictates Wnt signaling. Mol Cell. 2009, 36: 872-884. 10.1016/j.molcel.2009.11.017.

  19. Goentoro L, Shoval O, Kirschner MW, Alon U: The incoherent feedforward loop can provide fold-change detection in gene regulation. Mol Cell. 2009, 36: 894-899. 10.1016/j.molcel.2009.11.018.

  20. Alon U: An Introduction to Systems Biology: Design Principles of Biological Circuits. 2007, Boca Raton, Florida: CRC press

  21. DeLean A, Munson P, Rodbard D: Simultaneous analysis of families of sigmoidal curves: application to bioassay, radioligand assay, and physiological dose-response curves. Am J Physiol-Endocrinol Metab. 1978, 235: 97-102.

  22. Weiss JN: The Hill equation revisited: uses and misuses. FASEB J. 1997, 11: 835-841.

  23. Rang HP: The receptor concept: pharmacology’s big idea. Br J Pharmacol. 2006, 147: 9-16.

  24. Bindslev N: Drug-Acceptor Interactions, Chapter 10: Hill in Hell. 2008, Jarfalla, Sweden: Co-Action Publishing, doi:10.3402/bindslev.2008.14

  25. Walker JS, Li X, Buttrick PM: Analysing force–pCa curves. J Muscle Res Cell Motil. 2010, 31: 59-69. 10.1007/s10974-010-9208-7.

  26. Hoffman A, Goldberg A: The relationship between receptor-effector unit heterogeneity and the shape of the concentration-effect profile: pharmacodynamic implications. J Pharmacokinet Biopharm. 1994, 22: 449-468.

  27. Getz WM, Lansky P: Receptor dissociation constants and the information entropy of membrane coding ligand concentration. Chem Senses. 2001, 26: 95-104. 10.1093/chemse/26.2.95.

  28. Kolch W, Calder M, Gilbert D: When kinases meet mathematics: the systems biology of MAPK, signalling. FEBS Lett. 2005, 579: 1891-1895. 10.1016/j.febslet.2005.02.002.

  29. Tkačik G, Walczak AM: Information transmission in genetic regulatory networks: a review. J Phys: Condens Matter. 2011, 23: 153102. 10.1088/0953-8984/23/15/153102.

  30. Marzen S, Garcia HG, Phillips R: Statistical mechanics of Monod-Wyman-Changeux (MWC) models. J Mol Biol. 2013, 425: 1433-1460. 10.1016/j.jmb.2013.03.013.

  31. Savageau MA: Michaelis-Menten mechanism reconsidered: implications of fractal kinetics. J Theor Biol. 1995, 176: 115-124. 10.1006/jtbi.1995.0181.

  32. Savageau MA: Development of fractal kinetic theory for enzyme-catalysed reactions and implications for the design of biochemical pathways. Biosystems. 1998, 47: 9-36. 10.1016/S0303-2647(98)00020-3.

  33. Andrews SS, Bray D: Stochastic simulation of chemical reactions with spatial resolution and single molecule detail. Phys Biol. 2004, 1: 137-151. 10.1088/1478-3967/1/3/001.

  34. ben-Avraham D, Havlin S: Diffusion and Reactions in Fractals and Disordered Systems. 2000, Cambridge, UK: Cambridge University Press

  35. Schnell S, Turner TE: Reaction kinetics in intracellular environments with macromolecular crowding: simulations and rate laws. Prog Biophys Mol Biol. 2004, 85: 235-260. 10.1016/j.pbiomolbio.2004.01.012.

  36. Dieckmann U, Law R, Metz JAJ: The Geometry of Ecological Interactions: Simplifying Spatial Complexity. 2000, Cambridge, UK: Cambridge University Press

  37. Ellner SP: Pair approximation for lattice models with multiple interaction scales. J Theor Biol. 2001, 210: 435-447. 10.1006/jtbi.2001.2322.

  38. Marro J, Dickman R: Nonequilibrium Phase Transitions in Lattice Models. 2005, Cambridge, UK: Cambridge University Press

  39. Kholodenko BN, Hoek JB, Westerhoff HV, Brown GC: Quantification of information transfer via cellular signal transduction pathways. FEBS Lett. 1997, 414: 430-434. 10.1016/S0014-5793(97)01018-1.

  40. Jaynes ET: Probability Theory: The Logic of Science. 2003, New York: Cambridge University Press

  41. Frank SA, Smith E: Measurement invariance, entropy, and probability. Entropy. 2010, 12: 289-303. 10.3390/e12030289.

  42. Frank SA, Smith E: A simple derivation and classification of common probability distributions based on information symmetry and measurement scale. J Evol Biol. 2011, 24: 469-484. 10.1111/j.1420-9101.2010.02204.x.

  43. Das S, Vikalo H, Hassibi A: On scaling laws of biosensors: a stochastic approach. J Appl Phys. 2009, 105: 102021. 10.1063/1.3116125.

  44. Andrews SS, Addy NJ, Brent R, Arkin AP: Detailed simulations of cell biology with Smoldyn 2.1. PLoS Comput Biol. 2010, 6: e1000705. 10.1371/journal.pcbi.1000705.

  45. Blüthgen N, Herzel H: How robust are switches in intracellular signaling cascades? J Theor Biol. 2003, 225: 293-300. 10.1016/S0022-5193(03)00247-9.

  46. Aparicio FM, Estrada J: Empirical distributions of stock returns: European securities markets, 1990-95. Eur J Finance. 2001, 7 (1): 1-21.

  47. Dragulescu AA, Yakovenko VM: Exponential and power-law probability distributions of wealth and income in the United Kingdom and the United States. Phys A. 2001, 299: 213-221. 10.1016/S0378-4371(01)00298-9.

  48. Feynman RP: The Character of Physical Law. 1967, Cambridge, MA: MIT Press

  49. Anderson P: More is different. Science. 1972, 177: 393-396. 10.1126/science.177.4047.393.

  50. Weyl H: Symmetry. 1983, Princeton, NJ: Princeton University Press

  51. Frank SA: Measurement scale in maximum entropy models of species abundance. J Evol Biol. 2011, 24: 485-496. 10.1111/j.1420-9101.2010.02209.x.

Acknowledgements

I developed this work while supported by a Velux Foundation Professorship of Biodiversity at ETH Zürich. I benefited greatly from many long discussions with Paul Schmid-Hempel.

Grant information

National Science Foundation (USA) grants EF-0822399 and DEB-1251035 support my research.

Author information

Corresponding author

Correspondence to Steven A Frank.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Frank, S.A. Input-output relations in biological systems: measurement, information and the Hill equation. Biol Direct 8, 31 (2013). https://doi.org/10.1186/1745-6150-8-31
