The notion of a genetic programme in developmental biology has received many criticisms from a variety of standpoints (e.g., Keller 2003). Still, a more moderate assumption – which we shall call “the algorithmic commitment” – arguably guides the bulk of synthetic and systems biology. According to the algorithmic commitment, developmental processes can be described as sets of transition rules mapping specific genomic or environmental inputs onto corresponding outputs. Therefore, even though developmental processes are not merely the execution of a centralised genetic programme, they might be profitably understood in computational terms insofar as they are the product of distributed computation, for instance locally instantiated by interactive programmes in which the genome responds to environmental cues (Calcott 2021). In practice, synthetic and systems biologists apply this general principle by uncovering which portions of a metabolic network, cell or organ are materially responsible for carrying out the relevant biological computations (e.g., Alon 2007; Kirkpatrick 2021).
Daniel Nicholson (2019, 2021) has recently put forward a seemingly powerful criticism of the algorithmic commitment. On his view, computational (or, more generically, machine) processes are ultimately deterministic, whereas cellular behaviour is – at least in some respects – “inherently” stochastic. One of Nicholson’s examples is transcription activation. Deterministic (so-called “graded”) models of this phenomenon are predictively unreliable. According to Nicholson, this is because transcription activation is triggered by “random switching”: the variable flickering of transcription activation in the cell is driven by Brownian motion.
Nicholson is partly right. Ordinary computers, which are the clearest physical implementation of computing machines, work deterministically. Of course, we can use computers to simulate stochastic processes, but this is usually done by relying on pseudorandom generators, which are themselves deterministic. Nonetheless, genuine random generators have actually been available since the 1940s (RAND 2001). Most notably, quantum random generators – like Quantis (https://www.idquantique.com/random-number-generation/products/quantis-random-number-generator/) – rely on a source of randomness arguably similar to the one acting in transcription activation. In both cases, the processes generating certain outputs are “inherently” stochastic in the sense that they are affected by putative quantum-level indeterminacy.
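To make the point concrete, here is a minimal sketch of a pseudorandom generator – a linear congruential generator with textbook constants, chosen purely for illustration, not the generator of any particular library – showing that such simulated “randomness” is fully deterministic:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudorandom integers from a fixed seed.

    A linear congruential generator: each value is computed from the
    previous one by the fixed rule x -> (a*x + c) mod m. The constants
    are classic textbook parameters, used here only for illustration.
    """
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# Determinism: the same seed always reproduces the same "random" sequence.
assert lcg(42, 5) == lcg(42, 5)
```

The sequence looks statistically random, but nothing about it is chancy: rerunning with the same seed yields the same digits, which is exactly why pseudorandomness does not count as inherent stochasticity.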
One might object that, in gathering quantum indeterminacy, Quantis is not, strictly speaking, “computing”, but merely receiving inputs. Quantis’ computation proper consists in processing those stochastic inputs – which are not necessarily random in the statistical sense, since they can be slightly biased in the short to medium run – so as to make them equiprobable and independent. This goal is attained, again, by a deterministic algorithm. Rather than counting as a further counterexample to the algorithmic commitment, however, this helps to put the comparison between developmental and computational processes on the right track.
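A classic deterministic debiasing procedure of this kind is von Neumann’s extractor. We do not claim it is the specific algorithm Quantis implements; it simply illustrates how a deterministic rule can turn biased stochastic inputs into equiprobable, independent output bits:

```python
def von_neumann_extract(bits):
    """Deterministically remove bias from a stream of independent bits.

    Bits are read in non-overlapping pairs: 01 -> 0, 10 -> 1, while 00
    and 11 are discarded. If the input bits are independent with a fixed
    bias p, both surviving pairs occur with probability p*(1-p), so the
    output bits are exactly equiprobable.
    """
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# A heavily biased input stream still yields unbiased output bits.
print(von_neumann_extract([1, 1, 1, 0, 1, 1, 0, 1, 1, 0]))  # -> [1, 0, 1]
```

Note the division of labour: the stochasticity comes entirely from the physical input, while the processing that guarantees statistical well-behavedness is a deterministic algorithm – just the combination the argument in the main text relies on.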
As a matter of fact, neither Quantis nor cells seem to process information stochastically. The quantum-level phenomena that trigger transcription activation do not percolate through all developmental processes at higher levels of organisation. The stochasticity of Brownian motion is harnessed so as to generate statistically expectable outputs, and it is arguable that this is because the quantum-level inputs are further processed by deterministic mechanisms. We shall illustrate this point with a short discussion of genetic mutations.
Consider a scenario in which a UVB photon causes a pre-mutational lesion in a DNA molecule. Even granting that the photon’s causing such a lesion is an event of quantum indeterminacy, the pre-mutational lesion will not necessarily convert into a mutation – that is, a stable change in the genome that, through DNA replication, will be inherited by daughter cells. The reason is that the process of mutation is strongly regulated by proofreading and quality-control mechanisms that edit out most pre-mutational lesions (Stoltzfus 2021). This shows that, despite the putative existence of quantum effects at the initial stages of the causal chain (i.e., an indefinite number of pre-mutational lesions triggered by putatively indeterministic quantum causes), the final outcome (i.e., the daughter DNA molecule) is remarkably stable (i.e., it faithfully matches the original template). In brief, inherent stochasticity is “filtered out” by regulation occurring at later stages of the causal chain. But, if this is the case, there may be no genuine disanalogy between the cell and Quantis and, therefore, no conclusive reason to reject the algorithmic commitment.
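The filtering idea can be sketched in a toy simulation. The lesion and repair rates below are illustrative assumptions, not measured values, and the model deliberately abstracts from molecular detail; the point is only that stochastic pre-mutational lesions, once subjected to efficient proofreading, almost never surface as mutations:

```python
import random

def replicate(template, lesion_rate=0.01, repair_efficiency=0.999):
    """Toy model of replication with stochastic lesions and proofreading.

    Each base may suffer a pre-mutational lesion (stochastic, standing in
    for the putatively indeterministic quantum trigger); proofreading then
    edits out almost all lesions, so the daughter strand faithfully
    matches the template. All rates are illustrative assumptions.
    """
    daughter = []
    for base in template:
        if random.random() < lesion_rate:                # stochastic lesion
            if random.random() < repair_efficiency:      # lesion edited out
                daughter.append(base)
            else:                                        # rare fixed mutation
                daughter.append(random.choice([b for b in "ACGT" if b != base]))
        else:
            daughter.append(base)
    return "".join(daughter)

template = "ACGT" * 250  # a 1,000-base toy genome
daughter = replicate(template)
mismatches = sum(a != b for a, b in zip(template, daughter))
# With these rates, ~10 lesions are expected per replication, yet the
# daughter strand almost always matches the template base for base.
```

Upstream stochasticity is thus compatible with downstream stability: the statistically expectable outcome is produced precisely because later-stage deterministic regulation filters the noise.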
Alon, U. (2007) An Introduction to Systems Biology: Design Principles of Biological Circuits. Boca Raton, FL: Chapman & Hall/CRC.
Calcott, B. (2021) A Roomful of Robovacs: How to Think about Genetic Programs. In: Holm, S. & Serban, M. (eds.) Philosophical Perspectives on the Engineering Approach in Biology: Living Machines?, pp. 69–78. New York: Routledge.
Keller, E.F. (2003) Making Sense of Life. Explaining Biological Development with Models, Metaphors, and Machines. Cambridge, MA: Harvard University Press.
Kirkpatrick, K. L. (2021) Biological Computation: Hearts and Flytraps. Journal of Biological Physics, 48, 55–78.
Nicholson, D.J. (2019) Is the Cell Really a Machine? Journal of Theoretical Biology, 477, 108–126.
_____ (2021) On Being of the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology. In: Holm, S. & Serban, M. (eds.) Philosophical Perspectives on the Engineering Approach in Biology: Living Machines?, pp. 40–68. New York: Routledge.
RAND (2001) A Million Random Digits with 100,000 Normal Deviates. Santa Monica, CA: RAND Corporation.
Stoltzfus, A. (2021) Mutation, Randomness, and Evolution. Oxford: Oxford University Press.