Unpredictability and Randomness

Rade Vuckovac
Abstract
Randomness is somewhat the opposite of determinism. This essay tries to
put the two on the same page. It argues the premise that randomness
is a consequence of a deterministic process. It also provides yet another
viewpoint on hidden variable theory.

1 Introduction
A discussion of randomness should include some description of unpredictability.
A quoted passage from an explanation of true randomness makes a
good starting point for our inquiry (bold emphasis added):

If outcomes can be determined (by hidden variables or whatever),
then any experiment will have a result. More importantly, any ex-
periment will have a result whether or not you choose to do that
experiment, because the result is written into the hidden variables
before the experiment is even done. Like the dice, if you know all
the variables in advance, then you don't need to do the experiment
(roll the dice, turn on the accelerator, etc.). The idea that every
experiment has an outcome, regardless of whether or not you choose
to do that experiment, is called the reality assumption, and it should
make a lot of sense. If you flip a coin, but don't look at it, then it'll
land either heads or tails (this is an unobserved result) and it doesn't
make any difference if you look at it or not. In this case the hidden
variable is heads or tails, and it's only hidden because you haven't
looked at it.
It took a while, but hidden variable theory was eventually disproved
by John Bell, who showed that there are lots of experiments that can-
not have unmeasured results. Thus the results cannot be determined
ahead of time, so there are no hidden variables, and the results are
truly random. That is, if it is physically and mathematically
impossible to predict the results, then the results are truly,
fundamentally random. [1]

Some deterministic systems that show unpredictable behaviour are:

Chaos Theory; It shows one exciting feature not usually found in classical
systems: predicting the long-term state of a system from its approximate
initial conditions is a difficult if not impossible task. Edward
Lorenz sums it up as:

Chaos: When the present determines the future, but the approx-
imate present does not approximately determine the future.

The following points show unpredictability in a little more detail:

• Butterfly Effect; An interesting and not widely emphasised point
about the Butterfly Effect is [2, 3]:
Even a tiny change, like the flap of a butterfly's wings, can
make a big difference for the weather next Sunday. This
is the butterfly effect as you have probably heard of it. But
Edward Lorenz actually meant something much more radical
when he spoke of the butterfly effect. He meant that for
some non-linear systems you can only make predictions for
a limited amount of time, even if you can measure the tiniest
perturbations to arbitrary accuracy.
• n-body problem; Even in Newton's time, the motions of more than
two orbiting bodies were considered a problem. Currently, we are
left with numerical methods and simulations. The former are approx-
imations and prone to the butterfly effect. The latter are essentially
computational experiments, which are the preferable option for
investigating n-body systems [4, 5].
Cellular Automaton (CA); A CA is a deterministic model consisting of a grid
populated by cells. The grid is arranged in one- or two-dimensional
space. An evolution rule governs how the initial state of the cells evolves to
the next generation. Figure 1 shows one-dimensional CA rules and an
evolution history. One of the interesting CA features is the concept of
Computational Irreducibility (CI). It proposes that the only way to deter-
mine the future state of a CA is to run it, which is very similar to the "the
results cannot be determined ahead of time" principle in the hidden variable
argumentation.

2 CA a Closer Look
While Chaos Theory provides us with dynamical systems showing unpredictable
behaviour, the CI principle is probably more accessible for discussion.
Wolfram's rule 30 CA is a good starting point. Figure 1 shows the transition
rules at the top. Every case prescribes how a cell (black or white) is transformed
depending on the previous cell's state and its neighbouring cells. Row 1 is the
initial state of the CA. Rows 2, 3, 4, ... are consecutive evolved generations. A next-
generation cell is derived from a previous cell and its neighbours. For example,
the cell (row 4, column 13) is derived from case 7. When a cell does not have
a neighbour in the row above, the cell from the other end of the row is considered. So, the cell (16; 1)
uses case 7 (again) because its top-left neighbour is the cell (15; 31).
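The rule 30 transition and the wrap-around behaviour described above can be sketched in a few lines of Python (a minimal illustration with 0 for white and 1 for black, not Wolfram's implementation; the function names are ours):

```python
def rule30_step(row):
    """One evolution step of rule 30 with wrap-around at the row ends.

    For a cell with neighbours (left, centre, right), rule 30 gives
    new = left XOR (centre OR right)."""
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

def centre_column(width=31, steps=16):
    """Evolve from a single black cell and collect the centre column."""
    row = [0] * width
    row[width // 2] = 1
    column = [row[width // 2]]
    for _ in range(steps - 1):
        row = rule30_step(row)
        column.append(row[width // 2])
    return column
```

Running `centre_column(steps=7)` reproduces the beginning of the random-looking sequence from Figure 1: 1, 1, 0, 1, 1, 1, 0.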
There is a contest, with a prize, for solving three CA rule 30 problems [6]. All
three of them are relevant to the randomness discussion:

Problem 1: Does the center column always remain non-periodic? The centre col-
umn is column 16, outlined in red in Figure 1. Periodicity has been checked for
approximately the first billion steps, and the centre column does not show periodicity. In
effect, it has very similar properties to the number π.

Problem 2: Does each color of cell occur on average equally often in the center
column? The ratio of black to white cells approaches 1 : 1 as the
step iterations increase. After one billion steps, the centre column has
500025038 black and 499974962 white cells. In a way, it mimics a fair coin-
flipping exercise.

Problem 3: Does computing the nth cell of the center column require at least
O(n) computational effort? Stephen Wolfram strongly suspects that rule
30 is CI. In that case, even if the initial state and the transformation rules
are known, the quickest way to see the future of the state is to run the CA
(to experiment).

Figure 1: CA rule 30. Transformation rules and evolution history. Column 16 acts
as a random sequence.

3 Conditional Branching
While chaos theory and CA provide evidence of future inaccessibility, there
is an even more persuasive argument that we cannot acquire the future state of
some systems without experimenting.
Conditional branching, an underlying algorithmic construct, is the essential
ingredient of the argument. Conditional branching, together with the other two primitives,
sequence and iteration, provides all the necessary blocks to build any algorithm
imaginable [7]. It is the if-else statement in a program. When the program
reaches an if-else command, it evaluates some state and, depending on the evaluation,
continues with the if or the else path. Usually, when the program is made, all the
possible inputs and their eventual execution paths (EP) are thoroughly defined.
Figure 2 shows the domain partition, where inputs are partitioned according to their
joint EP. For example, one step of the 3x + 1 problem is:

       f(x) if x ≡ 0 mod 2
F(x) =                                                        (1)
       g(x) if x ≡ 1 mod 2
where f(x) = x/2 and g(x) = (3x + 1)/2. We know that an even input
will be processed by f(x) and an odd input by g(x).
The problem starts when this procedure is iterated (Figure 3). Every iter-
ation doubles the number of unique EPs. For example, if our inputs are 64-bit
integers and we iterate Equation 1 sixty-four times, the number of possible paths is
Figure 2: Partition mappings, when a program has multiple paths of execution.

EP ≤ 2^64. In that case, partitioning the domain by execution paths (Figure 2) seems
an impossible task, and that is probably why this problem is still
unsolved even though it deals with very basic arithmetic [8].

Figure 3: The 3x + 1 composite function; f (x) = x/2 and g(x) = (3x + 1)/2.
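One step of Equation 1, together with the recording of an execution path, can be sketched as follows (the path is written here as a bit string of branch decisions; the function names are illustrative):

```python
def collatz_step(x):
    """One step of Equation 1: f(x) = x/2 on the even branch,
    g(x) = (3x + 1)/2 on the odd branch."""
    if x % 2 == 0:
        return x // 2
    return (3 * x + 1) // 2

def execution_path(x, iterations):
    """Record which branch is taken at every iteration.

    The only known way to learn the path is to run the iteration."""
    path = ""
    for _ in range(iterations):
        path += "1" if x % 2 else "0"   # the branch decision for this step
        x = collatz_step(x)
    return x, path
```

For instance, `execution_path(7, 5)` returns `(20, "11101")`: the path had to be computed step by step, and with 64-bit inputs iterated 64 times there are up to 2^64 candidate paths.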

Now we can ask: is every program with multiple execution paths domain
partition-able?

Informal Theorem 1. There exists at least one algorithm with multiple unique
execution paths where the path that execution takes is known only after
input evaluation. (Draft [9])

Proof. Assume the opposite: that every input partitioning can be done
efficiently before execution. At first glance, that is a reasonable statement,
because when a program is specified correctly, every case's behaviour should be
fully defined in advance. On the other hand, there are at least two problems
with that:

• Programs are not always made with a purpose. A program could be
written with arbitrarily many branching statements positioned anywhere
in the program. If it compiles, it will run, but its output behaviour will be
unpredictable.

• If every program is partition-able, then the branching algorithmic structure
is redundant. In practice, we would know which path to execute for any
input in advance, without needing to evaluate the branching statement.
Having algorithms without conditional branching contradicts what we presently
believe:
– Encyclopedia Britannica, conditional branching entry:
The Analytical Engine had control transfer, also known as con-
ditional branching, whereby it would be able to jump to a
different instruction depending on the value of some data.
This extremely powerful feature was missing in many of the
early computers of the 20th century. [10]
– Turing Completeness, Wikipedia:
To show that something is Turing complete, it is enough to
show that it can be used to simulate some Turing complete
system. For example, an imperative language is Turing com-
plete if it has conditional branching. [11]

3.1 Conditional Branching Candidates


While systems whose inputs cannot be partitioned exist, it is hard to
identify one. Some candidates are:

Collatz conjecture; Equation 1 is one step of the procedure. The conjec-
ture asserts that every natural number reaches 1 after some number of steps. To
paraphrase Lagarias [12], the 3x + 1 transformation appears to be without
any mathematical structure, and every arbitrarily chosen step (branching
decision) behaves like a fairly flipped coin.
CA rule 30; The English description of the rule has exactly the same form as Equation 1
(bold added):
Look at each cell and its right-hand neighbor. If both of these
were white on the previous step, then take the new color of the
cell to be whatever the previous color of its left-hand neighbor
was. Otherwise, take the new color to be the opposite of that. [13]
Rule 30 was used as a random number generator in Wolfram's Mathematica.
Möbius function; It is another mathematical object with a branching struc-
ture and emergent randomness. It appears that the Riemann hypothesis,
random walks and the Möbius function are closely related; see the Math
StackExchange discussion in [14]. Equation 2 shows the Möbius function and its
branching structure.


μ(n) = 0       if n has one or more repeated prime factors
       1       if n = 1
       (−1)^k  if n is a product of k distinct primes          (2)

[15]
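The branching structure of Equation 2 translates directly into code (a straightforward trial-division sketch, not optimised):

```python
def mobius(n):
    """Möbius function, following the branching structure of Equation 2."""
    if n == 1:
        return 1
    k = 0                      # count of distinct prime factors found
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:     # the repeated-prime-factor branch
                return 0
            k += 1
        else:
            d += 1
    if n > 1:                  # one leftover prime factor
        k += 1
    return -1 if k % 2 else 1  # (−1)^k for k distinct primes
```

The first values μ(1), ..., μ(10) come out as 1, −1, −1, 0, −1, 1, −1, 0, 0, 1.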

Branching CA (BCA); This CA was at first designed as a toy model to
investigate the if-else structure and its impact on state transitions.
The CA evolution step has a now familiar structure:

c′ = c ⊕ ¬Ai+1 , if Ai+2 > Ai+3
c′ = c ⊕ Ai+1 ,  otherwise                                     (3)

where ⊕ is exclusive or and ¬Ai+1 is the one's complement of the cell Ai+1. This
particular CA is used as a cryptographic primitive for cipher design [16].
Figure 4, using the branching CA, illustrates all the chaos theory features, including
the butterfly effect.

Figure 4: Two cellular automata evolutions where the initial state (top left six pixels
plus top black region) differs by just one bit (one bit in a 4096-bit state).
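Equation 3 can be read as the following sketch (32-bit cells and wrap-around indexing are assumptions of this illustration; the full primitive in [16] has more machinery around this step):

```python
MASK = 0xFFFFFFFF  # 32-bit cells assumed for this sketch

def bca_update(A, c, i):
    """One application of Equation 3 to the accumulator c.

    The branch taken depends on a comparison of state cells, so the
    execution path itself is state-dependent."""
    n = len(A)
    a1 = A[(i + 1) % n]
    if A[(i + 2) % n] > A[(i + 3) % n]:
        return c ^ (~a1 & MASK)   # c XOR one's complement of A[i+1]
    return c ^ a1                 # c XOR A[i+1]
```

For example, with A = [1, 2, 3, 4] the call at i = 0 takes the else branch (3 > 4 is false), while the call at i = 1 takes the complement branch because of the wrap-around comparison.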

4 Measuring Randomness
While all the mentioned candidates show randomness, measuring it is a seemingly
tricky endeavour. The Turing test for intelligence [17] may provide a
basis for randomness evaluation. A similar idea was explored in cryptography
as well [18].
We have three entities in the Turing AI test: a human, a machine and an in-
terrogator. They are in three separate rooms, and their communication is in
question/answer form, where the interrogator asks and the other two parties
respond. The evaluation of the answers should determine who the human is. If a
distinction is inconclusive, the machine could be considered intelligent.
For the randomness game, we have the true-randomness side represented by a Random
Oracle (RO), the pseudo-randomness side represented by BCA, and an interrogator (IN).
As in the AI test, IN communicates with RO and BCA in question/answer
(input/output) format. IN's task is to decide which party produces truly random
output. If the distinction cannot be made, the true and pseudo qualifiers are
redundant.

4.1 Random Oracle
The RO concept provides a specific tabular representation which we can categorise
as a truly random mapping:
The following model describes a random oracle:
• There is a black box. In the box lives a gnome, with a big book
and some dice.
• We can input some data into the box (an arbitrary sequence of
bits).
• Given some input that he did not see beforehand, the gnome
uses his dice to generate a new output, uniformly and randomly,
in some conventional space (the space of oracle outputs). The
gnome also writes down the input and the newly generated out-
put in his book.
• If given an already seen input, the gnome uses his book to re-
cover the output he returned the last time, and returns it again.
[19]

Domain (N)    Range (N)
input 1       random string 1
input 2       random string 2
input 3       random string 3
...           ...

Table 1: Random Function

The RO model is believed to be an imaginary construct, useful in cryptographic
secure-protocol argumentation but not realisable in practice. The use of
dice in the description is assumed to be an entirely random process. In other
words, if we need an RO in practice, we have to use a random function. Its main
feature is that every output is an independent, random string. That implies
the use of a gigantic input/output table (Table 1). That table cannot be com-
pressed, rendering the exercise impractical. More details about RO can be
found in [20].
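The gnome, the dice and the book of the model above map directly onto a dictionary and a cryptographic randomness source (a sketch of the model only; any concrete program can merely approximate an RO):

```python
import secrets

class RandomOracle:
    """Lazy random function: a fresh random output for each unseen
    input, a replayed output (from the 'book') for repeated inputs."""

    def __init__(self, out_bytes=16):
        self.book = {}              # the gnome's book
        self.out_bytes = out_bytes  # size of the oracle output space

    def query(self, x: bytes) -> bytes:
        if x not in self.book:      # unseen input: roll the dice
            self.book[x] = secrets.token_bytes(self.out_bytes)
        return self.book[x]         # seen input: replay from the book
```

Repeating a query returns the recorded answer, exactly as in the gnome description; the impractical part is that the book grows by one incompressible entry per distinct input.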

4.2 BCA
The BCA candidate (mentioned earlier) plays the pseudo-random oracle. Details are in
Figure 5. We are considering two variants:

• In the first variant, IN sees only inputs and corresponding outputs (red and
blue in Figure 5).
• In the second variant, IN has full knowledge of the BCA algorithm
but only partial knowledge of the initial state (the green
part in Figure 5 is unknown). If IN wants to check whether an observed pair comes from the BCA by
running it, the green part has to be replaced with some provisional part.

Figure 5: CA puzzle. The CA evolution history is partitioned into an input (red) and
an output (blue). The output is a partial state of the last CA evolution cycle.

4.3 Distinguishing Chances

Examining BCA input/output pairs reveals a couple of interesting things: similar
inputs produce very different outputs (Figure 4), and when numerous i/o
pairs are put in tabular form, the table starts to resemble the random structure
of Table 1. This set-up is also similar to Alan Turing's challenge scenario:

I have set up on the Manchester computer a small programme us-
ing only 1,000 units of storage, whereby the machine supplied with
one sixteen-figure number replies with another within two seconds.
I would defy anyone to learn from these replies sufficient about the
programme to be able to predict any replies to untried values. [17]

In the case of knowing the rules and a partial initial state, the inability to correlate
i/o pairs remains. It also prevents IN from checking whether observed data comes from the BCA.
The case of partially knowing the BCA initial state might be analogous to the
cause of RO randomness. Generally, ignorance of the initial state comes from practical
reasons. The difference between the real state and the assumed state might be tiny, yet it
cannot be neglected. We can see what a one-bit difference in a 4096-bit state can do to the BCA
evolution (Figure 4). There is a good chance that RO and BCA randomness
have the same origin. In both (RO and BCA), how the state evolves and
what the partial initial state is are known. Both of them show randomness (true and
pseudo, respectively), and both of them are deterministic processes.
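The one-bit sensitivity can be demonstrated with a small self-contained experiment (a toy branching map in the spirit of Equation 3, not the actual cipher from [16]):

```python
MASK = 0xFFFFFFFF  # 32-bit toy cells

def sweep(state):
    """One full sweep of a BCA-like branching update.

    The accumulator c is updated as in Equation 3 and folded back into
    every cell, so a branch decision anywhere affects everything after it."""
    n = len(state)
    c = 0
    out = []
    for i in range(n):
        a1 = state[(i + 1) % n]
        if state[(i + 2) % n] > state[(i + 3) % n]:
            c = c ^ (~a1 & MASK)
        else:
            c = c ^ a1
        out.append((state[i] + c) & MASK)
    return out

def hamming(xs, ys):
    """Number of differing bits between two cell lists."""
    return sum(bin(x ^ y).count("1") for x, y in zip(xs, ys))

s1 = list(range(1, 17))   # 16 cells = a 512-bit toy state
s2 = s1[:]
s2[0] ^= 1                # flip a single bit
for _ in range(32):
    s1, s2 = sweep(s1), sweep(s2)
```

After the 32 sweeps, `hamming(s1, s2)` is expected to be large (on the order of half of the 512 state bits), but the only way to learn the exact value is to run the sweeps, which is the point of the essay.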

5 Speculations
The main point raised in this essay is the computational barrier imposed by
conditional branching. That barrier forces input evaluation during program
execution. Even with knowledge of the input and the algorithm, predicting the
output is impossible. The only way to discover the output is to run the algorithm
(to evaluate the input). The absence of patterns when input/output pairs of such
an algorithm are analysed is caused by that barrier.
If we assume that the conditional barrier is relevant to physical reality and
in the context of:
Thus the results cannot be determined ahead of time, so there are no
hidden variables, and the results are truly random.
we can speculate: yes, the results cannot be determined ahead of time but,
paradoxically, that does not necessarily exclude determinism.
The misunderstanding might come from the definitions of determinism,
because different meanings are possible and they are usually not noted in argumen-
tations. There are at least two kinds of determinism with very different properties.
The differentiation occurs when we need to access state details at a particular
point in time. For example, imagine a special kind of video player which can
play only two movies: a 2-body digital simulation and its sequel, an n-body digital
simulation. While watching the 2-body movie, we can fast-forward it if we
are bored. While watching the n-body movie, we notice that fast-forward is not
working any more, and if we want to see what happens at the end, we have to watch
the whole movie.
From there, we can identify potential inconsistencies when dealing with de-
terminism. In the accepted hidden variable theory (HVT) narrative, a cursory
look at Bell's inequality [21] and HVT indicates that we have determinism
acting locally, with a corresponding distribution of probable outcomes which
is never confirmed by experiment. Since the quantum distribution matches the
observed one, we conclude that quantum phenomena are non-deterministic.
On the other hand, we have the determinism of the n-body movie. The desired
state details are known only when they happen. Any imaginable distribution
information is obtained by observation only. It is not evident how this kind of
determinism fits into the HVT context, or how its distributions could be violated by
experiment.
Finally, there is a parallel with the apparent quantum weirdness in the
algorithmic world, the 3x + 1 problem story: we have a state (a natural number). The state will
go through the transition shown in Figure 3. Before the measurement, the
state is in a superposition of a number of possible execution paths. Only after
running the algorithm (the measurement) does the superposition collapse, and we
know the execution path and the transition result.
The 3x + 1 problem is still unsolved. Support for the conjecture comes from
experimental evidence: all numbers between 1 and ≈ 2^59 reach 1 as
postulated. Another support comes from a heuristic probability argument which shows
that every multiplication (by 3) is matched by two divisions (by 2) on average, which
indicates that a starting number shrinks in the long run [8]. Both of these supports,
indeed everything we know about the phenomenon, come from experiments and probability
distributions, even though it is a deterministic process.

References
[1] Ask a Physicist. Do physicists really believe in true randomness? [Link]
[2] Sabine Hossenfelder. The 10 most important physics effects. [Link]
[3] Edward N Lorenz. The predictability of a flow which possesses many scales
of motion. Tellus, 21(3):289–307, 1969.
[4] Kathleen T Alligood, Tim D Sauer, and James A Yorke. Chaos. Springer,
1996.
[5] Michele Trenti and Piet Hut. N-body simulations (gravitational). Scholar-
pedia, 3(5):3930, 2008.
[6] The Wolfram Rule 30 Prizes. [Link]
[7] Corrado Böhm and Giuseppe Jacopini. Flow diagrams, turing machines
and languages with only two formation rules. Communications of the ACM,
9(5):366–371, 1966.
[8] Collatz conjecture. [Link]
[9] Rade Vuckovac. On function description. arXiv preprint arXiv:2003.05269,
2020.
[10] Conditional branching. [Link]
[11] Turing completeness. [Link]
[12] Jeffrey C. Lagarias. The 3x + 1 problem: An overview. [Link]
[13] Stephen Wolfram. A New Kind of Science (rule 30 alternative description, p. 27). [Link]
[14] Riemann hypothesis, random walks and Möbius function. [Link]
[15] Eric W. Weisstein. Möbius function. From MathWorld, a Wolfram web resource. [Link]
[16] Rade Vuckovac. Secure and computationally efficient cryptographic primi-
tive based on cellular automaton. Complex Systems, 28(4):457–474, 2019.
[17] Alan M Turing. Computing machinery and intelligence (1950). The Essen-
tial Turing: The Ideas that Gave Birth to the Computer Age. Ed. B. Jack
Copeland. Oxford: Oxford UP, pages 433–64, 2004.
[18] Oded Goldreich. Randomness, interactive proofs, and zero-knowledge–a
survey. In The Universal Turing Machine: A Half Century Survey. Citeseer,
1988.
[19] What is the random oracle model and why is it controversial? [Link]
[20] What is the random oracle model and why should you care? [Link]
[21] John S Bell. On the einstein podolsky rosen paradox. Physics Physique
Fizika, 1(3):195, 1964.
