Machine Learning Algorithms Simulating the Double Self Theory


A few months ago I came across Jean Pierre Garnier Malet’s theory of time doubling, and a generous intuition came to my head. Malet proposes that each human being exists simultaneously in two distinct temporal states: a perceptible self that lives the present in a linear way, and an imperceptible self that operates on an accelerated temporal scale, exploring potential futures before they manifest in our everyday reality. Intuition, according to this theory, would be the communication channel between these two states.
I immediately recognized something familiar in this description. When I translate the theory into mathematical terms, I intuitively understand a majestic and sophisticated quantum AI mechanism that is the Double Self: a quantum computer, filled with machine learning algorithms that can generate sophisticated probabilistic simulations, predict scenarios in their most subtle occurrences and deliver insights to the individual through synchronicities such as the sighting of a blue butterfly, a lizard on the path, premonitory dreams or warnings from loved ones.
The question that consumed me was: is it possible to mathematically simulate this concept? Can we computationally demonstrate how a quantum processor would explore futures, amplify probabilities of optimal scenarios and communicate through synchronistic events? I then dove into building a complete simulation that united quantum mechanics and machine learning algorithms to model the Double Self.
From Concept to Mathematics
The first task was to translate Malet’s theory into rigorous mathematical structures. If the Double Self explores potential futures on an accelerated temporal scale, this could be modeled as a quantum system operating in Hilbert space, where each state represents a possible future scenario. I began by defining a quantum state space with 100 potential futures, each represented by a normalized 10-dimensional complex vector. The choice of complex numbers is not arbitrary: they allow representing both magnitude and phase, fundamental properties of quantum states that capture probabilities and interference.
Each potential future has characteristics that can be measured through three main components: ontological coherence, which measures the alignment of the scenario with the individual’s existential purpose; entropic cost, which quantifies how much disorder or chaos that future would bring; and informational gain, which evaluates the potential for learning and growth. These three factors are combined in a utility function that assigns a scalar value to each future scenario, allowing the system to identify which futures are most favorable. What makes this genuinely quantum is not just the use of complex numbers, but the principle of superposition. All 100 futures exist simultaneously in quantum superposition, each with its probability amplitude. When we sum the squares of these amplitudes, we obtain the initial probability distribution, where each future has approximately 1% chance of manifesting. This is where the Double Self begins its work.
Optimal Futures
The heart of the system is the amplitude amplification algorithm, inspired directly by Grover’s algorithm from quantum computing. Grover developed an algorithm that allows searching for marked items in an unstructured database with a speed quadratically superior to classical methods. If a classical computer needs N operations to find an item among N possibilities, a quantum computer with Grover’s algorithm achieves it in approximately square root of N operations.
The mechanism works through controlled quantum interference. First, we apply an oracle operator that marks desirable futures by inserting a 180-degree phase change in their amplitudes. In practical terms, this means multiplying the amplitudes of these states by minus one, inverting their phase without altering the magnitude. Then, we apply a diffusion operator that reflects all amplitudes around the average amplitude, a process that can be thought of geometrically as a rotation in state space.
At each iteration of this process, the amplitudes of favorable futures grow while those of unfavorable ones decrease. It is an elegant mathematical dance where constructive interference strengthens desired states and destructive interference weakens undesired ones. After seven iterations of the algorithm, I achieved an amplification of 2.22 times in maximum probability, taking certain futures from 1% to approximately 2.2% probability. It may seem like a modest increase, but in a space of 100 possibilities, it means that 40 specific scenarios now stand out significantly from the background noise.
But why seven iterations and not more? I discovered something fascinating during testing: Grover’s algorithm oscillates. With 15 iterations, the amplification fell to only 1.15 times, almost returning to the beginning. This happens because the process is a continuous rotation in state space, and if you rotate too much, you pass the optimal point and begin to return. It is a clear manifestation that we are dealing with genuine quantum physics, where timing and phase are absolutely critical.

Temporal Trajectories
To simulate the Double Self actively exploring these futures on an accelerated temporal scale, I implemented a quantum Monte Carlo simulation engine.
Monte Carlo is a technique where you generate thousands of random trajectories to explore the space of possibilities, and the quantum version adds unitary evolution through Hamiltonian operators.
The Hamiltonian is the operator that governs the temporal evolution of the quantum system, analogous to total energy in classical mechanics. I built a random Hermitian Hamiltonian for the 10-dimensional space, ensuring it was self-adjoint, a mathematical property that ensures the eigenvalues are real and that evolution preserves the norm of quantum states. Then, for each trajectory, I evolved the initial state through 30 temporal steps using the exponential evolution operator of the Hamiltonian multiplied by the temporal acceleration factor. This acceleration factor, defined as one million, mathematically represents Malet’s proposal that the Double Self operates on a temporal scale millions of times faster than our perceptible time. At each step, I introduced small quantum noise to simulate environmental decoherence, because even biological quantum systems are not perfectly isolated from the environment. After simulating 500 complete trajectories, I computed the statistics of the endpoints, obtaining the distribution of most probable futures after this accelerated temporal exploration.
The Neural Network as Double Self Processor
A quantum computer exploring futures would not be useful without a system that learns to recognize patterns in these futures. Knowing this, I implemented a neural network with three hidden layers of 64, 32 and 16 neurons, trained to classify futures as optimal or suboptimal based on features extracted from Hilbert space. The network uses hyperbolic tangent activation function, which has interesting properties of smooth saturation and symmetry around zero, suitable for processing quantum features that can be negative. Training was supervised using amplified probabilities as labels. Futures with probability above 1% after quantum amplification were marked as optimal, and the network learned to predict this classification directly from the characteristics of each scenario. The result was 100% accuracy on the training set, indicating that the neural network managed to perfectly capture the patterns that the quantum algorithm was amplifying. More importantly, when tested with 20 completely new scenarios it had never seen before, the network maintained high predictive precision, demonstrating true generalization.
This neural network represents the machine learning aspect of the Double Self. It not only explores futures through quantum mechanics, but learns from these explorations, progressively refining its ability to identify favorable scenarios without needing to recalculate all the quantum amplification each time. It is an elegant hybrid between quantum computing and classical artificial intelligence.
Synchronicities
The fascinating aspect is that these events also reduce entropy. Each synchronicity decreased the system’s entropy by about 30% of the current value, totaling a reduction of approximately 30 bits of entropy through the 15 events. This means that synchronicities are actively organizing the system, reducing uncertainty and guiding the individual toward more coherent futures.
Temporal Entanglement
I implemented a correlation matrix between present states and future states. For each pair of states, I computed the inner product of their quantum state vectors and squared it to obtain probability correlation. The result was a 10×10 matrix where each element represents how much a specific present state is correlated with a specific future state. The average correlation was approximately 0.19, indicating weak but non-trivial correlations across time. However, the maximum correlation reached 0.92, showing that certain present-future pairs are extremely entangled. This high variance in correlation suggests that some futures are strongly determined by certain presents, while others are more independent.
This temporal entanglement matrix is analogous to the reduced density matrices used to quantify spatial entanglement, but applied to the temporal dimension. It is a way to mathematically visualize how decisions and present states are intertwined with specific potential futures through quantum correlations that transcend linear classical causality.
The Intuition Channel
Intuition, in this model, is a quantum communication channel subject to noise. I implemented a transmission protocol where the Double Self encodes messages about optimal futures in quantum signals, transmits through a noisy channel and ordinary consciousness receives degraded versions of these signals. The channel capacity, measured in bits per transmission, determines how much information can flow reliably.
Each transmission involves three steps: encoding, where the future state is multiplied by the utility value to create a signal; noise addition, where random quantum perturbations are added to the signal; and reception, where the noisy signal is renormalized. Transmission fidelity is calculated as the square of the inner product between the original and received signal, ranging from 0 (totally corrupted) to 1 (perfectly preserved). After 50 transmissions, the average fidelity was approximately 13.6%, a low value that reflects the difficulty of maintaining quantum coherence in a noisy biological channel. However, the channel capacity still reached 1.86 bits per transmission, sufficient to communicate non-trivial information. This aligns with the subjective experience of intuitions: they frequently arrive imprecisely, fragmented, subject to interpretation, but still carry useful information when correctly decoded.
Wave Function Collapse
Every quantum measurement involves wave function collapse, where the superposition of multiple possibilities reduces to a single reality. I simulated this process through iterative partial collapses, where each measurement projects the state onto a specific basis but not completely, maintaining a mixture between the collapsed state and the original state.

I started with a pure quantum state in maximum coherence. In each collapse event, I projected the state onto a new measurement basis, constructed from another vector in Hilbert space, and calculated the norm of the projected state to determine the probability of this measurement. Then, I mixed 70% of the collapsed state with 30% of the original state, simulating a smooth rather than abrupt collapse. Simultaneously, I introduced decoherence through mixing with a maximally mixed state, representing the progressive loss of quantum coherence as the system interacts with the environment.
The coherence measure, calculated as the sum of the off-diagonal elements of the density matrix, started at approximately 8.5 and gradually fell to about 7.0 after five collapse events. This reduction mathematically quantifies how the system loses its characteristic quantum properties and becomes more classical through decoherence. This is the process by which quantum possibilities transform into definite reality.
Results
After running the complete simulation on Kaggle, the results quantitatively validated the conceptual model of the Double Self. Of the 100 future possibilities explored, 40 were identified as optimal scenarios through quantum amplification, representing an optimization rate of 40%. The neural network classified these futures with 100% accuracy and demonstrated 40% learning efficiency, meaning it managed to generalize the optimality pattern to new unseen scenarios.
The intuition channel established 50 successful transmissions with capacity of 1.86 bits per transmission, while 15 synchronistic events were generated, each carrying an average of 3.15 bits of information.
The most frequent symbol, animal messenger, appeared in one third of the events, aligning with the idea that certain archetypes have greater perceptual salience. The system’s entropy was reduced by approximately 30 bits through synchronicities, mathematically demonstrating the organizing effect of these events.
Temporal entanglement showed average correlation of 0.19 and maximum of 0.92, confirming that present and future states maintain non-trivial quantum correlations. The amplification of 2.22 times in maximum probability proved that the quantum algorithm effectively doubled the chances of the most favorable futures manifesting.
Final Reflections
The complete code is available as a Jupyter notebook that anyone can run and modify. I used standard scientific libraries: NumPy for linear algebra, SciPy for quantum evolution operators, Scikit-learn for neural networks, Matplotlib for visualization.
Feel free to contact me via email for any needs: contact@antoniovfranco.com