We live in a world where financial institutions hold vast portfolios, which often include shares in other financial institutions. In this situation, could a particular transaction lead to a financial crash?
This is an incredibly complex question. It’s so complex that traditional supercomputers are incapable of tackling it, even for simple toy models. Even given complete knowledge of all the assets and cross-holdings in a simple network of 20–30 institutions, it would take more time than the age of the universe — 13.7 billion years! — to compute the effect of a perturbation.
At present, we mainly rely on empirical or statistical tools to answer this question. It is not clear that these methods can systematically and reliably predict financial crashes: the indicators developed after one crisis generally fail to predict the next. This inability to foresee such events leaves us unable to prevent economic crises and their devastating consequences.
While the situation might seem dire, quantum computing shows promise for tackling these types of problems efficiently, both in theory and in practice. In particular, it has already been applied with encouraging results to a range of complex financial problems.
A word or two about quantum annealing
In our article, we developed a quantum algorithm for predicting financial crashes on a quantum annealer. These quantum processors solve problems using nature’s remarkable ability to find the lowest-energy state of a complex system, kind of like how water always finds its way to the lowest valley in a chain of mountains. Let me say a word or two about how we go about solving problems on a quantum annealer.
First, we need a cost function. This is simply the function I want to minimize to solve my optimization problem. Here, I am predicting a financial crash, so the cost function is the stability condition of our financial network. By minimizing this cost function, I can find all stable configurations of my financial network and ask: did a financial crash occur on the way to this configuration?
We then show that there exists a quantum mechanical system which encodes precisely our cost function. We can therefore find the stable state of our financial network simply by building our model system, and measuring its lowest energy state.
The quantum annealer is essentially a versatile machine which can simulate any (or at least, many) quantum materials. The trick is to drive the simulated material to its most stable state, just like the water example, and then to measure it. If you’re interested in learning more about the technical details of this process, I invite you to have a look at this great introduction to quantum annealers.
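As a toy illustration of this process, here is a minimal sketch in plain Python (with made-up coefficients, not a problem from our paper) of what the annealer does physically: searching over all configurations of a small cost function for the lowest-energy one. A real annealer explores this landscape in hardware rather than by brute force.

```python
import itertools

# Toy QUBO cost function: the annealer physically finds the bit string x
# minimizing sum of Q[i,j] * x[i] * x[j]. Here we brute-force it classically.
# The coefficients below are hypothetical, chosen only for illustration.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear (diagonal) terms
    (0, 1):  2.0, (1, 2):  2.0,                  # quadratic couplings
}

def cost(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustive search over all 2^3 configurations stands in for annealing.
best = min(itertools.product([0, 1], repeat=3), key=cost)
print(best, cost(best))  # → (1, 0, 1) -2.0
```

The couplings penalize neighboring qubits from being 1 at the same time, so the lowest-energy configuration activates the two qubits that do not interact.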
The Financial Model
We base our study on a simple model for financial networks originally proposed by Elliott, Golub, and Jackson. With their model, we can — in principle — calculate the value of all the assets in the network (though in practice no computer in the world can do this!).
Financial institutions, like countries, banks, and companies, can own shares of the assets. The model also allows for inter-dependencies between the institutions, such as institutions owning shares of other institutions and debt contracts between institutions.
Now imagine a transaction has taken place. I want to calculate the values of all institutions once the system has stabilized; we call these the equilibrium values v. The value of any given institution is simply the value of all its equity, scaled by its self-ownership (the fraction of itself that it owns):
v = C V.
No surprises so far. To put it in mathematical form, I’ve defined V, the total equity values of all institutions, and C, the self-ownership matrix.
But in reality, if the market value of an institution drops below a certain critical threshold, the institution is no longer able to pay its own operating costs and may lose its investors’ faith. To model this behavior, Elliott, Golub, and Jackson introduce the notion of failure: if the value of an institution drops below a certain critical value, the institution suffers an extra drop in value, modeled by the function F(v):
v = C V − F(v).
In practice, F(v) is a discontinuous function, which makes it extremely difficult to determine the market values of institutions after a small change in the network.
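To make this concrete, here is a sketch of the equilibrium condition on a hypothetical three-institution network (all numbers are invented for illustration; this is not a network from our paper). It iterates v = C V − F(v) naively and shows the discontinuous failure term kicking in for one institution:

```python
import numpy as np

# Hypothetical 3-institution network, numbers chosen only for illustration.
C = np.diag([0.8, 0.7, 0.9])         # self-ownership matrix
V = np.array([100.0, 80.0, 120.0])   # total equity values

v_crit = 60.0   # critical threshold below which an institution fails
beta = 20.0     # extra drop in value suffered on failure

def F(v):
    # Discontinuous failure term: a fixed penalty once value drops below v_crit.
    return np.where(v < v_crit, beta, 0.0)

# Naive fixed-point iteration on v = C V - F(v). Convergence is not
# guaranteed for a discontinuous F, which is exactly the difficulty above.
v = C @ V
for _ in range(50):
    v = C @ V - F(v)
print(v)  # institution 2 fails (56 < 60) and drops further, to 36
```

In this toy run the second institution lands below the threshold and suffers the extra drop, while the other two settle at their undisturbed values.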
The question now is: given a transaction, how many failures are going to occur? Our aim is therefore to find the network configuration which satisfies the financial stability condition (the above equation), and determine if, in this configuration, institution failures occur on a massive scale.
Expression as a cost function
To express this problem as an optimization problem, we simply write down the function:
H(v) = [ v − C V + F(v) ]².
Any set of values v which minimizes H(v) must also satisfy the financial equilibrium condition (there always exists at least one equilibrium: the trivial solution). We have therefore re-expressed our problem — finding the equilibrium values of all assets in the network — as an optimization problem: to find the minimum of H(v).
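As a sanity check on this reformulation, one can minimize H(v) by brute force on a tiny, hypothetical two-institution example. Here H penalizes deviations from the equilibrium condition v = C V − F(v), and a coarse grid of candidate values stands in for the qubit discretization (all numbers are illustrative):

```python
import numpy as np
from itertools import product

# Hypothetical 2-institution network, numbers chosen only for illustration.
C = np.diag([0.8, 0.7])
V = np.array([100.0, 80.0])
v_crit, beta = 60.0, 20.0

def F(v):
    return np.where(v < v_crit, beta, 0.0)

def H(v):
    # Squared residual of the equilibrium condition v = C V - F(v).
    r = v - (C @ V - F(v))
    return float(r @ r)

# Brute-force minimization over a coarse grid of candidate values.
grid = np.arange(0.0, 121.0, 1.0)
best = min(product(grid, repeat=2), key=lambda v: H(np.array(v)))
print(best, H(np.array(best)))
```

The minimizer found on the grid drives H(v) exactly to zero, and it is precisely the equilibrium in which the second institution has failed.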
In our paper, we decided to present H(v) in a more explicit form. Have a read if you want to gain a deeper intuition!
Quantize H(v), and the machine does the rest!
The function H(v) determines the way in which the asset values interact: H(v) controls how the value of asset A is affected if I increase the value of asset B. To find the optimal set of values v using a quantum annealer, we perform a mathematical trick: we quantize H(v). This means we replace the value variables v by operators — mathematical objects which act on the quantum processor’s qubits and determine their state (0, 1, or some superposition of these states).
The next step is to enter the quantized H(v) into the quantum annealer, which finds its most stable state. It is this state that encodes the equilibrium values of all assets in the financial network. The rest is easy: we calculate the values of all institutions and check that none of them drops below its critical value. In the future, this could allow us to proceed responsibly, ensuring that none of our actions can cause financial havoc.
OK, I might have swept a few details under the carpet
You need to discretize H(v)
One thing to remember is that qubits actually encode information in a discrete way. This is exactly the same problem as with a digital computer: continuous variables, like the values of our assets, need to be encoded in discrete, digital bits. The only difference here is that we choose to encode the information (the asset values) in a quantum state, rather than as a chain of zeros and ones.
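A minimal sketch of such an encoding, assuming a hypothetical 4-qubit register that spans asset values from 0 up to some chosen maximum (both the register size and the range are illustrative choices, not the ones from our paper):

```python
# Each asset value is stored in a register of qubits, just as a digital
# computer stores a real number in bits. With n_bits qubits we can
# represent 2**n_bits evenly spaced values between 0 and v_max.
n_bits = 4
v_max = 150.0

def decode(bits):
    # bits: tuple of 0/1 measurement outcomes, most significant first.
    integer = sum(b << (n_bits - 1 - i) for i, b in enumerate(bits))
    return v_max * integer / (2**n_bits - 1)

print(decode((1, 0, 0, 0)))  # → 80.0 (8/15 of v_max)
print(decode((1, 1, 1, 1)))  # → 150.0 (the maximum encodable value)
```

More qubits per asset means a finer grid of representable values, at the cost of a larger problem for the annealer.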
We need to deal with discontinuous functions
The failure terms which we introduced are discontinuous. This is a problem because such terms are difficult to encode as interactions between qubits, and could cause the computation to fail. We found that the most viable option was to approximate the problematic discontinuous function by a polynomial expansion: a Fourier-Legendre expansion, truncated at a reasonable order, reproduces the behavior of the failure terms using only polynomial functions (which are much easier to deal with).
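Here is a small illustration of the idea using NumPy's Legendre tools (the interval, step function, and truncation order are illustrative, not the ones from our paper):

```python
import numpy as np
from numpy.polynomial import legendre

# A step-like failure term on [-1, 1]: discontinuous at x = 0.
x = np.linspace(-1, 1, 2001)
step = np.where(x < 0.0, 1.0, 0.0)

# Truncated Legendre series fitted by least squares.
coeffs = legendre.legfit(x, step, deg=9)
approx = legendre.legval(x, coeffs)

# The polynomial is smooth everywhere; away from the jump it tracks
# the step closely, with the usual oscillations near the discontinuity.
error_away_from_jump = np.max(np.abs(approx - step)[np.abs(x) > 0.2])
print(error_away_from_jump)
```

Because the approximation is a polynomial, it can be translated into interactions between the qubits of the register, unlike the original discontinuous term.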
Many-qubit interaction terms
The quantized H(v) could already be used as input for a quantum annealer, provided this machine could simulate interactions between any number of qubits, at any distance. However, state-of-the-art quantum processors target, for practical reasons, systems with at most 2-qubit interactions. Thus, the final step of our derivation is to bring the interactions between qubits down to 2-qubit terms at most.
One way to do this requires us to sacrifice some qubits. As before, one group of qubits, our logical qubits, will be used to encode the asset values. The others, which we call ancilla qubits, will act as intermediaries to simulate the complex, multi-qubit interactions between the logical qubits. In general, we can approximate the lowest energy state of an interaction between k qubits by using k ancillary qubits.
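One standard trick of this kind, sketched below, is the Rosenberg penalty, which replaces the product of two qubits by a single ancilla qubit (this is a generic quadratization method; the exact construction in our paper may differ). Minimizing over the ancilla reproduces a 3-qubit interaction using only 2-qubit terms:

```python
import itertools

# Reduce a 3-qubit term x1*x2*x3 to 2-qubit terms with one ancilla a.
# The penalty below is zero exactly when a == x1*x2, and at least 1 otherwise,
# so with a large enough penalty strength M the ancilla is forced to track x1*x2.
M = 10

def three_body(x1, x2, x3):
    return x1 * x2 * x3

def quadratic(x1, x2, x3, a):
    penalty = 3*a + x1*x2 - 2*a*x1 - 2*a*x2   # = 0 iff a == x1*x2
    return a * x3 + M * penalty

# Check: minimizing over the ancilla reproduces the 3-body term exactly.
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    assert three_body(x1, x2, x3) == min(quadratic(x1, x2, x3, a) for a in (0, 1))
print("reduction verified")
```

Every term in `quadratic` involves at most two qubits, which is exactly the form a present-day annealer can accept.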
In the next article of the QWA series, we will address a very important issue in quantum computation: quantum advantage. Stay tuned!