Pushing the limits of what quantum computing can do for chemistry
This article is written for technical readers exploring the intersection of quantum computing and computational chemistry. If you’re familiar with quantum circuits, molecular modeling, or machine learning frameworks like PennyLane, this deep dive is for you. And if you’re exploring these ideas in your own organization, we’d love to help you get started.
What we’ll cover
Quantum chemistry has the potential to transform how we design new materials, develop better drugs, and solve complex industrial challenges, from clean energy to advanced manufacturing. At its simplest, quantum chemistry uses the principles of quantum mechanics to calculate how molecules form, bond, and interact (something that’s notoriously difficult for classical computers to handle at scale).
Applying quantum computing to chemistry opens up new ways to estimate binding energies and bond distances in molecules (and that’s exactly what this article dives into). We’ll start with a primer on core quantum computing concepts, including how qubits (the quantum equivalent of classical bits) are represented on the Bloch sphere and how quantum gates manipulate their state. From there, we’ll delve into the practicalities of coding a quantum chemistry problem using PennyLane, a quantum machine learning library built for hybrid quantum-classical computing.
Our investigation kicks off with the complex Cyanex272 molecule, highlighting the challenges of managing its 54 atoms and 162 electrons within today’s quantum simulation limits. The approach involves importing molecular geometries into PennyLane, constructing Hartree-Fock states to represent electron configurations in atomic orbitals, and applying single and double electron excitations to simulate quantum superpositions. The molecular Hamiltonian is calculated and used to estimate the system’s total energy, with gradient descent optimization helping to find the lowest energy state, corresponding to the most probable molecular configuration.
Our study also explores how active space reduction, basis sets, and electronic excitations impact computational complexity and result accuracy, using simpler CH4 and CH3 molecules as models. Along the way, challenges arose, from computational resource limitations to discrepancies in molecular coordinates across different data sources.
Finally, we’ll reflect on the future potential of quantum computing versus GPU clusters in handling complex quantum chemistry calculations, plus we’ll speculate on the metaphysical implications of manipulating quantum systems. While scaling quantum simulations for large molecules remains a significant hurdle, the foundational framework for applying quantum computing to chemical problems is emerging—paving the way for future research in areas like quantum subspace expansion, molecular property calculations, and quantum phase estimation.
Quantum computing 101
Quantum computing can be described in many different ways, but ultimately it’s a form of computing that exploits quantum-mechanical phenomena (superposition, entanglement, and interference), relying on extremely intricate hardware to do the necessary calculations.
A quantum computer achieves this by exploiting the properties of particles studied in the field of physics: photons, electrons, atoms, or even larger structures (such as the Majorana quasiparticles behind Microsoft’s topological qubits). The physical qubits built from these particles are often combined into units called “logical qubits.”
Combining multiple physical qubits into a logical qubit makes the math more resistant to errors. A suitable physical system can also form what’s referred to as a topological qubit. Regardless, at the end of the day, what we have is a qubit - analogous to its cousin, the famous classical computer “bit”.
The amazing thing about a qubit is its ability to represent an infinite number of states using just two numbers. A qubit’s state can be illustrated on what’s called a Bloch Sphere—a 3D model that shows all of the possible states a qubit can exist in, mapped onto the surface of a sphere (pictured below).

Now the Bloch Sphere, as you can see, has 3 axes: X, Y, and Z. A qubit’s value is a point on the surface of the sphere, and each point on the surface can be pinned down by the two angles {θ, ϕ}. In mathematical jargon, a qubit state is a superposition of |0> and |1> with complex amplitudes, and those amplitudes carry exactly the information in those two angles. The thing to keep in mind about a qubit is that its state is represented by a single point on the surface of a sphere.
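In standard notation, the two angles determine the state completely:

|ψ> = cos(θ/2)|0> + e^(iϕ)·sin(θ/2)|1>

Here θ is the polar angle (how far the point is from the pole where |0> sits) and ϕ is the azimuthal angle (the rotation around the Z axis).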
Operations can be performed on qubits with “gates,” including rotations around an axis, bit-flips, and control gates (if this, then that), etc. If you think of a point on the surface of a sphere, you can imagine rotating the sphere any which way, and the point will rotate accordingly. A bit-flip, for example, is a 180-degree rotation of the qubit around the X axis.
I won't go into too much detail about the low-level gates, but there are many. Each gate operates on one or multiple qubits. Similarly to classical computers, there is a small set of basic operations that can be applied to a qubit, and we combine gates together to form more complex operations. To really solve complex problems, though, these simple operations must be applied to many, many qubits.
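As a small, hedged sketch of what gates look like in code (using PennyLane, which we’ll properly introduce later):

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)  # a 2-qubit simulator

@qml.qnode(dev)
def circuit(angle):
    qml.RX(angle, wires=0)   # rotation around the X axis of the Bloch Sphere
    qml.PauliX(wires=1)      # bit-flip: |0> becomes |1>
    qml.CNOT(wires=[0, 1])   # control gate: flip qubit 1 if qubit 0 is |1>
    return qml.probs(wires=[0, 1])
```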
If we think of a typical photo, graphic, or any visual file stored on a computer, a typical file size would be ~4 megabytes, which equates to 32 million bits. A single image requires millions of 1s and 0s to describe the image. We encode this array of bits using various file formats.
But here’s the fascinating thing: unlike a classical computer bit, a single qubit can represent an infinite number of values. It will only have a single state at any given time, but a single qubit can encode an infinite number of states. This is because there are an infinite number of points on the surface of a sphere. The only thing we need is more precise coordinates (the {θ, ϕ} angles). Low precision results in fewer points on the sphere, and higher precision results in more points on a sphere.
If you’ve ever done any 3D rendering, this is equivalent to the subdivision level, level of detail, tessellation, resolution, etc. In practical terms, however, a qubit cannot have an infinite number of states. We just don’t have that kind of precision, due to limits imposed by physics. Alas, I digress.
To summarize:
- A qubit is a mathematical object built from physical particles like photons, electrons, protons, molecules, etc.
- Its state is represented as a point on the surface of the Bloch Sphere.
- A qubit can exist in superposition—meaning it can represent multiple states at once.
- When we measure a qubit, we get a probability of finding it in a specific state (|0> or |1>).
- Because quantum measurements are probabilistic, we need to measure many times to get reliable results.
It’s a deeply philosophical question to ponder the actual unpredictability of our universe. After all, when you generate random numbers on a classical computer, you still need an initial “seed” value to get started.
One of the questions I’ve been pondering for several years now, and that you may be wondering about too, is, what types of problems can we actually solve with quantum computing? Beyond keeping our random numbers safe from decryption, that is.
For me, one of the most fascinating problems is understanding why it’s more favorable for molecules to form than for atoms to stay isolated.
To explore that question, we need to shift our focus from qubits to chemistry.
Turning quantum theory into chemical insight
To put quantum computing to the test, I set out to answer a big question: how do molecules actually hold together, and how can we calculate that using qubits?
The molecule I focused on is called Cyanex272. It’s a complex organic molecule I’ve been working with indirectly in another quantum computing project.
Cyanex272 has the following properties:
- Molecular formula: C16H35O2P
- Atoms: 54
- Protons: 162
- Electrons: 162
- Atomic Orbitals: 134
- Hydrophobic due to its long Carbon-Hydrogen (CH) groups
- Has a single active site at its Oxygen-Hydrogen (OH) group
- The active site typically binds to metal ions, such as Cobalt and Nickel
- In a low-pH environment, the Oxygen will bond with an ionized hydrogen (proton) to form an OH group
- In a high-pH environment, the Hydrogen will typically be released, and the Oxygen will favor binding to metal ions
- Two Cyanex272 molecules will coordinate with a metal ion via the active site
- Highly selective to Cobalt and Nickel


Setting up the environment - Python dependencies
To run these quantum chemistry experiments, I used Python, the go-to programming language for scientific computing, along with PennyLane, a library built specifically for hybrid quantum and classical machine learning. PennyLane provides tools for building quantum circuits, simulating molecules, and calculating energies using quantum algorithms. It’s the bridge between theory and code in this project.
I’ll skip the step-by-step of setting up a Python virtual environment because the real challenge wasn’t creating the environment. It was getting the dependencies to actually work together.
Like many things in quantum computing, this part of the process is still a bit wild west. All the articles I read for my research were based on PennyLane’s documentation, but when I first started, I thought I could simply pip install the latest versions for all the packages I needed. Oh, was I wrong!
It took me the better part of a day (and a lot of trial and error) to arrive at the realization that every single dependency is a v0 release—meaning that with every major version change, there are breaking changes. Some versions only work on specific Linux distributions. Others aren’t compatible with each other at all.
The only reliable fix I found was to use a Docker container with an older set of dependencies that matched the source code of the PennyLane repository I was working from. This is the reality of working in a bleeding-edge field. Things move fast, and stability isn’t guaranteed.
I won’t list all the dependencies here (they’ll probably be outdated by the time you read this), but here is the exact pip freeze output.
Preparing for the first quantum chemistry run
After reading through some of PennyLane’s demos and articles, I had a strong initial plan in mind. Here’s the process I followed and the key steps to running a quantum chemistry simulation with PennyLane:
- Import the molecular geometry into PennyLane
- Build the Hartree-Fock (HF) state to represent electrons in atomic orbitals
- Apply single and double excitations to simulate electron movement and superposition
- Calculate the molecular Hamiltonian to estimate total system energy
- Use gradient descent to optimize the system toward its lowest energy state
Step One: Importing the molecular geometry into PennyLane
The first step is usually to describe the molecule you’re working with: its atomic symbols (H, Co, O, C, etc.), along with the coordinates of those atoms in 3D Cartesian space. The two lists are index-aligned, so entry i of the symbols list and entry i of the coordinates array describe the same atomic nucleus.
The Molecule class from qchem requires the coordinates to be given in atomic units. The coordinates can be any value - even values that are too far apart, or that result in atomic nuclei positioned at the same point in space. If the molecular geometry is not in the optimal configuration, the Hamiltonian of the molecule will simply not be at its minimum. We will come back to the Hamiltonian, but in short, it’s a function that calculates the energy of the system.
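Here’s a minimal sketch, using H2 rather than Cyanex272 to keep it readable:

```python
import numpy as np
from pennylane import qchem

# Symbols and coordinates are index-aligned: row i of `coordinates`
# positions the atom named at index i of `symbols`.
symbols = ["H", "H"]
coordinates = np.array([
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 1.4],  # ~1.4 Bohr, the equilibrium H-H bond distance
])

molecule = qchem.Molecule(symbols, coordinates)  # coordinates in atomic units
```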
The molecular geometry is a key factor that determines the energy of the system. Natural systems relax into their lowest energy level (e.g. via alpha decay, beta decay, or other phenomena). Atoms that are very close together, relative to their nuclear radii, will repel each other, and we’ll see a very high value for the Hamiltonian.
When the Hamiltonian is at its lowest energy value, we’ll see that the atoms are at their equilibrium bond distance. Atoms that are far apart behave independently, and typically have a higher (potential) energy than when combined. The image below illustrates this relationship between bond distance and energy for H3, a simple molecule.

When the atoms are very close together, the total energy is very high. As the bond length increases, the energy of the system drops steeply until it reaches the minimum energy state of the molecule. Afterwards, as the bond length increases further, the energy of the system increases again.
Here are all the questions I was wondering about at this point in the process:
- Is the energy of a single H atom equivalent to approximately three times the energy of a highly separated H3 molecule?
- -1.27 Hartree for optimal H3 vs -0.5 Hartree for a lone Hydrogen
- How do I import the Cyanex272 molecule into PennyLane?
- After importing Cyanex272, are its coordinates already optimized?
- I.e. is it the lowest energy state of its Hamiltonian?
- Are the coordinates for the protonated form of the molecule or the deprotonated form?
- Protonated means that the Oxygen atom in the active site contains a Hydrogen atom (essentially neutralizing the atom)
- Deprotonated means that the Oxygen atom in the active site does NOT contain a Hydrogen atom, but doesn’t necessarily have a bonded metal ion
- Metal-Organophosphorus Complex means that the molecule has a metal ion bond in the active site
- From looking at the PubChem database, I believe the molecule is in its Protonated form, since there’s a Hydrogen on the Oxygen atom.
After a quick search, I found the coordinates of the Cyanex272 molecule in the PubChem database, the world's largest collection of information on chemical compounds.
I found I could download the coordinates in various formats, or use PennyLane directly:
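Something along these lines, assuming the qchem.mol_data helper and its PubChem lookup (the identifier string below is illustrative - verify the exact name or CID on PubChem):

```python
from pennylane import qchem

# Fetch atomic symbols and geometry from PubChem (requires pubchempy).
# Cyanex272 is catalogued under its chemical name.
symbols, coordinates = qchem.mol_data(
    "bis(2,4,4-trimethylpentyl)phosphinic acid", "name"
)
```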
Warning: the coordinates provided by PubChem are probably not in the expected atomic units.
With the molecule now loaded, we have access to a bunch of information, and we can proceed with our quantum algorithm.
<Molecule = C16H35O2P, Charge: 0, Basis: STO-3G, Orbitals: 134, Electrons: 162, Mult: 1, Coordinates: [...]>
Step Two: Building the Hartree-Fock state to represent electrons
For our molecule, there are 162 electrons, which are positioned in 134 atomic orbitals. Due to the fermionic nature of electrons, the Pauli Exclusion Principle applies. Therefore, a maximum of two electrons can occupy a single orbital, and the electrons must have opposite spins. 81 are fully occupied (by 2 electrons), and 53 are unoccupied.
Electrons are attracted to the positively charged atomic nucleus, and they will naturally arrange themselves by occupying the lowest energy orbitals first.

In all of the PennyLane demos, and indeed in most quantum chemistry problems, you will most likely encounter the Hartree-Fock state, commonly abbreviated as hf_state.
"Hartree–Fock (HF) method is an approximation method for the determination of the wave function and the energy of a quantum many-body system in a stationary state."
The initial value of the HF state reflects the fermionic nature of electrons. In an atom, the lowest energy orbitals are occupied first, and higher energy orbitals are unoccupied. Here a |1> indicates that there is an electron in the orbital. A |0> indicates that there is no electron in that orbital. Each index in the array represents a spin orbital.

- Note: The orbital in an HF state is a spin orbital, not an atomic orbital. A single atomic orbital is described by 2 spin orbitals, because there can be a max of 2 electrons per atomic orbital.
- How does the Variational Quantum Eigensolver (VQE) algorithm know which orbitals and electrons are “active”?
- Why are there 53 unoccupied orbitals when using the basis set?
- What are basis functions and the basis set, and how do these fit into the HF method? I fear this is too advanced for this article.
So, let’s build our HF state for the Cyanex272 molecule.
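A sketch using qchem.hf_state with the counts listed earlier (162 electrons, 134 atomic orbitals, hence 268 spin-orbitals):

```python
from pennylane import qchem

electrons = 162
spin_orbitals = 2 * 134  # two spin-orbitals per atomic orbital = 268 qubits

hf = qchem.hf_state(electrons, spin_orbitals)
print(hf)  # [1 1 1 ... 0 0 0]: 162 ones followed by 106 zeros
```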
That’s a lot of qubits for a single molecule! And that’s not even counting the metal ion or the second and third Cyanex272 molecules required to form a complete chelated complex. We’re also ignoring the need for qubit error correction. In the best case scenario, we still need hundreds of qubits to model this system.
At this point, I started wondering: how can we directly calculate the Hartree-Fock (HF) energy of a molecule? It seems like we need to know the alpha coefficients for the basis set functions. Something like this:
qchem.hf_energy(molecule)(*alpha)
Now, the beauty of chemistry is that we know that the Cyanex272 molecule doesn’t have 162 “active” electrons. Most of the molecule is stable (in other words, all of the CH groups are stable, and the phosphate group is also stable). The only unstable section is the OH site. Oxygen has 6 valence electrons, 1 electron is provided by the attached Hydrogen atom, and the other electron is provided by the Phosphorus atom. This completes the 8-electron orbital shell.
We can reduce the problem space by specifying the active space of the system:

In this image, the core orbitals are all occupied, and therefore will not participate in chemical reactions. For this reason, we can eliminate them from the Hartree-Fock state and reduce the combinatorial expansion. In this image, 6 electrons are in the core orbitals and 4 are in the active orbitals; there are 4 active atomic orbitals, and therefore 8 spin-orbitals.
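In PennyLane, carving out the core might look like this sketch, using the qchem.active_space helper (the active-space numbers mirror the image above):

```python
from pennylane import qchem

# Freeze the fully occupied core and keep a small active space
# around the reactive site.
core, active = qchem.active_space(
    electrons=162,
    orbitals=134,
    active_electrons=4,
    active_orbitals=4,
)
```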
Alright… let’s start talking about quantum computing.
A single qubit will represent a single spin-orbital. When the qubit is in the state |0>, this indicates that no electrons are present in the orbital. In the picture above, there are 2 unoccupied high-energy active atomic orbitals (the first 2 red lines). In this case, the HF state will end with |0000>. This means that there are no electrons in the last 4 spin-orbitals, and the qubits have a value of zero. Similarly, the 2 lower active atomic orbitals each contain 2 electrons, giving 4 spin-orbitals that are all occupied by an electron. Therefore, the HF state will start with |1111>, the state of the first 4 qubits. If we remember the Bloch Sphere from the primer, the first 4 qubits point directly at one pole (|1>), and the last 4 qubits point directly at the opposite pole (|0>). The spin-orbitals are not yet in a superposition. They simply have a classical value of 0 or 1: unoccupied or occupied.
Step Three: Simulating electron movement with single and double excitations
When I first read PennyLane’s demos, I was really confused by the quantum circuit. I found myself wondering, “what are these single and double excitations?” After exploring the demos further and reading more documentation, it became clearer. We’re talking about electron excitations: electrons going from a lower orbital to a higher orbital.

A single excitation means that a single electron goes from 1 spin-orbital to another spin-orbital. As electrons move around in the molecule, they avoid each other, and sometimes they go from one orbital to another unoccupied orbital. This is known as electronic rearrangement, and the purpose is to attempt to lower the total energy of the system.
From a quantum mechanical perspective, my assumption here is that for each qubit, we are creating a superposition of possible electronic configurations, where the probability of finding an electron in that orbital is proportional to the sum of all possible excitations involving that orbital. Taking the magnitude squared of the qubit’s amplitude gives the probability that an electron is found in the orbital that the qubit represents. I realize this is a bit of a mouthful. Let’s break this down further.
A single qubit represents a spin-orbital. A single excitation can be applied onto 2 spin-orbitals via the following line of code:
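In PennyLane, that line is something like this (theta is the gate parameter we’ll come back to shortly):

```python
qml.SingleExcitation(theta, wires=[0, 1])
```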
This means that qubit 0 (spin-orbital 0) and qubit 1 (spin-orbital 1) have had a series of quantum gates applied to them. This places the 2 qubits (spin-orbitals) in a superposition with each other; the 2 qubits become correlated. Depending on the parameter, it partially swaps the states of the 2 qubits (at θ = π it’s a full swap, up to phase).
When measuring the qubits, we will see whether a qubit is in state |1> (occupied) or |0> (unoccupied). However, because the qubits are correlated, if one qubit is measured to be occupied, we automatically know that the other qubit is in the opposite state. The results may be random across various simulations, but within a single given simulation, the results will be correlated. When properly created, these quantum mechanical systems conserve the number of electrons. There are no lost electrons. If an electron is “not in the first qubit,” it will be in the “second qubit.”
We’ll discuss what the parameter is later on.
A SingleExcitation operation is composed of 6 successive quantum gates (4 CNOTs and 2 PauliY rotations):

Counting the permutations of excitations
You can imagine all the possible permutations of 8 electrons in 16 spin-orbitals. There are lots! There are 32 possible single excitations (one electron promoted to an unoccupied spin-orbital) and 328 possible double excitations (two electrons promoted at once). It’s also possible to define triple excitations, but that seems to be uncommon - at least at the current scales of the demos by PennyLane. My assumption is that larger atoms with more electrons might indeed benefit from TripleExcitations (ex: Cobalt?!).
To push this idea even further, with the Cyanex272 molecule, if we want to calculate all the ways that the electrons can be excited from the first or second orbitals, we can calculate the list of all possible permutations of excitations using the following:
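A sketch using qchem.excitations (for the small example above, qchem.excitations(8, 16) returns exactly 32 singles and 328 doubles):

```python
from pennylane import qchem

# All spin-conserving single and double excitations for the full molecule:
# 162 electrons in 268 spin-orbitals. The resulting lists are enormous.
singles, doubles = qchem.excitations(electrons=162, orbitals=268)
print(len(singles), len(doubles))
```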
This results in a truly gigantic quantum circuit, with millions of quantum gate operations. I’m beginning to wonder if we’re trying to solve these problems using a mathematical theory that might just be too… expansive? It feels like we’re trying to answer a simple question such as “2+2” using Set Theory, or Homotopy Type Theory.
To recap, we create a quantum circuit and apply all of the Double and Single excitations on it. These excitations represent all the ways the electrons may move within the spin-orbitals. Each qubit ends up in a superposition indicating whether an electron is occupying the spin-orbital it represents. The simultaneous measurement of all qubits ends up telling us the electronic distribution within the system. Measuring the system multiple times will tell us the probability distribution of where the electron is most likely to be found.
At least I think that’s what’s happening…
Step Four: Calculating the molecular Hamiltonian to estimate system energy
Switching gears from single qubits, let's start looking at the global system. One way this is done in quantum chemistry is by using the molecular Hamiltonian. This is a function that represents the total energy of a system: the sum of the potential energies and kinetic energies of all the constituent parts of the system. To simplify the equations, I believe PennyLane uses the electronic molecular Hamiltonian - i.e. it only sums the electronic energies (e.g. electron-electron repulsion, electron-nucleus attraction) and treats the nuclei as fixed, neglecting their kinetic energy. At the end of the day, though, what we need to understand is that the molecular Hamiltonian can be evaluated, and tells us the energy of the system in Hartree units. The expectation value of the Hamiltonian is estimated as an average over many measurements.
Computing the Hamiltonian from the molecule takes a bit of time and scales with the complexity of the molecule. Active space reduction can help reduce computation time.
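A sketch of building the qubit Hamiltonian with a reduced active space, reusing the symbols and coordinates from the import step:

```python
from pennylane import qchem

H, qubits = qchem.molecular_hamiltonian(
    symbols,
    coordinates,
    active_electrons=2,
    active_orbitals=2,
)
print(qubits)  # number of qubits needed for this active space
```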
- Question: How does the Hamiltonian change with the active space reduction?
- Question: How inaccurate are the results if we remove the core orbitals?
- Question: Does qchem automatically know which electrons and orbitals can be removed from the wave function? How does it know which electrons and orbitals can be omitted?
When inspecting the Hamiltonian for the CH4 molecule, we notice how the length of its printed representation (the len of the Hamiltonian’s string) shrinks as the active space is reduced:
- With full config (10 electrons, 9 orbitals)
- ~640k characters
- When active electrons = 4 electrons (for the outer shell of Carbon)
- ~120k characters
- When active electrons = 4, active spin orbitals = 4
- ~22k characters
- When active electrons = 2, active spin orbitals = 2
- ~1.2k characters
Plotting this, we see that the size of the Hamiltonian grows rapidly with the number of active electrons and orbitals - approximately O(M^4), where M is the number of active orbitals.

Decomposition of the Quantum Hamiltonian
As we saw, adding more active electrons and orbitals to the Hamiltonian results in a roughly quartic growth in the number of terms.
Looking at this Hamiltonian, we notice a list of terms, each composed of a real coefficient multiplying a quantum operation. An operation is defined as a set of one or more Pauli gates (Z, X, Y, I), each acting on a single qubit. When an operation contains an @ symbol, this indicates the tensor product of two quantum gates.
We can see a term like -1067 * I(0), i.e. -1067 times the identity on qubit 0. In this specific example, we set the active space of the Hamiltonian to be fairly small, so all of the non-active parts of the Cyanex272 molecule add up to -1067 Ha. This was based on having two active electrons and two active orbitals. The constant term -1067 was precomputed. In quantum chemistry problems, we typically fix the positions of the nuclei, so they can be considered constant; the nuclear repulsion is included in this term. The rest of the terms in the Hamiltonian offset this constant by capturing the electronic fluctuations.
Step Five: Optimizing for the lowest-energy state with gradient descent
The magic line of code that does all the heavy lifting:
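A minimal sketch, assuming the Hamiltonian H and qubit count from Step Four:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=qubits)

@qml.qnode(dev)
def circuit(theta):
    # ...state-preparation gates go here (see the next snippet)...
    return qml.expval(H)  # the line that does the heavy lifting
```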
Here, expval (the expectation value of the Hamiltonian) will run the quantum circuit and measure the final state of the qubits. The measured qubit states are then combined through the Hamiltonian, and the output is the total energy of the wave function (in Hartree units). Further research would clarify exactly how the qubit states influence the Hamiltonian, but my guess is that the basis set plays a huge role.
Now, let’s define the quantum circuit:
Instead of defining the quantum circuit manually, one of my favorite ways to define it is by using the AllSinglesDoubles function. It sets the initial value of all the qubits to the HF state defined above, so we have our molecule with all the electrons in the lowest energy orbitals. It then applies every single and double excitation the electrons can undergo. This creates a superposition for each qubit that defines whether an electron occupies that orbital, in each specific electronic configuration. After running this quantum circuit many times and averaging the results, we get the expectation value of the molecular Hamiltonian, i.e. the energy of the system.
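A hedged sketch of that circuit, reusing hf, singles, and doubles from the earlier steps:

```python
@qml.qnode(dev)
def circuit(theta):
    qml.AllSinglesDoubles(
        weights=theta,         # one parameter per excitation
        wires=range(qubits),
        hf_state=hf,           # initial Hartree-Fock occupation
        singles=singles,
        doubles=doubles,
    )
    return qml.expval(H)
```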
To compute the circuit, we simply need to run it. Here’s how:
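For example, with every parameter set to zero (which, as we’ll see below, reproduces the plain Hartree-Fock state):

```python
import numpy as np

theta = np.zeros(len(singles) + len(doubles))
print(circuit(theta))  # energy in Hartree at the HF reference
```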
What are Excitation Params?
But wait a second, what is this theta, an array filled with zeros, of that length? Well, these are the parameters of the quantum gates. They’re a set of values supplied to each excitation, used to indicate how much of an impact that excitation has on the wave function - a deviation from the Hartree-Fock base state. If the params are all zero, this equates to the hf_initial_state, where all of the lower-energy spin-orbitals are occupied, and all of the higher-energy active spin-orbitals are unoccupied. The expectation value of the Hamiltonian gives us an energy for the system, and each parameter tweaks how much of a contribution an excitation makes to the Hamiltonian.
I’m wondering what happens when we take the initial HF state, and simply tweak one parameter. What does this mean? Let’s check it out:
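A sketch: hold every parameter at zero except the first one, and sweep it:

```python
import numpy as np

for value in np.linspace(0.0, 0.5, 6):
    theta = np.zeros(len(singles) + len(doubles))
    theta[0] = value  # contribution of the first excitation only
    print(f"theta[0] = {value:.1f} -> {circuit(theta):.6f} Ha")
```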
As we increase the contribution of the first double excitation, the total energy of the system increases. I’m unsure how much of a contribution a single excitation should have on the Hamiltonian. What are the limits? This is why we need to perform gradient descent.
With my limited understanding of how each excitation contributes to the Hamiltonian, there’s only one way to fix this issue: perform gradient descent on the Hamiltonian’s expectation value. Gradient descent gives us a clear direction - a clear indication of what values our parameters should take. Considering that there are potentially thousands of parameters, we need advanced gradient descent algorithms. JAX (a Python library for fast numerical computing) and Optax (an optimization toolkit built on top of JAX) provide those algorithms:
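A hedged sketch of such a loop with Adam, assuming the QNode was created with interface="jax" so that JAX can differentiate it:

```python
import jax
import jax.numpy as jnp
import optax

params = jnp.zeros(len(singles) + len(doubles))
optimizer = optax.adam(learning_rate=0.1)
opt_state = optimizer.init(params)

for step in range(100):
    # Energy and its gradient with respect to every excitation parameter.
    energy, grads = jax.value_and_grad(circuit)(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    if step % 10 == 0:
        print(f"step {step}: {energy:.6f} Ha")
```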
So, what does this mean? The lowest energy state of the system for the provided coordinates (and other configurations) is the most probable configuration. This is because physical systems naturally tend towards the lowest energy; alpha and beta decays, as well as other physical phenomena, occur to reduce the total energy. Once we’ve found the ground-state energy of the molecule, we know the most probable electronic configuration.
What is a Basis-set?
Computational chemists have been hard at work over the past decades computing coefficients for Gaussian functions that approximate the orbital structure of every atom. Each basis set is a list of coefficients applied to Gaussian-type functions, where the linear combination of those Gaussian functions brings the approximation closer to the solution of the Schrödinger equation for that atom. The tradeoff is accuracy versus memory and CPU operations. As an example, the calculations in this article were done in the STO-3G basis set, an extremely small basis set with a minimal number of functions. The cost is accuracy: STO-3G calculates the H atom energy as -0.49491 Ha, while in reality it is -0.5 Ha.
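Switching basis sets is a one-argument change in PennyLane’s Molecule class; a sketch, assuming the slightly richer 6-31G set is available in your installed version:

```python
from pennylane import qchem

# Same molecule, richer basis: more Gaussian functions per orbital,
# better accuracy, higher memory and CPU cost.
molecule = qchem.Molecule(symbols, coordinates, basis_name="6-31g")
```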

Now that we’ve covered the key steps in a typical quantum chemistry run, let’s look at how we put them to work on some real molecules — starting with Cyanex272.
Putting Cyanex272 under the quantum microscope
As with all complex problems, starting with a simpler problem may be a good idea. PubChem provides the coordinates for the molecule in different formats, and these were translated into an xyz file using an online tool. However, after inspecting the coordinates and the bond distances, it was clear that they weren’t accurate.
It seems that the bond distances computed from the coordinates downloaded from PubChem don’t match the bond distances of the API-loaded molecule (the mol_data() call). A quick online search reveals that an average distance of 1.48 Å is common for P=O bonds, with some sources claiming 1.52 Å in specific molecules (HO-PO, PO). Going forward, the mol_data coordinates will be used.
One potential reason for this is unit confusion in the PennyLane functions. Some functions require units in Bohr; other functions expect Angstroms. Being mindful of units before every function call is critical. Another potential source of confusion is the multiple estimated geometries for molecules provided automatically by PubChem’s internal estimation tools (Cyanex272 has multiple conformations/shapes).
As in the previous section of this article, setting up the molecular Hamiltonian is fairly straightforward. Reduction of the active space is critical due to the O(M^4) scaling of the terms. An attempt to use 8 electrons in 8 active orbitals still results in an uncomputable Hamiltonian: the Docker container crashes with out-of-memory exceptions.
This molecule is simply too big, and our algorithms/computers too inefficient. Breaking up the molecule into smaller chunks is the only feasible way we currently have to investigate further.
Breaking down the CH chains
When inspecting the molecule, we can observe two long Hydrocarbon chains on each side of the central Phosphorus atom. Each Carbon chain is attached to multiple Hydrogen atoms, forming multiple CH3 groups. There are a total of 16 Carbon atoms, with 8 on each side of the Phosphorus atom. These Carbon chains render the molecule hydrophobic, making it easily separable from aqueous solutions.
In this sub-experiment, we will analyse the energy at various active electronic configurations of the CH4 molecule. This doesn’t necessarily give us any useful information with respect to the Cyanex272 molecule, but it’s an interesting starting point for wrapping our heads around the concept of active space.
We can set up the experiment the same way as before, modifying only the active electrons and orbitals for the various test cases. When doing so, and not optimizing for electronic excitations (using the mean-field HF state), we get a Hamiltonian expectation value of -37.72 Ha. A quick online search tells us that the total molecular energy for CH4 is -39.86 Ha. We’re nowhere near the experimentally verified energy for Methane. What’s going on?

Here we used a theta of zero, indicating that none of the excitations apply. This simply evaluates the Hamiltonian at the HF mean field. Increasing the number of active electrons and orbitals results in decreasing energies: as the electrons “move to other orbitals” to avoid each other, the total energy decreases.
As we can see, the time per iteration increases from 1s to 96s. This excludes gradient descent of the parameters. When including gradient descent, the time per iteration for the full active configuration grows to “infinity” (it consistently crashes the Docker container). Perhaps renting a quantum computer could help. Alternatively, setting max_diff=1 in the qml.qnode decorator solves the issue, but diverges at the 7th iteration.
These results do indeed indicate that we are getting closer and closer to the experimentally verified energy (-39.86 Ha). This value is also provided by PennyLane’s publicly available CH4 dataset. To get even closer, we should remove the integral differentiation limit imposed by the max_diff flag, and we should also change the basis set to a more complete basis set (currently using STO-3G, which is a very simple basis-set. aug-cc-pVQZ?).
The biggest problem going forwards: how do we know if the values that we get actually make sense? In the case of simple molecules, such as Methane, we have datasets available that indicate the final result. We can compare the convergence tendency against the verified values. In the case of complex molecules, such as the entire Cyanex 272 molecule, how do we know that the -1400 Ha energy is a valid approximation?
Zooming in on the CH3 groups
After investigating the Carbon chains, we can see that some of the groups are CH3 groups. Could we calculate the difference in Hartree energy between the CH4 molecule and a CH3 molecule? One thing to note about this experiment is that there is a missing Hydrogen, leaving a CH3 radical with an unpaired electron. Keep in mind that in the full molecule, these CH3 groups are tied to other CH2/CH3 groups.
Due to the missing Hydrogen, the geometry of the CH3 molecule changes to a trigonal planar geometry, a result of the remaining electron pairs repelling each other as far apart as possible. In this example, we will not be performing geometry optimization techniques. Instead, we’ll use some trigonometry to place the Hydrogen atoms at 120-degree angles.
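A sketch of that placement, assuming a typical C-H bond length of about 2.05 Bohr (~1.09 Å):

```python
import numpy as np

bond_length = 2.05  # Bohr
angles = np.deg2rad([90, 210, 330])  # three hydrogens, 120 degrees apart

symbols = ["C", "H", "H", "H"]
coordinates = np.array(
    [[0.0, 0.0, 0.0]]  # carbon at the origin
    + [[bond_length * np.cos(a), bond_length * np.sin(a), 0.0] for a in angles]
)
```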
If we try to run the molecular_hamiltonian function as before, we will get an error. There are multiple problems with the current setup. Firstly, because there’s a missing Hydrogen, there is an unpaired electron; the radical is neutral, but we must set the spin multiplicity to 2. Secondly, because open-shell systems are not supported by the built-in PennyLane method, we must use another method, such as openfermion.
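A hedged sketch of that call, reusing the CH3 geometry above:

```python
from pennylane import qchem

# Neutral CH3 radical: one unpaired electron, so mult=2.
# The built-in solver doesn't handle open shells; use openfermion instead.
H, qubits = qchem.molecular_hamiltonian(
    symbols,
    coordinates,
    charge=0,
    mult=2,
    method="openfermion",
)
```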
Running the circuit with the HF mean field, excluding electron excitations, results in -38.65 Ha. If we include gradient descent to find the energy with excitations, we find energies closer to -38.704 Ha. The full electron configuration is being used here. This still requires significant time to execute on a modern MacBook, through a Docker container.
We can compare the CH3 and CH4 energies and notice that the CH3 energy is slightly higher than CH4’s. This means that it will react more easily with other molecules - and indeed, this molecule has a missing Hydrogen. It is also an open-shell system, meaning it will readily bond with another lone electron.
Exploring the molecule’s active site
Let’s take a look at the active site of the Cyanex 272 molecule.

In the first segment we noticed that the molecule had two long Hydrocarbon chains. In the image below, they were replaced with two Hydrogen atoms, to keep a closed-shell system. This yields an approximation, as Hydrogen atoms are much, much smaller, so the forces repelling them from each other are nowhere near the strength of the forces around the comparatively bigger Carbon atoms (Carbon carries six times the nuclear charge).
When the Cyanex272 molecule is in a highly acidic solution (with a low pH), the OH group stays bonded to the central Phosphorus atom. In a low-acidity solution, where fewer Hydrogen ions are available, the OH bond is easier to disturb, and a bond with a metal ion can form instead. Remembering that two Cyanex272 molecules are necessary to chelate Cobalt, this results in two released protons:
Co2+ + 2(HR) = CoR2 + 2H+
Can we see this at play in this quantum chemistry example?
Let’s attempt to build this system. If even the smallest of systems, CH3, takes hours to compute, we must reduce the problem to the smallest possible configuration. Instead of using Cyanex272 as the reactant, we will use HOH (water), although it isn’t necessarily the best approximation for this problem, as H2O is much more stable than the HO-P group it stands in for.
Perhaps renting a quantum computer to run this experiment may help.
Computing the Hamiltonian using the openfermion backend, we discover that 48 qubits are necessary to compute it - that is, without active space reduction.
The Docker container had trouble running a 9-electron system; we can only imagine the resource requirements for 36 electrons. And that is before counting the 50 spin-orbitals necessary, which dramatically increase the size of the Hamiltonian. We could reduce the active space to only include the valence electrons: Cobalt has 9 valence electrons and Oxygen has 6, resulting in 15. Still extremely high.
Pushing the limits of what the currently configured experiment resources allow, we can set the number of active electrons and orbitals to 8. An active space of (10, 10) resulted in consistently crashing the Docker container.
Here are the results of the six experiments. The energies might improve slightly if we implemented a geometry optimization algorithm, increased the active space, and/or changed the basis set in use.

What the quantum chemistry results reveal
Comparing the difference in Ha energy between Co+2 and Co+3, we notice that Co+2 has the lower energy; it is therefore more favorable than the Co+3 configuration. Indeed, this is what is observed in nature as well: Co+3 is less stable in aqueous solutions, and tends to form complexes with other atoms due to its electron charge density.
Results indicate that it’s not necessarily energetically favorable to form a CoOH bond compared to H2O + Co. This is most likely because the system has been oversimplified to fit computational resources: the OH bond should actually be an OP bond. My assumption is that the 15 electrons of Phosphorus result in better electronic correlation across the Co-O bond. The shared electrons between the atoms most likely result in more stability than when a single electron is shared (as is the case with the OH). Alternatively, the basis-set choice may have had an impact on the orbital approximations, with worse results in higher-order orbitals (p/d orbitals).
Comparing quantum computing to GPU power
To achieve greater precision in computational chemistry, larger basis sets are required. A basis set consists of predefined Gaussian functions, where each additional function in the linear combination helps approximate the Schrödinger equation more accurately. Currently, conventional supercomputers can handle estimations for systems with a few thousand atoms. However, because computational chemistry scales exponentially, classical supercomputers will eventually hit a limit. Perhaps we’re already approaching that threshold. This is where quantum chemistry presents a possible solution, as its scaling could be less restrictive than classical approaches due to the exponential growth in qubit availability. On the other hand, the massive parallelism of modern GPU clusters is emerging as a strong competitor, challenging the necessity of using physical qubits.
Controlling the quantum fabric of reality
The idea may seem far-fetched, but what appears to be happening is that our growing knowledge of quantum manipulations is allowing us to use machines such as quantum computers to control the fundamental layer of our Universe. Quantum computing may just be a modern take on the archaic dream of alchemy. The ability to confine quantum systems and protect them from entanglement with external systems might be the level of control needed to begin building Planck scale structures.
When considering how the fundamental fabric of reality is interconnected with its surrounding spacetime—governed by relativity—it becomes clear that entanglement plays a crucial role. Everything is moving relative to everything else. The quantum computer is moving with the Earth, which is moving relative to the Sun, which in turn is moving relative to the Galaxy, all the way up to cosmic scales. A quantum computer is a system that enables its qubits to follow the Minkowski geometry of spacetime without interference from external systems. Could we be creating what is known (or indeed unknown) as dark matter or dark energy—unentangled particles? The curvature of spacetime would therefore be the only influence on unentangled particles.
Where quantum chemistry goes from here
This experiment was just the beginning. With the foundational setup in place, there’s a whole world of quantum chemistry techniques waiting to be explored.
Here’s where we could go next:
- Compute the next energy state using quantum subspace expansion (QSE)
- This can be used to understand how photons are emitted from a molecule: colors, fluorescence, and photochemical reactions.
- Compute molecular properties, including dipole moment and charge distribution
- Compute the quantum phase estimation (QPE)
- Investigate the molecular changes that occur during chemical reactions
- Use a different basis set, with more Gaussian functions than STO-3G.
- Apply perturbation theory on the HF estimation: MP2?
- Run the model on a rented quantum computer, such as those provided by Amazon Braket.
Exploring what quantum computing can do for your business
Quantum computing is already opening up new ways to tackle complex challenges in research, simulation, and problem-solving. At Osedea, we combine curiosity, creativity, and technical expertise to help organizations explore this emerging technology with confidence.
We’ve worked hands-on with quantum algorithms, molecular modeling, and next-gen simulation techniques. From setting up quantum environments with PennyLane to fine-tuning molecular Hamiltonians and applying gradient descent, we know what it takes to turn quantum theory into real-world solutions.
If you’re ready to explore how quantum computing could unlock new possibilities for your organization, we’d love to help you chart that path forward.
Get in touch—we’re always up for a good challenge!
Further reading
Want to dig deeper into quantum chemistry?
Check out this resource from Montana State University with data on water molecule geometry and bond energy: A Hartree-Fock Calculation of the Water Molecule.

