Quantum Mechanics
Nature, at small enough scales, does not run on knobs and positions. It runs on amplitudes — complex numbers whose squared magnitudes are probabilities. Getting comfortable with that shift is the whole job. Once you have it, everything from the hydrogen spectrum to qubits to the photons in your phone's camera snaps into focus.
1. Why quantum mechanics exists
By 1900, classical physics looked so complete that Philipp von Jolly at the University of Munich advised the young Max Planck not to study it, because "all the important problems have been solved." Four problems, still unsolved, turned out to be the stubborn ones. Each one, taken seriously, broke classical physics open.
Blackbody radiation. Heat a chunk of iron in a forge. It glows — first dull red, then orange, then yellow-white. The spectrum of the glow depends only on temperature, not material. Classical physics predicted that hot objects should radiate infinite energy at high frequencies (the "ultraviolet catastrophe"). Planck, in 1900, fit the observed spectrum by assuming — reluctantly, and as "an act of despair" — that light comes off in discrete chunks of energy $E = h f$, where $h$ is a new constant he had to invent.
The photoelectric effect. Shine light on a metal plate. Above a certain frequency threshold, electrons pop off. Below that threshold, nothing happens no matter how bright the light. Classical electromagnetism predicted that bright enough light should always eject electrons. Einstein, in 1905 (the same year as special relativity), explained the data by taking Planck's chunks literally: each photon is a particle with energy $h f$, and you need a single photon above threshold to kick out an electron. He won his Nobel for this, not for relativity.
Atomic spectra. When you run an electric current through hydrogen gas, it emits light at very specific wavelengths — a few sharp lines, not a continuous spectrum. Every element has its own fingerprint of spectral lines. Classical mechanics had no way to explain why atoms should emit only at discrete frequencies. Bohr, in 1913, guessed that electrons live on discrete energy levels and jump between them, emitting a photon whose energy equals the gap. It was a guess, but the numerical fit for hydrogen was perfect.
Electron diffraction. In 1927, Davisson and Germer bounced electrons off a nickel crystal and found interference patterns — electrons were behaving like waves. That clinched a weird hypothesis de Broglie had made in 1924: every particle has a wavelength $\lambda = h/p$, where $p$ is its momentum. Particles were waves. Waves were particles. The old categories did not survive.
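As a quick check on $\lambda = h/p$, here is a short illustrative calculation (the `de_broglie_nm` helper is ours, not a library function) for the 54 eV electrons Davisson and Germer used:

```python
import numpy as np

h = 6.62607015e-34    # Planck constant, J s
m_e = 9.1093837e-31   # electron mass, kg
eV = 1.602176634e-19  # joules per electron-volt

def de_broglie_nm(kinetic_eV, mass=m_e):
    """Wavelength lambda = h / p for a non-relativistic particle."""
    p = np.sqrt(2 * mass * kinetic_eV * eV)  # p = sqrt(2 m E)
    return h / p * 1e9                       # metres -> nanometres

# 54 eV was Davisson and Germer's electron energy; the result is
# comparable to the atomic spacing in a nickel crystal, hence diffraction.
print(f"{de_broglie_nm(54):.3f} nm")  # ~0.167 nm
```

The wavelength comes out to about the size of an atom, which is exactly why a crystal lattice works as a diffraction grating for electrons.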
Between 1925 and 1927, Heisenberg, Schrödinger, Born, Dirac, and Pauli put together a new theory — not a patchwork, an actual theory — that took the amplitude-and-probability picture seriously. By 1928 it had explained the hydrogen spectrum to astonishing precision. By the 1940s it had become the most accurately tested theory in physics. Today's prediction of the electron's magnetic moment from quantum electrodynamics agrees with experiment to about 12 decimal places.
The state of a quantum system is a vector in a complex vector space. Physical observables are Hermitian operators on that space. Measuring an observable gives you one of its eigenvalues, at random, with probabilities determined by the state vector. Between measurements, the state vector evolves smoothly and reversibly according to the Schrödinger equation. That list is unreasonably short, and unreasonably successful.
Why should you care in 2026? Three immediate reasons:
- Semiconductors and lasers. Every transistor in every phone, every LED, every laser pointer works because electrons in solids live in bands with energy gaps that only quantum mechanics explains. A modern economy is built on band structures.
- Chemistry and materials. The shape of a protein, the strength of steel, the color of a dye — all of it is electron wave functions doing their thing. Drug design and battery engineering are applied quantum mechanics.
- Quantum computing. Qubits, entanglement, and superposition are the raw material of an industry that has gone from academic curiosity to Google, IBM, and dozens of startups building devices you can run code on.
This page will teach you the conceptual core. We'll use a little linear algebra and a little calculus; we'll assume you know what complex numbers are; and we'll keep the philosophy to a minimum. The philosophical debates are real, but the math is the same regardless of which camp you sit in.
2. Vocabulary cheat sheet
| Symbol | Read as | Means |
|---|---|---|
| $|\psi\rangle$ | "ket psi" | A state vector — a column vector in a complex Hilbert space. Contains everything knowable about the system. |
| $\langle\phi|$ | "bra phi" | The Hermitian conjugate of a ket. A row vector with complex-conjugated entries. |
| $\langle\phi|\psi\rangle$ | "bracket phi psi" | The inner product — a complex number, the probability amplitude. Its magnitude squared is a probability. |
| $\hat{A}$ | "A hat" | An operator (e.g. position, momentum, Hamiltonian). Observables are Hermitian: $\hat{A}^\dagger = \hat{A}$. |
| $\hat{H}$ | "H hat" | The Hamiltonian — total-energy operator. Governs time evolution. |
| $\hbar$ | "h-bar" | Reduced Planck constant: $h / (2\pi) \approx 1.055 \times 10^{-34} \ \text{J s}$. |
| $\psi(x)$ | "wave function" | $\langle x | \psi \rangle$ — the amplitude to find the particle at position $x$. |
| $|\psi(x)|^2$ | "probability density" | Probability per unit length (or volume) of finding the particle at $x$. |
| $[\hat A, \hat B]$ | "commutator of A and B" | $\hat A \hat B - \hat B \hat A$. Zero if the operators commute; nonzero is where the weirdness lives. |
3. State vectors and Dirac notation
Classical mechanics describes a particle by its position and momentum — six real numbers if it's in three dimensions. Quantum mechanics describes a system by a state vector, a unit-length vector in a complex vector space. Dirac introduced a notation that is a little weird the first time and a lot cleaner every time after that. A state is written
A quantum state is a vector

$$|\psi\rangle \in \mathcal{H}$$
- $|\psi\rangle$
- Read "ket psi." It is a column vector whose entries are complex numbers. Think "arrow in a possibly infinite-dimensional space."
- $\mathcal{H}$
- A Hilbert space — a complex vector space with an inner product, complete in the sense that Cauchy sequences converge. For a spin, $\mathcal{H}$ is just $\mathbb{C}^2$. For a particle on a line, it's the space of square-integrable functions on $\mathbb{R}$.
The key move. You are not describing "the particle's position." You are describing something from which predicted probabilities can be computed. That something is the state vector. It may live in very high dimensions, and it may be outside your everyday geometric intuition, but the rules of linear algebra apply exactly.
Every ket $|\psi\rangle$ has a partner called a bra, written $\langle\psi|$. The bra is the Hermitian conjugate: transpose the vector and complex-conjugate the entries. Bras and kets combine to form the inner product:
Inner product (bracket)

$$\langle \phi | \psi \rangle = \overline{\langle \psi | \phi \rangle} \in \mathbb{C}$$
- $\langle \phi | \psi \rangle$
- A complex number. The "overlap" between states $|\phi\rangle$ and $|\psi\rangle$. If the states are orthogonal, it is zero.
- $\overline{\cdot}$
- Complex conjugate. Flipping the order of a bracket complex-conjugates the value.
- $\mathbb{C}$
- The complex numbers. Amplitudes are complex, not just real.
Born rule preview. If $|\phi\rangle$ and $|\psi\rangle$ are normalized ($\langle\psi|\psi\rangle = 1$), then the probability that a system prepared in $|\psi\rangle$ is found in state $|\phi\rangle$ is $|\langle\phi|\psi\rangle|^2$. Amplitudes add, but probabilities come from squared magnitudes — which is why quantum interference looks so weird.
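The Born rule is a two-line NumPy computation. This sketch (state and basis chosen for illustration) asks an equal superposition "are you $|0\rangle$?":

```python
import numpy as np

# |psi> = (|0> + |1>)/sqrt(2), tested against |phi> = |0>
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
phi = np.array([1, 0], dtype=complex)

amp = np.vdot(phi, psi)   # <phi|psi>; np.vdot conjugates its first argument
prob = abs(amp) ** 2      # Born rule: probability = |amplitude|^2
print(f"{prob:.2f}")      # 0.50
```

Note the use of `np.vdot` rather than `np.dot`: the bra is the *conjugated* vector, and forgetting the conjugation is the most common bug in hand-rolled quantum code.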
States can be superposed. Given two basis states $|0\rangle$ and $|1\rangle$, any normalized complex combination is again a valid state:

Superposition

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$
- $|0\rangle, |1\rangle$
- Two basis states — for example, the two spin states of an electron, or the ground and first-excited state of an atom.
- $\alpha, \beta$
- Complex numbers, the amplitudes. Not probabilities themselves.
- $|\alpha|^2 + |\beta|^2 = 1$
- The normalization condition. It says the total probability of getting either outcome is $1$.
Not "both at once." Popular science likes to say a superposed qubit is "0 and 1 at the same time." That's misleading. It is in a state that is neither definitely 0 nor definitely 1, and whose measurement outcome is governed by the amplitudes. Closer to "the direction of this arrow in 2D points somewhere between the x and y axes."
Two more pieces of notation. An outer product $|\psi\rangle\langle\phi|$ is a linear operator: feed it a ket $|\chi\rangle$ and get back $\langle\phi|\chi\rangle \, |\psi\rangle$ — the ket $|\psi\rangle$ scaled by the overlap. A projection operator $|\phi\rangle\langle\phi|$ projects onto the direction of $|\phi\rangle$. The identity operator on any Hilbert space can be written $\sum_n |n\rangle\langle n|$ for any orthonormal basis $\{|n\rangle\}$; this is the completeness relation, and it is how sums and integrals get inserted into calculations.
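Both facts are easy to verify numerically. A minimal sketch in $\mathbb{C}^2$ (the basis and the state $|\phi\rangle$ are arbitrary illustrative choices):

```python
import numpy as np

# Completeness: sum_n |n><n| over an orthonormal basis equals the identity
basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
identity = sum(np.outer(v, v.conj()) for v in basis)  # each term is |n><n|
assert np.allclose(identity, np.eye(2))

# A projector P = |phi><phi| is idempotent: P @ P = P
phi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
P = np.outer(phi, phi.conj())
assert np.allclose(P @ P, P)
print("completeness and idempotence verified")
```

`np.outer(v, v.conj())` is exactly the matrix $|v\rangle\langle v|$: column vector times conjugated row vector.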
4. Operators, eigenvalues, and measurement
An observable — something you can measure — is represented by a Hermitian operator on the Hilbert space. Hermitian means $\hat A^\dagger = \hat A$: the operator equals its own conjugate transpose. Hermitian operators have real eigenvalues, which is exactly the property you want for things you can read off a voltmeter.
The measurement rules (collectively the Born rules) are these. Let $\hat A$ be an observable with eigenstates $|a_n\rangle$ and corresponding eigenvalues $a_n$: $\hat A |a_n\rangle = a_n |a_n\rangle$. Suppose the system is in state $|\psi\rangle$. Expand in the eigenbasis:
Expanding a state in an eigenbasis

$$|\psi\rangle = \sum_n c_n |a_n\rangle, \qquad c_n = \langle a_n | \psi \rangle$$
- $|a_n\rangle$
- An eigenstate of $\hat A$ with eigenvalue $a_n$.
- $c_n$
- The complex amplitude for finding the system in state $|a_n\rangle$. Computed by projecting $|\psi\rangle$ onto $|a_n\rangle$.
- $\sum_n$
- Sum over all eigenstates — or integral, if the spectrum is continuous.
Why expand? Because in the eigenbasis, the operator's action is trivial: multiply each coefficient by its eigenvalue. Every computation becomes easier once you pick the right basis, and "the right basis" is usually the eigenbasis of whatever operator you want to apply.
Now measure $\hat A$. The rule is:
- The measurement yields one of the eigenvalues $a_n$ — never anything else.
- The probability of getting $a_n$ is $|c_n|^2 = |\langle a_n | \psi \rangle|^2$.
- After measurement, the state of the system collapses to $|a_n\rangle$ — the eigenstate corresponding to the outcome.
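The three rules above amount to sampling from the distribution $\{|c_n|^2\}$. A small simulation sketch (the `measure` helper and the shot count are our illustrative choices, not a standard API):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(psi, eigvecs, shots=100_000):
    """Sample Born-rule outcomes: index n with probability |<a_n|psi>|^2.

    eigvecs has the eigenstates |a_n> as its columns.
    """
    probs = np.abs(eigvecs.conj().T @ psi) ** 2   # c_n = <a_n|psi>
    return rng.choice(len(psi), size=shots, p=probs / probs.sum())

# Equal superposition measured in the computational (z) basis
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
outcomes = measure(psi, np.eye(2, dtype=complex))
print(np.bincount(outcomes) / len(outcomes))   # close to [0.5, 0.5]
```

Each individual shot is genuinely random; only the frequencies are predicted, which is the "ensembles" point made below.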
The expected value of $\hat A$ in state $|\psi\rangle$ is
Expectation value

$$\langle \hat A \rangle = \langle \psi | \hat A | \psi \rangle = \sum_n a_n |c_n|^2$$
- $\langle \hat A \rangle$
- Mean value of the observable, averaged over many identical measurements of the same state.
- $\langle \psi | \hat A | \psi \rangle$
- Compute as $\sum_{m,n} c_m^* c_n \langle a_m | \hat A | a_n \rangle$; the matrix element $\langle a_m | \hat A | a_n \rangle$ is $a_n \delta_{mn}$ in the eigenbasis.
Not a prediction of a single outcome. The expectation value is a statistical average, not the result you'll get from one run. Quantum mechanics is fundamentally about ensembles of identically prepared systems.
Two observables commute if $\hat A \hat B = \hat B \hat A$, and they share an eigenbasis. Measuring one does not disturb the other. Two observables that don't commute do not share eigenstates; measuring one changes the state so that subsequent measurements of the other give different outcomes. The canonical non-commuting pair is position and momentum:
Canonical commutation relation

$$[\hat x, \hat p] = i\hbar$$
- $\hat x$
- Position operator. In the position representation, it is just "multiply by $x$."
- $\hat p$
- Momentum operator. In the position representation, $\hat p = -i\hbar \, d/dx$.
- $[\hat x, \hat p]$
- The commutator. Nonzero means position and momentum are incompatible.
- $i$
- The imaginary unit. Its appearance is a dead giveaway that something genuinely quantum is going on.
- $\hbar$
- Reduced Planck constant. Sets the scale of all quantum effects.
Why it matters. This single relation is why the uncertainty principle exists, why quantum mechanics has a unique classical limit, and why a harmonic oscillator's zero-point energy exists. Everything downstream flows from this line.
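You can watch $[\hat x, \hat p]\,\psi = i\hbar\,\psi$ emerge from a finite-difference sketch (grid size, domain, and test function are arbitrary illustrative choices; the agreement is approximate because derivatives are discretized):

```python
import numpy as np

hbar = 1.0
N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
psi = np.exp(-x**2)              # a smooth test function, ~0 at the edges

def p_op(f):
    """Momentum operator -i hbar d/dx via central differences."""
    return -1j * hbar * np.gradient(f, dx)

# [x, p] psi = x (p psi) - p (x psi)
commutator = x * p_op(psi) - p_op(x * psi)

# Away from the boundaries this should equal i hbar psi
interior = slice(10, -10)
print(np.allclose(commutator[interior], 1j * hbar * psi[interior], atol=1e-3))
```

The residual error is $O(dx^2)$, from the central-difference approximation; refine the grid and the identity gets sharper.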
5. The Schrödinger equation
Between measurements, the state vector evolves deterministically in time. The equation of motion was written down by Erwin Schrödinger in late 1925 and published in early 1926:
Time-dependent Schrödinger equation

$$i\hbar \, \partial_t |\psi(t)\rangle = \hat H |\psi(t)\rangle$$
- $|\psi(t)\rangle$
- The state vector at time $t$.
- $\hat H$
- The Hamiltonian — the operator representing total energy. For a particle with mass $m$ in a potential $V(x)$, $\hat H = \hat p^2/(2m) + V(\hat x)$.
- $i\hbar \, \partial_t$
- Time derivative of the state, multiplied by $i\hbar$. The factor of $i$ is essential — it is why superpositions "rotate" in Hilbert space rather than just growing or decaying.
Classical analogy. In Hamiltonian classical mechanics, the Hamiltonian $H(x, p)$ generates time evolution through Poisson brackets. Replace Poisson brackets with commutators divided by $i\hbar$, and the classical equations of motion become their quantum counterparts. The structure is the same; only the vocabulary changes.
In the position representation, writing $\psi(x, t) = \langle x | \psi(t) \rangle$, the Schrödinger equation for a particle of mass $m$ in a potential $V(x)$ becomes a partial differential equation:
Schrödinger equation in position space

$$i\hbar \, \partial_t \psi(x, t) = -\frac{\hbar^2}{2m} \partial_x^2 \psi(x, t) + V(x) \, \psi(x, t)$$
- $\psi(x, t)$
- The wave function. $|\psi(x, t)|^2$ is the probability density for the particle to be at position $x$ at time $t$.
- $-\tfrac{\hbar^2}{2m} \partial_x^2 \psi$
- The kinetic-energy term. Comes from $\hat p^2/(2m)$ with $\hat p = -i\hbar \partial_x$, so $\hat p^2 = -\hbar^2 \partial_x^2$.
- $V(x) \psi$
- Potential-energy term. The potential multiplies the wave function pointwise.
It's a wave equation with a twist. Compare to the classical wave equation $\partial_t^2 \psi = c^2 \partial_x^2 \psi$, which has two time derivatives, and to the heat equation, which has one time derivative but no $i$. Schrödinger's single time derivative with an $i$ is what makes quantum evolution unitary (probability-preserving) instead of diffusive.
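Formally, the solution is $|\psi(t)\rangle = e^{-i\hat H t/\hbar}|\psi(0)\rangle$, and the exponential of $-i$ times a Hermitian matrix is unitary. A minimal sketch for a two-level system (the Hamiltonian, frequency, and `evolve` helper are illustrative choices):

```python
import numpy as np

hbar = 1.0
omega = 2.0
# Two-level Hamiltonian H = (hbar * omega / 2) sigma_z
H = (hbar * omega / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(psi0, t):
    """Apply U(t) = exp(-i H t / hbar), built from H's eigendecomposition."""
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T
    return U @ psi0

psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi_t = evolve(psi0, t=1.0)
print(abs(np.vdot(psi_t, psi_t)))   # norm stays 1: the evolution is unitary
```

The amplitudes rotate in the complex plane at different rates set by the eigenvalues of $\hat H$; nothing grows or decays, which is the content of "unitary."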
If $\hat H$ is time-independent, you can separate variables: $\psi(x, t) = \psi(x) e^{-iEt/\hbar}$. Plugging in gives the time-independent Schrödinger equation:
Time-independent Schrödinger equation

$$-\frac{\hbar^2}{2m} \psi''(x) + V(x) \, \psi(x) = E \, \psi(x)$$
- $\psi(x)$
- A spatial wave function — the time-dependence has been peeled off.
- $E$
- The energy eigenvalue. For bound states, $E$ is one of a discrete set; for unbound states, continuous.
- $V(x)$
- Potential energy. The character of the solutions depends entirely on what $V$ looks like.
An eigenvalue problem. This is "find the eigenfunctions of $\hat H$." The eigenvalues $E$ are the allowed energies; the eigenfunctions are the stationary states. Most of undergraduate quantum mechanics is: pick a potential $V$, solve this equation, interpret the answer.
6. Canonical systems
Particle in a box (infinite square well)
The simplest bound-state problem. A particle lives in a one-dimensional region $0 \le x \le L$ with infinite potential walls: $V(x) = 0$ inside, $\infty$ outside. The wave function must vanish at $x = 0$ and $x = L$ (no penetration into infinite walls). Solve $-(\hbar^2/2m) \psi'' = E \psi$ with these boundary conditions, and you get sine waves that fit an integer number of half-wavelengths in the box.
Energy eigenstates of a 1D box

$$\psi_n(x) = \sqrt{\frac{2}{L}} \sin\!\left(\frac{n \pi x}{L}\right), \qquad E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \dots$$
- $\psi_n(x)$
- The $n$-th stationary wave function. A sine with exactly $n$ antinodes in the box.
- $\sqrt{2/L}$
- Normalization constant — chosen so that $\int_0^L |\psi_n|^2 dx = 1$.
- $E_n$
- The energy of the $n$-th state. Scales as $n^2$, so levels spread out quadratically.
- $n$
- The quantum number. A positive integer labeling the state. There is no $n = 0$ state (that would be $\psi = 0$, no particle).
Why quantized? Only sine waves that vanish at both walls fit, which forces an integer number of half-wavelengths into the box: $\lambda_n = 2L/n$. That's literally a boundary condition. The "forbidden" energies between levels correspond to wavelengths you simply cannot fit into the box.
The ground-state energy is not zero. Even in its lowest state, the particle has kinetic energy $E_1 = \pi^2 \hbar^2 / (2 m L^2)$. This is the zero-point energy, and it is a direct consequence of the uncertainty principle: a particle confined to a region of size $L$ has uncertain momentum of order $\hbar/L$, and therefore kinetic energy of order $\hbar^2/(2 m L^2)$.
Harmonic oscillator
The second canonical system. A particle in the potential $V(x) = \tfrac{1}{2} m \omega^2 x^2$ (a spring, essentially). The time-independent Schrödinger equation here can be solved by an algebraic trick (ladder operators) that is one of the cleanest calculations in physics. The result is
Harmonic oscillator spectrum

$$E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \dots$$
- $\omega$
- Classical angular frequency of the oscillator: $\omega = \sqrt{k/m}$ for a spring constant $k$.
- $E_n$
- The $n$-th energy level, evenly spaced.
- $n$
- Non-negative integer, starting at zero. $n$ is the number of "quanta" in the oscillator.
- $\tfrac{1}{2} \hbar \omega$
- The ground-state energy — half a quantum, always present, impossible to remove. The zero-point energy of the vacuum, in the modern version of the calculation.
Why this matters. Almost every system behaves like a harmonic oscillator near equilibrium — Taylor-expand the potential to second order and there you are. So harmonic-oscillator physics shows up in phonons, photons, molecular vibrations, quantum fields, and nearly every quantum-mechanical model you'll ever write down.
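The ladder-operator construction is concrete enough to code. This sketch builds the annihilation operator $a$ in a truncated number basis and recovers the evenly spaced spectrum (the basis size and the `oscillator_levels` helper are our choices):

```python
import numpy as np

def oscillator_levels(n_max=20, hbar=1.0, omega=1.0):
    """Diagonalize H = hbar*omega*(a^dag a + 1/2) in a truncated number basis."""
    n = np.arange(1, n_max)
    a = np.diag(np.sqrt(n), 1)      # annihilation: a|n> = sqrt(n)|n-1>
    H = hbar * omega * (a.T @ a + 0.5 * np.eye(n_max))
    return np.linalg.eigvalsh(H)

print(oscillator_levels(5))         # [0.5 1.5 2.5 3.5 4.5]
```

The number operator $a^\dagger a$ comes out exactly diagonal with entries $0, 1, 2, \dots$, so the spectrum $\hbar\omega(n + \tfrac{1}{2})$ is exact even in a truncated basis.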
Hydrogen atom
The big result — the original reason anyone believed quantum mechanics. A single electron in the Coulomb potential of a proton, $V(r) = -k e^2/r$ where $k = 1/(4\pi\epsilon_0)$, has bound states with energies
Hydrogen energy levels

$$E_n = -\frac{13.6 \ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots$$
- $E_n$
- Energy of the $n$-th bound state of hydrogen. Negative because these are bound (trapped) states; positive energy means the electron is free.
- $n$
- Principal quantum number. The ground state is $n = 1$.
- $-13.6 \ \text{eV}$
- The binding energy of the ground state. It equals $m_e e^4 k^2 / (2 \hbar^2)$ — an unlikely combination of constants that happens to come out to a number you can spell.
What you can predict from this formula alone. A photon emitted when an electron drops from $n$ to $m$ carries energy $E_n - E_m = 13.6 (1/m^2 - 1/n^2)$ eV. For $m = 2$ and $n = 3, 4, 5, \dots$ you get the Balmer series — the visible-light emission lines of hydrogen at 656, 486, 434, 410 nm. These are the red, blue-green, and violet lines you see through a diffraction grating looking at a hydrogen lamp. Quantum mechanics reproduced them to within experimental precision on the first try.
The full solution has three quantum numbers: $n$ (energy), $\ell$ (angular momentum magnitude), and $m$ (angular momentum projection). With spin added, $n, \ell, m, m_s$ account for every state in the atom — and, via the Pauli exclusion principle, for the entire periodic table.
7. Interactive: particle in a box
Slide the quantum number $n$ to see the wave function $\psi_n(x)$ (cyan) and its probability density $|\psi_n(x)|^2$ (violet). Count the antinodes: state $n$ has exactly $n$ of them. The energy $E_n$ grows as $n^2$, and the probability density gets wavy with more and more peaks — until, for large $n$, it starts to look roughly uniform, which is the correspondence principle in action (classical particles in a box spend equal time everywhere).
8. The uncertainty principle
Heisenberg's uncertainty relation is not a statement about measurement error. It is a statement about what kinds of states exist. For any state $|\psi\rangle$, define
Variance of an observable

$$\sigma_x^2 = \langle \hat x^2 \rangle - \langle \hat x \rangle^2, \qquad \sigma_p^2 = \langle \hat p^2 \rangle - \langle \hat p \rangle^2$$
- $\sigma_x^2$
- The variance of position in the state $|\psi\rangle$ — the mean of $\hat x^2$ minus the square of the mean of $\hat x$. Standard statistics.
- $\sigma_p^2$
- The variance of momentum, defined the same way.
What these measure. $\sigma_x$ tells you how spread out the wave function is in space. $\sigma_p$ tells you how spread out its Fourier transform is in momentum. A narrow, localized wave packet has small $\sigma_x$ and (necessarily) large $\sigma_p$.
Heisenberg's relation then says:
Heisenberg uncertainty principle

$$\sigma_x \sigma_p \ge \frac{\hbar}{2}$$
- $\sigma_x \sigma_p$
- Product of the standard deviations of position and momentum in a given state.
- $\hbar/2$
- The lower bound. No quantum state can beat it. The bound is saturated by Gaussian wave packets.
Why it's inevitable. Pin down $x$ to within a small window $\Delta x$, and the wave function's Fourier transform — its momentum distribution — must have a spread of at least $\hbar/(2 \Delta x)$. Squeezing one always stretches the other. This is the same mathematical fact that makes a short audio pulse span a wide frequency range; quantum mechanics just applies it to the amplitude that describes a particle. The general form, valid for any two observables: $\sigma_A \sigma_B \ge \tfrac{1}{2} |\langle [\hat A, \hat B] \rangle|$.
A consequence: the electron in a hydrogen atom cannot just "fall into the nucleus." If it did, $\Delta x$ would vanish, forcing $\Delta p$ (and hence kinetic energy) to infinity. The balance between kinetic uncertainty energy and the attractive Coulomb energy determines the size of the atom — the Bohr radius $a_0 \approx 5.29 \times 10^{-11}$ m. Atoms are big because of the uncertainty principle.
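The claim that Gaussians saturate the bound can be checked numerically. A sketch with an arbitrary width $\sigma$ (grid parameters are illustrative; $\langle p^2 \rangle$ is computed as $\hbar^2 \int |\psi'|^2 \, dx$, valid after integration by parts for a normalized real wave function):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Gaussian wave packet of width sigma (real, so <p> = 0)
sigma = 0.7
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize

var_x = np.sum(x**2 * np.abs(psi)**2) * dx       # <x> = 0 by symmetry
dpsi = np.gradient(psi, dx)
var_p = hbar**2 * np.sum(np.abs(dpsi)**2) * dx   # <p^2> = hbar^2 int |psi'|^2

print(np.sqrt(var_x * var_p))                    # ~0.5 = hbar/2
```

Change `sigma` and both variances move, but their product stays pinned at $\hbar/2$: squeezing position spreads momentum by exactly the compensating amount.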
9. Spin and two-level systems (qubits)
Classical angular momentum is a continuous three-vector. Quantum angular momentum — spin — has a discrete spectrum. An electron has spin $1/2$, which means its spin angular momentum along any chosen axis takes one of exactly two values: $+\hbar/2$ ("up") or $-\hbar/2$ ("down"). Not "anywhere in between." Just those two.
The state space for spin-$1/2$ is $\mathbb{C}^2$. You can write any state as
A qubit

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$
- $|0\rangle, |1\rangle$
- The two basis states — conventionally "spin up" and "spin down" along the $z$-axis, or the computational basis of a qubit.
- $\alpha, \beta$
- Complex amplitudes, constrained by normalization.
Qubit = spin-1/2. A qubit is mathematically identical to the spin of an electron, or the polarization of a photon, or the superposition of two atomic energy levels. The hardware is different; the Hilbert space is the same.
The observables for spin are the Pauli matrices:
Pauli matrices

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
- $\sigma_x, \sigma_y, \sigma_z$
- Three $2 \times 2$ Hermitian matrices, each with eigenvalues $+1$ and $-1$. The spin-1/2 operators are $\hat S_i = (\hbar/2) \sigma_i$.
- $i$
- The imaginary unit, showing up in $\sigma_y$.
The three axes are incompatible. $[\sigma_x, \sigma_y] = 2 i \sigma_z$, and cyclic. You cannot simultaneously know a spin's $x$-component and its $y$-component. Measure $\sigma_z$ and find "up," and the next $\sigma_x$ measurement gives $+1$ or $-1$ with equal probability — measurement of one Pauli operator randomizes the others.
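The whole Pauli algebra fits in a few lines of NumPy — a quick verification of the commutation relations and the $\pm 1$ eigenvalues:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# [sigma_x, sigma_y] = 2i sigma_z, and cyclic permutations
assert np.allclose(sx @ sy - sy @ sx, 2j * sz)
assert np.allclose(sy @ sz - sz @ sy, 2j * sx)
assert np.allclose(sz @ sx - sx @ sz, 2j * sy)

# Each Pauli matrix squares to the identity, so its eigenvalues are +-1
for s in (sx, sy, sz):
    assert np.allclose(s @ s, np.eye(2))
    assert np.allclose(np.linalg.eigvalsh(s), [-1, 1])
print("Pauli algebra verified")
```

The nonzero commutators are the algebraic statement that no state can have definite values along two axes at once.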
Worked example — measuring a superposed qubit
A qubit is in state $|\psi\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. What happens when you measure $\sigma_z$? The outcomes are $+1$ (the $|0\rangle$ eigenvalue) with probability $|1/\sqrt{2}|^2 = 1/2$, and $-1$ with probability $1/2$. A 50-50 coin flip. Measure $\sigma_x$ instead, and the state $|\psi\rangle$ is already the $+1$ eigenstate of $\sigma_x$ — you get $+1$ with certainty. Same state, different basis, different outcomes. This is why the choice of measurement axis is part of the physics.
10. Entanglement and Bell's theorem
Two qubits live in $\mathbb{C}^2 \otimes \mathbb{C}^2 = \mathbb{C}^4$, spanned by $|00\rangle, |01\rangle, |10\rangle, |11\rangle$. A general two-qubit state is some complex combination of those four. Most of them cannot be written as a tensor product of single-qubit states. Those states are called entangled. The most famous one is
A Bell state

$$|\Phi^+\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$$
- $|\Phi^+\rangle$
- One of the four maximally-entangled Bell states.
- $|00\rangle$
- Both qubits in state $|0\rangle$.
- $|11\rangle$
- Both qubits in state $|1\rangle$.
The correlation, plain. Measure the first qubit in the $z$-basis. You get $|0\rangle$ or $|1\rangle$, 50-50. Measure the second qubit in the same basis. Always the same answer. Always. This would be easy to explain if each qubit had a "hidden" label assigned before the experiment. Bell's theorem says no such hidden label can reproduce all the correlations you actually observe when you switch between measurement axes.
Bell, in 1964, derived an inequality that any "local hidden variables" theory must satisfy. Then he showed that quantum mechanics violates the inequality for certain entangled states. Experiments starting with Alain Aspect in 1982 and culminating in loophole-free tests around 2015 have repeatedly verified the quantum prediction. No local classical story reproduces the data. The universe is non-classical in a provable, experimentally decided way. Aspect, Clauser, and Zeilinger shared the 2022 Nobel Prize in Physics for this line of work.
Entanglement is not "faster-than-light communication." You cannot use it to send a signal. But it is a genuine correlation that cannot be explained by any pre-measurement labeling. It is the physical resource that makes quantum computing, quantum cryptography, and quantum teleportation work. Most quantum information theory is, at the end of the day, bookkeeping about who has access to which part of an entangled state.
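The Bell violation itself is a short computation. This sketch evaluates the CHSH combination for $|\Phi^+\rangle$ (the spin observable $A(\theta) = \cos\theta\,\sigma_z + \sin\theta\,\sigma_x$ and the angle choices are the standard optimal ones; the helper names are ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # |Phi+>

def A(theta):
    """Spin measurement along angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(ta, tb):
    """Correlation <A(ta) (x) A(tb)> in the Bell state."""
    O = np.kron(A(ta), A(tb))
    return np.real(bell.conj() @ O @ bell)

# Optimal CHSH angles: a = 0, a' = pi/2, b = pi/4, b' = -pi/4
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(S)   # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
```

Any local hidden-variable model is bounded by $|S| \le 2$; quantum mechanics reaches $2\sqrt{2}$, and experiment agrees with quantum mechanics.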
See also
Special Relativity
Merging quantum mechanics with special relativity gives quantum field theory, and the Dirac equation predicted antimatter before anyone knew such a thing existed.
General Relativity
The one place GR and QM still don't agree — black hole interiors and the very early universe — is where quantum gravity lives. Unresolved.
Linear Algebra
Bras, kets, operators, and eigenvalues are pure linear algebra over the complex numbers. If this page feels hard, go there first.
Quantum Computing
Gates, circuits, and algorithms built on qubits. The CS-side view of everything on this page, specialized to finite-dimensional Hilbert spaces.
11. Quantum mechanics in code
A couple of short primitives: construct the Hamiltonian of a 1D particle in a box, diagonalize it with NumPy, and verify the $n^2$ spectrum. Then a qubit simulation — build a state, apply a gate, compute expectation values.
```python
import numpy as np

# ---------- Particle in a box via finite-difference Hamiltonian ----------
def box_spectrum(N=400, L=1.0, hbar=1.0, m=1.0):
    """Diagonalize the kinetic-energy operator on a grid with hard walls."""
    dx = L / (N + 1)
    main = np.full(N, -2.0)
    off = np.full(N - 1, 1.0)
    T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    H = -(hbar * hbar) / (2 * m * dx * dx) * T
    eigvals, eigvecs = np.linalg.eigh(H)
    return eigvals, eigvecs, dx

evals, _, dx = box_spectrum()
# Exact: E_n = n^2 pi^2 / 2 for hbar = m = L = 1
for n in range(5):
    exact = (n + 1) ** 2 * np.pi ** 2 / 2
    print(f"n={n+1}: numeric={evals[n]:.4f}, exact={exact:.4f}")

# ---------- Qubit state, Pauli operators, expectation values ----------
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Equal superposition: (|0> + |1>) / sqrt(2)
psi = (1 / np.sqrt(2)) * np.array([1, 1], dtype=complex)
print(f"<sz> = {np.real(psi.conj() @ sz @ psi):.3f}")  # 0
print(f"<sx> = {np.real(psi.conj() @ sx @ psi):.3f}")  # 1

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
I = np.eye(2, dtype=complex)
Z1 = np.kron(sz, I)
Z2 = np.kron(I, sz)
print(f"<Z1 Z2> = {np.real(bell.conj() @ (Z1 @ Z2) @ bell):.3f}")  # 1
```

```python
# ---------- Hydrogen emission wavelengths (Balmer series), no NumPy needed ----------
R_eV = 13.605693       # Rydberg energy in eV
hc_eV_nm = 1239.84193  # h*c in eV*nm

def balmer_wavelength_nm(n):
    """Wavelength of the photon emitted in the transition n -> 2."""
    dE = R_eV * (1 / 4 - 1 / (n * n))
    return hc_eV_nm / dE

for n in range(3, 7):
    print(f"Balmer n={n}->2: {balmer_wavelength_nm(n):7.2f} nm")
# Should print ~656, 486, 434, 410 nm
```
12. Cheat sheet
| Item | Formula |
|---|---|
| Schrödinger (time-dep) | $i\hbar\,\partial_t |\psi\rangle = \hat H |\psi\rangle$ |
| Schrödinger (TISE) | $\hat H \psi = E \psi$ |
| Born rule | $P(a_n) = |\langle a_n | \psi \rangle|^2$ |
| Canonical commutator | $[\hat x, \hat p] = i\hbar$ |
| Uncertainty | $\sigma_x \sigma_p \ge \hbar/2$ |
| Box energy | $E_n = n^2 \pi^2 \hbar^2 / (2 m L^2)$ |
| Harmonic oscillator | $E_n = \hbar\omega(n + \tfrac{1}{2})$ |
| Hydrogen levels | $E_n = -13.6 \ \text{eV} / n^2$ |
| Pauli matrices | $\sigma_x, \sigma_y, \sigma_z$; $[\sigma_x,\sigma_y] = 2i\sigma_z$ |
| Bell state | $|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ |
Further reading
- Paul Dirac — The Principles of Quantum Mechanics (1930). The book that fixed the notation. Still the best place to feel why bras and kets are the right objects.
- Richard Feynman, Robert Leighton, Matthew Sands — The Feynman Lectures on Physics, Volume III. Starts from two-slit experiments and builds the theory from the amplitude picture. Freely readable at feynmanlectures.caltech.edu/III.
- David Griffiths — Introduction to Quantum Mechanics. The standard undergraduate textbook. Problem-driven, readable, dependable.
- J. J. Sakurai and Jim Napolitano — Modern Quantum Mechanics. Graduate level. Starts from Stern–Gerlach and spin before going to wave functions; a clean path for people who prefer abstraction first.
- John Bell — On the Einstein Podolsky Rosen Paradox, Physics 1 (1964). The original Bell inequality paper. Four pages; famous for good reason.
- Michael Nielsen and Isaac Chuang — Quantum Computation and Quantum Information. The bible of quantum information. Opens with a careful presentation of qubits, measurements, and entanglement.
- Alain Aspect et al. — Experimental Test of Bell's Inequalities Using Time-Varying Analyzers, Phys. Rev. Lett. 49 (1982). The first decisive experimental violation.