
How to Solve for Eigenstates: The Time-Independent Schrödinger Equation

Source: https://hackernoon.com/how-to-solve-for-eigenstates-a-time-independent-schrodinger-equation-5q1h34on


Arvin Kushwaha (@arvinsk)

NCSSM Class of 2021 with interests in Computer Science, Physics, Entrepreneurship, and Math

If you’ve ever ventured anywhere near the field of quantum mechanics, you’ve almost certainly heard of the “mystical” Schrödinger Equation.

iℏ ∂Ψ/∂t = ĤΨ

Much like Newton’s Second Law describes the propagation of particles in a classical system, where every particle has definite properties like position and momentum, the Schrödinger equation allows us to understand how a quantum system evolves, where the particles are collectively described by a single wavefunction.


What’s the difference between the Time-Dependent and Time-Independent Schrödinger Equation?


Since the Time-Dependent Schrödinger Equation (TDSE) admits standing-wave solutions, we can solve for those specifically by separating out the time dependence, which simplifies the TDSE into the Time-Independent Schrödinger Equation (TISE).
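That simplification is the standard separation-of-variables step; a quick sketch in the usual notation (ψ is the spatial part of Ψ):

```latex
% Assume a standing-wave (separable) solution:
\Psi(x, t) = \psi(x)\, e^{-iEt/\hbar}
% Substituting into the TDSE, i\hbar\,\partial_t \Psi = \hat{H}\Psi,
% the time derivative pulls down a factor of E:
i\hbar\,\partial_t \Psi = E\,\psi(x)\, e^{-iEt/\hbar} = \hat{H}\,\psi(x)\, e^{-iEt/\hbar}
% Cancelling the common phase factor e^{-iEt/\hbar} leaves the TISE:
E\,\psi(x) = \hat{H}\,\psi(x)
```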

Eψ = Ĥψ

If you look at the TISE, you’ll see two stark differences:


Firstly, there is no time dependence (though that should be implied by the name of the equation). Standing waves remain constant over time (except in phase), so the probability density of the particle remains constant. Thus, the time at which the wavefunction is captured is irrelevant.
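In symbols, the phase drops out of the probability density:

```latex
|\Psi(x, t)|^2 = \left| \psi(x)\, e^{-iEt/\hbar} \right|^2
               = \psi^*(x)\,\psi(x)\; e^{+iEt/\hbar} e^{-iEt/\hbar}
               = |\psi(x)|^2
```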


Secondly, instead of the partial derivative with respect to time on the left-hand side (which represents our energy operator), we have a constant energy (E), which represents the energy of the state. Therefore, the solutions to the TISE are energy eigenstates.


If you want to learn more about Quantum Mechanics before we dive in, I highly recommend checking out MIT OpenCourseWare at https://ocw.mit.edu, which has some excellent lectures on Quantum Mechanics.


HOW DO WE SOLVE IT, THOUGH?

Because of the simplified structure of the TISE, we can use a few clever (in my opinion, at least) tricks to solve for the energy levels and eigenstates.


If you’ve taken linear algebra, the word eigenstate probably reminds you of another similar term: eigenvector. Let’s examine the TISE again, where we see a very familiar structure.

Eψ = Ĥψ

On the right-hand side, we have the operator Ĥ (a.k.a. the Hamiltonian) being applied to our eigenstate, and on the left-hand side, we have the scalar E (a.k.a. the total energy) scaling the eigenstate.

λv = Av

Here we have the equation describing the relationship between a transformation matrix A, an eigenvector v, and its corresponding eigenvalue λ. Just like in the TISE, the matrix A plays the role of the Hamiltonian, and the eigenvalue λ plays the role of the energy.


THAT ALL SEEMS GREAT, BUT HOW DO WE TURN THE OPERATOR INTO A MATRIX?

Unfortunately, we can’t perfectly convert the operator into a matrix. However, we can closely approximate the operator using massive matrices.


Think back to your introductory calculus class when you had to differentiate with the difference quotient. Surprise, surprise! We’re doing that again (but in a slightly different manner).


In this instance, we will use a finite number of points to sample the wavefunction. We can approximate the second derivative with the difference quotient like so:

ψ''(x) ≈ (ψ(x + Δx) − 2ψ(x) + ψ(x − Δx)) / Δx²

If we take a look at the difference quotient here, we notice that it is symmetrical, which is to say, the samples used to calculate the second derivative lie symmetrically around the point at which we seek the derivative. This means we can create a Hermitian matrix that applies the second derivative to the vector of wavefunction samples.
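As a quick sanity check of that centered difference quotient, here is a minimal sketch (the sample function sin(x) and the step size are my own choices, not from the article):

```python
import numpy as np

# Approximate the second derivative of sin(x) at x0 with the
# symmetric (centered) difference quotient:
#   f''(x) ≈ (f(x + dx) - 2 f(x) + f(x - dx)) / dx**2
x0 = 0.7
dx = 1e-3
approx = (np.sin(x0 + dx) - 2 * np.sin(x0) + np.sin(x0 - dx)) / dx**2

# The exact second derivative of sin is -sin; the error shrinks as O(dx**2).
exact = -np.sin(x0)
print(approx, exact)
```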

D₂ = (1/Δx²) · tridiag(1, −2, 1), an N×N matrix with −2 along the main diagonal and 1 along the two adjacent diagonals.

Unfortunately, we cannot wholly describe a derivative with this matrix representation (or with any finite-size matrix).


However, with this revelation of using matrices to approximate derivatives, we are one step closer to solving this problem.


The Hamiltonian consists of two parts: the kinetic and potential energies.

Ĥ = −(ℏ²/2m) d²/dx² + V(x)

Our matrix representation for the second derivative can be scaled by −ℏ²/2m (built from the reduced Planck constant and the mass of our system) to give us our kinetic energy.


As for the matrix representation of the potential energy, we simply create a diagonal matrix with our sampled potential energy function.


The total energy (represented by our Hamiltonian) is the sum of the kinetic and potential energies, giving us our final matrix:


For simplicity, we will assume that the reduced Planck constant is 1 and the mass is 0.5, so that the coefficient −ℏ²/2m reduces to −1.

H = −D₂ + diag(U), with D₂ the second-derivative matrix and U the sampled potential.

Now that we have our matrix representation, we can rewrite our TISE into a form we can approximate the solution for.


Bold variables represent our matrix approximations and arrowed variables are vectors.

E ψ⃗ = H ψ⃗

Finally, we have an eigenvalue problem we can solve! Let’s dive in!


AT LAST, THE GOOD STUFF: CODE

LANDING LIBRARIES & PREPARING PARAMETERS


We’re now at the final stretch where we can make all our lives easier just using code.


For this section, I’m assuming you have Python 3 and the NumPy, SciPy, and Matplotlib libraries ready to go.


First off, let’s import the necessary methods and modules!

import numpy as np
from scipy.sparse import diags #Allows us to construct our matrices
from scipy.sparse.linalg import eigsh #Solves the Eigenvalue problem
import matplotlib.pyplot as plt


Because our matrices are mostly just filled with zeros, we can use sparse matrices to hold our approximation for the Hamiltonian to conserve storage and optimize our calculations.
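To see the saving concretely, here is a small sketch (the size N = 1000 matches the article; the comparison itself is mine) counting the stored entries of the sparse tridiagonal matrix against what a dense N×N array would hold:

```python
from scipy.sparse import diags

N = 1000
# Tridiagonal second-derivative pattern: only the three diagonals are stored.
D2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N)).tocsr()

print(D2.nnz, "stored entries, versus", N * N, "for a dense matrix")
```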


The scipy.sparse module provides a vast array of methods that we can use to make sure our program is efficient.

Now that we’ve imported the necessary information, let’s set the parameters that allow us to keep our program as flexible as possible.


First off, let’s decide how accurate we want our program to be.


Our variable N will determine the size of our matrices, where a bigger matrix typically tends to be more accurate. This comes with the heavy price of computational expense: the bigger our matrices, the longer the eigenstates will take to compute.

I find that N = 1000 is a suitable value for my device, but depending on your computational constraints, another value might be better suited for you. Play around!

Next, we’ll set the physical parameters. Remember the Planck constant and the mass of the system? We’ll set those here. For simplicity, however, we’ll just leave them as hbar = 1 and m = 0.5.

Of course, we can’t do much in the way of calculations without setting a coordinate system. Since we’re working in one dimension right now, we’ll use the variable x to keep track of our x-axis and the variables xmin and xmax to record the limits of our axis.

Let’s set xmin = -1 and xmax = 1 to keep a reasonable area. To get our x, we’ll use the np.linspace method to create our axis: x = np.linspace(xmin, xmax, N).

Of course, we also need to remember dx so we can scale our kinetic energy matrix appropriately. This gives us dx = (xmax - xmin)/(N + 1). (Strictly, np.linspace with N points has a spacing of (xmax - xmin)/(N - 1); the (N + 1) form comes from treating the two wall points as lying just outside the grid, and for large N the difference is negligible.)

Finally, we need the potential function for which we are solving our TISE. Let’s start off with the finite well.


If you’ve taken a QM course before, you’ve probably come across this potential. It’s a fairly standard potential with easily recognizable solutions.


(Image sourced from Annafitzgerald, CC BY-SA 3.0, via Wikimedia Commons)


We’ll store the potential function in the variable U. But to create our well, we need to decide the length of the well (L) and the depth of the well (W). Since our axis spans from -1 to 1, we can set L = 1.5, so our well spans from -0.75 to 0.75. We’ll also set W = 100, because it’s nice and large, and gives us some room for adjustment (I also might have tested for some nice peaceful-looking numbers).

To create our potential functions, we can use the np.where method, which allows us to conditionally set the values of the array depending on the x-values:

U = np.where((x < L/2) & (x > -L/2), 0, W)

Let’s sum it all up!

### IMPORTS ###
import numpy as np
from scipy.sparse import diags #Allows us to construct our matrices
from scipy.sparse.linalg import eigsh #Solves the Eigenvalue problem
import matplotlib.pyplot as plt

### CONSTANTS ###
N = 1000
hbar = 1
m = 0.5
xmin = -1
xmax = 1
x = np.linspace(xmin, xmax, N)
dx = (xmax - xmin)/(N + 1)
L = 1.5
W = 100
U = np.where((x < L/2) & (x > -L/2), 0, W)

Tada! We have all of our constants set up!


MANUFACTURING MATRICES & EVALUATING EIGENSTATES

Next, we have to produce the matrices to actually calculate our eigenstates. Thankfully, all the hard work of figuring out how these matrices work has already been done.


Let’s start off with the matrix for the second-derivative.


D₂ = (1/Δx²) · tridiag(1, −2, 1)

To create this matrix, we’ll use the scipy.sparse.diags method. This method allows us to feed in a list of floats or ndarrays containing the values for the diagonals, along with an int list describing the offsets (which diagonal our values will live on).

We see that the -2 values are on the main diagonal (offset = 0). The 1’s are on the closest upper and lower diagonals (offsets = ∓1).
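Here is what that call produces on a tiny 4×4 example (the size is just for display):

```python
from scipy.sparse import diags

# -2 on the main diagonal (offset 0), 1 on the nearest
# sub- and super-diagonals (offsets -1 and +1):
D2 = diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(4, 4))
print(D2.toarray())
# [[-2.  1.  0.  0.]
#  [ 1. -2.  1.  0.]
#  [ 0.  1. -2.  1.]
#  [ 0.  0.  1. -2.]]
```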


We also can’t forget our scaling factor 1 / (dx**2)!

Plugging that all into the function, we get our variable d2dx2:
d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N))/(dx**2)

From here, we can create our kinetic energy operator (T), which is just d2dx2 scaled by -(hbar**2)/(2 * m), to give us:
T = -(hbar**2)/(2 * m) * d2dx2

We’re nearly halfway done!


To create our potential energy operator (V), all we have to do is stick it on a diagonal, which is easier done than said.
V = diags([U], offsets=[0], shape=(N, N))

And with that, our potential energy operator is complete.


Finally, to get our Hamiltonian we just sum the potential energy and kinetic energy.

H = T + V

With the Hamiltonian, all we have to do now is actually calculate the eigenvalues and eigenvectors.


Buuut, before we do that, let’s discuss our eigenvalue and eigenvector method, eigsh.

In our problem, the eigenvalues represent the energies of our wavefunctions, so we aim to find the smallest eigenvalue and eigenvector pairs.


Unfortunately, eigsh, by default, returns the largest eigenvalues. However, it does have a handy keyword parameter to return the smallest eigenvalues first.
eigvals, eigvecs = eigsh(H, which="SM")
eigvals and eigvecs are our two variables containing the eigenvalues and eigenvectors, respectively.
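Here is a minimal sketch of that call on the simplest solvable case, the infinite square well (U = 0 with hard walls), whose energy levels are known analytically. The well choice is mine, and I use sigma=0 (shift-invert mode), which also returns the eigenvalues nearest zero but is usually much faster than which="SM":

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Same conventions as the article: hbar = 1, m = 0.5.
hbar, m, N = 1, 0.5, 1000
xmin, xmax = -1, 1
dx = (xmax - xmin) / (N + 1)

# Hamiltonian for U = 0: just the scaled second-derivative matrix.
d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N)) / dx**2
H = -(hbar**2) / (2 * m) * d2dx2

# sigma=0 finds the eigenvalues closest to 0, i.e. the smallest energies.
eigvals, eigvecs = eigsh(H, k=3, sigma=0)

# Analytic levels of a box of width L_box: E_n = (n pi hbar)^2 / (2 m L_box^2)
L_box = xmax - xmin
exact = np.array([(n * np.pi * hbar)**2 / (2 * m * L_box**2) for n in (1, 2, 3)])
print(np.sort(eigvals))  # should closely match `exact`
```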

To tie it all together:

### PREPARE MATRICES ###
d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N))/(dx**2)
T = -(hbar**2)/(2 * m) * d2dx2
V = diags(U)
H = T + V

### CALCULATE EIGENSTATES ###
eigvals, eigvecs = eigsh(H, which="SM", k=k)

You’ll notice there is an extra parameter k; that’s just the number of solutions you want returned. The default is 6.

VISUALIZING VECTORS & PLOTTING PROBABILITIES

Of course, as they say, “A picture is worth a thousand words.” I’d like this code to reflect that, so we’ll just create some nice old visualization code.


First, let’s analyze what our two variables (eigvals and eigvecs) are like.
eigvals: An np.ndarray of floats with shape (k,) listing the eigenvalues.
eigvecs: An np.ndarray of floats with shape (N, k) listing the eigenvectors.

To get the pairs of eigenvalues and eigenvectors: essentially, eigvals[i] pairs with eigvecs[:, i].
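A tiny sketch of that pairing convention (the 3×3 matrix is arbitrary; np.linalg.eigh uses the same column convention as eigsh): each column of eigvecs satisfies the eigenvalue equation with the matching entry of eigvals.

```python
import numpy as np

# Any symmetric matrix works for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)

# eigvals[i] pairs with the COLUMN eigvecs[:, i]:
for i in range(len(eigvals)):
    print(np.allclose(A @ eigvecs[:, i], eigvals[i] * eigvecs[:, i]))
```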

Now that we have that, let’s start plotting! (Yay, we can finally bring in the Matplotlib!)


First, let’s plot our potential:

plt.plot(x, U)

To visualize the i-th eigenvalue and eigenvector, we will plot a dotted line at the energy level along with the eigenvector itself. Of course, these eigenvectors aren’t already located near the energy level, so for our visualization, we’ll just add the energy level to the eigenvector (read: wavefunction) to translate it to the appropriate location.


We’re also going to display our probability distribution. If you remember, the probability distribution is the product of the wavefunction with its conjugate. Thankfully, the .conj() method of np.ndarray comes to our rescue!

plt.plot(x, np.full_like(x, eigvals[i]), "k--")
plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale ** 2, 'b')

Sorry, sorry, once again I threw in an extra parameter, scale (I prefer scale = 60). All for good reason, though: depending on the scale of the system, it might be helpful to scale the wavefunctions to a more visible size.


Throwing in the loop we get:

for i in range(k):
    plt.plot(x, np.full_like(x, eigvals[i]), "k--")
    plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
    plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale ** 2, 'b')

Of course, we can’t have a proper display without showing our plot, so:

plt.show()

Wrapping it all up, our plotting demands can be satisfied in just six very simple lines:

### PLOTTING ###
plt.plot(x, U)
for i in range(k):
    plt.plot(x, np.full_like(x, eigvals[i]), "k--")
    plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
    plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale * scale, 'b')
plt.show()

CONCATENATING CODE

We’re almost at the end of our thrilling journey of actually programming the “solver,” but we’re not entirely done yet. Let’s combine all the code together to create our final program.

### IMPORTS ###
import numpy as np
from scipy.sparse import diags #Allows us to construct our matrices
from scipy.sparse.linalg import eigsh #Solves the Eigenvalue problem
import matplotlib.pyplot as plt

### CONSTANTS ###
N = 1000
hbar = 1
m = 0.5
xmin = -1
xmax = 1
x = np.linspace(xmin, xmax, N)
dx = (xmax - xmin)/(N + 1)
L = 1.5
W = 100
U = np.where((x < L/2) & (x > -L/2), 0, W)
k = 6      #Number of eigenstates to solve for
scale = 60 #Visual scaling for the plotted wavefunctions

### PREPARE MATRICES ###
d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N))/(dx**2)
T = -(hbar**2)/(2 * m) * d2dx2
V = diags(U)
H = T + V

### CALCULATE EIGENSTATES ###
eigvals, eigvecs = eigsh(H, which="SM", k=k)

### PLOTTING ###
plt.plot(x, U)
for i in range(k):
    plt.plot(x, np.full_like(x, eigvals[i]), "k--")
    plt.plot(x, eigvals[i] + eigvecs[:, i] * scale, 'k')
    plt.plot(x, eigvals[i] + eigvecs[:, i] * eigvecs[:, i].conj() * scale * scale, 'b')
plt.show()

Our graphs have four main parts. The massive potential well is fairly self-explanatory, the blue squiggles are our probability distributions, the black curves are our wavefunctions, and, as mentioned earlier, the dashed black lines represent our energy levels. As far as I can see, our wavefunctions match up extremely well with the real solutions. Notice how, in the textbook solutions, the probability of being outside the box increases as the energy levels increase; thankfully, our “solver” gives us the very same behavior.


Now that we’ve got the Finite Potential Well down, let’s test out another classroom favorite: the Harmonic Oscillator.


With the Harmonic Oscillator (U = 150 * x**2), we get the following energy levels:


Just as expected, we see the equally spaced energy levels of our harmonic oscillator. Another good sign is the symmetry (and anti-symmetry) of our probability distributions and wavefunctions.
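If you want a numerical check of that spacing, here is a sketch under the article’s parameters. Writing U = 150x² as ½mω²x² gives ω = √(2·150/m) = √600 ≈ 24.5, so the levels should be spaced by ℏω. As before, I use sigma=0 in eigsh (equivalent to, but faster than, which="SM"):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

hbar, m, N = 1, 0.5, 1000
xmin, xmax = -1, 1
x = np.linspace(xmin, xmax, N)
dx = (xmax - xmin) / (N + 1)
U = 150 * x**2  # the harmonic oscillator potential from the article

d2dx2 = diags([1, -2, 1], offsets=[-1, 0, 1], shape=(N, N)) / dx**2
H = -(hbar**2) / (2 * m) * d2dx2 + diags(U)

eigvals, _ = eigsh(H, k=3, sigma=0)
spacings = np.diff(np.sort(eigvals))

omega = np.sqrt(2 * 150 / m)   # from 150*x**2 == 0.5*m*omega**2*x**2
print(spacings, hbar * omega)  # spacings should be roughly hbar*omega
```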


CONCLUDING REMARKS

Naturally, computational quantum mechanics seems to be quite a journey (I mean, look, we just spent around a thousand words figuring out how to convert operators into matrices). However, it’ll be worthwhile to keep learning as I explore the field, and the next time I have something cool or unusual, I’ll be sure to continue the series.


If you ever want to learn more about QM, I definitely recommend heading to the MIT OpenCourseWare website. They have plenty of courses available for free, including Quantum Physics I.


If you want to check out other articles by me, feel free to head to my account. I only have one other article at the moment, but more will definitely be coming!


If you have any suggestions, comments, or recommendations, please let me know in the comments; I’d love some constructive criticism.


Also published at https://arvin-singhk.medium.com/solving-for-eigenstates-the-time-independent-schr%C3%B6dinger-equation-b09278c4f1f1
