In particle tracking simulations you often need to interpolate particles onto a grid in one or more dimensions. Recently I decided to write a linear particle-to-grid interpolation in one dimension in Python. This post is an educational introduction to interpolating particles onto a one-dimensional grid.
Interpolation of particles onto a grid
Particles are usually described by a vector of coordinates in an n-dimensional phase space. Often one wants to compute the density of particles along one of the coordinate axes. Let us start with the example of particles in a two-dimensional space, with coordinates (x, y). A common question is: what is the density of the particles projected onto the x axis?
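To make the projection concrete, here is a minimal sketch (the particle count and bin number are arbitrary choices, not from the post): projecting 2D particles onto the x axis simply means keeping their x coordinates and computing a 1D density from those.

```python
import numpy

# hypothetical example: 1000 particles uniformly distributed in the unit square
coords = numpy.random.rand(1000, 2)   # one row per particle, columns are x and y
x = coords[:, 0]                      # projecting onto the x axis keeps only the x coordinates
counts, edges = numpy.histogram(x, bins=10)
print(counts.sum())   # 1000 -> every particle falls into exactly one bin
```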
Note: the code blocks from here on are executed one after another in an IPython notebook. You can download the notebook here.
To answer this question you first have to think about how to compute the density. A built-in way in Python/pylab is the histogram function.
```python
x = rand(1000000)  # generate 1 million particles
numbers, bins, patches = hist(x, bins=10)  # calculate the histogram
ylabel("number of particles")
```
The histogram function computes the number of particles between the left and right edges of the bars respectively. We can also look at the data:
```python
print "there are %d particles with x between %0.2f and %0.2f" \
    % (numbers[0], bins[0], bins[1])
```
there are 154841 particles with x between 0.00 and 0.10
The histogram function does exactly this: it counts the number of particles between the edges of each bin. Visually, each bin is represented by a bar between its left and right edge, filled up to a height proportional to the resulting number of particles.
Another way to look at this would be to say: the histogram function computes the density at the centers of the bins by attributing each particle to its nearest neighbouring bin. We could then compute a value for the density as follows:
```python
binwidth = bins[1] - bins[0]           # compute the width of a bin
bincenters = bins[:-1] + binwidth/2.   # compute the positions of the centers of the bins
plot(bincenters, numbers/binwidth)     # plot the density (particles per length unit)
```
This looks nice and smooth, you might say. But if you think about it: for a particle close to the boundary between two bins, say at 0.95, just left of the right edge of the first bin, is it really justified to assume that it only contributes to the grid point at 0.5? It has virtually the same distance to the grid point at 1.5!
This problem can become even worse. Let us define point positions x_i = 10 (1 - 1/i). Here is a figure of the particles with i between 10 and 25:
You can see how the particle spacing (which is inversely proportional to the intuitive density) changes smoothly. The density per bin can be calculated analytically: with i = 10/(10 - x) it is (i^2 - i)/10 * Δx, where Δx ≈ 0.05833 is the bin width. Let us plot the point positions and the resulting histogram for i between 10 and 25. The histogram, however, has a step shape, as you can see below, and does not compare too well to the analytic result (grey vertical lines indicate particle positions, the green curve is the analytic particle density):
```python
x = 10*(1 - 1/numpy.arange(10, 25.))  # the points
density = lambda x: (10/(10/(-10 + x) + (10/(-10 + x))**2))**(-1)*.05833  # analytic density per bin
for i in x:
    axvline(i, color="grey", linewidth=1)  # grey lines for the point positions
hist(x, bins=10)  # histogram of the point positions
plot(numpy.arange(9, 9.6, .01), density(numpy.arange(9, 9.6, .01)))  # analytic density function
```
You see how stepped the histogram looks compared to the analytically calculated density? Maybe we can do better. If we think about the single particle from above again,
how about we say: it should contribute to its neighbouring bins according to its distance to the bin centers. There are two bin centers, at 0.5 and 1.5, neighbouring the particle. We should pay attention that the total amount of density created by the particle stays the same. Let us for the time being stick with the assumption that a particle only contributes to two bins. So let us zoom in on the particle and its two neighbouring bins:
The easiest way to distribute the particle density between its neighbouring bins is to measure its distance to each bin center: in the example, the particle at 0.95 has distance 0.45 to the center at 0.5, so we add 1 - 0.45 = 0.55 to the grid point at 0.5 and the remaining 0.45 to the grid point at 1.5. This way we make sure that the particle number is conserved (the total amount added to the grid is 1, just as for the histogram). In our example, the particle contributes 0.55 to grid bin 1 and 0.45 to grid bin 2.
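The two weights for the single example particle can be computed directly (a small sketch; the position 0.95 and the bin centers 0.5 and 1.5 are the ones from the example above):

```python
import numpy

x_p = 0.95                           # the example particle position
centers = numpy.array([0.5, 1.5])    # its two neighbouring bin centers
binwidth = 1.0

# linear weight: 1 minus the distance to the center, in units of the bin width
weights = 1 - numpy.abs(x_p - centers) / binwidth
print(weights)        # approximately [0.55, 0.45]
print(weights.sum())  # 1.0 -> the particle number is conserved
```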
Now, expressing this in Python and dividing by the bin width to calculate a proper density, we write a function pics2gridpy(particles, left, right, bins). The function is called with four arguments:
particles: the array of particle coordinates
left: The left edge of the grid (also the zeroth grid point)
right: The right edge of the grid (no grid point here)
bins: the number of bins (including the one at the left edge)
```python
def pics2gridpy(particles, left, right, bins):
    grid = numpy.zeros(bins)
    binwidth = (right - left) / float(bins)
    for i in range(particles.shape[0]):
        gridPosition = (particles[i] - left) / binwidth  # fractional grid index
        leftIndex = int(numpy.floor(gridPosition))
        binPosition = gridPosition - leftIndex  # fractional distance to the left grid point
        grid[leftIndex % bins] += 1 - binPosition
        grid[(leftIndex + 1) % bins] += binPosition
    return grid / binwidth  # counts per bin -> density per length unit
```
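As a quick sanity check of the scheme (a self-contained sketch that restates the function above so it can be run on its own), we can verify that integrating the returned density over the grid gives back the total particle number:

```python
import numpy

def pics2gridpy(particles, left, right, bins):
    # linear deposit onto a circular grid, as described in the post
    grid = numpy.zeros(bins)
    binwidth = (right - left) / float(bins)
    for i in range(particles.shape[0]):
        gridPosition = (particles[i] - left) / binwidth
        leftIndex = int(numpy.floor(gridPosition))
        binPosition = gridPosition - leftIndex
        grid[leftIndex % bins] += 1 - binPosition
        grid[(leftIndex + 1) % bins] += binPosition
    return grid / binwidth  # counts per bin -> density per length unit

particles = numpy.random.rand(1000)
grid = pics2gridpy(particles, 0., 1., 10)
print(grid.sum() * 0.1)   # 1000.0 -> the total particle number is conserved
```

Each particle deposits weights that sum to exactly 1, so the sum over the grid (times the bin width, undoing the density normalization) must equal the number of particles.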
Let’s try to run it on our example particles and again compare to the analytic density!
```python
# because the method actually makes a circular grid, we need to add one bin
# to the left and right of the distribution
binwidth = 0.05833
grid = pics2gridpy(x, x.min() - binwidth, x.max() + binwidth, 12)
plot(x.min() - binwidth + binwidth*numpy.arange(12), grid*binwidth)
```
We see this is much better. But one thing is notable: the spike in the bin at 9.0. Where does it originate? In this region the density of particles per bin is smaller than one. Because of that you have to expect aliasing effects that can produce such spikes. In fact, you normally try to have at least a few tens of particles in the bins that matter to you, because you want a good approximation of the real density. In our case very few particles were enough because we had a smooth distribution. In most cases, however, you sample random particles and therefore expect some noisiness due to the random fluctuations of your distribution.
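The size of these fluctuations can be estimated from counting statistics: a bin that receives on average N particles fluctuates by about sqrt(N), i.e. a relative noise of 1/sqrt(N). A rough back-of-the-envelope sketch (the particle counts are arbitrary, not from the post):

```python
import numpy

# relative Poisson noise of a bin count: sqrt(N)/N = 1/sqrt(N)
for N in (1, 10, 100, 10000):
    print("N = %5d particles per bin -> relative noise ~ %.2f" % (N, 1/numpy.sqrt(N)))
```

With a single particle per bin the estimate fluctuates as much as the signal itself, which is why a few tens of particles per bin are usually the minimum.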
Now that we have the method, let's get an estimate of its performance for very many particles:
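The measurement below is a sketch using Python's timeit module rather than the notebook's %timeit magic, with the function restated so the snippet runs on its own; the absolute numbers of course depend on the machine:

```python
import numpy, timeit

def pics2gridpy(particles, left, right, bins):
    # the linear deposit from above, restated for a self-contained timing run
    grid = numpy.zeros(bins)
    binwidth = (right - left) / float(bins)
    for i in range(particles.shape[0]):
        gridPosition = (particles[i] - left) / binwidth
        leftIndex = int(numpy.floor(gridPosition))
        binPosition = gridPosition - leftIndex
        grid[leftIndex % bins] += 1 - binPosition
        grid[(leftIndex + 1) % bins] += binPosition
    return grid / binwidth

x = numpy.random.rand(1000000)  # 1 million particles
t_py = timeit.timeit(lambda: pics2gridpy(x, 0., 1., 10), number=1)
t_np = timeit.timeit(lambda: numpy.histogram(x, bins=10), number=1)
print("pure python: %.0f msec, numpy histogram: %.1f msec" % (t_py*1e3, t_np*1e3))
```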
This gives a time of 844 msec on my machine. The histogram, however, only takes 8.5 msec. There has to be a way of optimizing this. In the next post we will look at how fast we can make it using Cython!