The Math Behind Kernel Density Estimation


The following derivation takes inspiration from Bruce E. Hansen's "Lecture Notes on Nonparametrics" (2009). If you're interested in learning more, you can refer to his original lecture notes here.

Suppose we want to estimate a probability density function, f(t), from a sample of data. A good starting place would be to estimate the cumulative distribution function, F(t), using the empirical distribution function (EDF). Let X1, …, Xn be independent, identically distributed real random variables with the common cumulative distribution function F(t). The EDF is defined as:

$$\hat{F}_n(t) = \frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{X_i \le t\}$$

Then, by the strong law of large numbers, as n approaches infinity, the EDF converges almost surely to F(t). Now, the EDF is a step function that could look like the following:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Generate sample data
np.random.seed(14)
data = np.random.normal(loc=0, scale=1, size=40)

# Sort the data
data_sorted = np.sort(data)

# Compute ECDF values
ecdf_y = np.arange(1, len(data_sorted) + 1) / len(data_sorted)

# Generate x values for the normal CDF
x = np.linspace(-4, 4, 1000)
cdf_y = norm.cdf(x)

# Create the plot
plt.figure(figsize=(6, 4))
plt.step(data_sorted, ecdf_y, where='post', color='blue', label='ECDF')
plt.plot(x, cdf_y, color='gray', label='Normal CDF')
plt.plot(data_sorted, np.zeros_like(data_sorted), '|', color='black', label='Data points')

# Label axes
plt.xlabel('X')
plt.ylabel('Cumulative Probability')

# Add grid
plt.grid(True)

# Set limits
plt.xlim([-4, 4])
plt.ylim([0, 1])

# Add legend
plt.legend()

# Show plot
plt.show()

Therefore, if we were to attempt to find an estimator for f(t) by taking the derivative of the EDF, we would get a scaled sum of Dirac delta functions, which isn't very helpful. Instead, let us consider using the two-point central difference formula as an approximation of the derivative, which, for a small h > 0, gives:

$$\hat{f}(t) = \frac{\hat{F}_n(t+h) - \hat{F}_n(t-h)}{2h}$$

Now define the function k(u) as follows:

$$k(u) = \begin{cases} \frac{1}{2} & \text{if } |u| \le 1 \\ 0 & \text{otherwise} \end{cases}$$

Then we have that:

$$\hat{f}(t) = \frac{\hat{F}_n(t+h) - \hat{F}_n(t-h)}{2h} = \frac{1}{2nh}\sum_{i=1}^{n}\mathbf{1}\{t-h < X_i \le t+h\} = \frac{1}{nh}\sum_{i=1}^{n}k\!\left(\frac{X_i - t}{h}\right)$$

This is a special case of the kernel density estimator, where here k is the uniform kernel function. More generally, a kernel function is a non-negative function from the reals to the reals which satisfies:

$$\int_{-\infty}^{\infty} k(u)\,du = 1$$

We will assume that all kernels discussed in this article are symmetric, hence we have k(-u) = k(u).
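Before moving on, here is a minimal sketch (my own, not from the article; edf and uniform_kde are hypothetical helper names) showing that the central-difference approximation of the EDF and the uniform-kernel density estimator derived above agree numerically:

import numpy as np

rng = np.random.default_rng(14)
sample = rng.normal(loc=0, scale=1, size=500)

def edf(t, data):
    # Empirical distribution function: fraction of observations <= each t
    return np.mean(data[None, :] <= np.asarray(t)[:, None], axis=1)

def uniform_kde(t, data, h):
    # KDE with the uniform kernel k(u) = 1/2 on [-1, 1]
    u = (np.asarray(t)[:, None] - data[None, :]) / h
    return np.mean(0.5 * (np.abs(u) <= 1), axis=1) / h

t = np.linspace(-3, 3, 7)
h = 0.5

# Two-point central difference of the EDF...
central_diff = (edf(t + h, sample) - edf(t - h, sample)) / (2 * h)

# ...matches the uniform-kernel density estimate
print(np.allclose(central_diff, uniform_kde(t, sample, h)))  # True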

The j-th moment of a kernel, which gives insight into the shape and behavior of the kernel function, is defined as the following:

$$\kappa_j(k) = \int_{-\infty}^{\infty} u^j\, k(u)\,du$$

Finally, the order of a kernel is defined as the order of its first non-zero moment (for j ≥ 1). Since every odd moment of a symmetric kernel vanishes, a symmetric kernel has order at least two.
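As a quick numerical check of these definitions (my own sketch, not from the original article; kernel_moment is a hypothetical helper), we can compute the first few moments of the uniform and Gaussian kernels with scipy:

import numpy as np
from scipy.integrate import quad

def kernel_moment(k, j, support):
    # j-th moment of a kernel: the integral of u^j * k(u) over its support
    lo, hi = support
    val, _ = quad(lambda u: u**j * k(u), lo, hi)
    return val

# Uniform kernel k(u) = 1/2 on [-1, 1]; Gaussian kernel on the whole real line
kernels = {
    "uniform": (lambda u: 0.5, (-1.0, 1.0)),
    "gaussian": (lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi), (-np.inf, np.inf)),
}

for name, (k, support) in kernels.items():
    moments = [kernel_moment(k, j, support) for j in range(4)]
    print(name, np.round(moments, 4))

# Up to numerical error: uniform -> [1, 0, 1/3, 0], gaussian -> [1, 0, 1, 0].
# The first non-zero moment (j >= 1) occurs at j = 2, so both are second-order kernels.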

We can only reduce the error of the kernel density estimator by changing either the h value (bandwidth) or the kernel function. The bandwidth parameter has a much larger influence on the resulting estimate than the kernel function, but it is also much more difficult to choose. To demonstrate the influence of the h value, take the following two kernel density estimates. A Gaussian kernel was used to estimate a sample generated from a standard normal distribution; the only difference between the estimators is the chosen h value.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Generate sample data
np.random.seed(14)
data = np.random.normal(loc=0, scale=1, size=100)

# Define the bandwidths
bandwidths = [0.1, 0.3]

# Plot the histogram and KDE for each bandwidth
plt.figure(figsize=(12, 8))
plt.hist(data, bins=30, density=True, color='gray', alpha=0.3, label='Histogram')

x = np.linspace(-5, 5, 1000)
for bw in bandwidths:
    # Note: scipy treats a scalar bw_method as a multiple of the sample std
    kde = gaussian_kde(data, bw_method=bw)
    plt.plot(x, kde(x), label=f'Bandwidth = {bw}')

# Add labels and title
plt.title('Impact of Bandwidth Selection on KDE')
plt.xlabel('Value')
plt.ylabel('Density')
plt.legend()
plt.show()

Quite a dramatic difference.

Now let us look at the impact of changing the kernel function while keeping the bandwidth constant.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity

# Generate sample data
np.random.seed(14)
data = np.random.normal(loc=0, scale=1, size=100)[:, np.newaxis]  # reshape for sklearn

# Initialize a constant bandwidth
bandwidth = 0.6

# Define different kernel functions
kernels = ["gaussian", "epanechnikov", "exponential", "linear"]

# Plot the histogram (transparent) and KDE for each kernel
plt.figure(figsize=(12, 8))

# Plot the histogram
plt.hist(data, bins=30, density=True, color="gray", alpha=0.3, label="Histogram")

# Plot KDE for each kernel function
x = np.linspace(-5, 5, 1000)[:, np.newaxis]
for kernel in kernels:
    kde = KernelDensity(bandwidth=bandwidth, kernel=kernel)
    kde.fit(data)
    log_density = kde.score_samples(x)
    plt.plot(x[:, 0], np.exp(log_density), label=f"Kernel = {kernel}")

plt.title("Impact of Different Kernel Functions on KDE")
plt.xlabel("Value")
plt.ylabel("Density")
plt.legend()
plt.show()

While visually there is a large difference in the tails, the overall shape of the estimators is similar across the different kernel functions. Therefore, I will focus primarily on finding the optimal bandwidth for the estimator. Now, let's explore some of the properties of the kernel density estimator, including its bias and variance.
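As a quick preview of what bias and variance mean here (my own illustration; the sample size, bandwidth factor, and number of replications are arbitrary choices), we can approximate both at a single point by repeatedly re-estimating the density from fresh samples:

import numpy as np
from scipy.stats import gaussian_kde, norm

# Repeatedly re-estimate the density of N(0, 1) data at t = 0 and compare
# the distribution of the estimates to the true density there
rng = np.random.default_rng(14)
n, bw, reps, t = 100, 0.3, 2000, 0.0

estimates = np.empty(reps)
for i in range(reps):
    sample = rng.normal(size=n)
    # scipy treats a scalar bw_method as a multiple of the sample std
    estimates[i] = gaussian_kde(sample, bw_method=bw)(t)[0]

print(f"true density at 0:  {norm.pdf(t):.4f}")  # 0.3989
print(f"estimated bias:     {estimates.mean() - norm.pdf(t):+.4f}")
print(f"estimated variance: {estimates.var():.6f}")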
