OpenCV is the go-to library for image processing in Python, and scikit-image fills in the gaps for perceptual metrics. Together they can measure every meaningful quality signal — blur, noise, resolution, exposure, and compression — but the code to wire them up properly is non-trivial and easy to get wrong.
This guide covers each quality signal, explains the right metric to measure it, and shows working Python code. At the end we'll introduce a library that wraps all of this into a single production-ready call.
Setup
pip install opencv-python numpy scikit-image
1. Checking for blur
The standard technique is the Laplacian variance. The Laplacian is a second-derivative edge detector — blurry images have weak edges and therefore low Laplacian variance.
import cv2
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Returns 0–100; below ~40 is likely blurry."""
    gf = gray.astype(np.float64)
    texture_var = float(np.var(gf)) + 1e-6
    lap_norm = float(cv2.Laplacian(gf, cv2.CV_64F).var()) / texture_var
    sx = cv2.Sobel(gf, cv2.CV_64F, 1, 0, ksize=3)
    sy = cv2.Sobel(gf, cv2.CV_64F, 0, 1, ksize=3)
    ten_norm = float(np.hypot(sx, sy).var()) / texture_var
    index = 0.5 * lap_norm + 0.5 * ten_norm
    return max(0.0, min(100.0, (index / 1.2) * 100.0))
Dividing by texture_var normalises the score across content types and contrast levels — a low-contrast image such as a photographed blank white sheet isn't penalised merely for having low absolute edge energy.
2. Checking for noise
Noise detection uses a high-pass filter to isolate the residual signal on flat (non-textured) image regions. Real texture has structure; noise is random. The signal-to-noise ratio (SNR) quantifies this.
def noise_score(gray: np.ndarray) -> float:
    """Returns 0–100; below ~30 indicates a noisy image."""
    gf = gray.astype(np.float64)
    mean, std = float(np.mean(gf)), float(np.std(gf))
    snr = (mean + 1e-6) / (std + 1e-6)
    snr_score = max(0.0, min(100.0, (snr / 10.0) * 100.0))
    hp = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=np.float64)
    residual_std = float(np.std(cv2.filter2D(gf, -1, hp, borderType=cv2.BORDER_REFLECT)))
    residual_score = max(0.0, min(100.0, (1.0 - residual_std / 50.0) * 100.0))
    return 0.7 * snr_score + 0.3 * residual_score
3. Checking resolution
Resolution is the simplest check: count pixels and compare to a threshold.
def resolution_score(height: int, width: int) -> float:
    """Returns 0–100 based on total pixel count."""
    total = height * width
    if total < 100_000:  # below ~316×316
        return (total / 100_000) * 80.0  # scales up to the next tier's 80
    elif total < 400_000:  # below ~632×632
        return 80.0
    return 100.0
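The tiers work out as follows — a quick illustration using the same thresholds, with hypothetical example sizes:

```python
# Pixel-count tiers matching the thresholds in resolution_score.
sizes = {"thumbnail": (200, 300), "VGA": (480, 640), "Full HD": (1080, 1920)}
for label, (h, w) in sizes.items():
    total = h * w
    tier = "low" if total < 100_000 else "mid" if total < 400_000 else "high"
    print(f"{label}: {w}x{h} = {total:,} px -> {tier}")
```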
4. Checking exposure
Exposure analysis uses the pixel histogram. A well-exposed image has most pixels in the mid-tones (85–170), not crushed to black (<85) or blown out to white (>170).
def exposure_score(gray: np.ndarray) -> float:
    """Returns 0–100; penalises crushed shadows and blown highlights."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    hist = hist.astype(np.float64) / (hist.sum() + 1e-6)
    dark = float(np.sum(hist[:85]))
    bright = float(np.sum(hist[170:]))
    if dark > 0.50:
        return max(0.0, 100.0 - (dark - 0.50) * 200.0)
    if bright > 0.50:
        return max(0.0, 100.0 - (bright - 0.50) * 200.0)
    return 100.0
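For intuition, here is the histogram of an almost-black frame (an illustrative check, not from the pipeline): every pixel sits well below the dark threshold of 85, so all the mass lands in the shadows.

```python
import numpy as np

# A near-black frame: every pixel value is 10.
dark_img = np.full((100, 100), 10, dtype=np.uint8)
hist, _ = np.histogram(dark_img, bins=256, range=(0, 256))
hist = hist.astype(np.float64) / hist.sum()
dark_frac = float(np.sum(hist[:85]))
print(dark_frac)  # 1.0 -> flagged as heavily under-exposed
```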
5. Checking for JPEG compression artefacts
JPEG compression creates blockiness at 8×8 pixel boundaries. Measure the energy difference across those boundaries versus within blocks:
def compression_score(gray: np.ndarray) -> float:
    """Returns 0–100; low scores indicate visible 8×8 blockiness."""
    gf = gray.astype(np.float64)
    h, w = gray.shape
    # Absolute differences across 8×8 block boundaries only.
    border_e = sum(
        float(np.sum(np.abs(gf[:, x] - gf[:, x - 1])))
        for x in range(8, w, 8)
    ) + sum(
        float(np.sum(np.abs(gf[y, :] - gf[y - 1, :])))
        for y in range(8, h, 8)
    )
    # Absolute differences across every adjacent pixel pair.
    intra = float(np.sum(np.abs(gf[:, 1:] - gf[:, :-1]))) \
        + float(np.sum(np.abs(gf[1:, :] - gf[:-1, :]))) + 1e-6
    # Roughly one pixel transition in eight falls on a block boundary, so
    # a clean image sits near ratio ≈ 0.125; only the excess above that
    # baseline signals blockiness.
    ratio = border_e / intra
    excess = max(0.0, ratio - 0.125)
    return max(0.0, min(100.0, 100.0 - (excess / 0.12) * 100.0))
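A synthetic blocky image makes the boundary ratio jump, while a smooth gradient stays near the clean-image baseline. This sketch reuses the same ratio computation as compression_score:

```python
import numpy as np

rng = np.random.default_rng(2)
# Worst-case JPEG: every 8x8 tile is a flat random level, so all pixel
# transitions happen exactly at block boundaries.
blocky = np.kron(rng.integers(0, 256, size=(8, 8)), np.ones((8, 8)))
# A smooth horizontal ramp has no block structure at all.
smooth = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))

def boundary_ratio(gf: np.ndarray) -> float:
    h, w = gf.shape
    border = sum(float(np.sum(np.abs(gf[:, x] - gf[:, x - 1]))) for x in range(8, w, 8)) \
        + sum(float(np.sum(np.abs(gf[y, :] - gf[y - 1, :]))) for y in range(8, h, 8))
    intra = float(np.sum(np.abs(np.diff(gf, axis=1)))) \
        + float(np.sum(np.abs(np.diff(gf, axis=0)))) + 1e-6
    return border / intra

print(boundary_ratio(blocky) > boundary_ratio(smooth))  # True
```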
Skip the boilerplate: use imageguard
If you don't want to maintain all five functions, the imageguard library packages them — plus pixelation detection — into a single production-ready call, with proper edge-case handling for tiny images, flat backgrounds, white-background product photos, and depth-of-field images.
from imageguard import validate
result = validate("photo.jpg")
print(result.score) # 0.0–1.0
print(result.issues) # ['blurry', 'noisy']
print(result.reason) # 'blurry'
Putting it all together: a complete quality check function
import cv2

def quick_quality_check(image_path: str) -> dict:
    img = cv2.imread(image_path)
    if img is None:  # imread returns None on missing/unreadable files
        raise ValueError(f"Could not read image: {image_path}")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    return {
        "blur": blur_score(gray),
        "noise": noise_score(gray),
        "resolution": resolution_score(h, w),
        "exposure": exposure_score(gray),
        "compression": compression_score(gray),
    }
scores = quick_quality_check("product.jpg")
ok = all(v >= 40 for v in scores.values())
print("Pass" if ok else "Fail", scores)