Vector Norm Calculator

L1, L2, L∞, Lp norms


About Vector Norm Calculator

A vector norm calculator supporting the L1 (Manhattan), L2 (Euclidean), L∞ (Chebyshev), and general Lp norms. Enter custom vectors, compare norms side by side, and see how each norm measures 'size'. All calculations run client-side. Norms are fundamental to optimization, machine learning, and signal processing.

Vector Norm Calculator Features

  • L1/L2/L∞/Lp
  • Custom vectors
  • Compare norms
  • Unit ball
  • Normalization

Vector norms: ||x||₁ = Σ|xᵢ| (Manhattan), ||x||₂ = √(Σxᵢ²) (Euclidean), ||x||∞ = max|xᵢ| (Chebyshev), and ||x||p = (Σ|xᵢ|ᵖ)^(1/p). All satisfy the norm axioms: ||x|| ≥ 0 (non-negativity), ||αx|| = |α|·||x|| (absolute homogeneity), and ||x+y|| ≤ ||x|| + ||y|| (triangle inequality).
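As a sketch, these definitions translate directly into pure Python (the helper name `lp_norm` is ours, not part of the tool):

```python
import math

def lp_norm(x, p):
    """General Lp norm: (Σ|x_i|^p)^(1/p); p = math.inf gives max|x_i|."""
    if p == math.inf:
        return max(abs(v) for v in x)
    return sum(abs(v) ** p for v in x) ** (1 / p)

x = [3, -4, 0]
print(lp_norm(x, 1))         # L1 (Manhattan): 7.0
print(lp_norm(x, 2))         # L2 (Euclidean): 5.0
print(lp_norm(x, math.inf))  # L∞ (Chebyshev): 4
```

Note that L1 and L∞ are just the p = 1 and p → ∞ special cases of the same formula.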

How to Use

Enter a vector:

  • Components: x₁, x₂, ...
  • p: Norm parameter
  • Output: All norms

Norm Comparison

||x||∞ ≤ ||x||₂ ≤ ||x||₁ ≤ n·||x||∞, where n is the vector's dimension. All norms on a finite-dimensional space are equivalent in this sense. L1 promotes sparsity, L2 is rotation-invariant, and L∞ bounds the worst-case component.
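A quick numerical spot-check of that chain of inequalities (a sketch with our own helper `norms`):

```python
import math
import random

def norms(x):
    """Return (L1, L2, L∞) of a vector."""
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    linf = max(abs(v) for v in x)
    return l1, l2, linf

# Check ||x||∞ ≤ ||x||₂ ≤ ||x||₁ ≤ n·||x||∞ on random vectors.
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(5)]
    l1, l2, linf = norms(x)
    eps = 1e-9  # tolerance for floating-point rounding
    assert linf <= l2 + eps
    assert l2 <= l1 + eps
    assert l1 <= len(x) * linf + eps
```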

In Machine Learning

  • L1 regularization (Lasso): drives many weights exactly to zero (sparse solutions)
  • L2 regularization (Ridge): shrinks all weights toward zero (small solutions)
  • L∞: robust to outlier dimensions
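In code, these penalties are simply norms of the weight vector added to the training loss. A minimal sketch (helper names are ours):

```python
def l1_penalty(w, lam):
    # Lasso term: lam * ||w||_1
    return lam * sum(abs(wi) for wi in w)

def l2_penalty(w, lam):
    # Ridge term: lam * ||w||_2^2 (the squared L2 norm is the standard choice)
    return lam * sum(wi * wi for wi in w)

w = [1.0, -2.0]
print(l1_penalty(w, 0.5))  # 0.5 * (1 + 2) = 1.5
print(l2_penalty(w, 0.5))  # 0.5 * (1 + 4) = 2.5
```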

Step-by-Step Instructions

  1. Enter vector components.
  2. View all norms.
  3. Compare values.
  4. Set custom p.
  5. Normalize the vector.
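Normalization (step 5) divides a vector by its norm to produce a unit vector. A sketch, with a guard for the zero vector (the helper name `normalize` is ours):

```python
import math

def normalize(x, p=2):
    """Scale x to unit Lp norm; returns x unchanged if it is the zero vector."""
    if p == math.inf:
        n = max(abs(v) for v in x)
    else:
        n = sum(abs(v) ** p for v in x) ** (1 / p)
    return x if n == 0 else [v / n for v in x]

print(normalize([3, -4]))  # [0.6, -0.8], a unit vector in the L2 norm
```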

Vector Norm Calculator — Frequently Asked Questions

Why does L1 promote sparsity?

The L1 unit ball has 'corners' on the axes. When optimization constraints touch the ball, they tend to hit corners (where some coordinates = 0). L2's ball is round, so contact points typically have all non-zero coordinates. This is why Lasso regression produces sparse models.
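The sparsity effect can be made concrete with soft-thresholding, the proximal operator of the L1 penalty that appears inside Lasso solvers (a sketch; the helper name is ours):

```python
import math

def soft_threshold(x, lam):
    """Prox of lam·||·||₁: shrinks each entry toward zero and sets it
    exactly to zero when |xᵢ| ≤ lam — the source of Lasso's sparsity."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in x]

print(soft_threshold([3.0, -0.5, 0.2], 1.0))
```

Entries below the threshold are zeroed outright; larger ones are merely shrunk. The L2 analogue only scales entries down, so none become exactly zero.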

What is the L0 'norm'?

||x||₀ = number of non-zero entries. It's not actually a norm (violates homogeneity). Minimizing L0 is NP-hard (finding sparsest solution). L1 is its best convex relaxation, which is why L1 is so important in compressed sensing.
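Counting non-zero entries is trivial in code; in floating point a small tolerance is usually used (a sketch with an assumed tolerance parameter):

```python
def l0(x, tol=1e-12):
    """||x||₀: number of entries whose magnitude exceeds tol."""
    return sum(1 for v in x if abs(v) > tol)

print(l0([1.0, 0.0, -2.0, 0.0]))  # 2
```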

How are matrix norms related to vector norms?

Induced matrix norm: ||A||p = max(||Ax||p/||x||p). This gives ||A||₁ = max column sum, ||A||∞ = max row sum, ||A||₂ = σ_max (largest singular value). Frobenius norm ||A||F = √(Σaᵢⱼ²) is NOT an induced norm.
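The column-sum, row-sum, and Frobenius formulas are easy to compute directly; ||A||₂ requires singular values and is omitted from this sketch (helper names are ours):

```python
import math

def matrix_norm_1(A):
    # Induced 1-norm: maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def matrix_norm_inf(A):
    # Induced ∞-norm: maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in A)

def frobenius(A):
    # Frobenius norm: entrywise L2 (not an induced norm)
    return math.sqrt(sum(v * v for row in A for v in row))

A = [[1, -2], [3, 4]]
print(matrix_norm_1(A))    # columns sum to 4 and 6 → 6
print(matrix_norm_inf(A))  # rows sum to 3 and 7 → 7
print(frobenius(A))        # √(1 + 4 + 9 + 16) = √30
```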
