Linear Decision Boundaries, and the Assumption of Hard Margin

The motivation for using SVMs is to find a separating hyperplane for a binary dataset that is linearly separable.

In topological parlance, we are assuming that the positive and negative classes lie in two disjoint half-spaces.

Visually, in $\mathbb{R}^2$ this means we can draw a line (the hyperplane in two dimensions) such that all of the positive points fall on one side and all of the negative points on the other.
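As a concrete sketch of this idea, the snippet below builds a toy linearly separable dataset in $\mathbb{R}^2$ (the points and labels are invented for illustration) and finds a separating line with the classic perceptron update rule. Note this is a minimal separability demo, not the SVM optimizer: the perceptron returns *some* separating hyperplane, not the maximum-margin one.

```python
import numpy as np

# Hypothetical toy dataset in R^2: two well-separated clusters.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],        # positive class
              [-2.0, -1.0], [-3.0, -2.0], [-1.5, -2.5]])  # negative class
y = np.array([1, 1, 1, -1, -1, -1])

def find_separating_hyperplane(X, y, max_epochs=1000):
    """Perceptron rule: if the data are linearly separable, this converges to
    some (w, b) with y_i (w . x_i + b) > 0 for every i. It is NOT the
    max-margin SVM solution, just a witness that a separator exists."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:  # xi misclassified (or on the boundary)
                w += yi * xi
                b += yi
                updated = True
        if not updated:                 # every point strictly on its own side
            return w, b
    raise RuntimeError("no separator found; data may not be linearly separable")

w, b = find_separating_hyperplane(X, y)
# Every point satisfies the hard-margin-style condition y_i (w . x_i + b) > 0.
assert all(yi * (w @ xi + b) > 0 for xi, yi in zip(X, y))
```

The final assertion is exactly the hard-margin assumption stated above: each point sits strictly on the correct side of the hyperplane $w \cdot x + b = 0$.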

Interestingly, a dataset might not be linearly separable in $\mathbb{R}^k$, but might be linearly separable in some $d > k$ dimensional space. More on this when we talk about kernels.
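A minimal sketch of this lifting idea, using made-up 1-D data and the hypothetical feature map $\phi(x) = (x, x^2)$: the classes cannot be split by any single threshold in $\mathbb{R}^1$, but become linearly separable in $\mathbb{R}^2$ after the lift.

```python
import numpy as np

# Hypothetical 1-D dataset: positives near the origin, negatives on both sides.
X = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])
y = np.array([-1, -1, 1, 1, 1, -1, -1])

# No threshold t (in either orientation s) separates the classes in R^1.
assert not any(
    all(yi * s * (xi - t) > 0 for xi, yi in zip(X, y))
    for t in np.linspace(-4.0, 4.0, 801)
    for s in (1, -1)
)

# Lift into R^2 with phi(x) = (x, x^2). In the lifted space the horizontal
# line x2 = 1, i.e. w = (0, -1) and b = 1, separates the classes: positives
# (small |x|) land below it and negatives (large |x|) above it.
Phi = np.stack([X, X**2], axis=1)
w, b = np.array([0.0, -1.0]), 1.0
assert all(yi * (w @ phi + b) > 0 for phi, yi in zip(Phi, y))
```

This is the geometric picture behind kernels: a nonlinear boundary in the original space (here, the pair of thresholds $|x| = 1$) corresponds to a plain linear boundary in a higher-dimensional feature space.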


Updated 2020-03-04

Tags

Data Science