
We use a classical SVM pipeline to solve a binary classification problem. The model seeks a separating hyperplane in feature space, \mathbf{w}^\top \phi(\mathbf{x}) + b = 0 , maximizing the geometric margin between classes. When linear separation is difficult, we map the data to a higher-dimensional space and apply the kernel trick: instead of building \phi(\cdot) explicitly, we compute inner products k(\mathbf{x},\mathbf{x}')=\langle \phi(\mathbf{x}), \phi(\mathbf{x}')\rangle , which lets the SVM operate implicitly in a Hilbert space \mathcal{H}.
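As a concrete illustration, this classical pipeline can be run end to end with scikit-learn. The synthetic dataset and RBF hyperparameters below are placeholders, not the melt-pool data or the settings used in this study.

```python
# Minimal sketch: classical kernel SVM on synthetic two-class data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2) realizes the kernel
# trick: the SVM operates in an implicit infinite-dimensional feature space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

Swapping `kernel="rbf"` for `kernel="precomputed"` is what later lets a quantum-evaluated Gram matrix drop into this same pipeline unchanged.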
The quantum SVM is implemented using a quantum kernel: input \mathbf{x} is encoded into a quantum state |\phi(\mathbf{x})\rangle=U(\mathbf{x})|0\rangle^{\otimes n}, and the state overlap (inner product or fidelity) serves as k(\mathbf{x},\mathbf{x}'). The circuit induces a (potentially very high-dimensional) nonlinear embedding through data-dependent single-qubit rotations and entangling gates; the choice of encoding and entanglement pattern governs which higher-order interactions are represented. These overlaps are estimated by running U(\mathbf{x}')^\dagger U(\mathbf{x}) on quantum hardware and measuring the probability of returning to |0\rangle^{\otimes n}; the resulting Gram matrix feeds directly into the standard SVM pipeline.
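A minimal sketch of this pipeline, assuming a simple angle-encoding feature map (per-feature RY rotations followed by a chain of CZ entanglers) simulated exactly with NumPy rather than estimated on hardware. The circuit choice, data, and labels are illustrative, not the ones used in this study.

```python
# Quantum-kernel SVM via exact statevector simulation (illustrative circuit).
import numpy as np
from functools import reduce
from sklearn.svm import SVC

def feature_state(x):
    """|phi(x)> = CZ-chain * (RY(x_0) x ... x RY(x_{n-1})) |0...0>."""
    n = len(x)
    # RY(theta)|0> = [cos(theta/2), sin(theta/2)]
    qubits = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in x]
    state = reduce(np.kron, qubits)
    # CZ between neighbors: flip the sign where both qubits are |1>
    idx = np.arange(2 ** n)
    for q in range(n - 1):
        both_one = ((idx >> (n - 1 - q)) & 1) & ((idx >> (n - 2 - q)) & 1)
        state = state * (1 - 2 * both_one)
    return state

def quantum_kernel(A, B):
    """Gram matrix K[i, j] = |<phi(a_i)|phi(b_j)>|^2 (the state overlap)."""
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(0)
X_tr, y_tr = rng.uniform(0, np.pi, (40, 4)), rng.integers(0, 2, 40)
X_te = rng.uniform(0, np.pi, (10, 4))

# The precomputed Gram matrix feeds directly into the standard SVM pipeline.
clf = SVC(kernel="precomputed").fit(quantum_kernel(X_tr, X_tr), y_tr)
pred = clf.predict(quantum_kernel(X_te, X_tr))
```

On hardware, each entry of the Gram matrix would instead be estimated from repeated shots of U(\mathbf{x}')^\dagger U(\mathbf{x}); the classical SVM step is identical.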
The circuit can also be generalized to include trainable parameters (e.g., single-qubit rotation angles or entangler weights), enabling data-driven kernel adaptation. In this formulation, the feature map becomes U(\mathbf{x},\boldsymbol{\theta}) , producing states |\phi(\mathbf{x};\boldsymbol{\theta})\rangle=U(\mathbf{x},\boldsymbol{\theta})|0\rangle^{\otimes n} and an induced kernel k_{\boldsymbol{\theta}}(\mathbf{x},\mathbf{x}') \;=\; \bigl|\langle \phi(\mathbf{x};\boldsymbol{\theta}) \mid \phi(\mathbf{x}';\boldsymbol{\theta}) \rangle\bigr|^2. The geometry of the induced feature space is now controlled by the parameter vector \boldsymbol{\theta}. The parameters may be optimized to improve task fit (e.g., by maximizing kernel-target alignment, minimizing a margin-based surrogate, or reducing cross-validated risk) using gradient-based or gradient-free routines.
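A sketch of such kernel adaptation, assuming trainable per-qubit offsets \boldsymbol{\theta} that enter the encoding as RY(x_i + \theta_i), optimized by a gradient-free random search on kernel-target alignment. Both the parameterization and the optimizer are illustrative choices, not necessarily those used here; entanglers are omitted for brevity.

```python
# Data-driven kernel adaptation by maximizing kernel-target alignment.
import numpy as np
from functools import reduce

def feature_state(x, theta):
    angles = x + theta                       # trainable single-qubit rotations
    qubits = [np.array([np.cos(a / 2), np.sin(a / 2)]) for a in angles]
    return reduce(np.kron, qubits)           # entanglers omitted for brevity

def kernel(X, theta):
    S = np.array([feature_state(x, theta) for x in X])
    return np.abs(S @ S.T) ** 2              # k_theta(x, x') = |<phi|phi'>|^2

def alignment(K, y):
    Y = np.outer(y, y)                       # target Gram matrix, y in {-1,+1}
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, (30, 4))
y = np.where(X.sum(axis=1) > 2 * np.pi, 1, -1)

# Gradient-free random search: keep the theta with the best alignment so far.
best_theta, best_a = np.zeros(4), alignment(kernel(X, np.zeros(4)), y)
for _ in range(200):
    cand = best_theta + rng.normal(scale=0.1, size=4)
    a = alignment(kernel(X, cand), y)
    if a > best_a:
        best_theta, best_a = cand, a
```

The adapted kernel(X, best_theta) then replaces the fixed Gram matrix in the precomputed-kernel SVM; gradient-based alternatives (e.g., parameter-shift gradients on hardware) follow the same loop structure.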
Task: A metal additive-manufacturing dataset for categorizing melt-pool modes across different metals during electron beam powder bed fusion and laser powder bed fusion. Samples are labeled as "LOF", "balling", "desirable", "keyhole", or "spatter formation". The three smallest classes were removed, leaving a two-class subset for this experiment.
Execution: Simulator
Number of qubits: 4
Number of layers: 2
Accuracy: 70%
Balanced Accuracy: 70%
Macro F1: 0.70
ROC-AUC (regular/tuned): 0.81
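For reference, metrics like the ones reported above can be computed with scikit-learn. The toy labels, predictions, and scores below are placeholders, and the printed numbers are illustrative only, not the study's results.

```python
# Computing accuracy, balanced accuracy, macro F1, and ROC-AUC on toy data.
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             f1_score, roc_auc_score)

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])   # ground-truth labels
y_pred = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 0])   # hard predictions
y_score = np.array([0.2, 0.6, 0.1, 0.8, 0.7,        # decision scores
                    0.4, 0.9, 0.3, 0.85, 0.15])     # (for ROC-AUC)

print("Accuracy:         ", accuracy_score(y_true, y_pred))
print("Balanced Accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Macro F1:         ", f1_score(y_true, y_pred, average="macro"))
print("ROC-AUC:          ", roc_auc_score(y_true, y_score))
```

Balanced accuracy averages per-class recall, which matters here because the melt-pool classes are imbalanced; ROC-AUC uses continuous scores (e.g., SVM decision-function values) rather than hard labels.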
The Trainable QSVM classifier provides fast analysis of melt-pool stability and achieves 70% accuracy. By flagging unstable regimes earlier, it can trim engineering review time, reduce scrap/rework, and improve throughput.
Annual projected savings from reduced reviews/scrap and faster throughput.
Return on investment based on value add vs. TCO.
Efficiency gains vs. baseline review/inspection workflows.
Simple and transparent: from your brief to quantum results, code, and a paper
Map your problem to the right quantum use case
Confirm the quantum-classical hybrid approach and key assumptions
Download ready-to-run code; execute on simulator
Review reproducible results — iterate as needed
Compare against classical baseline; prepare for quantum hardware
Run your first industrial task with QSVM and get transparent results within a clear report.
Try Your First Use Case for Free