Superpositions Studio

    QNN
    Quantum Neural Networks

    Parameterized quantum circuits for classification/regression on NISQ devices — transparent, reproducible, and benchmarked on Superpositions Studio.

    Reproducible QNN training
    Downloadable code & benchmarks
    Classical ML baselines included
    Transparent metrics & results

    Overview

    A Quantum Neural Network (QNN) uses parameterized quantum circuits (PQCs) to learn mappings from inputs to outputs. Data are embedded into quantum states via an encoder, and trainable gates with parameters θ are optimized to minimize a loss function computed from measured observables. QNNs are studied as quantum analogues of classical neural networks, with potentially rich expressivity.

    Why It Matters

    QNNs explore whether quantum circuits can learn useful representations, especially in small-data regimes and for problems with structure that quantum circuits can exploit. They are a core path for quantum ML research on current devices.

    How QNN Works

    A five-step process to build and train quantum neural networks for classification and regression

    01

    Choose a data encoder

    Choose a data encoder to map classical inputs to circuits (angle or amplitude encoding, optional data reuploading).
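
    As a minimal sketch of angle encoding in plain NumPy (framework-agnostic; the helper names `ry` and `angle_encode` are illustrative, not from any particular library):

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix (illustrative helper)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(x):
    # Angle encoding: one classical feature x becomes the RY angle on |0>
    return ry(x) @ np.array([1.0, 0.0])

state = angle_encode(np.pi / 2)
```

    Amplitude encoding would instead load a normalized feature vector directly into the state amplitudes, trading fewer qubits for deeper state-preparation circuits.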

    02

    Design a parameterized ansatz

    Design a parameterized ansatz: layers of trainable rotation gates, typically interleaved with entangling gates.
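
    A minimal sketch of a layered, entangling two-qubit ansatz in plain NumPy (illustrative only; the `ansatz` helper and its layout are assumptions, not a prescribed design):

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation (illustrative helper)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT entangling gate on two qubits (control = qubit 0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def ansatz(thetas, state):
    # thetas has shape (layers, 2): one RY angle per qubit per layer,
    # each rotation layer followed by a CNOT entangler
    for t0, t1 in thetas:
        state = CNOT @ (np.kron(ry(t0), ry(t1)) @ state)
    return state

psi0 = np.array([1.0, 0.0, 0.0, 0.0])          # |00>
psi = ansatz(np.full((2, 2), np.pi / 2), psi0)  # two trainable layers
```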

    03

    Define a loss

    Define a loss (e.g., cross-entropy for classification) based on measured observables.
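
    A minimal sketch of a cross-entropy loss computed from a measured observable (plain NumPy; `class1_prob` and `binary_cross_entropy` are illustrative helper names):

```python
import numpy as np

def class1_prob(state):
    # Probability of class 1 from the measured <Z> observable on one qubit:
    # <Z> = P(0) - P(1) lies in [-1, 1]; rescale it to a probability
    z = abs(state[0]) ** 2 - abs(state[1]) ** 2
    return (1.0 - z) / 2.0

def binary_cross_entropy(p, y, eps=1e-12):
    # Standard BCE; eps guards against log(0) from finite-shot estimates
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

p = class1_prob(np.array([1.0, 0.0]))  # |0> gives <Z> = 1, so p = 0
loss = binary_cross_entropy(p, y=0)
```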

    04

    Optimize parameters

    Optimize θ with gradient-based methods (e.g., the parameter-shift rule) or gradient-free methods (e.g., SPSA).
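
    The parameter-shift rule can be sketched exactly for a single RY gate, whose ⟨Z⟩ expectation is cos θ (plain NumPy; helper names are illustrative):

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation (illustrative helper)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expval_z(theta):
    # <Z> after RY(theta) on |0>; analytically this equals cos(theta)
    state = ry(theta) @ np.array([1.0, 0.0])
    return abs(state[0]) ** 2 - abs(state[1]) ** 2

def parameter_shift_grad(theta):
    # For gates generated by Pauli operators, the exact gradient is a
    # difference of two shifted circuit evaluations:
    # df/dθ = [f(θ + π/2) − f(θ − π/2)] / 2
    return (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2)) / 2

g = parameter_shift_grad(0.7)  # matches d/dθ cos(θ) = −sin(θ) at θ = 0.7
```

    On hardware, each shifted evaluation is itself a shot-averaged estimate, which is where the shot budget below comes in.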

    05

    Validate and test

    Validate on held-out data and test generalization.
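
    A seed-controlled train/test split keeps validation results reproducible (a minimal NumPy sketch; `seeded_split` is an illustrative helper, not a library function):

```python
import numpy as np

def seeded_split(X, y, test_frac=0.25, seed=0):
    # Fixed-seed shuffle so validation/test results are reproducible
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_test = int(len(y) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], y[train], X[test], y[test]

X = np.arange(8).reshape(8, 1)
y = np.arange(8)
X_tr, y_tr, X_te, y_te = seeded_split(X, y)
```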

    Real-World Applications

    Where QNNs provide practical solutions for quantum machine learning and classification tasks

    ML

    Classification/regression

    Classification/regression machine learning problems

    Signal Processing

    Anomaly detection

    Anomaly detection and signal processing prototypes

    Research

    Quantum feature learning

    Quantum feature learning research

    Hybrid

    Hybrid pipelines

    Hybrid pipelines with classical preprocessing or post-processing

    Strengths & Limitations

    Strengths

    • Flexible modeling with PQCs and hybrid training
    • Potentially expressive hypothesis spaces
    • Integrates with classical ML stacks

    Limitations

    • Barren plateaus and vanishing gradients
    • High shot cost and noise sensitivity
    • Advantage over classical NNs is unproven in general; gains are task-dependent

    Benchmarking and Verification

    Compare against classical baselines (logistic regression, SVM, MLP). Report accuracy, learning curves, and model/run parameters (depth, shots). Seed-controlled runs ensure reproducibility.
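
    As one such baseline, logistic regression can be trained with plain gradient descent (a minimal NumPy stand-in for a library implementation such as scikit-learn's; the function name and hyperparameters here are assumptions):

```python
import numpy as np

def logistic_baseline(X, y, lr=0.5, epochs=200, seed=0):
    # Plain-NumPy logistic regression via full-batch gradient descent;
    # the seed controls the initialization for reproducible runs
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss
        b -= lr * (p - y).mean()
    return w, b

# Toy linearly separable data: negative x -> class 0, positive x -> class 1
X = np.array([[-1.0], [-0.5], [0.5], [1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = logistic_baseline(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```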

    Hardware & Requirements

    Qubits: Depends on the encoding strategy and the number of features
    Depth: Tied to the ansatz; deeper circuits risk noise and barren plateaus
    Shots: 1k–50k per circuit run, depending on budget
    Backend: Prototype on a simulator, then run on a quantum device

    Proof-of-Concept Example

    An example experiment demonstrating QNN training and evaluation

    Task

    Binary classification on a 2D moon dataset

    Ansatz

    Layered entangling circuit with data reuploading (1–3 layers)

    Optimizer: SPSA / Adam
    Metrics: Accuracy/AUC, loss vs. epochs
    Outcome: Demonstrate stable training on a simulator; hardware demo for tiny models
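
    For reference, a moon-shaped dataset can be generated without external dependencies (a minimal NumPy sketch in the spirit of scikit-learn's `make_moons`; the parameters shown are illustrative):

```python
import numpy as np

def make_moons(n=200, noise=0.1, seed=0):
    # Two interleaving half-circles plus Gaussian noise
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, np.pi, n)
    upper = np.c_[np.cos(t[: n // 2]), np.sin(t[: n // 2])]
    lower = np.c_[1.0 - np.cos(t[n // 2:]), 0.5 - np.sin(t[n // 2:])]
    X = np.vstack([upper, lower]) + rng.normal(scale=noise, size=(n, 2))
    y = np.array([0] * (n // 2) + [1] * (n - n // 2))
    return X, y

X, y = make_moons()
```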


    Ready to Run QNN?

    Run QNN on Superpositions Studio — design circuits, train models, and export reproducible results.

    Powered by Superpositions Studio — Transparent, reproducible quantum computing