Last updated: June 2025
Key Takeaways
- The Madrid quantum network deployed QKD across more than 130 km of live optical fibre owned by two independent telecom operators, without disrupting existing classical traffic
- The CiViC/OpenQKD architecture achieved full quantum-classical interoperability by restricting modifications to the optical transport and encryption layers only, leaving legacy infrastructure intact
- For security architects planning quantum-safe migrations, this deployment establishes a replicable, multi-vendor, SLA-compliant blueprint that removes the “we can’t touch production” objection from the QKD conversation
The Production Problem That Kills Most QKD Pilots
Most quantum key distribution pilots fail before they start. Not because the physics doesn’t work — it does. They fail because the security team can’t get approval to touch production infrastructure.
Here’s the scenario your procurement committee fears: a QKD module from Vendor A needs to co-propagate quantum signals alongside 40 Gbps classical traffic on fibres carrying SLA-protected enterprise traffic. One misconfiguration causes signal bleed. The quantum channel collapses. Worse, the classical traffic degrades too. The CISO owns the outage. The QKD project gets shelved for three years.
The Madrid quantum network, documented in arXiv:2409.01069v3 under the CiViC and OpenQKD projects, ran this exact gauntlet — and produced a working architecture. Researchers installed QKD modules from multiple vendors inside the production nodes of two separate network operators, connected them through an optically-switched network spanning more than 130 km of deployed optical fibre, and kept classical traffic running under strict service level agreements throughout.
The result is the closest thing the industry has to a validated production deployment guide for heterogeneous QKD infrastructure.
What “Heterogeneous” Actually Means for Your Attack Surface
Quantum key distribution (QKD) is a cryptographic method that uses the quantum mechanical properties of photons to distribute encryption keys between two parties. Any attempt to intercept the key exchange disturbs the quantum states in a measurable way, making eavesdropping detectable in principle. Unlike post-quantum cryptography (PQC), which relies on mathematical hardness assumptions, QKD’s security derives from physics.
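The detectability claim can be made concrete with a toy intercept-resend simulation. This is a simplified BB84-style sketch, not a model of real hardware; all parameters are illustrative. An eavesdropper who measures each photon in a random basis and re-sends it corrupts roughly 25% of the sifted key, which the legitimate parties observe as an elevated error rate:

```python
import random

def bb84_qber(n_bits: int, eve: bool, rng: random.Random) -> float:
    """Estimate the quantum bit error rate (QBER) of a toy BB84 run.

    Alice sends random bits in random bases; Bob measures in random
    bases.  An intercept-resend eavesdropper measures each photon in a
    random basis and re-sends it, corrupting ~25% of the sifted key.
    """
    errors = sifted = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        alice_basis = rng.randint(0, 1)

        if eve:
            eve_basis = rng.randint(0, 1)
            # A wrong-basis measurement randomizes the bit Eve re-sends.
            bit_in_flight = bit if eve_basis == alice_basis else rng.randint(0, 1)
            basis_in_flight = eve_basis
        else:
            bit_in_flight, basis_in_flight = bit, alice_basis

        bob_basis = rng.randint(0, 1)
        bob_bit = bit_in_flight if bob_basis == basis_in_flight else rng.randint(0, 1)

        # Sifting: keep only rounds where Alice and Bob chose the same basis.
        if bob_basis == alice_basis:
            sifted += 1
            errors += (bob_bit != bit)
    return errors / sifted

rng = random.Random(42)
print(f"QBER without Eve: {bb84_qber(20_000, eve=False, rng=rng):.3f}")  # 0.000
print(f"QBER with Eve:    {bb84_qber(20_000, eve=True, rng=rng):.3f}")   # ~0.25
```

A QBER near zero means the channel is clean; a QBER approaching 25% is the physics-level tamper alarm that distinguishes QKD from algorithmic key exchange.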
But QKD’s theoretical security guarantee means nothing if the surrounding infrastructure introduces classical vulnerabilities. This is where heterogeneity becomes a security architecture problem, not just an engineering one.
The Madrid deployment was heterogeneous across three dimensions:
- Multi-vendor QKD modules: Hardware from different providers, each with distinct optical specifications, management interfaces, and key generation characteristics
- Multi-operator infrastructure: Two independent telecom operators, each with separate operational domains, NOC procedures, and contractual obligations to existing customers
- Mixed signal environment: Quantum and classical signals co-propagating on the same physical fibre plant
Each dimension multiplies your attack surface if not managed deliberately. A single-vendor, single-operator lab deployment sidesteps all three. The Madrid team addressed all three simultaneously — in production.
“This effort is intended to lay the foundation for large-scale quantum network deployments.” — CiViC/OpenQKD research team, arXiv:2409.01069v3
Technical Architecture: Where the Hard Decisions Live
Optical Isolation as a Security Control
Co-propagation of quantum and classical signals on the same fibre is the central technical challenge. Classical signals operate at power levels that would overwhelm the single-photon detectors QKD relies on. The Madrid architecture addressed this through extreme isolation from external disturbances at the optical layer — treating photon leakage as a security control problem, not merely a signal quality problem.
This framing matters for security architects. Optical isolation isn’t just about keeping the quantum channel clean. It’s about ensuring that classical traffic cannot be used as a side channel to infer information about key generation activity. If an adversary can correlate classical signal anomalies with QKD key exchange events, the timing metadata alone becomes an intelligence asset.
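The timing-correlation risk can be checked empirically. A minimal sketch, assuming you can export timestamps of classical-channel anomalies and QKD key-generation events (the event lists below are hypothetical):

```python
import bisect

def coincidence_fraction(key_events, anomalies, window_s=1.0):
    """Fraction of key-generation events that have a classical-channel
    anomaly within +/- window_s seconds.  A fraction well above the
    chance level suggests the classical channel leaks timing metadata
    about QKD activity."""
    anomalies = sorted(anomalies)
    hits = 0
    for t in key_events:
        i = bisect.bisect_left(anomalies, t - window_s)
        if i < len(anomalies) and anomalies[i] <= t + window_s:
            hits += 1
    return hits / len(key_events)

# Hypothetical timestamps (seconds since capture start).
key_events = [10.0, 25.0, 40.0, 55.0, 70.0]
anomalies = [10.3, 24.8, 40.1, 54.6, 69.9, 90.0]
print(coincidence_fraction(key_events, anomalies))  # 1.0
```

A coincidence fraction near 1.0 when chance would predict far less is exactly the metadata leak described above, even if no key material is exposed.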
Surgical Modification Scope
The deployment team limited infrastructure changes to two layers: optical transport and encryption. Everything above those layers — routing protocols, network management systems, billing infrastructure, customer-facing services — remained untouched.
This constraint was not accidental. It was the architectural decision that made SLA compliance possible. By defining a clear modification boundary, the team could guarantee to both operators that legacy traffic characteristics would not change. Security architects planning QKD deployments should treat this boundary definition as a prerequisite deliverable, not an afterthought.
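The boundary can be enforced mechanically in change management. A sketch of such a gate, where the layer names and request format are assumptions for illustration, not taken from the Madrid paper:

```python
# Illustrative change-management gate.  Layer names and the request
# format are assumptions, not from the Madrid deployment.
ALLOWED_LAYERS = {"optical-transport", "encryption"}

def within_boundary(change_request: dict) -> bool:
    """Accept a change only if every layer it touches lies inside the
    pre-agreed modification boundary, so the layers above (routing,
    management, billing) are provably untouched."""
    return set(change_request["layers"]) <= ALLOWED_LAYERS

print(within_boundary({"id": "CR-1", "layers": ["encryption"]}))             # True
print(within_boundary({"id": "CR-2", "layers": ["encryption", "routing"]}))  # False
```

Encoding the boundary as data rather than policy prose is what lets you hand both operators a checkable guarantee instead of a promise.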
Comparison: Lab QKD vs. Madrid Production Deployment
| Dimension | Typical Lab/Pilot Deployment | Madrid CiViC/OpenQKD Deployment |
|---|---|---|
| Fibre environment | Dedicated dark fibre | Live production fibre with classical co-propagation |
| Vendor diversity | Single QKD vendor | Multiple QKD module providers |
| Operator scope | Single organization | Two independent telecom operators |
| SLA obligations | None | Strict SLAs protecting legacy classical traffic |
| Modification scope | Full infrastructure access | Optical transport and encryption layers only |
| Standards compliance | Optional | Mandatory, integrated into deployment requirements |
| Management complexity | Unified NOC | Joint management across separate operational domains |
| Deployment scale | Typically under 50 km | More than 130 km of deployed optical fibre |
Standards Compliance as an Operational Requirement
The Madrid deployment treated standards compliance not as a checkbox but as an active constraint on architecture decisions. The team addressed legal and quality assurance requirements alongside the technical integration work — meaning compliance shaped the design, rather than being retrofitted after the fact.
For organizations operating under NIS2, DORA, or sector-specific mandates, this sequencing is critical. Compliance requirements that arrive after architecture is locked create expensive rework. The Madrid model embeds them from the start.
Industry Context: Where QKD Deployment Actually Stands
The Regulatory Clock Is Running
NIST finalized its first three post-quantum cryptographic standards in August 2024, establishing ML-KEM, ML-DSA, and SLH-DSA as the baseline for quantum-safe key exchange and digital signatures. Federal agencies face migration deadlines, and CISA has signaled that critical infrastructure operators should treat 2030 as a planning horizon for cryptographic agility.
QKD sits in a different regulatory lane than PQC — it’s a physical-layer key distribution mechanism, not a drop-in algorithm replacement. But the two approaches are increasingly discussed as complementary: PQC handles the software and protocol stack, QKD handles high-security point-to-point key exchange where the physics-based guarantee justifies the infrastructure investment.
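At the key level, the complementarity is usually realized by combining a QKD-delivered key with a PQC-derived key so the session key stays secret as long as either source holds. A minimal HKDF-extract-style combiner sketch (illustrative, not a vetted hybrid construction):

```python
import hashlib
import hmac

def combine_keys(qkd_key: bytes, pqc_key: bytes, context: bytes = b"hybrid-v1") -> bytes:
    """Derive a session key from two independent key sources.  HMAC over
    the concatenated secrets, bound to a context label: compromising one
    input alone does not reveal the output."""
    return hmac.new(context, qkd_key + pqc_key, hashlib.sha256).digest()

session_key = combine_keys(b"\x01" * 32, b"\x02" * 32)
print(len(session_key))  # 32
```

In practice the `pqc_key` input would come from an ML-KEM encapsulation and the combiner would follow a standardized construction; the point here is only the defense-in-depth shape.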
The Madrid deployment demonstrates that QKD can be integrated into the telecommunications ecosystem without requiring greenfield infrastructure — a finding that directly addresses the cost objection that has kept most enterprise QKD conversations theoretical.
Who Is Moving and Who Is Waiting
Telecom operators in Europe and Asia have led production QKD deployments. The CiViC and OpenQKD projects represent EU-funded efforts to move beyond single-operator demonstrations toward interoperable, multi-stakeholder infrastructure. Financial services firms in jurisdictions with long data retention requirements — where encrypted data captured today could be decrypted by a cryptographically relevant quantum computer within the retention window — have the strongest near-term business case for QKD investment.
Most North American enterprises remain in the evaluation phase. The primary blockers are not technical: they are procurement complexity (multi-vendor, multi-operator contracts), lack of internal expertise to assess QKD vendor claims, and absence of a validated reference architecture for production deployment.
The Madrid network addresses the third blocker directly.
The Cost of Waiting: A Concrete Frame
The “harvest now, decrypt later” threat model is not speculative. Nation-state adversaries with the resources to store encrypted traffic at scale are doing so. The question is not whether a cryptographically relevant quantum computer will exist — it is whether one will exist before your organization has migrated its most sensitive key exchange infrastructure.
For data with a 10-year sensitivity horizon — M&A communications, patient records, classified research — the migration clock started when adversaries began harvesting. Not when NIST published its standards.
The BeQuantum Perspective: From Blueprint to Operational Reality
The Madrid deployment answers the architecture question. It does not answer the operational question: once QKD infrastructure is running across multiple vendors and operators, how do you verify that the keys it generates are being used correctly, that the classical encryption layers haven’t been misconfigured, and that the integrity of the overall system can be audited?
This is where BeQuantum’s Digital Notary function addresses a gap the Madrid paper explicitly leaves open. The deployment documented joint management and operation of quantum and classical resources as a requirement — but the paper does not describe how key provenance, usage audit trails, or cross-operator verification are handled after key generation.
Organizations deploying QKD infrastructure need a verification layer that sits above the optical and encryption layers — one that can attest to key generation events, log cross-operator handoffs with tamper-evident records, and provide the audit trail that compliance frameworks require. BeQuantum’s PQC Layer provides algorithm-agile key management that can wrap QKD-generated keys in post-quantum authenticated envelopes, ensuring that even if the QKD channel is compromised at the classical management plane, the keys themselves carry cryptographic proof of origin.
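A tamper-evident audit trail of this kind is typically built as a hash chain, where each record commits to its predecessor. The following is an illustrative sketch of the idea, not BeQuantum's actual implementation:

```python
import hashlib
import json

def append_record(chain: list, event: dict) -> None:
    """Append an audit record that commits to the previous record's
    hash, so any later modification breaks verification."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "event": event, "hash": digest})

def verify(chain: list) -> bool:
    """Recompute the chain from the start; any edited or reordered
    record makes verification fail."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, {"type": "key-generated", "link": "operatorA-operatorB", "t": 1})
append_record(log, {"type": "key-handoff", "link": "operatorA-operatorB", "t": 2})
print(verify(log))          # True
log[0]["event"]["t"] = 99   # tamper with an earlier record
print(verify(log))          # False
```

An auditor can verify this chain without any access to the quantum hardware, which is the property regulators care about.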
For enterprises evaluating IceCase hardware deployments in co-location facilities — the physical-layer equivalent of the Madrid production node installations — the same principle applies: the hardware security boundary must be paired with a software attestation layer that regulators and auditors can inspect without requiring access to the quantum hardware itself.
What Your Team Should Do in the Next 90 Days
Step 1: Map your fibre dependencies (Days 1-30)
Identify which of your organization’s encrypted links run over leased telecom fibre versus owned dark fibre. QKD deployment economics and SLA complexity differ significantly between these cases. The Madrid model applies most directly to leased-fibre environments where you must negotiate with an operator — document which links those are and which operator controls them.
Step 2: Assess your modification boundary (Days 31-60)
For each high-sensitivity link identified in Step 1, define what layers your security team can modify without triggering SLA renegotiation or change management escalation. If you cannot answer this question, you cannot scope a QKD deployment. Request the relevant SLA schedules from your telecom providers and have your legal team flag the change notification clauses.
Step 3: Require multi-vendor interoperability evidence from QKD vendors (Days 61-90)
Any QKD vendor that cannot demonstrate interoperability with at least one other vendor’s modules in a production or near-production environment should be treated as a single-vendor lock-in risk. Use the Madrid deployment as your reference benchmark. Ask vendors specifically: “Can your modules operate alongside another vendor’s hardware on the same optically-switched network under SLA-protected classical co-propagation conditions?”
Frequently Asked Questions
Q: Does deploying QKD mean we can stop migrating to post-quantum cryptography algorithms?
A: No. QKD and PQC address different parts of the cryptographic stack. QKD distributes keys using quantum physics at the physical layer — it does not replace the need for quantum-safe algorithms in your TLS stack, code signing infrastructure, or certificate authority chain. NIST’s PQC standards (ML-KEM, ML-DSA, SLH-DSA) apply to software and protocol layers that QKD does not touch. A complete quantum-safe posture requires both.
Q: How does the Madrid deployment’s multi-operator model affect liability when a QKD channel fails?
A: The Madrid team addressed this through strict SLA compliance that protected legacy classical traffic — meaning the QKD layer was required to fail safely without degrading classical services. For enterprise deployments, this translates to a contractual requirement: your QKD deployment agreement must specify that quantum channel failures trigger graceful fallback to classical key exchange, not service outages. Your legal team should review operator SLAs for change notification and fault isolation clauses before any production QKD work begins.
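The graceful-fallback requirement reduces to a simple control-flow contract. A sketch with hypothetical interfaces (neither function name comes from any operator's SLA):

```python
# Illustrative fail-safe key sourcing.  Function names and interfaces
# are assumptions for illustration only.
def next_session_key(qkd_link, classical_kex) -> tuple[bytes, str]:
    """Prefer the QKD link; on any failure fall back to classical key
    exchange so the service degrades in security margin, not in uptime."""
    try:
        return qkd_link(), "qkd"
    except Exception:
        return classical_kex(), "classical-fallback"

def healthy() -> bytes:
    return b"\xaa" * 32

def broken() -> bytes:
    raise RuntimeError("quantum channel down")

print(next_session_key(healthy, healthy)[1])  # qkd
print(next_session_key(broken, healthy)[1])   # classical-fallback
```

The contractual clause in the FAQ answer is the legal mirror of this try/except: a quantum-channel fault must never propagate into a classical service outage.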
Q: What key generation rates should we expect from a production QKD deployment at this scale?
A: The Madrid paper (arXiv:2409.01069v3) does not publish specific key generation rates or throughput benchmarks — this is one of the data gaps in the current literature. Rates vary significantly by QKD protocol (BB84, CV-QKD), fibre distance, and signal loss conditions. At 130 km distances, expect rates in the kilobits-per-second range under typical conditions, which is sufficient for session key refresh in high-security applications but not for bulk data encryption. Request vendor-specific rate curves at your target fibre distances before committing to a deployment architecture.
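The kilobits-per-second expectation follows from fibre loss arithmetic: at a typical 0.2 dB/km attenuation, 130 km costs 26 dB, so only about 0.25% of photons arrive. A back-of-the-envelope estimate, where every parameter is an illustrative assumption rather than a measured value from the Madrid deployment:

```python
# Back-of-the-envelope BB84-style key-rate estimate.
# All parameters below are illustrative assumptions.
pulse_rate_hz   = 1e9    # source repetition rate
mean_photon_no  = 0.1    # weak-coherent-pulse intensity
loss_db_per_km  = 0.2    # standard single-mode fibre attenuation
distance_km     = 130
detector_eff    = 0.2    # single-photon detector efficiency
sift_factor     = 0.5    # basis reconciliation discards ~half the bits
postproc_factor = 0.5    # error correction + privacy amplification

transmittance = 10 ** (-loss_db_per_km * distance_km / 10)   # ~0.0025
secure_bps = (pulse_rate_hz * mean_photon_no * transmittance
              * detector_eff * sift_factor * postproc_factor)
print(f"~{secure_bps / 1e3:.1f} kbit/s secure key")  # low tens of kbit/s
```

Since the loss term is exponential in distance, halving the span raises the rate by roughly two orders of magnitude, which is why vendor rate curves at your exact fibre distances matter more than headline figures.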