## QLOC seminar

This is the webpage for the regular meetings organised by the Quantum and Linear Optical Computation (QLOC) research group at INL.

These are meant to be informal sessions aimed at researchers and students interested in quantum information and computation at INL, U Minho, and elsewhere. The format varies from week to week and may include introductory talks, presentations of recent or ongoing research, seminars by visitors, or discussions of recent papers (journal club).

The current schedule is on Wednesdays from 14.00 to 15.30. The meetings are usually hybrid (in person and via Zoom).

I will give a brief and very informal overview of some error mitigation techniques: zero-noise extrapolation, probabilistic error cancellation, dynamical decoupling, and some techniques for direct and indirect measurement of Pauli operators.

Here are some references:

Review on error mitigation: arXiv:2210.00921 [quant-ph]

Probabilistic error cancellation: arXiv:1612.02058 [quant-ph]

Direct and indirect measurements: Phys. Rev. Research 1, 013006

Example of dynamical decoupling: Post on AWS Quantum Technologies Blog

Other examples of error mitigation with dynamical decoupling: arXiv:1807.08768 [quant-ph]
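As a rough illustration of the first technique above, zero-noise extrapolation runs the same circuit at deliberately amplified noise levels and extrapolates the measured expectation value back to zero noise. The sketch below is not from any of the references: the exponential decay model, the decay rate, and the quadratic fit are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_expectation(scale, ideal=1.0, decay=0.15):
    """Toy stand-in for running a circuit at an amplified noise level:
    the signal decays exponentially in the noise scale, plus shot noise."""
    return ideal * np.exp(-decay * scale) + rng.normal(0.0, 0.002)

# Zero-noise extrapolation: measure at noise scales >= 1 (e.g. obtained
# via gate folding), fit a polynomial, and evaluate the fit at scale = 0.
scales = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
values = np.array([noisy_expectation(s) for s in scales])
coeffs = np.polyfit(scales, values, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print(f"raw value at scale 1: {values[0]:.3f}")
print(f"ZNE estimate:         {zne_estimate:.3f}")
```

The extrapolated value lands much closer to the ideal (here 1.0) than the raw noisy measurement, at the cost of amplifying statistical fluctuations.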

TBA

It is known that quantum contextuality is an important resource for quantum information tasks. However, although contextuality is an active research area with its feet firmly planted in both foundational and applied aspects, little work has been done to understand how contextuality applies in cases where there could be some signalling between measurements. In particular, standard frameworks allow for compatible measurements to be performed in any order, since the order in which measurements are performed has no effect on the observed outcome distributions. Of these approaches, the sheaf framework for contextuality, developed by Abramsky and Brandenburger in [1], has been useful in a number of ways, largely because it allows for tools already long established in the algebraic topology/sheaf theory community to be applied to the specific case of contextuality.

Here, generalising Gogioso and Pinzani's framework for ‘definite causal setups’ [3], we introduce an enabling relation on the measurement set which allows for dependencies between measurements to be described. Several setups from the literature which have been shown to be non-classical – via violation of an inequality – are describable within this framework, where non-classicality instead arises as the absence of a global section of the relevant presheaf. In this talk I will introduce the framework, describe some examples, and describe how we can transport some of the tools developed for sheaf contextuality [1, 2] to this setting.

[1] S. Abramsky and A. Brandenburger. The sheaf-theoretic structure of non-locality and contextuality. New Journal of Physics, 13(11):113036 (2011).

[2] R. S. Barbosa. Contextuality in quantum mechanics and beyond, DPhil thesis, University of Oxford (2015).

[3] S. Gogioso and N. Pinzani. The sheaf-theoretic structure of definite causality. Electronic Proceedings in Theoretical Computer Science, 343:301–324 (2021).

In 1993, Avshalom Elitzur and Lev Vaidman introduced a gedanken experiment that sparked a series of foundational discussions in quantum theory: the famous Elitzur–Vaidman bomb experiment [1]. In this QLOC session we will discuss their proposal and the area of quantum imaging with undetected photons [2], giving a general overview of this field of research. In particular, we shall take the opportunity to discuss our recent results applying the event graph formalism [3] to multipath interferometry [4] as a way of probing coherence and generalized contextuality.

[1] Avshalom C. Elitzur and Lev Vaidman. Quantum mechanical interaction-free measurements. Foundations of Physics 23(7): 987–997 (1993). arXiv:hep-th/9305002.

[2] Gabriela Barreto Lemos, Victoria Borish, Garrett D. Cole, Sven Ramelow, Radek Lapkiewicz, and Anton Zeilinger. Quantum imaging with undetected photons. Nature 512(7515): 409–412 (2014). arXiv:1401.4318 [quant-ph].

[3] Rafael Wagner, Rui Soares Barbosa, and Ernesto F. Galvão. Inequalities witnessing coherence, nonlocality, and contextuality. arXiv:2209.02670 [quant-ph] (2022).

[4] Rafael Wagner, Anita Camillini, and Ernesto F. Galvão. Coherence and contextuality in a Mach–Zehnder interferometer. arXiv:2210.05624 [quant-ph] (2022).

The Clifford hierarchy is a nested sequence of sets of quantum gates critical to achieving fault-tolerant quantum computation. Diagonal gates of the Clifford hierarchy and 'nearly diagonal' semi-Clifford gates are particularly important: they admit efficient gate teleportation protocols that implement these gates with fewer ancillary quantum resources such as magic states. Despite the practical importance of these sets of gates, many questions about their structure remain open; this is especially true in the higher-dimensional qudit setting. Our contribution is to leverage the discrete Stone–von Neumann theorem and the symplectic formalism of qudit stabiliser mechanics towards extending results of Zeng–Cheng–Chuang (2008) and Beigi–Shor (2010) to higher dimensions in a uniform manner. We further give a simple algorithm for recursively enumerating all gates of the Clifford hierarchy, a simple algorithm for recognising and diagonalising semi-Clifford gates, and a concise proof of the classification of the diagonal Clifford hierarchy gates due to Cui–Gottesman–Krishna (2016) for the single-qudit case. We generalise the efficient gate teleportation protocols of semi-Clifford gates to the qudit setting and prove that every third-level gate of one qudit (of any prime dimension) and of two qutrits can be implemented efficiently. Numerical evidence gathered via the aforementioned algorithms supports the conjecture that higher-level gates can be implemented efficiently.

Based on arXiv:2011.00127 [quant-ph], published in Proc. Royal Soc. A.

We have seen in recent QLOC presentations that standard quantum physics needs complex numbers, a new result in quantum foundations that might impact future quantum technologies and that motivated the analysis of the 'imaginarity' of quantum theory as a resource. Formally, working within the framework of quantum resource theories [1], Hickey and Gour developed the resource theory of imaginarity [2], a formalism with many similarities to the resource theory of coherence [3]. In this talk I will review what is known about the resource theory of imaginarity and discuss the recent operational characterization provided by Wu et al. [4], who argued in favour of the relevance of imaginary operations in quantum information processing.

[1] Eric Chitambar and Gilad Gour. Quantum resource theories. Reviews of Modern Physics 91(2): 025001 (2019).

[2] Alexander Hickey and Gilad Gour. Quantifying the imaginarity of quantum mechanics. Journal of Physics A: Mathematical and Theoretical 51(41): 414009 (2018).

[3] Alexander Streltsov, Gerardo Adesso, and Martin B. Plenio. *Colloquium*: Quantum coherence as a resource. Reviews of Modern Physics 89(4): 041003 (2017).

[4] Kang-Da Wu et al. Operational resource theory of imaginarity. Physical Review Letters 126(9): 090401 (2021).

Current noisy intermediate-scale quantum (NISQ) devices exhibit several limitations, such as a small number of physical qubits. To address this limitation, circuit knitting techniques have been developed to partition large quantum circuits into smaller instances that can be run on current devices. In this journal club, I will review recent developments in these techniques, focusing on wire cutting and gate cutting, and on the role of classical communication in the cost of gate cutting.

[1] C. Piveteau and D. Sutter, Circuit knitting with classical communication, arXiv:2205.00016 [quant-ph], April 2022.

[2] A. Lowe, M. Medvidović, A. Hayes, L.J. O'Riordan, T.R. Bromley, J.M. Arrazola, and N. Killoran, Fast quantum circuit cutting with randomized measurements, arXiv:2207.14734 [quant-ph], July 2022.

The promise of hybrid quantum algorithms with advantage, i.e., algorithms that can leverage (limited) quantum processing power by combining it with classical processing, is alluring. This is mainly due to the known state of the art in physical implementations of quantum computers: they suffer from short coherence times, which we would like to supplement with the classical computers we have developed over the last century. However, we may also appreciate hybrid algorithms from a theoretical point of view: when thinking in terms of computational complexity, it should be surprising that a fixed and limited amount of coherence can still provide us with a computational advantage. (Do note that this demands a formalization of "coherence time".) Now take, in particular, the task of Phase Estimation. It is ubiquitous, and a self-evidently important task for quantum computing researchers. It turns out that this task is amenable to hybridization; what is more, multiple authors [1,2,3] have shown that a certain computational advantage can be achieved for any coherence depth limit, with a continuous trade-off between advantage and coherence time. In our publication, Duarte Magano and I have shown that the existence of this trade-off can be derived very naturally from the framework of Quantum Singular Value Transformations, as introduced by Gilyén et al. [4] in 2018; in the process we strengthened the result, showing that the task of Eigenvalue Estimation admits an analogous family of hybrid algorithms respecting the same trade-off.

[1] N. Wiebe and C. Granade, Efficient Bayesian phase estimation, Physical Review Letters 117, 010503 (2016).

[2] D. Wang, O. Higgott, and S. Brierley, Accelerated variational quantum eigensolver, Physical Review Letters 122, 140504 (2019).

[3] T. Giurgica-Tiron, I. Kerenidis, F. Labib, A. Prakash, and W. Zeng, Low depth algorithms for quantum amplitude estimation, Quantum 6, 745 (2022).

[4] A. Gilyén, Y. Su, G. H. Low, and N. Wiebe, Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics, in Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC 2019): 193–204.

The variational quantum eigensolver (VQE) was proposed as a NISQ-friendly hybrid quantum-classical algorithm framed in the circuit model. In VQE, the angles of gates in a fixed-structure trial quantum circuit are varied in order to minimize a cost function (the expectation value of a Hamiltonian representing the problem).
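To make the circuit-model loop concrete, here is a minimal sketch of a VQE-style optimization using plain linear algebra in place of a real device. The single-qubit Hamiltonian H = Z + 0.5 X and the one-parameter Ry ansatz are illustrative choices of mine, not taken from the talk or from [1].

```python
import numpy as np

# Illustrative single-qubit Hamiltonian H = Z + 0.5 X (an assumption,
# standing in for "a Hamiltonian representing the problem").
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def energy(theta):
    """Cost function: <psi(theta)|H|psi(theta)> for the ansatz Ry(theta)|0>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# Classical outer loop: a simple grid search over the single gate angle
# stands in for the optimizer (gradient descent, SPSA, ...).
thetas = np.linspace(0, 2 * np.pi, 4001)
energies = np.array([energy(t) for t in thetas])
vqe_energy = energies.min()

print(f"VQE energy:   {vqe_energy:.4f}")
print(f"exact ground: {np.linalg.eigvalsh(H)[0]:.4f}")
```

For this toy Hamiltonian the one-parameter ansatz can reach the exact ground energy, -sqrt(1.25); in realistic VQE the ansatz expressivity and the optimizer are the limiting factors.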

In [1], a measurement-based variational quantum eigensolver (MB-VQE) was proposed. This algorithm adapts the approach of VQE to the framework of measurement-based quantum computation. In this case, rather than angles of gates in a circuit, the targets of the optimization are angles of measurement bases on a fixed-structure trial graph state.

In this talk, I will introduce the algorithm and analyse the trade-offs in the direct conversion of circuit-model VQE to MB-VQE.

The implementation and analysis of the algorithm was done jointly with Filipa Peres.

[1] R. R. Ferguson, L. Dellantonio, K. Jansen, A. Al Balushi, W. Dür, and C. A. Muschik. A measurement-based variational quantum eigensolver. Physical Review Letters 126(22): 220501 (2021).

In the first part of this talk I will present a recent result about boson bunching. In the celebrated Hong–Ou–Mandel effect, two photons sent on a balanced beam-splitter will always bunch in one of two modes. However, any source of partial distinguishability between the photons (e.g. time-delays, difference in polarization, etc) diminishes this effect, lowering the bunching probability. This fact, together with other physical and mathematical arguments, justify the general rule-of-thumb that indistinguishable photons bunch the most. In our work we disprove this alleged straightforward link between indistinguishability and bunching by exploiting a recent finding in the theory of matrix permanents. We exhibit a family of optical circuits where the bunching of photons into two modes can be significantly boosted by making them partially distinguishable via an appropriate polarization pattern. This boosting effect is already visible in a 7-photon interferometric process, making the observation of this phenomenon within reach of current photonic technology.
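The Hong–Ou–Mandel bunching described above can be reproduced numerically from matrix permanents, which give output probabilities for indistinguishable photons. This is a generic textbook calculation for the two-mode balanced beam splitter, not the boosted circuit family from the talk:

```python
import numpy as np
from itertools import permutations
from math import factorial

def permanent(M):
    """Naive permanent via permutation expansion (fine for small matrices)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# Balanced (50:50) beam splitter, the Hong-Ou-Mandel setting.
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def output_probability(U, out_pattern):
    """P(output pattern) for one photon in each input mode, via permanents."""
    rows = [i for i, n in enumerate(out_pattern) for _ in range(n)]
    sub = U[np.ix_(rows, [0, 1])]
    norm = np.prod([factorial(n) for n in out_pattern])
    return abs(permanent(sub)) ** 2 / norm

p_coincidence = output_probability(U, (1, 1))  # one photon per output mode
p_bunched = output_probability(U, (2, 0)) + output_probability(U, (0, 2))
print(f"coincidence: {p_coincidence:.3f}")
print(f"bunched:     {p_bunched:.3f}")
```

For perfectly indistinguishable photons the coincidence probability vanishes and the bunching probability is 1; introducing partial distinguishability (e.g. a polarization mismatch) generically lowers bunching, which is exactly the rule of thumb the talk's counterexamples violate.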

In the second part of the talk I will briefly present a new method to validate the correct functioning of a boson sampler, based on how photons distribute in partitions of the output modes. Efficient validation tests are crucial to justify claims of quantum computational advantage. The method we propose is versatile and encompasses previous tests for validating boson samplers based on bunching phenomena, marginal distributions and even some suppression laws. We show via theoretical arguments and numerical simulations that our method can be used in practical scenarios to distinguish ideal boson samplers from ones affected by realistic noise sources.

Finally, I will mention an open-source package created during my thesis for studying multiphoton interference, BosonSampling.jl, written in the Julia programming language (a hands-on demo session will be held later this week).

Quantum computation promises significant speedups over its classical counterparts for certain problems. But which properties of quantum mechanics fuel such advantages? Quantum resource theories, e.g., coherence and entanglement, provide rigorous mathematical frameworks to approach this question. I will present recent results that show that coherence serves as a quantum resource that quantitatively determines the performance of Shor's factorization algorithm (arXiv:2203.10632 [quant-ph]). Before diving headfirst into the results, I will give an accessible introduction to resource theories in general and coherence theory in particular. This includes a motivation to study resource theories in the first place and a simple setting in which an operational advantage emerges from coherence, justifying why we call these frameworks "resource" theories after all. With a brief reminder of Shor's algorithm, and armed with the necessary tools, I will then present a possible approach to analyzing Shor's algorithm (and other quantum algorithms) in terms of the employed quantum resources. We will see what this means for quantum resources in Shor's algorithm and what foundational (and even practical) insights the results give us.

In this talk I will introduce a new framework for contextuality based on simplicial sets, combinatorial models of topological spaces that play a prominent role in modern homotopy theory. Our approach extends measurement scenarios to consist of spaces (rather than sets) of measurements and outcomes, and thereby generalizes nonsignaling distributions to simplicial distributions, which are distributions on spaces modeled by simplicial sets. Strong contextuality can be generalized suitably for simplicial distributions, allowing us to define cohomological witnesses that extend the earlier topological constructions, restricted to algebraic relations among quantum observables, to the level of probability distributions. We will revisit foundational results in quantum theory, such as Gleason's theorem, the Kochen–Specker theorem, and Fine's theorem for the CHSH scenario.

Based on the preprint arXiv:2204.06648 joint with Aziz Kharoof and Selman Ipek.

Current quantum computers are hampered by noisy gates, low qubit counts, and time-expensive access. Nevertheless, the race to achieve the first experimental demonstration of quantum advantage (i.e., solving useful problems faster on quantum devices than on classical computers) has already started, making the following question pertinent: how can we get the most out of today's noisy intermediate-scale quantum (NISQ) computers?

Aiming to answer the question posed above, in this presentation I will talk about two of my most recent works: a new efficient technique to simulate open quantum systems on quantum computers, Quantum TEDOPA (Q-TEDOPA) [1], and quantum error mitigation for quantum computation [2]. For the former, I will explain how to implement Q-TEDOPA on an IBM quantum computer and discuss the speedup obtained relative to classical simulation of open quantum systems. For the latter, I will describe some quantum error mitigation techniques and how a layered implementation of these can increase the fidelity of a quantum simulation of the Heisenberg model by 2.8x (on an IBM quantum computer).

[1] José D. Guimarães, Mikhail I. Vasilevskiy, Luís S. Barbosa. Efficient method to simulate non-perturbative dynamics of an open quantum system using a quantum computer, arXiv:2203.14653 [quant-ph].

[2] José D. Guimarães, Carlos Tavares. Towards a layered architecture for error mitigation in quantum computation, IEEE International Conference on Quantum Software 2022 (accepted, to be published soon).

Quantum state tomography aims to learn properties of quantum systems from experiments. Traditional prediction techniques suffer from the curse of dimensionality: the number of parameters needed to describe a system grows exponentially with its size, and these methods inherit this dependence both in the number of copies of the state and in the computational resources they require.
The classical shadows protocol circumvents this problem. Classical shadows built from N copies of the state suffice to predict the expectation values tr(O_i ρ), i = 1, …, M, of M arbitrary Hermitian operators up to additive error ε, provided that N ≥ O(log(M) · max_i ‖O_i‖_shadow² / ε²).
There is no dependence on the dimension of the system. The shadow norm ‖·‖_shadow depends on the specific set of unitary operations used in the protocol. For global Clifford unitaries, this norm is bounded by the Hilbert–Schmidt norm of the operator, whereas for random Pauli unitaries the scaling depends on the operator norm and the locality of the operator.

In this journal club we will introduce the technique and overview some of the proofs of the number of copies of the state needed to achieve convergence, as well as the bound on the shadow norm in the case of global Clifford unitaries. Next, we will see an application to quantum process tomography: by exploiting the Choi–Jamiołkowski isomorphism, quantum process tomography becomes equivalent to quantum state tomography, where one can use classical shadows to retrieve information about a quantum channel.
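As a toy illustration of the random-Pauli variant for a single qubit: each snapshot measures in a uniformly random Pauli basis and applies the inverted measurement channel, 3|b⟩⟨b| − I; averaging traces against these snapshots estimates expectation values. The state |+⟩ and the observables are my own choices.

```python
import numpy as np

rng = np.random.default_rng(7)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Eigenbases of the three Pauli operators (columns = eigenvectors).
BASES = {
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2),
    "Z": np.eye(2, dtype=complex),
}

def shadow_snapshots(rho, n):
    """Single-qubit random-Pauli classical shadows: measure in a random
    Pauli basis, then invert the measurement channel via 3|b><b| - I."""
    snaps = []
    for _ in range(n):
        V = BASES[rng.choice(list(BASES))]
        probs = np.real([np.conj(V[:, k]) @ rho @ V[:, k] for k in range(2)])
        k = rng.choice(2, p=probs / probs.sum())   # Born-rule outcome
        proj = np.outer(V[:, k], np.conj(V[:, k]))
        snaps.append(3 * proj - I2)
    return snaps

rho_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
snaps = shadow_snapshots(rho_plus, 20000)
est_X = np.mean([np.real(np.trace(X @ s)) for s in snaps])
est_Z = np.mean([np.real(np.trace(Z @ s)) for s in snaps])
print(f"<X> estimate: {est_X:.2f} (exact: 1)")
print(f"<Z> estimate: {est_Z:.2f} (exact: 0)")
```

The same snapshots serve for any observable, which is the point of the log(M) dependence in the bound above.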

Bibliography

Main paper:

• Huang, H.-Y., Kueng, R., and Preskill, J. (2020). Predicting many properties of a quantum system from very few measurements. Nature Physics, 16(10), 1050–1057.

(Highly recommended, the proofs are easy to follow and include enough references to previous works)

Application to quantum process tomography:

• Levy, R., Luo, D., Clark, B. K. (2021). Classical Shadows for Quantum Process Tomography on Near-term Quantum Computers. arXiv:2110.02965.

The natural distance measures used in quantum theory, such as the trace distance or diamond norm, quantify optimal statistical distinguishability between quantum objects. However, such optimal behaviour may, in general, not be attainable by quantum processes with limited circuit depth and number of qubits (as is expected in the NISQ era). To address this we introduce operational distance measures between quantum states, measurements and channels that quantify their average-case statistical distinguishability via random quantum circuits. Specifically, we consider the average Total-Variation (TV) distance between measurement outputs of two quantum processes, in which quantum objects of interest (states, measurements, or channels) are intertwined with random quantum circuits and undergo a measurement in the computational basis. Importantly, we show that once a family of random circuits forms an approximate unitary 4-design, the average TV distance can be approximated by simple explicit functions of the objects we wish to compare. These functions define bona fide measures of average-case distance and satisfy many desired properties such as triangle inequality, subadditivity, or (restricted) data processing inequality. We argue that these quantifiers are more natural for studying the performance of NISQ devices than the conventional distances such as the trace distance or diamond norm. Contrary to those measures, our average-case distances capture the generic behaviour of experiments involving only moderate-depth quantum circuits that will be attainable in near-term devices. To back up our claims, we numerically investigate the usefulness of our distance measures on families of quantum circuits originating from random instances of variational quantum algorithms performed on moderate-size systems. We observe that the average-case distances usually capture the actual behaviour of such quantum circuits better than measures based on optimal statistical distinguishability.
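A minimal numerical sketch of the average-case idea, for two single-qubit states and Haar-random single-qubit "circuits". The states and the sampling-based averaging are illustrative choices of mine; the papers derive closed-form expressions under 4-design assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    """Sample a Haar-random d x d unitary via QR of a Ginibre matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(A)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

def tv(p, q):
    """Total-variation distance between two discrete distributions."""
    return 0.5 * np.sum(np.abs(p - q))

def born(rho, U):
    """Computational-basis outcome distribution after applying U to rho."""
    return np.real(np.diag(U @ rho @ U.conj().T))

# Two single-qubit states to compare (illustrative, not from the papers).
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)        # |0><0|
sigma = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)      # slightly mixed

# Average TV distance over random circuits, estimated by sampling.
samples = []
for _ in range(2000):
    U = haar_unitary(2)
    samples.append(tv(born(rho, U), born(sigma, U)))
avg_tv = float(np.mean(samples))

trace_dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))
print(f"average TV distance: {avg_tv:.3f}")
print(f"trace distance:      {trace_dist:.3f}")
```

The average-case quantity comes out strictly below the worst-case trace distance, illustrating how optimal distinguishability overstates what a generic moderate-depth experiment will see.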

The presentation is based on two recent works: arXiv:2112.14283 and arXiv:2112.14284 written together with Filip Maciejewski and Zbigniew Puchała.

In this talk, I will give an overview of arXiv:2102.07637 [quant-ph], showing that the resource theory of contextuality does not admit catalysts, i.e., there are no correlations that can enable an otherwise impossible resource conversion and still be recovered afterward. As a corollary, we observe that the same holds for nonlocality. As entanglement allows for catalysts, this adds a further example to the list of “anomalies of entanglement,” showing that nonlocality and entanglement behave differently as resources. We also show that catalysis remains impossible even if, instead of classical randomness, we allow some more powerful behaviors to be used freely in the free transformations of the resource theory.

ZX-calculus is a diagrammatic language that can be used for depicting and reasoning about quantum computations. Contrary to the rigid structure of quantum circuits, ZX-diagrams can be manipulated in a simple, visual way to yield equivalent (and oftentimes simpler) diagrams. These manipulations must follow specific rewrite rules, sets of which have been found that are complete for both Clifford and universal quantum circuits. In the first part of this journal club, I will give a brief and practical introduction to ZX-calculus, presenting several rewrite rules and working through simple examples [1]. In the second and third parts, I will discuss some applications of this framework, namely circuit simplification [2, 3] and classical simulation [4, 5].

[1] J. van de Wetering, ZX-calculus for the working quantum computer scientist (2020). arXiv:2012.13966 [quant-ph].

[2] R. Duncan, A. Kissinger, S. Perdrix, and J. van de Wetering, Graph-theoretic simplification of quantum circuits with the ZX-calculus, Quantum 4, 279 (2020).

[3] A. Kissinger and J. van de Wetering. Reducing the number of non-Clifford gates in quantum circuits. Physical Review A, 102, 022406 (2020).

[4] A. Kissinger and J. van de Wetering. Simulating quantum circuits with ZX-calculus reduced stabiliser decompositions (2021). arXiv:2109.01076 [quant-ph].

[5] A. Kissinger, J. van de Wetering, and R. Vilmart. Classical simulation of quantum circuits with partial and graphical stabiliser decompositions (2022). arXiv:2202.09202 [quant-ph].

In this talk I want to introduce the framework of General Probabilistic Theories (GPTs for short), focusing on the basic definitions and properties that a GPT must satisfy. We shall see how the GPT formalism arises from analysing structural constraints that quantum theory must satisfy, by asking the question: what other kinds of theories might also follow from such a framework? I then proceed to outline some relevant results in the literature that show the type of questions the GPT framework is well suited to pursue.

Quantum computers promise considerable speed-ups with respect to their classical counterparts. However, the identification of the innately quantum features that enable these speed-ups is challenging. In the continuous-variable setting—a promising paradigm for the realisation of universal, scalable, and fault-tolerant quantum computing—contextuality and Wigner negativity have been perceived as two such distinct resources. Here we show that they are in fact equivalent for the standard models of continuous-variable quantum computing. While our results provide a unifying picture of continuous-variable resources for quantum speed-up, they also pave the way towards practical demonstrations of continuous-variable contextuality, and shed light on the significance of negative probabilities in phase-space descriptions of quantum mechanics.

This is a continuation of the talk on 1st July 2021.

In this journal club, I will discuss the measurement-based model of quantum computation known as Pauli-based computation (PBC), first introduced in [1]. I will demonstrate that this model is universal and polynomial-time equivalent to the quantum circuit model. Additionally, I will show how this framework can be used to compile universal Clifford+T quantum circuits [1,2] and to perform hybrid quantum-classical computation [1].

[1] S. Bravyi, G. Smith, and J. A. Smolin, Phys. Rev. X 6, 021043 (2016), arXiv:1506.01396 [quant-ph].

[2] M. Yoganathan, R. Jozsa, and S. Strelchuk, Proc. R. Soc. A 475 (2019), arXiv:1806.03200 [quant-ph].

The study of classical simulation techniques for quantum circuits, and computations in general, has been beneficial for enlarging the verification capabilities of classical devices. Furthermore, it provides valuable insights into which parameters make the simulation task difficult, creating the gap between quantum and classical computation. In [BGL21], the authors approach the simulation task with a more state-specific procedure, taking advantage of the quantum circuit generating the state to be measured. The presented technique provides a constant-factor advantage over the general procedure when the computation can use unlimited memory, and an exponential advantage over simulations with polynomially restricted memory usage. Additionally, an adaptation to the MBQC model is provided. This work enlarges the set of efficiently simulable quantum computations by removing some of the restrictions on measurement patterns imposed in previous solutions [BR07].

[BGL21] Sergey Bravyi, David Gosset, and Yinchen Liu. How to simulate quantum measurement without computing marginals. arXiv:2112.08499, 2021.

[BR07] Sergey Bravyi and Robert Raussendorf. Measurement-based quantum computation with the toric code states. Physical Review A 76(2): 022304, 2007.

Gaussian boson sampling is a model of photonic quantum computing that has attracted attention as a platform for quantum devices capable of performing tasks that are out of reach of their classical counterparts. Most recent photonic quantum computational advantage experiments were performed within this Gaussian variant of boson sampling, having observed events with over 100 photons and seriously challenged the capabilities of competing classical algorithms. Thus, there is significant interest in solidifying the mathematical and complexity-theoretic foundations for the hardness of simulating these devices. We show that there is no efficient classical algorithm to approximately sample from the output of an ideal Gaussian boson sampling device unless the polynomial hierarchy collapses, under the same two conjectures as the original boson sampling proposal by Aaronson and Arkhipov.

Crucial to the proof is a new method for programming a Gaussian boson sampling device such that the output probabilities are proportional to permanents of (submatrices of) an arbitrary matrix. This provides considerable flexibility in programming, and likely has applications much beyond those discussed here. We leverage this to make progress towards the goal of proving hardness in the regime where there are fewer than quadratically more modes than photons (i.e., in the high-collision regime). Our reduction suffices to prove that GBS is hard in the constant-collision regime, though we believe some ingredients of it can be used to push this direction further.

We propose a method for classical simulation of finite-dimensional quantum systems, based on sampling from a quasiprobability distribution, i.e., a generalized Wigner function. Our construction applies to all finite dimensions, with the most interesting case being that of qubits. For multiple qubits, we find that quantum computation by Clifford gates and Pauli measurements on magic states can be efficiently classically simulated if the quasiprobability distribution of the magic states is non-negative. This provides the so far missing qubit counterpart of the corresponding result [V. Veitch et al., New J. Phys. 14, 113011 (2012)] applying only to odd dimension. Our approach is more general than previous ones based on mixtures of stabilizer states. Namely, all mixtures of stabilizer states can be efficiently simulated, but for any number of qubits there also exist efficiently simulable states outside the stabilizer polytope. Further, our simulation method extends to negative quasiprobability distributions, where it provides amplitude estimation. The simulation cost is then proportional to a robustness measure squared. For all quantum states, this robustness is smaller than or equal to robustness of magic.
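The sampling idea can be sketched in a few lines: draw phase-space points from the normalized absolute value of the quasiprobability, reweight each sample by the sign and the one-norm, and average. The distribution q and the observable values f below are made up for illustration; the point is that the estimator stays unbiased when q is negative, while the sampling cost grows with ‖q‖₁², the robustness-squared scaling mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy quasiprobability representation: q has a negative entry, and f(x)
# holds the observable's value at each phase-space point (both made up).
q = np.array([0.6, 0.5, -0.1])
f = np.array([1.0, -0.5, 1.0])
exact = float(q @ f)                 # quantity we want to estimate

# Sample from |q| / ||q||_1 and correct with the sign of q; the estimator
# is unbiased, but its variance scales with the negativity ||q||_1 squared.
one_norm = np.sum(np.abs(q))         # ||q||_1 > 1 signals negativity
p = np.abs(q) / one_norm
xs = rng.choice(len(q), size=100000, p=p)
estimate = float(np.mean(np.sign(q[xs]) * one_norm * f[xs]))

print(f"exact:    {exact:.3f}")
print(f"estimate: {estimate:.3f}")
print(f"||q||_1:  {one_norm:.3f}")
```

When q is non-negative, ‖q‖₁ = 1 and the estimator reduces to ordinary Monte Carlo sampling, matching the efficient-simulation regime of the abstract.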

Reference: arXiv:1905.05374 [quant-ph]

In the area of Causal Inference, causal structures are represented by Directed Acyclic Graphs (DAGs), and our goal is to point out DAGs that explain the correlations observed in our data. This explanation is not always unique, as we may have more than one structure compatible with the given distribution. In fact, there are many causal structures that explain exactly the same set of distributions. We can group the structures that give rise to the same correlations in what we call observational equivalence classes.

Based on previous work and original results, we will present this classification of classical causal structures, which is now resolved for the case of two and three observables and partially resolved for four. Furthermore, we will discuss how this classification can help in the search for examples of “Quantum-Classical gaps”, i.e. causal structures that classically do not explain certain distributions, but that explain them when we change the hidden variables (unobservable nodes) from classical random variables to quantum systems. The Bell structure is an example of a DAG that presents a Quantum-Classical gap.

Variational quantum algorithms (VQAs) have been gaining popularity as contenders for a chance at quantum advantage with noisy intermediate-scale quantum (NISQ) computers. Among them, the variational quantum eigensolver (VQE) was proposed in [1] to find the eigenstates and eigenvalues of chemical systems.

A key part of this class of quantum algorithms is the ansatz, a parameterized circuit that prepares trial states. This is typically the bottleneck of VQAs – either directly, due to excessive circuit depth, or indirectly, due to induced trainability issues.

I will talk about a few proposals of ansätze for chemistry problems, with a focus on the dynamic ansatz of ADAPT-VQE. This algorithm, proposed in [2] and refined in [3,4], grows the ansatz from scratch using information that is accessible (via measurements) along its execution. Because the resulting wave function is system-tailored, ADAPT-VQE can produce high-accuracy results with shallower circuits than VQE with predetermined ansätze.

[1] arXiv:1304.3061 [quant-ph]

[2] arXiv:1812.11173 [quant-ph]

[3] arXiv:1911.10205 [quant-ph]

[4] arXiv:2109.05340 [quant-ph]

After the impact of the PBR theorem [1] on the field of quantum foundations, much effort went into obtaining similar no-go results under fewer assumptions. To this end, researchers derived overlap inequalities addressing the psi-ontic/psi-epistemic question, and from this line of research concluded that there is a relation between psi-epistemic theories and noncontextual models: noncontextuality inequalities can be used as bounds on the overlaps of ontological models, which also quantify the degree of epistemicity of the model. In other words, every noncontextuality inequality can be understood as an overlap inequality [2].

In this journal club, I will discuss the recent results by Leifer and Duarte [3], who studied another class of overlap inequalities: instead of describing the degree of distinguishability between states in terms of their overlaps, these describe the degree of antidistinguishability. They showed that inequalities dealing with overlaps of antidistinguishable states can also be understood as noncontextuality inequalities. After discussing these results, I will speculate on relations between this framework and the coherence overlap scenarios described by Galvão and Brod in [4].

[1] Matthew F. Pusey, Jonathan Barrett, and Terry Rudolph. "On the reality of the quantum state", Nature Physics 8.6 (2012): 475-478, arXiv:1111.3328 [quant-ph].

[2] Matthew S. Leifer and Owen J. E. Maroney. "Maximally epistemic interpretations of the quantum state and contextuality", Physical Review Letters 110.12 (2013): 120401, arXiv:1208.5132 [quant-ph].

[3] Matthew S. Leifer, and Cristhiano Duarte. "Noncontextuality inequalities from antidistinguishability", Physical Review A 101.6 (2020): 062113, arXiv:2001.11485 [quant-ph].

[4] Ernesto F. Galvão, and Daniel J. Brod. "Quantum and classical bounds for two-state overlaps", Physical Review A 101.6 (2020): 062110, arXiv:1902.11039 [quant-ph].

The Bell-state measurement (BSM), defined as the projection of two qubits onto maximally entangled Bell states, is an essential feature of a number of quantum communication protocols. A complete BSM is not possible using only linear-optical elements, and most schemes achieve a success rate of no more than 50%. In this Journal Club, I will present two protocols able to surpass this limit by adding ancillary photons. Grice [1] shows that the introduction of a pair of ancillary entangled photons improves the success rate to 75%. Ewert and van Loock [2] surpass this limit using only unentangled single-photon ancillae, reaching a success probability of 25/32. Both [1] and [2] propose a generalization that reaches a success probability arbitrarily close to 100% through the addition of 2N − 2 ancillary photons.

An interesting application of the BSM is Browne and Rudolph's Type-II fusion gate, which can be used to connect small cluster-state fragments into a large cluster state for measurement-based quantum computing (MBQC). This gate is equivalent to a BSM in a rotated basis. In the last part of the presentation, I will show an adaptation of the two efficient BSM schemes that yields a Type-II fusion gate with the same enhanced success probability [3]. Such a scheme for the construction of a linear optical cluster state is universal for MBQC.

[1] W. P. Grice. "Arbitrarily complete Bell-state measurement using only linear optical elements." Physical Review A 84.4 (2011): 042331.

[2] F. Ewert and P. van Loock. "3/4-efficient Bell measurement with passive linear optics and unentangled ancillae." Physical Review Letters 113 (2014): 140403, arXiv:1403.4841 [quant-ph].

[3] M. Gimeno-Segovia et al. "From three-photon GHZ states to ballistic universal quantum computation." Physical Review Letters 115 (2015): 020502, arXiv:1410.3720 [quant-ph].

In this journal club, I will discuss the measurement-based model of quantum computation known as Pauli-based computation (PBC), first introduced in [1]. I will demonstrate that this model is universal and polynomial-time equivalent to the quantum circuit model. Additionally, I will show how this framework can be used to compile universal Clifford+T quantum circuits [1,2] and to perform hybrid quantum-classical computation [1].

[1] S. Bravyi, G. Smith, and J. A. Smolin, Phys. Rev. X 6, 021043 (2016), arXiv:1506.01396 [quant-ph].

[2] M. Yoganathan, R. Jozsa, and S. Strelchuk, Proc. R. Soc. A 475 (2019), arXiv:1806.03200 [quant-ph].

Photons are natural carriers of high-dimensional quantum information, and the encoded qudits can benefit from higher quantum information capacity and noise-resilience. However, schemes to generate the resources needed for high-dimensional quantum computing have so far not been demonstrated for linear optics. Here, we show how to generate GHZ states of arbitrary numbers of photons in arbitrary dimensions using destructive interference in linear optical circuits described by Fourier matrices. We combine our results with recent schemes for qudit Bell measurements to show that universal linear optical quantum computing can be performed in arbitrary dimensions.

A SWAP test is a quantum circuit that measures the overlap *r_{ρσ} = Tr(ρσ)* between two states ρ, σ. If we consider a set of *n* quantum states, different bounds on two-state overlaps result when we consider either i) diagonal, coherence-free states, or ii) general quantum states. The difference between i) and ii) allowed us to propose novel basis-independent coherence witnesses. I will show that the inequalities for overlaps of coherence-free states correspond to noncontextuality and locality inequalities, which suggests a unified framework for resource theories of coherence, contextuality and nonlocality.

This is joint work with Rui Soares Barbosa and Rafael Wagner.
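As a minimal numerical sketch of the quantity the SWAP test estimates: the "accept" outcome of a SWAP test occurs with probability (1 + Tr(ρσ))/2, from which the overlap is directly recovered. The states below are chosen purely for illustration:

```python
# What a SWAP test measures: for states rho, sigma the accept probability
# is (1 + Tr(rho sigma))/2, so the overlap r = Tr(rho sigma) is recoverable.
import numpy as np

def overlap(rho, sigma):
    """Two-state overlap r = Tr(rho sigma)."""
    return np.real(np.trace(rho @ sigma))

def swap_test_accept_prob(rho, sigma):
    """Probability of the 'accept' outcome of a SWAP test on rho, sigma."""
    return (1 + overlap(rho, sigma)) / 2

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho   = np.outer(ket0, ket0.conj())   # |0><0|
sigma = np.outer(plus, plus.conj())   # |+><+|

print(overlap(rho, sigma))                # 0.5, i.e. |<0|+>|^2
print(swap_test_accept_prob(rho, sigma))  # 0.75
```

Note that for the diagonal (coherence-free) version of σ the overlap would change, which is exactly the gap the coherence witnesses exploit.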

The talk will be a short introduction to the resource theory of contextuality [2,3] in the language of the Abramsky–Brandenburger sheaf-theoretic approach [4], including some results from the recent preprint [1].

We consider functions that transform empirical models on a scenario S to empirical models on another scenario T, and characterise those that are induced by classical procedures between S and T corresponding to 'free' operations in the (non-adaptive) resource theory of contextuality. We proceed by expressing such functions as empirical models themselves, on a new scenario [S,T] built from S and T. Our characterisation then boils down to the non-contextuality of these models.

We show that the construction [–,–] provides a closed structure in the category of measurement scenarios.

[1] Closing Bell: Boxing black box simulations in the resource theory of contextuality, Barbosa, Karvonen, & Mansfield (2021), arXiv:2104.11241 [quant-ph].

[2] Contextual fraction as a measure of contextuality, Abramsky, Barbosa, & Mansfield, in Phys. Rev. Lett. 119, 050504 (2017), arXiv:1705.07918 [quant-ph].

[3] A comonadic view of simulation and quantum resources, Abramsky, Barbosa, Karvonen, & Mansfield, in LiCS 2019, arXiv:1904.10035 [quant-ph].

[4] The sheaf-theoretic structure of non-Locality and contextuality, Abramsky & Brandenburger, New J. Phys. 13, 113036 (2011), arXiv:1102.0264 [quant-ph].

Complex numbers, i.e. numbers with a real and an imaginary part, were introduced to solve equations, such as $x^2 = -1$, that cannot be solved using real numbers. They are extremely useful in physics, especially in the field of electromagnetism, where complex numbers, with the use of Euler's formula, allow electromagnetic waves and their interference to be treated in a handy way. Even though complex numbers are a convenient mathematical tool in electromagnetism, we do not have to use them, so they are not an integral part of the theory. On the other hand, quantum theory is the only theory where complex numbers seem to play an essential role: a physical system is associated with a complex Hilbert space, and the time evolution of the state describing the system is given by the Schrödinger equation, where the imaginary unit appears.

A question that has led to controversial discussions is whether there exists a real version of quantum theory, in which states and observables are represented by real operators, that still explains the same quantum phenomena. Previous works have shown that such a real version of quantum mechanics can reproduce the statistics of any multipartite experiment, as long as the parties share arbitrary real quantum states. In this talk, I will present a recent work by Renou et al. [1], who showed that complex numbers are necessary for a quantum description of nature by proving that "real" and "complex" quantum physics give different predictions in particular network scenarios involving independent quantum state sources.

[1] Renou et al., Quantum physics needs complex numbers, arXiv:2101.10873 [quant-ph]

Quantum Darwinism is a physically appealing description of how quantum systems emerge as objective in our world. This objectivity can be viewed as a notion of agreement between observers about the observables being measured and the outcomes each observer acquires. However, a notion of agreement and a notion of objectivity are not necessarily an undebatable notion of classicality [1], even though they fit perfectly with what the founding fathers of quantum theory thought about classical systems (the important historical example here being the Einstein–Bohr debate [2]). In this talk I will discuss how another notion of classicality, generalized noncontextuality, can emerge when a quantum Darwinism process takes place [3].

[1] Between classical and quantum, N. P. Landsman (2005), arXiv:quant-ph/0506082.

[2] Agreement between observers: a physical principle?, Patricia Contreras-Tejada et al. (2021), arXiv:2102.08966 [quant-ph].

[3] Noncontextuality as a meaning of classicality in Quantum Darwinism, Roberto D. Baldijão et al. (2021), arXiv:2104.05734 [quant-ph]

The geometrical arrangement of a set of quantum states in projective Hilbert space can be found using relational information only. This information is encoded in the overlaps between pairs of states, as well as in higher-order Bargmann invariants encoding the relative orientation of n>2 states. We describe how to measure these invariants with a generalization of the SWAP test, and how to pool the information to obtain a complete characterization of their projective-unitary invariant properties. As applications, we describe basis-independent tests for linear independence, coherence, and for the presence of complex-valued amplitudes (“imaginarity”). We also describe how higher-order invariants can be used to certify multi-system indistinguishability.
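As a small numerical illustration of the higher-order invariants mentioned above: the third-order Bargmann invariant of three pure states is Δ₁₂₃ = Tr(ρ₁ρ₂ρ₃) = ⟨ψ₁|ψ₂⟩⟨ψ₂|ψ₃⟩⟨ψ₃|ψ₁⟩, and a nonzero imaginary part witnesses "imaginarity". The three states below are illustrative choices, not from the talk:

```python
# Third-order Bargmann invariant Delta_123 = Tr(rho1 rho2 rho3) for pure
# states; Im(Delta) != 0 is a basis-independent witness of imaginarity.
import numpy as np

def bargmann3(psis):
    """Tr(rho1 rho2 rho3) for three pure states given as ket vectors."""
    rhos = [np.outer(p, p.conj()) for p in psis]
    return np.trace(rhos[0] @ rhos[1] @ rhos[2])

ket0  = np.array([1, 0], dtype=complex)
plus  = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+>
plusi = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # |+i>

delta = bargmann3([ket0, plus, plusi])
print(delta)  # approx (1+1j)/4; the nonzero imaginary part
              # cannot occur for states with real amplitudes
```

Pairwise overlaps alone (all equal to 1/2 here) would miss this relative phase; the third-order invariant captures it.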

It is believed that quantum computers are more powerful than their classical counterparts. A promising approach to understanding their power is to explore restricted classes of computation which can be efficiently simulated by classical devices but become universal with the addition of an extra resource. One of the most prominent examples is stabilizer circuits, the class of circuits built out of Clifford gates, which, according to the Gottesman–Knill theorem, can be efficiently simulated classically.

In this talk, I will present another interesting restricted class of circuits that can be efficiently simulated classically, made out of a special set of unitary two-qubit gates restricted to act on nearest-neighbour (n.n.) qubits: the so-called matchgates. We will see that the family of circuits composed of these gates can be mapped to a system of non-interacting fermions, a map that can be viewed as a representation of the Clifford algebra of Majorana spinors and gives rise to a translation between fermions and qubits. In particular, we will describe how this (i) provides a straightforward proof of the classical simulability of matchgate circuits, and (ii), in conjunction with Clifford operations, produces further classes of classically efficiently simulatable quantum circuits.

References:

[1] R. Jozsa, A. Miyake, "Matchgates and classical simulation of quantum circuits", Proc. R. Soc. A 464, 3089–3106 (2008), arXiv:0804.4050 [quant-ph].

[2] B. M. Terhal, D. P. DiVincenzo, "Classical simulation of noninteracting-fermion quantum circuits", Phys. Rev. A 65, 032325 (2002), arXiv:quant-ph/0108010.
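For concreteness, a matchgate G(A, B) acts as A on the even-parity subspace span{|00⟩, |11⟩} and as B on the odd-parity subspace span{|01⟩, |10⟩}, subject to det A = det B. A minimal sketch (with illustrative A and B) building such a gate:

```python
# Sketch of the matchgate structure G(A, B): block-diagonal in the parity
# decomposition of two qubits, with det A = det B. Matrices are examples.
import numpy as np

def matchgate(A, B):
    """Build the 4x4 matchgate G(A, B); requires det A = det B."""
    assert np.isclose(np.linalg.det(A), np.linalg.det(B)), "need det A = det B"
    G = np.zeros((4, 4), dtype=complex)
    G[np.ix_([0, 3], [0, 3])] = A   # even-parity block: |00>, |11>
    G[np.ix_([1, 2], [1, 2])] = B   # odd-parity block:  |01>, |10>
    return G

theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # det A = 1
B = np.eye(2)                                    # det B = 1
G = matchgate(A, B)
print(np.allclose(G.conj().T @ G, np.eye(4)))    # True: G is unitary
```

Circuits of such gates on nearest-neighbour lines map to free-fermion dynamics, which is what underlies their classical simulability.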

Stabilizer circuits have a wide range of applications in quantum computation and information theory; for example, they play a prominent role in the theory of quantum error correction and fault-tolerant computation. These circuits are made up of gates drawn only from the Clifford group, and the Gottesman–Knill theorem asserts that, under certain conditions, they are efficiently classically simulable. Furthermore, simulating such circuits has been proved to be ⊕L-complete, allowing for neither universal quantum computation nor even universal classical computation.

In this journal club, I will discuss how a variety of additional ingredients might change the classical simulation complexity of sixteen cases of extended Clifford computations. In particular, it will be shown how augmenting such circuits by a purely classical ingredient (viz. adaptivity) unlocks universal quantum computation.

[1] R. Jozsa and M. van den Nest, Quantum Information and Computation 14 (2013), arXiv:1305.6190 [quant-ph].

In this talk, I will present how contextuality has recently been linked to Variational Quantum Algorithms (VQAs). VQAs hold immense importance in near-term quantum simulation. Variational Quantum Eigensolver (VQE) is one such algorithm that is used to find the ground state and ground state energy of a given Hamiltonian. I will describe:

(i) How a Contextuality test for a VQE procedure has been defined

(ii) How VQE procedures that fail this “quantumness” test admit a classical simulation whose associated problem is NP-complete, in contrast to the general VQE problem, which is QMA-hard

(iii) How a general VQE procedure can then be split into a contextual and a non-contextual part, such that the ground-state energy predicted by the non-contextual part (via classical simulation) can be corrected by running a VQE for just the quantum part, thereby saving resources compared to running a VQE for the full initial problem.

The talk will be based on:

[1] William M. Kirby and Peter J. Love. “Contextuality test of the nonclassicality of variational quantum eigensolvers.” Physical Review Letters 123, 200501 (2019). arXiv:1904.02260 [quant-ph].

[2] William M. Kirby and Peter J. Love. “Classical simulation of noncontextual Pauli Hamiltonians.” Physical Review A 102, 032418 (2020). arXiv:2002.05693 [quant-ph].

[3] William M. Kirby, Andrew Tranter, and Peter J. Love. “Contextual Subspace Variational Quantum Eigensolver.” arXiv preprint (2020). arXiv:2011.10027 [quant-ph].

In this talk, we present a quantum advantage scheme that is a fermionic analogue of Boson Sampling. This scheme, called Fermion Sampling, uses fermionic linear optical operations together with magic input states. On the one hand, we provide hardness guarantees for this scheme at a level comparable to the state-of-the-art hardness guarantees for Random Circuit Sampling, surpassing those of Boson Sampling. On the other hand, we argue that one might even construct practically useful sampling schemes based on Fermion Sampling, similarly to those constructed based on Boson Sampling. Finally, we discuss the experimental feasibility of our scheme.

Deep Neural Networks are universal function approximators that are central for designing systems that learn from unstructured or even unlabeled data. Variational Quantum Circuits, also known as Quantum Neural Networks, are new models that exploit effects like superposition, entanglement, and interference, and have already shown potential advantages such as speed-ups in training and faster processing for some classification problems. In this talk we show that Variational Quantum Circuits can be used to devise the optimal policy for Reinforcement Learning agents, therefore bringing potential quantum advantages to interactive learning frameworks.

We will outline the main existing models and results on quantum walks in the literature, and some of the algorithms based on this technique over graphs. Moreover, we would like to present some simulations and physical realisations of quantum walks, as well as open questions.

In this talk I will discuss noncontextuality, mentioning its different perspectives and possible approaches, with a focus on generalized noncontextuality [1]. This is an operational-probabilistic approach to noncontextuality that treats not only measurement procedures but also preparations and transformations. The main idea is to introduce this topic and the developments in the direction of using/certifying quantum-over-classical advantages.

[1] Spekkens, R. Physical Review A 71.5 (2005): 052108.

In this informal talk I will review some aspects of two quantum advantage experiments that were reported in 2019 and 2020: the Google Quantum AI random circuit experiment using superconducting chips [1] and the Gaussian Boson Sampling experiment by the University of Science and Technology of China group [2].

[1] https://www.nature.com/articles/s41586-019-1666-5

[2] https://science.sciencemag.org/content/370/6523/1460

The question of nonlocality became famous through a paper by Einstein, Podolsky, and Rosen, who questioned the completeness of quantum mechanics on the grounds that it would imply the existence of nonlocal effects. Later, John Bell showed with his theorem that this question could be put to a physical test, and several experiments have since indicated that quantum mechanics is correct.

However, the limits of nonlocality have also been questioned, mainly because quantum mechanics does not reach the maximum amount of correlation compatible with relativistic causality (no-signalling). This question was much debated, producing interesting results, for instance measures of nonlocality in communication problems. Similarly, the amount of correlation present in quantum states serves as a measure of non-classicality in the resources used by measurement-based quantum computing schemes. These measures are of great interest for understanding what gives quantum computers their computational advantage.

Quantum operations represented by a positive Wigner function can be efficiently classically simulated; Wigner negativity is thus a necessary (though not sufficient) resource for quantum speedup. We wish to derive an experimentally accessible witness for Wigner negativity. More precisely, our goal is to derive a bound Fn such that, if the fidelity of an unknown state with the nth Fock state is greater than Fn, then we can certify that the Wigner function associated with the unknown state displays some negativity somewhere. The computation of the bound Fn can be phrased as an infinite-dimensional linear program.

We derive a lower bound on Fn by considering a restriction of the problem, which yields a hierarchy of finite-dimensional semi-definite programs. We provide an analytical feasible solution for every rank of the hierarchy, which ensures a lower bound; the proof makes use of powerful techniques (Zeilberger's algorithm) for proving binomial identities. We believe this bound is tight, but deriving a matching upper bound remains an open question, and the convergence of the hierarchy to the original infinite-dimensional linear program is also not proven.
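As background on the negativity being witnessed: for the Fock state |n⟩ the Wigner function is W_n(x, p) = ((−1)^n/π) e^{−(x²+p²)} L_n(2(x²+p²)) (in the convention with ħ = 1 and ∫W = 1), so W_n(0,0) = (−1)^n/π and every odd Fock state is negative at the origin. A small illustrative check:

```python
# Wigner function of the Fock state |n> at a phase-space point, in the
# convention where W integrates to 1 (hbar = 1). L_n is the n-th Laguerre
# polynomial. Illustrative check of negativity at the origin.
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def wigner_fock(n, x, p):
    r2 = x ** 2 + p ** 2
    Ln = Laguerre.basis(n)(2 * r2)   # L_n evaluated at 2(x^2 + p^2)
    return (-1) ** n / np.pi * np.exp(-r2) * Ln

print(wigner_fock(1, 0.0, 0.0))   # -1/pi: |1> is negative at the origin
print(wigner_fock(0, 0.0, 0.0) > 0)  # True: the vacuum Wigner is positive
```

This is exactly why a sufficiently high fidelity with an odd Fock state forces the unknown state's Wigner function to be negative somewhere.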

I will talk about the Bayesian strategy for parameter estimation, and its application to the characterization of quantum systems. This includes the generic algorithm for parameter estimation using Bayesian inference, as well as improved protocols using Bayesian experimental design. I will also present numerical results for the estimation of a spin precession frequency.

Here are my two main references:

arXiv:1207.1655 [quant-ph]

arXiv:1111.0935 [quant-ph]
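A minimal grid-based sketch of the Bayesian update for a precession frequency, assuming the standard Ramsey-type likelihood P(0 | ω, t) = cos²(ωt/2); the true frequency, grid, and measurement times below are illustrative, not from the talk:

```python
# Grid-based Bayesian inference of a spin precession frequency with
# likelihood P(0 | w, t) = cos^2(w t / 2). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
w_true = 0.8
grid = np.linspace(0, 2, 2001)         # candidate frequencies
post = np.ones_like(grid) / grid.size  # flat prior

for t in np.linspace(0.5, 20, 60):     # experiment times
    p0 = np.cos(w_true * t / 2) ** 2
    outcome = rng.random() < p0        # simulate one binary measurement
    like = np.cos(grid * t / 2) ** 2
    post *= like if outcome else (1 - like)  # Bayes update
    post /= post.sum()                 # renormalise

estimate = grid[np.argmax(post)]
print(estimate)                        # posterior concentrates near 0.8
```

Bayesian experimental design then amounts to choosing each t adaptively (e.g. to maximise expected information gain) rather than from a fixed schedule, as in the references above.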

I will give a tutorial on the basic notions needed to understand the physical principles that govern superconducting qubits. To do so, I will first go through some of the most fascinating concepts in the theory of superconductivity, such as the breaking of electron-number conservation, the concept of the superconducting phase, and the Josephson effect. I will discuss the transmon qubit, relevant for IBM quantum hardware, and basic notions in circuit quantum electrodynamics, and how these relate to single-qubit gates, two-qubit gates, and the readout process. The level of the presentation will be kept suitable for final-year undergraduate and master's students.

In Bristol, we are interested in making silicon chips which can prepare and manipulate quantum states of light. I will discuss how we conduct large scale experiments with these chips and focus on some recent results on quantum correlated sampling machines. These devices use entanglement to perfectly control the correlations for high dimensional systems between remote users. This property can be used for efficient verification of quantum advantage experiments and for applications in quantum communication.

In this talk, the Grover algorithm [1] will be introduced, including both a quantum-simulation-motivated derivation [2] and a geometric interpretation of the amplitude amplification [3]. Then, the application of the Grover algorithm to the implementation of the Gutzwiller ansatz [4] on quantum hardware will be discussed.

References:

[1] L. K. Grover, arXiv:quant-ph/9605043

[2] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, CUP

[3] M. Boyer et al., arXiv:quant-ph/9605034

[4] M. C. Gutzwiller, Phys. Rev. Lett. 10, 159 (1963)
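A minimal statevector sketch of the Grover iteration (phase oracle followed by inversion about the mean), with an illustrative 3-qubit search problem:

```python
# Statevector simulation of Grover search on n = 3 qubits with one marked
# item; the optimal iteration count is about (pi/4) sqrt(N).
import numpy as np

n, marked = 3, 5                       # N = 8 items, target index 5
N = 2 ** n
psi = np.ones(N) / np.sqrt(N)          # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1            # phase flip on the marked item
s = np.ones(N) / np.sqrt(N)
diffuser = 2 * np.outer(s, s) - np.eye(N)  # inversion about the mean

for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):  # 2 iterations
    psi = diffuser @ (oracle @ psi)

print(np.argmax(np.abs(psi) ** 2))          # 5
print(round(np.abs(psi[marked]) ** 2, 3))   # 0.945
```

Geometrically [3], each iteration rotates the state by 2θ towards the marked item in a two-dimensional plane, where sin θ = 1/√N.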

I will present Bayesian networks and their importance for problems involving uncertainty. Subsequently, the problem of modeling decision-making processes in a stochastic environment will be used to demonstrate the transition from a classical scenario to a completely quantum one.

Ernesto will introduce the basics of the Feynman path-integral (FPI) approach to simulating quantum circuits. Unlike Schrödinger-type algorithms that store the whole wavefunction, the FPI algorithm is highly parallelizable and uses only polynomial-sized memory (in the number of qubits), at the cost of exponential time. Then Quinn will describe how we adapted the approach to the simulation of linear-optical circuits, implementing it in Python. This is ongoing work in collaboration with Raffaele Santagati, Jake Bulmer and Alex Jones.
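A toy sketch of the path-sum idea, using a single-qubit circuit for readability: one amplitude ⟨y|U_k⋯U_1|x⟩ is computed by summing over all sequences of intermediate basis states, so memory stays small while the number of paths grows exponentially with depth:

```python
# Feynman path-sum evaluation of a single transition amplitude
# <y| U_k ... U_1 |x>: sum over all intermediate computational basis
# states. Toy single-qubit circuit; the same idea scales to n qubits.
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.diag([1.0, -1.0])
gates = [H, Z, H]                      # HZH = X

def path_amplitude(gates, x, y):
    dim = gates[0].shape[0]
    amp = 0.0
    # enumerate every path x -> i1 -> ... -> y through the basis
    for path in product(range(dim), repeat=len(gates) - 1):
        states = (x, *path, y)
        term = 1.0
        for g, (a, b) in zip(gates, zip(states, states[1:])):
            term *= g[b, a]            # matrix element <b| g |a>
        amp += term
    return amp

print(round(path_amplitude(gates, 0, 1), 6))  # 1.0, since HZH|0> = |1>
print(round(path_amplitude(gates, 0, 0), 6))  # 0.0
```

Because the paths are independent, the outer loop parallelizes trivially, which is the property the talk exploits.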

In this talk I will give a brief introduction to variational quantum eigensolvers and their applications. I will then focus on some specific experimental implementations and present one of the methods that have been proposed to target more efficiently excited states of quantum systems.

I will review the original quantum teleportation protocol [1], and discuss some variations on this theme. These include: one-bit teleportation [2], port-based teleportation [3], gate teleportation [4], and postselected teleportation [5].

References:

[1] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.70.1895

[2] https://arxiv.org/abs/quant-ph/0002039

[3] https://arxiv.org/abs/0807.4568

[4] https://arxiv.org/abs/quant-ph/9908010

[5] https://arxiv.org/abs/1003.4971
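A statevector sketch of the standard protocol [1], with an illustrative input state: for each of Alice's four Bell outcomes, Bob's conditional Pauli correction recovers the input exactly:

```python
# Standard quantum teleportation: Alice Bell-measures the input qubit and
# her half of |Phi+>; Bob applies Z^m1 X^m2. The input state is an example.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+> shared by Alice & Bob

psi = np.array([0.6, 0.8j])                  # state to teleport
state = np.kron(psi, bell)                   # qubit order: input, A, B

# Bell basis on (input, A); outcome (m1, m2) fixes the correction Z^m1 X^m2
bells = {(0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
         (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
         (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
         (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2)}

for (m1, m2), b in bells.items():
    M = np.kron(b.conj().reshape(1, 4), I2)  # <b|_{12} (x) I_3
    bob = M @ state
    bob = bob / np.linalg.norm(bob)          # post-measurement Bob state
    corr = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2)
    fidelity = abs(np.vdot(psi, corr @ bob)) ** 2
    print((m1, m2), round(fidelity, 6))      # 1.0 for every outcome
```

Each outcome occurs with probability 1/4, and only two classical bits (m1, m2) need to be sent to Bob.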

I'll recap the notion of contextuality presented in Part 1 of this talk two weeks ago. Then, I'll focus on the role of contextuality as a resource, particularly in the context of measurement-based quantum computation, establishing a quantifiable relationship between a measure of contextuality and the amount of quantum advantage.

In this talk, we will discuss Quantum Hamiltonian Learning, which is a family of protocols exploiting a form of Bayesian Inference which uses a quantum computer to compute the update to our knowledge. We will start by describing the quantum likelihood estimation protocol and conclude by looking at a couple of experimental demonstrations involving the study of the electron spin in the NV centre in diamond.

A couple of references:

[1] N. Wiebe, C. Granade, C. Ferrie, and D. G. Cory, Phys. Rev. Lett. 112, 190501 (2014).

[2] J. Wang, S. Paesani, R. Santagati, S. Knauer, A. A. Gentile, N. Wiebe, M. Petruzzella, J. L. O’Brien, J. G. Rarity, A. Laing, and M. G. Thompson, Nature Physics 13, 551–555 (2017).

[3] R. Santagati, A. A. Gentile, S. Knauer, S. Schmitt, S. Paesani, C. Granade, N. Wiebe, C. Osterkamp, L. P. McGuinness, J. Wang, M. G. Thompson, J. G. Rarity, F. Jelezko, and A. Laing, Phys. Rev. X 9, 021019 (2019).

This will be an introductory talk on contextuality, a distinguishing non-classical feature of quantum systems, and its relationship with quantum advantage in informatic tasks.

I will talk about recent and on-going research on overlaps and related quantities, and what they tell us about non-classicality.