SUCCESS STORY: Ernesto F. Galvão, funding through the FET-Open programme

March 20, 2020



Ernesto F. Galvão is the Leader of the Quantum and Linear-Optical Computation Group at INL. He holds degrees in physics from the University of Oxford (PhD 2002), the Federal University of Rio de Janeiro (Master's 1998), and the Pontifical Catholic University of Rio de Janeiro (Bachelor's 1996). In his research, he studies different quantum computational models to identify and quantify the resources capable of achieving a quantum advantage in information processing.

Ernesto F. Galvão recently won a FET-Open funding call. The programme supports research that goes beyond what is currently known or even imagined. PHOQUSING | FET-OPEN proposes the implementation of two quantum sampling machines based on different technologies, allowing cross-checks and exploiting the advantages of each platform. The outcomes of PHOQUSING promise to outstrip what is experimentally possible today, establishing photonics as a leading new quantum computational technology in Europe.

Below is a full interview with Ernesto F. Galvão, in which he explains the project and the work ahead.

The whole concept of the project revolves around photonic quantum computation. What is it?

Quantum computers are devices that process information in a different way from normal computers, by using counter-intuitive quantum phenomena such as superposition and entanglement. These features provide computational shortcuts that make it possible to solve some problems that were previously considered intractable.

Because of this great theoretical promise, for decades there has been a scientific and technological push towards understanding the potential and limitations of quantum computers, and actually building larger and more capable devices. Quantum computers are very susceptible to interference, finicky to control, and often require extreme cryogenics and other complex techniques. There are many challenges involved in building these systems, but there are already a few technological platforms on which functioning prototypes have been built. For example, there have been impressive demonstrations using superconducting quantum circuits (IBM, Google) and ion traps (Alpine Quantum Technologies, IonQ).

Much of my research revolves around another quantum system that allows the construction of quantum computers: photons. Photonic quantum computation uses single photons to encode information; lenses, mirrors and other optical elements to process it; and single-photon detectors for read-out and control. I have been doing theory to support experiments that demonstrate small-scale devices, characterize the resources they use, and explore ways to use them computationally. Currently, the prototypes are still small, but the advantage of working at room temperature may help them scale up in the future. Photons are also excellent information carriers, so they play the important additional role of connecting quantum computers based on other platforms, such as ion traps.
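As a concrete illustration, the standard way to model such devices (this sketch is mine, not taken from the interview or from the PHOQUSING project) is the boson-sampling picture: single photons enter a linear interferometer described by a unitary matrix, and the probability of detecting them in a given set of output modes is the squared modulus of the permanent of a submatrix of that unitary. The short Python sketch below applies this rule to a balanced beam splitter and reproduces the Hong-Ou-Mandel effect, in which two perfectly indistinguishable photons never exit in separate modes.

```python
# A minimal sketch (not from the article) of the boson-sampling rule for
# linear-optical devices: with one photon per (distinct) input mode, the
# probability of detecting one photon per (distinct) output mode is the
# squared modulus of the permanent of a submatrix of the interferometer's unitary.
import itertools
import numpy as np

def permanent(M):
    """Permanent of a square matrix, by brute-force sum over permutations."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

def output_probability(U, in_modes, out_modes):
    """Probability of one photon in each of out_modes, given one photon
    injected into each of in_modes (all modes assumed distinct)."""
    sub = U[np.ix_(out_modes, in_modes)]
    return abs(permanent(sub)) ** 2

# Balanced (50:50) beam splitter acting on two optical modes.
U_bs = np.array([[1, 1j],
                 [1j, 1]]) / np.sqrt(2)

# Two indistinguishable photons, one per input mode: the probability of a
# coincidence (one photon per output mode) vanishes -- the Hong-Ou-Mandel dip.
print(output_probability(U_bs, [0, 1], [0, 1]))  # ~0.0
```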

 

What will be the impact in the near future?

Researchers make very different predictions as to when quantum computers will start delivering useful results. Last year there was an important experimental demonstration of quantum advantage by the Google AI research team, in which a superconducting chip performed a computation that would take a supercomputer days or years to do (the exact estimate is still being debated). The problem was quite contrived and does not yet translate into something useful. The first useful quantum computations may come in a few years, and will likely be simulations of new materials and drugs, as these seem to require fewer resources. But the race is on to identify other killer applications for noisy, near-term quantum computers. The photonic devices our project will build will explore some of these proposed applications, and will likely lead to new ones.

Large-scale quantum computation, on the other hand, will require error-correcting techniques, and overcoming a lot of other engineering problems, which will likely take much longer to develop. But when large quantum computers are finally built, we will be using the full computational power allowed by Nature, with technological spin-offs which are hard to predict.

  

How did PHOQUSING start and how does the project correlate with INL’s Research Strategy?

This project is a natural development of a long-standing collaboration I have had since 2012 with the experimental quantum optics group led by Fabio Sciarrino (Univ. Sapienza, Rome). My collaborator (and former PhD student) Daniel Brod, from Universidade Federal Fluminense, and I have been brainstorming ideas and suggesting new theoretical approaches to characterize and explore the computational capabilities of integrated interferometers developed jointly by the Rome lab and Roberto Osellame's group at the Milan Polytechnic.

With time, the devices have increased in complexity. Many of the more recent experiments I participated in concern the characterization of coherence and particle indistinguishability, which are key resources necessary for better-than-classical performance in photonic quantum computation. PHOQUSING is a concerted effort to create computationally useful devices based on larger interferometers, with a goal of having thousands of reconfigurable elements to manipulate more than 10 single photons for computational purposes.
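To give a sense of why indistinguishability is treated as a resource (a toy illustration of my own, not a result from the project): for two photons meeting at a balanced beam splitter, the probability of a coincidence count drops linearly from 1/2 for fully distinguishable photons to 0 for perfectly indistinguishable ones, so measuring coincidence rates is one simple way of quantifying how "quantum" the interference in a device really is.

```python
# Toy model (assumption for illustration): two-photon coincidence probability
# at a 50:50 beam splitter as a function of the photons' indistinguishability,
# quantified by the squared overlap of their internal states, x = |<psi_1|psi_2>|^2.
def coincidence_probability(x):
    """P(one photon per output port) = (1 - x) / 2, with x in [0, 1]."""
    return 0.5 * (1.0 - x)

for x in (0.0, 0.5, 1.0):
    print(f"overlap^2 = {x:.1f}  ->  P(coincidence) = {coincidence_probability(x):.2f}")
```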

This project is well aligned with INL’s research strategy, as it investigates an emerging technology (photonic quantum computation), demonstrating some of its key aspects to deliver first proof-of-principle applications.

 

What will be the main applications of these specialized devices?

There are already some theoretical proposals for applications of these devices, which have yet to be experimentally implemented. These include Monte Carlo integration (useful for financial pricing, for example), testing for certain graph-theoretical properties, applications of machine learning to identifying patterns in quantum experiments, and verifying proposed solutions to computationally intractable problems. Our goal is to demonstrate these, and other applications yet to be developed, using the photonic devices we will build.

The noisy, imperfect devices we are setting out to build are an intermediate stepping stone towards the much more demanding requirements for fully scalable quantum computers. Besides being computationally interesting on their own, these devices will help build the necessary technologies for future, large-scale, error-protected photonic quantum computers.

The consortium behind PHOQUSING combines the unique experimental and theoretical backgrounds of its academic partners with two of the key European start-ups in new quantum technologies. This is an outstanding example of an interdisciplinary team.

 

How did such a network start? 

Europe has well-established expertise in both photonics and quantum computation research. More recently, a few start-ups have appeared on the scene, mostly arising as spin-offs from academic research. This is the case with Quandela and QuiX, two start-ups that joined our consortium. As so often happens, academic collaborations opened up the path for this more applied research – the Rome lab already had ongoing collaborations with members of Quandela, for example. On the theory side, I have been working with the Rome lab for many years, and I have also collaborated in the past with Elham Kashefi, a theoretical computer scientist at Sorbonne University who also joined the consortium. Putting together this proposal was a question of identifying aligned research interests and capabilities among all these partners, which, I must say, was done brilliantly by the project coordinator, Fabio Sciarrino.

This pan-European consortium is, in part, the outcome of a crescendo of connections and relationships you built throughout your career.

 

How challenging was it to take the first step and establish contact with key actors?

My long-time partnership with the project coordinator, Fabio Sciarrino, started with a serendipitous meeting at a conference in Brazil in 2011, where I attended a talk by him. I immediately saw the connection between what I was studying theoretically and the experimental capabilities he was developing. After over a year of back-and-forth emails, we finally converged on the design of a feasible experiment, and that was the beginning of our collaboration. I met another member of the consortium, Elham Kashefi, during my doctorate, and we have kept in touch since, which resulted in a collaboration a few years ago.

 

For those starting their own scientific and technological networks, would you say that nurturing your connections over time is the key?

Certainly. Two of my collaborators in this project are people with whom I've worked for close to a decade. When establishing a new collaboration, there is an initial amount of effort required to establish a common language and understand the point of view and research priorities of your new partner. After this initial hurdle, it is important to keep in touch and always look out for new opportunities – with a bit of luck and the right partners, they'll show up.