Microelectronic industry scaling down to mesoscopic physics

Since the 1960s, the density of transistors integrated on a microprocessor has doubled roughly every 18 months, following Moore's law. The latest generation of MOS transistors, used for instance in the Pentium 4 (2005), has typical dimensions below 100 nm.
At a length scale of a few nm, purely quantum effects, such as energy quantization in the channel of the transistor or tunneling from the gate to the channel, will strongly degrade the performance of this device. The scaling of CMOS technology to ever smaller dimensions is therefore likely to be compromised by these quantum effects, and fundamental limits of miniaturization are expected to be reached in the near future, even if progress in materials can still significantly improve transistor performance.
Nevertheless, quantum effects are not necessarily only a nuisance, and we will see in the next section, dedicated to quantum computing, that quantum physics provides new principles for information processing. A computer taking advantage of these principles would be able to perform some tasks that are intractable for classical computers.
Although no quantum computer has yet been operated, many physical systems have been investigated to explore their potential for making quantum processors. Among these, superconducting circuits made of Josephson junctions have attracted wide interest because present fabrication techniques provide useful design flexibility. In addition, their quantum behavior has now been demonstrated.
The aim of this Ph.D. thesis is to further investigate the quantronium circuit developed in the Quantronics group since 2001. This circuit implements a quantum bit, the building block of a quantum computer. During this Ph.D., full manipulation and control of the quantum state of this device were achieved experimentally. The decoherence phenomenon, which prevents ordinary circuits from behaving quantum mechanically, was also characterized, and a new setup invented at Yale University by M. Devoret, for reading out the state of our qubit in a non-destructive way, was implemented.

Quantum computing

As pointed out in the early 1980s by D. Deutsch [1], a processor that exploits the laws of quantum mechanics could indeed perform some computational tasks more efficiently than classical processors can. This striking discovery started a new field called quantum computing.
The simplest description of a quantum processor is an ensemble of coupled two-level systems, called quantum bits or qubits. A quantum algorithm consists in the controlled evolution of the quantum state of the whole processor. Measurements of the quantum state are performed during the computation to provide the answer to the problem. It was shown soon after the proposal of quantum computing that a small number of single-qubit and two-qubit operations is sufficient for implementing any unitary evolution of a quantum processor. But could quantum computers provide a large enough advantage over classical ones to compensate for their greater complexity?
In order to answer this question, one first needs to evaluate the difficulty of a given problem for classical computers, using complexity theory.

Evaluating the complexity of a problem

Complexity theory evaluates how the resources needed to solve a given problem, such as time or memory, scale with the size of the problem. For instance, the addition of two integers takes a time proportional to the number of digits of the integers: one says it has O(N) complexity, where N is the number of digits.
Within the set of computable problems there is a hierarchy, which defines two subsets:

• 'P': the set of problems having polynomial complexity. For instance, the multiplication of two integers takes a time proportional to the square of the number of digits: it has O(N²) complexity. These problems are said to be easy, since a polynomial cost is almost always affordable.

• 'NP': the set of problems whose solutions are easy to check. For instance, the factorization of integers: given an integer n, it is easy to check whether or not another given integer a is a prime factor of n, since the Euclidean division of n by a takes polynomial time and gives the answer.

The set P belongs to NP, since if one can find the solution of a problem in polynomial time, one can also verify a candidate solution in polynomial time. But NP seems to be larger than P, and thus to contain problems that cannot be solved in polynomial time. For example, to find a prime factor a of n, one still has, so far, to try almost every integer up to √n, a process that takes a time exponential in the length of n. This factorization problem therefore seems, for the moment, not to be in P. An important open problem in computer science is to prove that NP is indeed larger than P.
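To make this asymmetry concrete, here is a minimal Python sketch (our own illustration, with hypothetical function names): checking a candidate factor costs a single division, while the naive search for a factor described above sweeps all integers up to √n.

```python
from math import isqrt

def is_prime_factor(n: int, a: int) -> bool:
    # Checking: one Euclidean division, polynomial in the number of digits of n.
    return n % a == 0

def find_factor(n: int) -> int:
    # Finding: naive trial division tries every integer up to sqrt(n),
    # i.e. a number of steps exponential in the number of digits of n.
    for a in range(2, isqrt(n) + 1):
        if n % a == 0:
            return a
    return n  # n is prime

n = 104729 * 104723                 # product of two known primes
assert is_prime_factor(n, 104723)   # fast: a single division
print(find_factor(n))               # slow: ~10^5 divisions here, exponential in general
```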
This classification into different complexity classes relies on the modern Church-Turing thesis, which asserts that all classical computers are equivalent. This means that the complexity class of a given problem does not depend on the hardware, i.e. the classification into P or NP problems is universal: a P problem on a Pentium is also a P problem on the latest Cray. This universality does not extend, however, to quantum processors, since quantum physics provides new principles for building a computer that is not subject to the Church-Turing thesis. Indeed, in 1994, P. Shor invented a quantum algorithm for the factorization of integers having polynomial complexity [2]. The impact of this discovery was considerable, since most cryptographic protocols used nowadays (such as the RSA encoding protocol) rely on the difficulty of factorizing large integers.

Quantum resources

Quantum computing uses several resources relying on the fundamental principles of quantum physics. The main quantum resource is linearity. Due to the superposition principle, a quantum computer could operate at the same time on a linear combination of several input data and give the output as a superposition of all the results (see figure 1.2).
This massive parallelism is not directly useful, because the measurement process needed at the end of the calculation projects the quantum processor onto a unique state, with a certain probability. As a consequence, quantum algorithms are not deterministic. Furthermore, since the no-cloning theorem [3] forbids copying an unknown quantum state, this probability distribution is not measurable in a single shot.
The art of quantum algorithmics is precisely to restore the power of quantum parallelism by circumventing the drawback of this unavoidable projection at readout. The idea is that, for particular types of problems different from the simple evaluation of a function, the final readout step can provide the sought answer with a high probability. This "quantum distillation" is exemplified by L. K. Grover's search algorithm [4]. This algorithm can indeed find a particular state in a Hilbert space of dimension N in a time O(√N), whereas the best classical search algorithm in a database of N elements has O(N) complexity. Starting from the quantum state given by the superposition of all Hilbert space elements, the evolution drives it towards the state that fulfills the particular condition required.
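The O(√N) scaling can be checked with a few lines of Python; the statevector sketch below is our own illustration, not part of the algorithm's original presentation:

```python
import numpy as np

# Minimal statevector simulation of Grover's search over N = 2^n states.
n, target = 10, 123               # N = 1024, index of the marked state
N = 2**n
psi = np.full(N, 1 / np.sqrt(N))  # uniform superposition over the whole Hilbert space

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # O(sqrt(N)) steps, vs O(N) classically
for _ in range(iterations):
    psi[target] *= -1             # oracle: flip the sign of the marked state
    psi = 2 * psi.mean() - psi    # inversion about the mean (diffusion operator)

print(iterations, abs(psi[target])**2)  # 25 iterations, success probability ~0.999
```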
Underlying this concept of a clever unitary evolution, which can be compared to a sort of "distillation" of the quantum states, is the phenomenon of entanglement. Entanglement is a general property of almost all states in a Hilbert space, and it can be seen as a resource by itself, even if, for the moment, no satisfactory measure exists for quantifying it in many-qubit systems. Entanglement gives rise to non-classical correlations between parts of a composite system. Information can be encoded in these correlations, leading to astonishing results. For instance, sharing an entangled pair of qubits makes it possible to communicate two classical bits by sending only one qubit (superdense coding [5]), or to teleport an unknown qubit state by sending two classical bits [6].
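Superdense coding is simple enough to verify with a small statevector calculation. The following Python sketch (our own illustration) encodes two classical bits by applying one of four Pauli operations to Alice's half of a shared Bell pair; Bob's joint Bell-basis measurement then recovers the bits deterministically:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared pair (|00> + |11>)/sqrt(2)

# Bell measurement basis; each column is decoded as a pair of classical bits.
bell_basis = np.column_stack([
    np.array([1, 0, 0, 1]) / np.sqrt(2),     # decoded as 00
    np.array([0, 1, 1, 0]) / np.sqrt(2),     # decoded as 01
    np.array([1, 0, 0, -1]) / np.sqrt(2),    # decoded as 10
    np.array([0, 1, -1, 0]) / np.sqrt(2),    # decoded as 11
])

encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

for bits, U in encodings.items():
    state = np.kron(U, I) @ bell             # Alice acts on her qubit, then sends it
    probs = np.abs(bell_basis.T @ state)**2  # Bob's Bell measurement outcomes
    print(bits, np.round(probs, 3))          # exactly one outcome has probability 1
```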
The resources of quantum physics can thus be exploited to solve particular tasks, such as the quantum Fourier transform [7], search problems in an unstructured ensemble, and, most importantly, the simulation of quantum systems [8, 9]. Today, it is still unknown how many problems can be solved efficiently with a quantum computer, so the real potential of a quantum computer compared to a classical one remains an open question. As a consequence, an important motivation for building such a quantum processor is to deepen our understanding of quantum physics, and in particular of the quantum-classical boundary.

The problem of decoherence

From the experimental point of view, the implementation of a quantum processor is obviously a formidable task, because quantum states are extremely fragile. Indeed, quantum superpositions of states are very sensitive to errors introduced by the coupling to the environment, which is unavoidable if manipulation and measurement are to be performed. This decoherence phenomenon (see chapter 4) was considered a fundamental limit for quantum computation until P. Shor [10], A. M. Steane [11] and D. Gottesman [12] proposed schemes for quantum error correction. The idea is to use redundancy, as with classical error correction, by entangling the main qubits with auxiliary ones, called ancillas. The correction of errors is possible by measuring these ancilla qubits, which yields enough information on the errors to properly correct the main qubits without losing quantum coherence. For instance, four auxiliary qubits are required to implement this scheme within the model of single-qubit errors. To work, these quantum error correcting codes require a minimal accuracy for every typical qubit operation: if the probability of error per gate is below a critical value of about one in 10⁴ operations, which is presently beyond the reach of all proposed implementations, then an arbitrarily long quantum computation could be performed.
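The flavor of these codes can be conveyed by the three-qubit bit-flip code, a much simpler scheme than the ones cited above. The Python sketch below is our classical caricature of it: two parity checks play the role of the ancilla measurements, locating which qubit flipped without revealing, in the quantum case, the encoded amplitudes.

```python
# Three-qubit bit-flip code, classical caricature: 0 -> 000, 1 -> 111.
# The parities q1 XOR q2 and q2 XOR q3 are what ancilla measurements extract.

def syndrome(bits):
    q1, q2, q3 = bits
    return (q1 ^ q2, q2 ^ q3)

# Map each syndrome to the qubit to flip back (None: no error detected).
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

word = [1, 1, 1]        # encoded logical 1
word[2] ^= 1            # a single bit-flip error on the third qubit
s = syndrome(word)      # (0, 1): the error is located...
if CORRECTION[s] is not None:
    word[CORRECTION[s]] ^= 1   # ...and corrected
print(s, word)          # (0, 1) [1, 1, 1]
```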

Physical implementations of qubits

The requirements for implementing a quantum computer are given by the DiVincenzo criteria [13]. One needs:
• scalability: a large number of reliable qubits;
• efficient initialization of the qubit state (reset);
• quantum coherence: coherence times long compared to the gate operation time;
• high-fidelity readout of individual qubits;
• the availability of a universal set of quantum gates.
As the physical systems currently used for building qubits do not simultaneously satisfy all these criteria, one can roughly classify qubits into two types:
• microscopic systems, like nuclear spins, ions or atoms, which are intrinsically quantum and have long coherence times, but are not easily scalable, mainly because of their small size;
• macroscopic systems, like quantum dots or superconducting circuits, which are easily scalable with lithography techniques, but not easily quantum, since they are strongly coupled to their environment.
Keeping the DiVincenzo criteria in mind, we shall now examine more precisely the advantages and drawbacks of the main physical systems used as qubits.

Microscopic qubits

Nuclear spins

In NMR quantum computing, qubits are encoded in the nuclear spins of a molecule placed in a magnetic field [14]. As it is impossible to measure a single nuclear spin, about 10²⁰ identical molecules are used to get a reasonable signal with a weak ensemble measurement. The state of the system is controlled by applying resonant radiofrequency pulses, and logic gates are obtained from the scalar coupling, an interaction between neighboring spins mediated by the electrons of the chemical bonds. Since k_BT is much higher than the Zeeman splitting, it is not possible to initialize the spins in a pure state; hence the state of the spin ensemble is highly mixed. This is a major issue, since a well-known input state has to be prepared before performing any quantum algorithm. Fortunately, this problem can be circumvented by preparing a particular state, called a pseudo-pure state, which behaves dynamically like a pure state [15].
Despite important breakthroughs in 1998 with the implementation of the Deutsch-Jozsa algorithm [16] and in 2001 with the implementation of Shor's factorizing algorithm [17], NMR quantum computing is limited by the preparation of this initial pseudo-pure state. This preparation either costs a time exponential in the number of qubits or reduces the signal-to-noise ratio exponentially, which makes NMR quantum computing not scalable.

Trapped ions

In 1995, I. Cirac and P. Zoller proposed the implementation of a quantum computer with trapped ions [18]. The qubits are stored either in a long-lived optical transition of an ion [19] or in the hyperfine levels of its ground state [20]. These systems are well known in metrology for their use as frequency standards, since coherence times of several minutes are achievable.
The ions are confined in a harmonic potential created by a Paul trap. Because they repel each other through the Coulomb interaction, the mean distance between ions is a few microns, allowing individual optical addressing for manipulation and measurement (see figure 1.3).
Figure 1.3: A set of electrodes creates a combination of DC and AC electric fields (called a Paul trap) suitable for confining ions. The ions repel each other through the Coulomb interaction, allowing individual optical addressing. The vibrational modes of the ion string are used for coupling the ions.
The idea for implementing quantum gates between ions is to couple the electronic degrees of freedom of the ions to the vibrational modes of the string via Raman transitions, and to use these phonons as a quantum bus.
With such a scheme, a CNOT gate [21], four-ion entangled states [22] and quantum error correction [23] have been achieved. However, the single-trap quantum computer is limited to a small number of ions. New "on-chip" architectures based on registers of interconnected traps should permit scaling to a much larger number of ions without inducing significant decoherence [24]. This type of architecture is presently the most promising for implementing a quantum computer.


Atoms in cavities

Atoms in cavities have been extensively used since the mid-1990s, mostly for studying quantum measurement and entanglement. Such systems consist of a high-Q cavity, which quantizes the spectrum of the vacuum and dramatically enhances the interaction between an atom and the electromagnetic field.
Quantum logic operations based on Rabi oscillations between the atom and the cavity have been achieved [25], and the progressive decoherence of an atom entangled with a mesoscopic coherent field has been observed [26], shedding light on the quantum-classical boundary. A three-particle GHZ (Greenberger, Horne, and Zeilinger) entangled state has been prepared [27], using the field of the cavity as a quantum memory for storing the information of an atom. In the context of quantum computing, these experiments highlight the roles of entanglement and decoherence, two major phenomena of this field. Nevertheless, the scalability of such a system is limited by the preparation of single atomic samples and by the cavity itself. New schemes for creating Bose-Einstein condensates on a chip are now being studied [28]; the idea is to use microfabricated circuits, similar to those used for ions, to trap and manipulate Rydberg atoms.

Macroscopic qubits based on electronic circuits

Electronic quantum bits divide into two classes. In the first class, qubits are encoded in the degrees of freedom of individual electrons trapped in, or propagating through, a semiconductor circuit; either the orbital state or the spin state of the electron can be used to make a qubit. In the second class, qubits are encoded in the quantum state of an entire electrical circuit. This strategy has only been used for superconducting circuits, which are the only ones with sufficiently weak decoherence for that purpose.

Semiconductor structures

In a semiconductor, transport properties rely on microscopic quantum effects, such as the modulation of the carrier density with an electric field. On a macroscopic scale, these properties are subject to a statistical averaging that suppresses any quantum behavior. One way to recover such behavior is to confine a small number of electrons in a quantum dot. If the length of the dot is comparable to the Fermi wavelength, then energy quantization and Coulomb repulsion permit one to isolate a single electron in a unique quantum state of the dot [29]. The most advanced experiments use a 2D electron gas suitably biased with gate electrodes (see figure 1.6). The spin state of an electron trapped in such a dot and subject to a magnetic field (parallel to the electron gas) has a long relaxation time, up to about 1 ms. Spin manipulation can be performed using ac magnetic fields, and the exchange interaction between neighboring dots provides a controllable coupling of the qubits. Single-shot readout is achieved by transferring the spin information onto the charge of a dot, which can be measured using a quantum point contact transistor [30]. However, the coherence time is of the order of 10 ns, much shorter than the relaxation time, due to the random magnetic fields produced by the nuclear spins of the GaAs substrate [31]. Although decoupling pulse methods inspired by NMR could in principle be used to suppress the decoherence due to these random fields, a more reliable solution would be to use materials having zero nuclear spin.
Alternative approaches have been proposed [32, 33], using ballistic electrons propagating in quantum wires. This is the so-called "flying qubit", which consists in encoding the information in the presence or absence of an electron propagating in an electronic mode of the circuit. The most promising systems are probably the edge channels of a 2D electron gas in the Quantum Hall Effect regime, which provide waveguides for electrons in which the phase coherence length can exceed several tens of μm. In addition, deterministic single-electron sources using quantum dots in the Coulomb blockade regime have been demonstrated [34].
A fundamental difference with other types of qubits is that here the two states are degenerate but well decoupled due to their spatial separation. They can nevertheless be coupled at will in quantum point contacts, which are the equivalent of beam splitters in quantum optics. The Coulomb interaction between electrons was first proposed for implementing logic operations between flying qubits, but new ideas exploiting single-electron sources, Fermi statistics and the linear superposition of electronic waves with beam splitters can lead to entangled two-electron states [35]. The detection of a single electron in a short time (a few ns) is an important issue that remains to be solved, and the mechanisms of dephasing and energy relaxation of an electron in an edge channel, which affect the coherence of the flying qubit, will have to be characterized.

Superconducting qubits

Due to the absence of dissipation, superconductivity gives the opportunity to use a collective quantum degree of freedom, namely, the superconducting phase.
When crossing the critical temperature of a superconductor, the electrons bind to form Cooper pairs. The superconducting ground state can be seen as resulting from the Bose-Einstein condensation of these Cooper pairs into a single macroscopic quantum state called the superconducting condensate. This condensate is fully characterized by the order parameter Δ of the transition:

Δ = |Δ| e^{iθ},

where |Δ| is the superconducting energy gap, which isolates the ground state from the first excitations, and θ is the superconducting phase. When k_BT is sufficiently low compared to the gap, the microscopic excitations are frozen out and the superconducting phase becomes a robust macroscopic quantum degree of freedom. The energy spectrum of an isolated superconducting electrode thus consists of a non-degenerate ground state well separated from the excited quasiparticle states.
When two superconducting electrodes are weakly coupled by tunneling across a thin insulating barrier, a Josephson junction is formed. It is the simplest possible superconducting circuit and the building block of superconducting qubits. This circuit is characterized by two energy scales: E_J, the Josephson energy, characterizing the strength of the tunnel coupling, and E_C = (2e)²/2C, the charging energy of one Cooper pair on the capacitance C of the junction (see figure 1.5). These two energies enter the Hamiltonian of the junction:
H = E_C N̂² − E_J cos θ̂,

where θ̂ is the difference between the phases of the two electrodes and N̂ is the number of Cooper pairs having crossed the junction. These two variables are quantum-mechanically conjugate: [N̂, θ̂] = i.
When E_C ≫ E_J, i.e. for small junctions (below 0.1 μm² for aluminium junctions), the eigenstates of the circuit are close to charge number states, whereas in the opposite limit they are close to phase states.
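This crossover is easy to check numerically. The Python sketch below (our own illustration) diagonalizes H in a truncated charge basis, where cos θ̂ couples neighboring charge states with matrix element 1/2:

```python
import numpy as np

def junction_levels(Ec, Ej, ncut=20):
    # H = Ec*N^2 - Ej*cos(theta) in the charge basis {|N>, -ncut <= N <= ncut};
    # cos(theta) has matrix elements <N|cos(theta)|N+1> = 1/2.
    n = np.arange(-ncut, ncut + 1)
    dim = 2 * ncut + 1
    H = np.diag(Ec * n**2) - (Ej / 2) * (np.eye(dim, k=1) + np.eye(dim, k=-1))
    return np.linalg.eigvalsh(H)

# Charge regime Ec >> Ej: levels close to Ec*N^2, eigenstates close to charge states.
print(junction_levels(Ec=1.0, Ej=0.05)[:4])
# Phase regime Ej >> Ec: nearly harmonic spectrum, level spacing ~ sqrt(2*Ec*Ej).
print(np.diff(junction_levels(Ec=0.05, Ej=5.0)[:4]))
```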
Due to the non-linearity of the Josephson Hamiltonian, Josephson junctions can be used to build systems having an anharmonic, atomic-like spectrum, the two lowest energy levels of which form the qubit. Although superconducting qubits are so far implemented with circuits made of several junctions, their behavior is always governed by the comparison between a principal charging energy and a principal Josephson energy, defined by the size of the junctions and the topology of the circuit. Depending on the ratio of these two energies E_J and E_C, several types of superconducting qubits have been realized:
• the charge qubit (E_J/E_C ≪ 1) [36, 37, 38];
• the flux qubit (E_J/E_C ≈ 10) [39];
• the phase qubit (E_J/E_C ≫ 1) [40].
Figure 1.6: Different types of superconducting qubits depending on the ratio E_J/E_C; from left to right: NIST, Delft, Saclay, Chalmers qubits. For large E_J compared to E_C, the eigenstates of the system are almost phase states except near degeneracy points, whereas for large E_C, the eigenstates are almost charge states.

The Quantronium

In this thesis, we have investigated the quantronium, a charge-flux qubit (E_J/E_C ≈ 1) described in detail in chapter 2.
The first successful manipulation of the quantum state of a circuit was performed in 1999 by Nakamura et al. [36], using a circuit derived from the Cooper pair box [41], which is the simplest tunable Josephson circuit.
Since a very short coherence time was also obtained in 2000 for the flux qubit [42], it became clear that fighting decoherence was mandatory for making useful qubits. Although decoherence sources were not analyzed in detail at that time, the dephasing induced by the variations of the qubit transition frequency due to fluctuations of the control parameters appeared as an important source of decoherence [43]. The quantronium, developed since 2001 in the Quantronics group [44], is the first qubit circuit with a design that protects it from the dephasing resulting from random fluctuations of the control parameters.

Description of the circuit

The quantronium is also derived from the Cooper pair box. It is made of a superconducting loop interrupted by two small Josephson junctions that define an island (see figure 1.7). This island can be biased by a gate voltage Vg, and the flux Φ in the loop can be tuned by an external magnetic field. These two knobs can be recast in terms of the reduced parameters N_g = C_gV_g/2e, the reduced charge induced on the island by the gate, and δ = Φ/φ₀ (where φ₀ = ℏ/2e is the reduced flux quantum), the superconducting phase difference across the two junctions in series. These two parameters make it possible to tune the properties of the circuit.
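With these notations, the effective Hamiltonian of the split box takes the standard form found in the Cooper pair box literature; we quote it here only as a reminder, assuming symmetric junctions with total Josephson energy E_J (the full derivation is given in chapter 2):

H = E_C (N̂ − N_g)² − E_J cos(δ/2) cos θ̂,

where N̂ is the number of excess Cooper pairs on the island and θ̂ is its conjugate phase. The two control parameters N_g and δ enter separately the charging and Josephson terms, which is what allows the optimal working point strategy discussed in chapter 2.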

Table of contents:

1 Introduction and summary 
1.1 Microelectronic industry scaling down to mesoscopic physics
1.2 Quantum computing
1.2.1 Evaluating the complexity of a problem
1.2.2 Quantum resources
1.2.3 The problem of decoherence
1.3 Physical implementations of qubits
1.3.1 Microscopic qubits
1.3.2 Macroscopic qubits based on electronic circuits
1.4 The Quantronium
1.4.1 Description of the circuit
1.4.2 Readout of the quantum state
1.5 NMR-like manipulation of the qubit
1.5.1 Rabi oscillations
1.5.2 Combined rotations
1.5.3 Implementation of robust operations
1.6 Analysis of decoherence during free evolution
1.6.1 Noise sources in the quantronium
1.6.2 Relaxation measurement
1.6.3 Dephasing measurement
1.6.4 Summary and analysis of decoherence during free evolution
1.7 Decoherence during driven evolution
1.8 Towards Quantum Non Demolition measurement of a qubit
1.8.1 Principle of the ac dispersive readout of the quantronium: the JBA
1.8.2 Characterization of the microwave readout circuit
1.8.3 Measurement of the quantronium qubit with a JBA
1.8.4 Partially non-demolition behavior of the readout
1.9 Conclusion
2 A superconducting qubit: the Quantronium 
2.1 The Cooper pair box
2.2 The Quantronium
2.2.1 Quantronium circuit
2.2.2 Energy spectrum
2.2.3 The optimal working point strategy
2.2.4 Loop current
2.3 Measuring the quantum state of the quantronium
2.3.1 Principle of the switching readout
2.3.2 Dynamics of a current biased Josephson junction
2.3.3 Escape dynamics of the readout junction coupled to the split Cooper pair box
2.4 Experimental setup and characterization of the readout
2.4.1 Current biasing line
2.4.2 Measuring line
2.4.3 Experimental characterization of the readout junction
2.4.4 Experimental characterization of the quantronium sample A
2.4.5 Spectroscopy of the qubit
2.4.6 Back-action of the readout on the qubit
2.4.7 Conclusion
3 Manipulation of the quantum state of the Quantronium 
3.0.8 Bloch sphere representation
3.1 Manipulation of the qubit state with non-adiabatic pulses
3.1.1 Non-adiabatic DC pulses
3.1.2 Non-adiabatic AC resonant pulses
3.2 Combination of rotations: Ramsey experiments
3.2.1 Principle
3.2.2 Experimental results
3.3 Manipulation of the quantum state with adiabatic pulses: Z rotations
3.3.1 Principle
3.3.2 Experimental setup and results
3.4 Implementation of more robust operations
3.4.1 Composite rotations
3.4.2 Fidelity of unitary operations
3.4.3 The CORPSE sequence
3.4.4 Implementation of a robust NOT operation
3.5 Conclusion
4 Analysis of decoherence in the quantronium 
4.1 Introduction
4.1.1 Decoherence
4.1.2 Decoherence in superconducting quantum bits
4.1.3 Decoherence sources in the Quantronium circuit
4.2 Theoretical description of decoherence
4.2.1 Expansion of the Hamiltonian
4.2.2 Depolarization (T1)
4.2.3 Pure dephasing
4.2.4 1/f noise: a few strongly coupled fluctuators versus many weakly coupled ones
4.2.5 Decoherence during driven evolution
4.3 Experimental characterization of decoherence during free evolution
4.3.1 Longitudinal relaxation: time T1
4.3.2 Transverse relaxation: coherence time T2
4.3.3 Echo time TE
4.3.4 Discussion of coherence times
4.4 Decoherence during driven evolution
4.4.1 Coherence time T̃2 determined from Rabi oscillations
4.4.2 Relaxation time T̃1 determined from spin-locking experiments
4.5 Decoherence mechanisms in the quantronium: perspectives, and conclusions
4.5.1 Summary of decoherence mechanisms in the quantronium
4.5.2 Does driving the qubit enhance coherence?
4.5.3 Coherence and quantum computing
5 Towards a Non Demolition measurement of the quantronium 
5.1 Readout strategies
5.1.1 Drawbacks of the switching readout
5.1.2 New dispersive strategies
5.2 The Josephson bifurcation amplifier
5.2.1 Principle of the qubit state discrimination
5.2.2 Dynamics of the JBA at zero temperature
5.2.3 Solution stability and dynamics in the quadrature phase-space
5.2.4 Theory at finite temperature
5.2.5 Dykman's approach to the bifurcation
5.3 Experimental characterization of the JBA
5.4 Measurement of the qubit with a JBA
5.4.1 Characterization of the QND behavior of the readout
5.5 Conclusion
6 Conclusions and perspectives 
