Solid Phase Epitaxial Regrowth of intrinsic silicon 


Atomistic simulation

Technology Computer Aided Design (TCAD) process simulation tools are widely used in the semiconductor industry to support technology development. They aim to simulate the process steps of the fabrication of electronic devices and to predict the final impurity distributions, stress distribution and device geometry. The results are generally used as an input for device simulators to compute electrical characteristics. The coupling between process and device simulators provides a complete simulation flow of a given process. This makes it possible to analyze the influence of process parameters on electrical characteristics and therefore to predict the impact of a process optimization on device performance without having to test it experimentally. However, to be predictive, process simulation tools need to account for the many physical phenomena occurring at the different steps of the process. They generally rely on continuum models, i.e. a set of phenomenological differential equations that are solved using finite element methods (FEM) or finite volume methods (FVM). Such models therefore ignore the atomistic nature of solids. However, as the size of the devices is scaled down, atomistic methods become very appealing to provide a description of the materials at the atomistic scale and to give insight into the physical mechanisms, as illustrated in Fig. 1.13.
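As a minimal illustration of such a continuum model, the sketch below advances a one-dimensional dopant profile according to Fick's second law with an explicit finite-volume scheme; the constant diffusivity, grid, boundary conditions and initial profile are hypothetical choices for the example, not parameters of any particular TCAD tool.

```python
import numpy as np

def diffuse_1d(c, D, dx, dt, n_steps):
    """Advance a 1D concentration profile with an explicit finite-volume
    discretization of Fick's second law, dc/dt = D * d2c/dx2.
    Zero-flux (Neumann) boundaries are assumed at both ends of the domain."""
    c = c.copy()
    for _ in range(n_steps):
        # Fluxes across the internal cell faces: F = -D * dc/dx
        flux = -D * (c[1:] - c[:-1]) / dx
        # Update each cell from the divergence of the fluxes,
        # with zero flux imposed at the two domain boundaries.
        c[1:-1] += dt / dx * (flux[:-1] - flux[1:])
        c[0]    += dt / dx * (-flux[0])
        c[-1]   += dt / dx * (flux[-1])
    return c

# Hypothetical example: a shallow Gaussian (implant-like) profile annealed briefly.
x = np.linspace(0.0, 100e-9, 201)                 # 100 nm domain, dx = 0.5 nm
c0 = 1e20 * np.exp(-((x - 20e-9) / 10e-9) ** 2)   # cm^-3, peak at 20 nm depth
D = 1e-18                                          # m^2/s, illustrative value
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                               # respects the explicit stability limit
profile = diffuse_1d(c0, D, dx, dt, n_steps=1000)
```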
Atomistic approaches actually refer to a broad class of methods that we should clarify before going further. The most fundamental of these methods aim to compute the electronic structure of a system by solving more or less approximate quantum-mechanical equations. The numerical methods used to perform these calculations can be split into two categories:
• ab-initio methods
• semi-empirical methods.
Electronic structure calculations can then be used to compute various material properties or formation energies. However, these methods cannot predict the dynamical properties of a system. This leads us to the second class of atomistic methods, which can be used to model atomic or electronic transport phenomena. It is important to emphasize that the electronic structure calculations presented above may be used in conjunction with these methods to describe the interactions between the particles during transport. Since electronic transport is out of the scope of this manuscript, we will focus hereinafter on atomic transport. The atomistic methods used to simulate atomic transport aim to describe the evolution of the interacting atoms of a system, generally within the Born–Oppenheimer approximation. The most direct, and therefore the most fundamental, of these techniques is the molecular dynamics (MD) method, which consists of simulating the evolution of interacting atoms by integrating their classical equations of motion. The forces between the atoms have to be correctly described using potential energy surfaces (PES) that can be computed from the electronic structure (using the atomistic methods described above) or from an empirical interatomic potential. However, a major limitation of this technique resides in the accessible simulation timescales, which do not exceed about 10^-6 s. In contrast, lattice or off-lattice kinetic Monte Carlo (KMC) approaches overcome this limitation by evolving a system from state to state to account for long-timescale dynamics. Intermediate approaches can also be used to extend the timescales of MD simulations [Voter et al. 2002]. We will briefly describe MD and KMC methods hereafter.

Molecular Dynamics methods

Molecular Dynamics methods have been developed since the 1950s [Alder & Wainwright 1957] and have become very popular for treating problems related to condensed matter physics and other scientific disciplines. The basic principle relies on the assumption that the motion of particles can be treated classically within the Born–Oppenheimer approximation. For a system of N particles, the equation of motion of each atom i is given by Newton's law: mi d²ri/dt² = Fi(r1, r2, …, rN), (1.1) where mi is the mass of atom i, ri its position and Fi the force acting on it.
Fig. 1.14 shows a simplified Molecular Dynamics algorithm highlighting the main steps. At the beginning of the simulation, the system has to be initialized, i.e. initial positions and velocities have to be assigned to all the particles present in the simulation domain. Then, the forces Fi acting on each particle are computed. In classical molecular dynamics simulations an empirical potential is used in order to treat domains that are large enough to correctly capture the physical phenomenon of interest. However, the main concern with these potentials is their reliability in accurately predicting quantities for which they were not fitted. In particular, the choice of the interatomic potential depends on the studied physical phenomenon. Commonly used interatomic potentials for silicon include the Tersoff potential [Tersoff 1988], the Stillinger–Weber potential [Stillinger & Weber 1985] and its recent parametrization called SW115 [Albenze & Clancy 2005], or the environment dependent interatomic potential (EDIP) [Bazant et al. 1997, Justo et al. 1998]. Finally, once the forces have been computed, the new atomic positions are obtained by integrating the equation of motion, Eq. 1.1. This step is commonly achieved by Verlet integration [Verlet 1967, Verlet 1968]. Molecular dynamics methods are a vast subject that would require a complete book and are therefore out of the scope of this section. The interested reader is invited to consult the dedicated literature such as [Frenkel & Smit 2001].
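As an illustration of the integration step, the sketch below implements a velocity-Verlet loop in Python under simplifying assumptions: a pairwise harmonic potential stands in for the Tersoff, Stillinger–Weber or EDIP potentials actually used for silicon, and the masses, spring constant and time step are arbitrary example values in reduced units.

```python
import numpy as np

def harmonic_forces(pos, k=1.0, r0=1.0):
    """Pairwise harmonic forces F_ij = -k (|r_ij| - r0) r_ij/|r_ij|.
    A simple stand-in for the empirical silicon potentials (Tersoff,
    Stillinger-Weber, EDIP); only meant to make the loop runnable."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            d = np.linalg.norm(rij)
            f = -k * (d - r0) * rij / d     # force on atom j due to atom i
            forces[j] += f
            forces[i] -= f
    return forces

def velocity_verlet(pos, vel, mass, dt, n_steps, force_func=harmonic_forces):
    """Integrate Newton's equations m d2r/dt2 = F (Eq. 1.1) with velocity Verlet."""
    forces = force_func(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * forces / mass          # first half kick
        pos += dt * vel                          # drift
        forces = force_func(pos)                 # forces at the new positions
        vel += 0.5 * dt * forces / mass          # second half kick
    return pos, vel

# Hypothetical dimer oscillating about its equilibrium bond length (reduced units).
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
mass = np.array([[1.0], [1.0]])                  # broadcast over x, y, z
pos, vel = velocity_verlet(pos, vel, mass, dt=1e-2, n_steps=1000)
```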

Kinetic Monte Carlo methods

The concept of Monte Carlo (MC) simulation covers the methods relying on the use of random numbers in their algorithms. The name actually refers to the famous eponymous casino in Monaco. These methods were introduced in the late 1940s at Los Alamos National Laboratory [Metropolis & Ulam 1949] and have seen extensive development since then. Various algorithms have thus emerged, their only similarity being the use of random numbers [Landau & Kurt 2009]. Among them, the most famous include the Metropolis Monte Carlo [Metropolis et al. 1953] and Kinetic Monte Carlo (KMC) [Young & Elcock 1966] algorithms. Metropolis Monte Carlo was developed in the 1950s to study the static properties of a system. In contrast, the KMC method was introduced in the 1960s to study the dynamical properties of a system and will be described hereinafter.
Before going further into the details of the KMC technique, it is important to emphasize the assumptions it is based on. In Molecular Dynamics most of the time is spent describing atomic vibrations, while the microscopic phenomena of interest (i.e. atomic jumps) occur over a longer timescale. Therefore, it can be assumed that the evolution of a system is characterized by occasional transitions from one state to another, where each state corresponds to an energy basin and two adjacent states are separated by an energy barrier (Ea), as depicted in Fig. 1.15. The potential energy barrier has to be large in comparison with kB T for the event to be considered rare. In addition, after a transition has occurred, it can be considered that the particle loses the memory of how it entered the energy basin, and the escape probability becomes independent of previous events. The processes are therefore Markovian [Gardiner 2009]. Then, if the transition rates are known a priori, it is possible to describe the evolution of a system over much longer timescales than MD does. This is indeed the underlying concept of the KMC method.
It can be shown that transitions are described by a Poisson process, which is commonly used in the theory of stochastic processes [Gardiner 2009, Fichthorn & Weinberg 1991]. Consider an event with rate r; the transition probability density f(t), giving the probability rate that the transition occurs at time t, is: f(t) = r exp(−rt). (1.2)
Generalizing to N independent Poisson processes with rates ri results in a Poisson process with rate R = Σi ri, and the transition probability density for this process is: F(t) = R exp(−Rt). (1.3) The evolution of a system can be described by picking random events with a probability proportional to their rate, as shown in Fig. 1.16. The events are put end to end with a size proportional to their rate, a random number s is generated on the interval (0, 1), and the event n satisfying Rn−1 < sR ≤ Rn is chosen. To evolve the clock of the system, the time can be incremented by 1/R (the mean time period between successive events). More generally, the clock of the system can be incremented by a random time Δt drawn from Eq. 1.3: Δt = −log(s′)/R, (1.4) where s′ is a random number on the interval (0, 1). The use of random numbers to evolve the clock of the system gives a better description of the stochastic nature of the involved processes and is justified because ⟨log(s′)⟩ = −1. This procedure corresponds to the most common KMC algorithm, referred to as the residence-time algorithm [Bortz et al. 1975], which is presented schematically in Fig. 1.17. It is worth noting that this algorithm will be used throughout this manuscript (see section 1.2.2.2 for additional information).
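The sketch below illustrates one step of the residence-time algorithm in Python, assuming the event rates are known a priori; the three example rates are hypothetical and the code is only meant to make the event selection and clock update explicit.

```python
import math
import random

def residence_time_step(rates, rng=random):
    """One step of the residence-time (BKL) KMC algorithm described above.
    `rates` holds the a priori known rates r_i of the N possible events;
    returns the index of the selected event and the time increment dt."""
    total = sum(rates)                        # R = sum of all r_i
    # Select the event n such that R_(n-1) < s*R <= R_n (see Fig. 1.16).
    target = rng.random() * total
    cumulative = 0.0
    for n, r in enumerate(rates):
        cumulative += r
        if target <= cumulative:
            break
    # Advance the clock by dt = -log(s')/R (Eq. 1.4); 1 - random() lies in (0, 1],
    # which avoids taking log(0).
    dt = -math.log(1.0 - rng.random()) / total
    return n, dt

# Hypothetical example: three competing events with arbitrary rates (in s^-1).
rates = [1.0e3, 2.5e2, 4.0e1]
time, counts = 0.0, [0, 0, 0]
for _ in range(10000):
    event, dt = residence_time_step(rates)
    counts[event] += 1
    time += dt
# Events are picked in proportion to their rates; the mean dt tends to 1/R.
```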


Transition State Theory (TST)

The transition state theory (TST) approximation is the formalism used in the Kinetic Monte Carlo method and in other accelerated dynamics methods to define the transition rates [Voter et al. 2002]. It was developed in 1935 by Eyring, Evans and Polanyi in order to provide a theoretical framework for describing reaction rates [Laidler & King 1983]. Within TST, the transition rate kTST is given by the canonical expectation of the flux through the dividing surface between an initial state A and a final state B, as shown in Fig. 1.18. It can be expressed as [Glasstone et al. 1941]:
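The full flux expression is not reproduced in this excerpt. For reference, and assuming the harmonic approximation of TST (hTST) that is commonly used to supply rates to KMC simulations, the rate reduces to the Arrhenius form kTST ≈ ν0 exp(−Ea/(kB T)), where Ea is the energy barrier between the minimum of state A and the saddle point, and ν0 is an effective attempt frequency built from the normal-mode vibrational frequencies at the minimum and at the saddle point (Vineyard's prefactor). This is a standard reduction quoted as background, not the expression of [Glasstone et al. 1941] referred to above.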

Presentation of MMonCa

MMonCa is a recent multi–material oriented KMC simulator that has been used in the framework of this PhD to implement new models. It was created by Dr. Ignacio Martín–Bragado at the IMDEA Materials Institute (Madrid, Spain). It is written in C++ and is integrated with the TCL library for user interactions. It contains two independent modules that address different levels of modeling:
• the lattice kinetic Monte Carlo (LKMC) module relies on the lattice of the considered material and is used to simulate phase changes such as solid phase epitaxial regrowth (SPER) where crystalline orientation plays an important role.
• the object kinetic Monte Carlo (OKMC) module is an off-lattice simulator and is used to study defect evolution inside a solid. A presentation of this module can be found in [Martin-Bragado et al. 2013].
The simplified structure of the MMonCa simulator is shown in Fig. 1.19. The user launches a simulation through an input script containing TCL commands and special commands that are internally mapped to MMonCa C++ methods in order to carry out dedicated tasks (initialize the simulation domain, simulate an anneal, etc.).

Goal of this work

The goal of this thesis is to model junction formation through SPER by means of an atomistic lattice kinetic Monte Carlo method. Chapter 2 explains the model for intrinsic silicon and its ability to predict faceting and twin defect generation, which may play an important role in junction formation in advanced devices. Chapter 3 generalizes the model to take into account the influence of non–hydrostatic stress, which strongly impacts the recrystallization kinetics. Finally, in Chapter 4 the influence of dopant impurities is introduced into the model. The final chapter summarizes this dissertation and suggests future directions of research.

Thermodynamics and kinetics of crystallization

Thermodynamics of amorphous to crystalline transition

At standard temperature and pressure conditions, silicon atoms arrange into a diamond cubic crystal structure where each atom is covalently bonded to four neighbors. Indeed, to minimize the overall energy, the 3s and 3p silicon orbitals hybridize to form four tetrahedral sp3 orbitals [Ashcroft & Mermin 1976]. The crystalline phase is therefore the lowest energy form of silicon. Amorphous silicon (α-Si) is formed by introducing long–range disorder in the silicon lattice and can be obtained experimentally by different methods such as rapid quenching, ion implantation or low temperature deposition. Its structure has been the subject of experimental and theoretical studies for many years. It can be described as a covalently bonded, fourfold coordinated continuous random network (CRN) in which the angles between bonds deviate from the ideal tetrahedral angle while bond lengths are only slightly distorted [Zachariasen 1932]. Experimental data based on high–energy x–ray and neutron diffraction measurements give an average bond angle distortion (Δθ) of 9–10° [Laaziri et al. 1999], whereas data extracted from Raman spectroscopy give a larger range of 9–12° [Kail et al. 2010]. CRN models based on a bond–swapping algorithm have been shown to reproduce the experimental radial distribution function (RDF) of α-Si and α-Ge [Wooten et al. 1985, Barkema & Mousseau 2000]. Nevertheless, in a recent paper, Treacy and Borisenko point out that CRN models fail to explain fluctuation electron microscopy (FEM) data observed by several groups showing the presence of topological crystallinity in α-Si ordering at the 10 to 20 Å length scale [Treacy & Borisenko 2012]. They suggest instead that a paracrystalline model is consistent with both FEM and RDF data. The real structure of α-Si also exhibits complex features that depend on the preparation conditions as well as on the thermal history. In particular, it should be noted that strong differences may arise between hydrogenated (α-Si:H) and pure (α-Si) amorphous silicon. In this manuscript, we will mainly focus on α-Si produced by ion implantation.
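To make the radial distribution function mentioned above concrete, the sketch below estimates g(r) from a set of atomic coordinates in a periodic cubic box; the box size, bin width and random test configuration are illustrative assumptions and do not reproduce the analysis procedures of the cited works.

```python
import numpy as np

def radial_distribution(pos, box, r_max, n_bins=200):
    """Estimate the radial distribution function g(r) of a periodic cubic box.
    pos : (N, 3) array of atomic positions; box : edge length of the cubic cell."""
    n = len(pos)
    rho = n / box**3                              # average number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    # Normalize the pair counts by the ideal-gas expectation in each spherical shell.
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = 2.0 * hist / (n * rho * shell_vol)
    return 0.5 * (edges[1:] + edges[:-1]), g

# Hypothetical usage on a random (ideal-gas-like) configuration, for which g(r) ≈ 1.
pos = np.random.default_rng(0).uniform(0.0, 20.0, size=(500, 3))
r, g = radial_distribution(pos, box=20.0, r_max=8.0)
```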
Beyond the discussion about the structural topology of amorphous silicon, it is clear that α-Si always has a higher Gibbs free energy (Ga) than c-Si (Gc) and exists as a kinetically frozen metastable phase. As a consequence, the transition from the amorphous state to the crystalline state is thermodynamically favorable since it releases the crystallization free energy ΔGac ≡ Ga − Gc, defined as: ΔGac(T) = ΔHac(T) − T ΔSac(T), (2.1) where ΔHac and ΔSac are the crystallization enthalpy and entropy respectively. ΔGac can therefore be determined by evaluating the crystallization enthalpy and entropy. Both are calculated from the excess specific heat of the amorphous phase (cp,a) with respect to the crystalline phase (cp,c), Δcp ≡ cp,a − cp,c, which can be measured by differential scanning calorimetry (DSC) [Roura et al. 2011].
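Assuming the standard thermodynamic relations dH/dT = cp and dS/dT = cp/T (a textbook link stated here for completeness, not the calibrated expressions of [Roura et al. 2011]), the crystallization enthalpy and entropy follow from the measured Δcp by integration from a reference temperature T0 at which ΔHac is known: ΔHac(T) = ΔHac(T0) + ∫_T0^T Δcp(T′) dT′ and ΔSac(T) = ΔSac(T0) + ∫_T0^T [Δcp(T′)/T′] dT′, which, inserted into Eq. 2.1, give ΔGac(T).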

Table of contents :

Introduction 
1 Context and goal of this work 
1.1 Technological context
1.1.1 3D sequential integration
1.1.2 Low thermal budget process
1.1.2.1 Amorphization
1.1.2.2 Recrystallization
1.1.3 Challenges for junction formation
1.1.3.1 Amorphization engineering
1.1.3.2 Recrystallization control
1.1.3.3 Influence of End Of Range defects
1.1.3.4 Dopant activation
1.1.3.5 The need of numerical simulation
1.2 Atomistic simulation
1.2.1 Molecular Dynamics methods
1.2.2 Kinetic Monte Carlo methods
1.2.2.1 Transition State Theory (TST)
1.2.2.2 Presentation of MMonCa
1.3 Goal of this work
2 Solid Phase Epitaxial Regrowth of intrinsic silicon 
2.1 Background
2.1.1 Thermodynamics and kinetics of crystallization
2.1.1.1 Thermodynamics of amorphous to crystalline transition
2.1.1.2 Kinetics of Solid Phase Epitaxial Regrowth (SPER)
2.1.1.3 Kinetics of Random Nucleation and Growth (RNG)
2.1.2 Anisotropy and defects formation
2.2 LKMC model
2.2.1 Anisotropic growth
2.2.1.1 Implemented model
2.2.1.2 Plane detection
2.2.1.3 The particular case of {100} microscopic configurations
2.2.2 Defect formation
2.3 Planar regrowth
2.3.1 Model calibration
2.3.2 (100) substrate
2.3.3 (110) substrate
2.3.4 (111) substrate
2.4 Multidirectional SPER
2.4.1 Influence of trenches
2.4.2 Regrowth of box–shaped amorphous regions
2.4.3 SPER in FDSOI MOSFETs
2.4.3.1 SPER of ⟨110⟩–aligned α-Si(100)
2.4.3.2 SPER of ⟨100⟩–aligned α-Si(100)
2.5 Summary
3 Impact of stress on Solid Phase Epitaxial Regrowth 
3.1 Conventions and notations
3.2 Background
3.2.1 Influence of hydrostatic pressure: the notion of activation volume
3.2.2 Generalization to a non–hydrostatic stress
3.2.2.1 The concept of activation strain tensor
3.2.2.2 A dual–timescale model of stressed SPER
3.3 LKMC Model
3.4 Atomistic simulation of SPER upon stress
3.4.1 In–plane uniaxial stress
3.4.1.1 Regrowth velocity
3.4.1.2 Interface roughness
3.4.2 Normal uniaxial stress
3.4.3 Hydrostatic pressure
3.5 Summary
4 Influence of impurities on Solid Phase Epitaxial Regrowth 
4.1 Background
4.1.1 Solid solubility and metastable solubility
4.1.1.1 Solid solubility
4.1.1.2 Metastable Solubility
4.1.2 Impurity–related mechanisms during SPER
4.1.2.1 Impurity–dependent regrowth velocity
4.1.2.2 Impurity redistribution
4.2 Dopant–enhanced regrowth velocity
4.2.1 Analytical modeling
4.2.2 Atomistic LKMC modeling
4.2.2.1 Electrostatic calculation
4.2.2.2 LKMC model
4.2.2.3 Results
4.3 Summary
5 Summary and suggestions for future work 
5.1 Summary
5.1.1 Regrowth anisotropy and regrowth–induced defects
5.1.2 Influence of stress
5.1.3 Influence of dopants
5.2 Suggestions for future work
A Amorphous/Crystalline interface extraction
A.1 Interface position
A.2 Interface roughness
A.3 Interface velocity
B Numerical solution of the 3D Poisson equation
B.1 Linear Poisson equation
B.2 Non-linear Poisson equation
Résumé en français
Introduction
Chapitre 1: Contexte et but de ce travail
Chapitre 2: Recristallisation par épitaxie en phase solide du silicium intrinsèque
Chapitre 3: Impact de la contrainte sur la recristallisation par épitaxie en phase solide
Chapitre 4: Influence des impuretés sur la recristallisation par épitaxie en phase solide
Conclusion et suggestions pour les recherches futures
Bibliography 
List of communications
