Equalization, Decoding and Joint Equalization and Decoding 


SIMULATION RESULTS

The various SISO equalizers discussed in this chapter are used as replacements for the optimal MAP equalizer in a turbo equalizer, and the resulting turbo equalizers are simulated for a number of scenarios. The corresponding EXIT charts are shown in order to evaluate the effectiveness of each low-complexity SISO equalizer. In each case the performance of the reduced-complexity turbo equalizer is compared to that of a conventional turbo equalizer (CTE).
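As a point of reference for reading the EXIT charts that follow, the information exchanged between equalizer and decoder can be summarized by the mutual information between the transmitted bits and their LLRs. The sketch below is illustrative rather than the measurement procedure used in the thesis; it uses the common histogram-free time-average approximation and a consistent-Gaussian model for sweeping the a-priori axis, and the function and variable names are assumptions.

```python
import numpy as np

def mutual_information(llrs, bits):
    """Estimate I(X; L) for BPSK symbols x in {+1, -1} and LLRs L.

    Histogram-free time-average approximation:
        I ~ 1 - E[ log2(1 + exp(-x * L)) ],
    valid when the LLRs are consistent (true log-likelihood ratios).
    """
    x = 1.0 - 2.0 * np.asarray(bits)              # bit 0 -> +1, bit 1 -> -1
    llrs = np.asarray(llrs, dtype=float)
    return 1.0 - np.mean(np.logaddexp(0.0, -x * llrs)) / np.log(2.0)

# Illustrative use: a-priori LLRs from the consistent Gaussian model
# L_A = (sigma_A^2 / 2) * x + n,  n ~ N(0, sigma_A^2), as commonly used
# to generate the a-priori input of an EXIT chart measurement.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=100_000)
x = 1.0 - 2.0 * bits
sigma_a = 2.0
llr_a = 0.5 * sigma_a**2 * x + sigma_a * rng.standard_normal(bits.size)
print(mutual_information(llr_a, bits))            # a-priori information I_A
```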

MMSE-LE and MMSE-DFE

In [12] the MMSE-LE and MMSE-DFE based turbo equalizers are simulated through a channel h = {0.227, 0.46, 0.688, 0.46, 0.227} of length L = 5, using a rate 1/2 recursive systematic convolutional encoder with octal generator polynomial G = {7, 5}, where the first generator is the feedback part. Nu = 2^15 uncoded bits are encoded to Nc = 2^16 coded bits and interleaved using an S-random interleaver with S = 0.5*sqrt(0.5*Nc). Fig. 3.8 shows the performance of the MMSE-LE and MMSE-DFE based turbo equalizers compared to that of the CTE employing a MAP equalizer. Although the convergence of the MMSE-LE-TE is worse than that of the CTE, the MMSE-LE-TE still achieves coded AWGN performance after 14 iterations, as if the CIR were h = 1. The MMSE-DFE, however, does not perform well, even after a large number of iterations. Fig. 3.9 shows the corresponding EXIT chart for Eb/N0 = 4 dB, from which it is clear that the mutual information at the input and the output of the MMSE-LE equalizer converges to 1, while the MMSE-DFE fails to transfer much information.
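The exact interleaver construction used in [12] is not reproduced in this excerpt; the following is a sketch of a typical S-random (spread) interleaver design with the stated spreading factor S = 0.5*sqrt(0.5*Nc). The function name, greedy search and restart strategy are assumptions.

```python
import numpy as np

def s_random_interleaver(n, s, rng=None, max_restarts=50):
    """Build a length-n S-random permutation: every accepted index must
    differ by more than s from each of the s most recently accepted
    indices.  The greedy search restarts if it paints itself into a corner."""
    rng = np.random.default_rng() if rng is None else rng
    s = int(s)
    for _ in range(max_restarts):
        candidates = list(rng.permutation(n))
        perm = []
        while candidates:
            for i, c in enumerate(candidates):
                if all(abs(c - p) > s for p in perm[-s:]):
                    perm.append(candidates.pop(i))
                    break
            else:                      # no candidate satisfies the constraint
                break
        if not candidates:             # success: every index was placed
            return np.array(perm)
    raise RuntimeError("failed to build S-random interleaver; reduce s")

# Parameters from the text: Nc = 2^16 coded bits, S = 0.5 * sqrt(0.5 * Nc) ~ 90
Nc = 2**16
S = 0.5 * np.sqrt(0.5 * Nc)
# pi = s_random_interleaver(Nc, S)     # slow for large Nc; shown for illustration
```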

SFE

The SFE based turbo equalizer (SFE-TE) in [10] is simulated for a number of dispersive channels. A rate 1/2 recursive convolutional encoder with generator (1 + D^2)/(1 + D + D^2) was used before interleaving. The first channel is of length L = 5 and has a CIR h = {0.227, 0.46, 0.688, 0.46, 0.227}, for which M1 = 9 and M2 = 5 were used. Nu = 2^15 bits were encoded, interleaved using a random interleaver, and transmitted. The SFE-TE was iterated 14 times and compared to a CTE using a MAP equalizer. Fig. 3.10 shows the performance of the SFE-TE compared to that of the CTE, from which it can be seen that both the CTE and the SFE-TE achieve matched filter performance, although the SFE-TE does so only for Eb/N0 ≥ 5 dB. The corresponding EXIT chart is shown in Fig. 3.11.
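For reference, the generator (1 + D^2)/(1 + D + D^2) corresponds to the same octal G = {7, 5} code used in the previous subsection (feedback 1 + D + D^2, feedforward 1 + D^2). The sketch below is a minimal bit-level encoder for that code; the stream multiplexing, lack of termination and BPSK/channel steps are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def rsc_encode(bits):
    """Rate-1/2 RSC encoder, parity generator (1 + D^2)/(1 + D + D^2).

    s1, s2 hold the two delay elements; no trellis termination in this sketch.
    Returns the systematic and parity bit streams."""
    s1 = s2 = 0
    systematic, parity = [], []
    for u in bits:
        fb = u ^ s1 ^ s2              # feedback sum: 1 + D + D^2 (mod 2)
        p = fb ^ s2                   # feedforward taps: 1 + D^2
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, fb               # shift the register
    return systematic, parity

# Illustrative transmission matching the stated setup (multiplexing assumed):
rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=2**15)
sys_bits, par_bits = rsc_encode(u)
coded = np.ravel(np.column_stack([sys_bits, par_bits]))    # [s0, p0, s1, p1, ...]
pi = rng.permutation(coded.size)                           # random interleaver
x = 1.0 - 2.0 * coded[pi]                                  # interleave + BPSK map
h = np.array([0.227, 0.46, 0.688, 0.46, 0.227])            # length-5 CIR from the text
y = np.convolve(x, h)[: x.size]                            # ISI channel output (noiseless)
```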


Optimization

In order to evaluate the performance of the DBN-TE using dynamic LLR updates, the system was simulated for fading (at 0 km/h mobile speed) and static channels, with and without dynamic LLR updates. Fig. 4.31 and Fig. 4.32 show the performance of the DBN-TE with and without fading (slow fading), for a channel length of L = 3. In Fig. 4.31 it can be seen that the BER performance is best when dynamic updates are performed. From Fig. 4.32 it is clear that dynamic LLR updates provide a significant performance gain for this fading channel. Fig. 4.33 and Fig. 4.34 show the performance of the DBN-TE with and without fading, for a channel length of L = 5. Fig. 4.33 shows that the BER performance does not improve with increasing Eb/N0 unless dynamic LLR updates are applied; it is clear that dynamic LLR updates provide an increase in performance. In Fig. 4.34 it is once again clear that dynamic LLR updates alone provide improved BER performance. From the results presented here it can be concluded that optimization via dynamic LLR updates allows for an improvement in BER performance. The DBN-TE will therefore henceforth be simulated using dynamic LLR updates for optimization.

CHAPTER 1 Introduction
1.1 Problem Statement
1.2 Research Objectives and Questions
1.3 Hypothesis and Approach
1.4 Research Goals
1.5 Research Contribution
1.6 Overview of Study
CHAPTER 2 Equalization, Decoding and Joint Equalization and Decoding 
2.1 The MLSE and MAP Algorithms
2.2 Equalization
2.3 Decoding
2.4 Non-Iterative Joint Equalization and Decoding (NI-JED)
2.5 Simulation Results
2.6 Concluding Remarks
CHAPTER 3 Turbo Equalization 
3.1 Turbo Equalizer
3.2 Reduced Complexity SISO Equalizers
3.3 Computational Complexity Analysis
3.4 EXIT Charts
3.5 Simulation Results
3.6 Concluding Remarks
CHAPTER 4 Dynamic Bayesian Network Turbo Equalizer 
4.1 Dynamic Bayesian Networks
4.2 Modeling a Turbo Equalizer as a Quasi-DAG
4.3 The DBN-TE Algorithm
4.4 Complexity Reduction
4.5 Computational Complexity Analysis
4.6 Simulation Results
4.7 Concluding Remarks
CHAPTER 5 Hopfield Neural Network Turbo Equalizer 
5.1 The Hopfield Neural Network
5.2 Hopfield Neural Network Turbo Equalizer
5.3 Optimization
5.4 Computational Complexity Analysis
5.5 Simulation Results
5.6 Concluding Remarks
CHAPTER 6 Conclusion 
6.1 Summary
6.2 Further Research
6.3 Concluding Remarks
