
I just came across the tutorial collection and would like to express my gratitude and support for your cause. I am an engineer in data communications and am preparing a course on digital communications systems. I look forward to benefiting from your work.

If there is any way I can help, please let me know.

Thank you again for your inspiration and effort!

Feng Ouyang

Hi Feng,

So nice of you to say that. Thanks.

Next time I write a paper, perhaps I will ask you to review it?

Thanks again,

Charan

Charan:

I finished reading parts 1 and 2 of the Turbo code tutorial. It is the most useful material I have ever found. You really spent the effort to understand the details, unlike some textbooks, which just blindly copy other books. Thanks again!

I have a few questions and comments.

1. On page 3, you said: “According to Shannon, the ultimate code would be one where a message is sent infinite times, each time shuffled randomly”. Could you elaborate on how you inferred this from Shannon’s paper (or elsewhere)? I recently read Shannon’s paper and did not come away with that conclusion. Also, if you repeat a message an infinite number of times, wouldn’t your coding rate go to zero?

2. On page 7, last paragraph: “for a M-PSK, there would be 3N bits…” Should it be 8-PSK? Otherwise, I don’t understand where the 3N comes from.

3. On page 9, after equation (1.1), the text says “This is a sensitive metric, quite a bit better…”. Is that a typo for “sensible”? Otherwise, could you explain what the metric is sensitive to?

4. On page 10, second to the last paragraph, it seems “Fig. 8” should be “Fig. 9”.

5. On page 11, there is a “reference source not found” error.

6. On page 16, you spend quite some space discussing how to get P(u_k) from L(u_k). However, I am not clear on where this information is used. It seems that in the iteration process we only care about L_e; only at the final output do we need to convert L(u_k) to P(u_k). And even that is often unnecessary: L(u_k) itself is a good soft-decision metric, and if we need a hard decision, we just look at its sign. Am I missing something here?
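For concreteness, the conversion I mean is the following (a quick Python sketch of my understanding, not taken from the tutorial; the function names are mine):

```python
import math

def llr_to_prob(L):
    """Convert a log-likelihood ratio L(u) = ln(P(u=1)/P(u=0))
    to the probability P(u=1), using a numerically stable sigmoid."""
    if L >= 0:
        return 1.0 / (1.0 + math.exp(-L))
    e = math.exp(L)
    return e / (1.0 + e)

def hard_decision(L):
    """A hard decision needs only the sign of the LLR."""
    return 1 if L >= 0 else 0
```

So it seems the probability itself is only needed if a later stage wants P(u_k) explicitly; the sign alone settles the hard decision.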

7. In Figure 1, the notation y_k^{i,p} represents the parity bits of the code, where i runs from 1 to n, indicating the n rate-1 encoders. Later, in equation (1.22) and onward, there is a summation over i = 2 to q. Is this also summing over the encoders, or over the parity bits of the same encoder (which would then be rate 1/q)?

8. On page 12, when discussing how you can divide the y sequence into three parts and why you can drop some dependencies in the probability expressions, you might want to mention that this follows from the code being causal and the channel being memoryless.

9. This is probably related to question 6. I have read many books and papers but still don’t have a good justification for using L_e as the feedback a priori probability. Why don’t we use the whole L instead for that purpose? Is that an ad hoc choice, or is there a deeper reason behind it?
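For reference, the decomposition I usually see in the literature (the notation may differ slightly from yours) is L(u_k) = L_c*y_k^s + L_a(u_k) + L_e(u_k), where L_c*y_k^s is the channel value of the systematic bit and L_a(u_k) is the a priori term supplied by the other decoder. My guess, which I would like you to confirm, is that feeding back the full L(u_k) would hand the other decoder information it already has, so only the extrinsic part L_e is exchanged to keep the two decoders’ estimates from becoming correlated.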

Thanks again for your contribution! It’s a great pleasure reading your tutorial. If you find it easier to discuss in email, I’d be happy to do it, as well.

By the way, I cannot find your tutorial on LDPC. I hope you will have it soon. 🙂

Feng

Feng,

Thanks for all these comments. I am trying to finish up a book, so I don’t have time to go over this subject right now.

I can send you the Word document; perhaps you can mark your edits there for me.

It will be a lot easier for me to understand your comments that way.

Charan

Hi, Charan.

I am reading your tutorial and think it is really brilliant work. I am a satellite communications engineer. Your work is instructive.

In tutorial 24a, you said Ms. Jian Qi’s paper is an excellent reference. Where can I find that paper? Would you mind sending a link?

Thanks a lot.

I did some Googling and could not find Jian Qi. The paper is no longer at the link I reference in my paper. This is a big issue with internet references!

Thanks,

Charan

Hi Charan, Thank you so much for this very detailed tutorial on Turbo Codes. I am originally from Ghana and am currently doing my master’s in Japan. In my attempt to further understand what I read about the Turbo decoding algorithm, I wrote a program in MATLAB. The issue I have run into is that my Eb/No vs. BER graph tends to oscillate instead of decreasing smoothly with every iteration. I was hoping you could give me some pointers as to the possible errors I could be making. Thank you.
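One common cause of oscillating BER curves is Monte Carlo noise from counting too few error events at each Eb/No point. A minimal sketch of a stopping rule is below (in Python rather than MATLAB for illustration; `simulate_frame` is a hypothetical stand-in for the actual encode/decode chain):

```python
import random

def simulate_frame(ebno_db):
    """Hypothetical stand-in for one encode/transmit/decode pass.
    Returns (bit_errors, bits_sent) for a single frame."""
    # Placeholder: a real simulation would run the turbo decoder here.
    bits = 1000
    p = 0.5 * 10 ** (-ebno_db / 10)  # toy error probability, illustration only
    errors = sum(1 for _ in range(bits) if random.random() < p)
    return errors, bits

def estimate_ber(ebno_db, min_errors=100, max_bits=10_000_000):
    """Accumulate frames until at least min_errors bit errors are seen,
    so every BER point has comparable statistical confidence."""
    errors = bits = 0
    while errors < min_errors and bits < max_bits:
        e, n = simulate_frame(ebno_db)
        errors += e
        bits += n
    return errors / bits
```

With a rule like this, each plotted point averages over enough error events that the curve stops jumping from run to run; a fixed, small frame count per point often produces exactly the oscillation described.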