IT 2302 — INFORMATION THEORY AND CODING (Regulation 2008) NOVEMBER/DECEMBER 2010 QUESTION PAPERS
ANNA UNIVERSITY QUESTION PAPER INFORMATION THEORY AND
CODING FOR IT DEPARTMENT STUDENTS
B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2010
Fifth Semester
Information Technology
IT 2302 — INFORMATION THEORY AND CODING
(Regulation 2008)
Time : Three hours Maximum : 100 Marks
Answer ALL questions
PART A — (10 × 2 = 20 Marks)
1. Differentiate:
Uncertainty, Information and Entropy.
2. Define channel capacity.
3. State the advantages of Lempel-Ziv algorithm over Huffman coding.
4. Why is LPC not suitable to encode music signals?
5. Distinguish between global color table and local color table in GIF.
6. State the various methods used for text compression.
7. What is a syndrome?
8. Why are cyclic codes extremely well suited for error detection?
9. What are convolutional codes? How are they different from block codes?
10. State the principle of Turbo coding.
PART B — (5 × 16 = 80 Marks)
11. (a) (i) Explain briefly the source coding theorem. (6)
(ii) Given five symbols S0, S1, S2, S3 and S4 with their respective probabilities 0.4, 0.2, 0.2, 0.1 and 0.1, apply Huffman encoding to the symbols and find the average code word length. Also prove that it satisfies the source coding theorem. (10)
Or
(b) State and prove the properties of mutual information. (16)
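The Huffman question in 11(a)(ii) can be sanity-checked in code. The sketch below is illustrative only (the heap-based construction and variable names are not part of the paper): it builds the Huffman tree for the given probabilities, computes the average codeword length, and compares it against the source entropy to confirm H(S) ≤ L < H(S) + 1.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    # Build a Huffman tree with a min-heap; track each symbol's codeword length.
    heap = [(p, i, [i]) for i, p in enumerate(probs)]  # (prob, tiebreak, symbols below)
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:        # every symbol under the merged node gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]
L = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, L))
H = -sum(p * log2(p) for p in probs)
print(avg, H)  # average length 2.2 bits/symbol; entropy ≈ 2.122 bits
```

Since 2.122 ≤ 2.2 < 3.122, the code satisfies the source coding theorem bound.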
12. (a) Explain the concepts of frequency masking and temporal masking. How are they used in perceptual coding? (16)
Or
(b) Explain in detail Adaptive Huffman coding with the help of an example. (16)
13. (a) With a block diagram, explain the working of JPEG encoder and decoder. (16)
Or
(b) With a block diagram, explain the MPEG algorithm for video encoding. (16)
14. (a) For a linear block code, prove with example that
(i) The syndrome depends only on error pattern and not on transmitted code word. (8)
(ii) All error patterns that differ by a codeword have the same syndrome. (8)
Or
(b) Determine the encoded message for the following 8-bit data codes using the CRC generating polynomial g(x) = x^4 + x^3 + x^0.
(i) 11001100
(ii) 01011111. (16)
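Question 14(b) is ordinary polynomial long division over GF(2). A minimal sketch (the function and bit-string convention are illustrative, assuming the leftmost bit is the highest-degree coefficient, so g(x) = x^4 + x^3 + 1 is written '11001'):

```python
def crc_remainder(bits, gen):
    # Polynomial long division over GF(2); bits and gen are '0'/'1' strings.
    work = list(bits + '0' * (len(gen) - 1))  # append deg(g) zero bits
    for i in range(len(bits)):
        if work[i] == '1':
            for j, gbit in enumerate(gen):
                work[i + j] = str(int(work[i + j]) ^ int(gbit))
    return ''.join(work[-(len(gen) - 1):])

gen = '11001'  # g(x) = x^4 + x^3 + 1
for data in ('11001100', '01011111'):
    rem = crc_remainder(data, gen)
    print(data + rem)  # transmitted codeword = data followed by the 4-bit CRC
```

This prints 110011001111 and 010111111110: the remainders are 1111 and 1110, appended to the data words.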
15. (a) A rate 1/3 convolutional encoder has generator vectors g1 = (1 0 0), g2 = (1 0 1) and g3 = (1 1 1).
(i) Sketch the encoder configuration. (5)
(ii) Draw the code tree, state diagram and trellis diagram. (5)
(iii) If the message sequence is 10110, determine the output sequence of the encoder. (6)
Or
(b) Explain in detail the Viterbi algorithm for decoding of convolutional codes with a suitable example. (16)
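The output sequence asked for in 15(a)(iii) can be cross-checked with a short feed-forward encoder sketch (the helper below is an illustration, not the paper's notation; the register is taken as current input followed by the two memory bits, and no tail bits are added to flush it):

```python
def conv_encode(msg, gens):
    # Rate-1/n feed-forward convolutional encoder; constraint length = len(gens[0]).
    K = len(gens[0])
    state = [0] * (K - 1)          # shift-register memory, most recent bit first
    out = []
    for u in msg:
        window = [u] + state       # current input followed by memory
        for g in gens:
            out.append(sum(b & t for b, t in zip(window, g)) % 2)  # XOR of tapped bits
        state = window[:-1]        # shift the register
    return ''.join(map(str, out))

# g1 = 100, g2 = 101, g3 = 111 from question 15(a)
code = conv_encode([1, 0, 1, 1, 0], [(1, 0, 0), (1, 0, 1), (1, 1, 1)])
print(code)  # 111 001 100 110 010
```

So for message 10110 the encoder emits 111 001 100 110 010 (before any flushing convention).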
IT2302 — Information Theory and Coding
Important Two Marks Questions for Nov/Dec 2011 Examination (IT Third Year Students)
UNIT (I-V)
1. State the properties of entropy.
2. What is meant by the Kraft-McMillan inequality?
3. Define sampling and quantization.
4. State the various methods for text compression.
5. Differentiate vocoder and waveform coder.
6. What is prefix coding?
7. Define uncertainty and information.
8. What are the advantages of coding speech at low bit rates?
9. What are convolutional codes? How are they different from block codes?
10. What is a syndrome?
11. Why are cyclic codes well suited for error detection?
12. What is the formula to find the forward and inverse DCT?
13. What is meant by a D-frame, and what is its significance?
14. Show that c = {000, 001, 101} is not a linear code.
15. Distinguish between global color table and local color table in GIF.
16. State the various methods for text compression.
17. What is constraint length in convolutional codes?
18. What is the principle of turbo codes?
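The linearity check asked about in this list (showing c = {000, 001, 101} is not a linear code) can be done mechanically: a binary code is linear iff it is closed under bitwise XOR. A minimal sketch (the helper is illustrative, not from the paper):

```python
def is_linear(code):
    # A nonempty binary code is linear iff the XOR of any two codewords
    # is again a codeword (this also forces the all-zero word, since a^a = 0).
    words = set(code)
    return all(
        format(int(a, 2) ^ int(b, 2), f'0{len(a)}b') in words
        for a in words for b in words
    )

c = ['000', '001', '101']
print(is_linear(c))  # False: 001 XOR 101 = 100, which is not in c
```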
Anna University, Chennai — Nov/Dec 2012 Examinations
IT2302 — Information Theory and Coding
Rejinpaul.com Important Questions
UNIT I-V
1. Encode the following messages with their respective probabilities using the basic Huffman algorithm:

Message:     M1   M2   M3   M4   M5   M6   M7   M8
Probability: 1/2  1/8  1/8  1/16 1/16 1/16 1/32 1/32

Calculate the efficiency of the coding and comment on the result.
2. Find the channel matrix of the resultant channel. Find P(z1) if P(x1) = 0.6 and P(x2) = 0.4.
3. State and prove the source coding theorem.
4. State and prove the properties of mutual information.
5. A discrete memoryless source has an alphabet of seven symbols whose probabilities of occurrence are described below:

Symbol:      s0    s1    s2      s3      s4     s5     s6
Probability: 0.25  0.25  0.0625  0.0625  0.125  0.125  0.125

Compute the Huffman code for this source, moving combined symbols as high as possible.
6. Explain the LPC model of analysis and synthesis of speech signals. State the advantages of coding speech at low bit rates.
7. Explain in detail Adaptive Huffman coding.
8. With a block diagram, explain the psychoacoustic model.
9. Explain the compression principles of P- and B-frames.
10. Explain the working of the JPEG encoder.
11. Explain in detail H.261.
12. Consider the (7, 4) Hamming code defined by the generator polynomial g(x) = 1 + x + x^3. The code word 1000101 is sent over a noisy channel, producing the received word 0000101, which has a single error. Determine the syndrome polynomial s(x) for this received word. Find the corresponding message vector m and express it as a polynomial m(x).
13. Consider a (7, 4) cyclic code with generator polynomial g(x) = 1 + x + x^3. Let the data be d = (1010). Find the corresponding systematic code word.
14. Determine the encoded message for the following 8-bit data codes using the CRC generating polynomial g(x) = x^4 + x^3 + x^0:
(a) 11001100 (b) 01011111
15. Construct a convolutional encoder for the following specifications: rate efficiency 1/2, constraint length 3, with the connections from the shift register to the modulo-2 adders described by g1(x) = 1 + x + x^2 and g2(x) = 1 + x^2. Determine the output code word for the message [10011].
16. Explain Turbo decoding in detail.
17. A convolutional encoder has a single shift register with 2 stages, 3 modulo-2 adders and an output multiplexer. The generator sequences of the encoder are g(1) = (1, 0, 1), g(2) = (1, 1, 0) and g(3) = (1, 1, 1). Draw the block diagram, encode the message sequence (1110), and draw the state diagram.
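The syndrome computation in question 12 is a remainder calculation over GF(2): s(x) = r(x) mod g(x). The sketch below is illustrative (the helper and the leftmost-bit-is-highest-degree convention are assumptions, not the paper's notation); it also demonstrates the classic fact from question 14(a) of the 2010 paper that the syndrome depends only on the error pattern, since the received word 0000101 and the bare error pattern e(x) = x^6 give the same remainder.

```python
def poly_mod(bits, gen):
    # Remainder of GF(2) polynomial division; leftmost bit = highest degree.
    work = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if work[i] == '1':
            for j, g in enumerate(gen):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return ''.join(work[-(len(gen) - 1):])

g = '1011'            # g(x) = x^3 + x + 1 (written 1 + x + x^3 in the paper)
received = '0000101'  # codeword 1000101 with its leading bit flipped
error    = '1000000'  # the error pattern e(x) = x^6
print(poly_mod(received, g), poly_mod(error, g))  # both 101, i.e. s(x) = x^2 + 1
```

The valid codeword 1000101 itself leaves remainder 000, so the nonzero syndrome 101 comes entirely from the single-bit error.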