# Communications Catalogue | Electrical & Computer Engineering

Time permitting, topics such as quantum repeaters and blind quantum computation may also be explored. While the initial focus of the course is on error correction, the techniques used to analyze the performance of these codes, and the algorithmic methods used to decode them, connect to diverse areas of statistical inference including machine learning and statistical physics.

## Networking and Communication Applied Engineering

The error-correction portion of the course is designed to complement ECE H Error Control Codes, but that course is not a prerequisite. Students in engineering, computer science, and mathematics will find this course interesting. This integration, however, creates a host of new vulnerabilities stemming from cyber or physical intrusion, potentially leading to devastating physical effects. The security of a system is only as strong as its weakest link. Thus, the scale and complexity of the smart grid, along with its increased connectivity and automation, make the task of cyber-physical protection particularly challenging.

This course introduces students to timely topics in the cyber-physical security of modern power systems. Topics include: introduction to communication security practices; power system security and stability; cyber-physical system attacks; distributed control and network adaptation strategies for smart grid resilience. Leon-Garcia: This course is one of two companion courses on network softwarization offered simultaneously in the Winter session. The first course introduces the concepts and principles of network softwarization, while the second course (this one) focuses on hands-on experience with softwarization technologies and enablers.

Frey: Advanced concepts in machine learning and probabilistic inference. An introductory course on inference algorithms or machine learning should be taken prior to this course.

Topics covered: probability models, neural networks, graphical models, Bayesian networks, factor graphs, Markov random fields (MRFs). Structured models, convolutional networks, transformations as hidden variables, bivariate and trivariate potentials, high-order potentials. Exact probabilistic inference, variable elimination, sum-product and max-product algorithms, factorizing high-order potentials. Approximate probabilistic inference, iterated conditional modes, gradient-based inference, loopy belief propagation, variational techniques, expectation propagation, sampling methods (MCMC).

Learning in directed and undirected models, EM, sampling, contrastive divergence. Deep belief networks. Applications to image processing, scene analysis, pattern recognition, speech recognition, computational biology. Prerequisite: ECEH1 or equivalent. An introduction to the basic theory, the fundamental algorithms, and the computational toolboxes of machine learning. The focus is on a balanced treatment of practical and theoretical approaches, along with hands-on experience with relevant software packages. Unsupervised learning methods covered in the course will include: principal component analysis, k-means clustering, and Gaussian mixture models.

Techniques to control overfitting, including regularization and validation, will be covered. Hatzinakos: Signal processing techniques using special-purpose digital hardware and general-purpose digital computers are playing an increasingly important role. The course deals with some introductory and some advanced topics in the area. In particular, it presents the characterization of random discrete-time signals. It provides an introduction to traditional and modern statistical discrete-time signal processing frameworks, including processing with second-, higher-, and fractional lower-order statistics.

It discusses sampling and multirate signal conversion; linear prediction and optimum linear filters; least-squares methods for system modeling and design; and the theory and applications of adaptive filters. It also deals with applications in signal and image processing and analysis. Plataniotis: This course will present the main processing techniques for digital image processing. It will cover image enhancement and restoration, digital filtering (linear and nonlinear), local space operators, image analysis, and elements of vision. It will also describe the impact of digital image processing on its more important fields of application.

Prerequisites: ECEH1 or equivalent. Hatzinakos: Spectrum estimation is an important area of digital signal processing that finds applications in sonar and radar, geophysics and oil exploration, radio astronomy, biomedicine, and speech and image processing. This course will cover the basic principles and the wide variety of signal processing techniques developed for spectral analysis.

### Course Details

Topics include: definitions of power spectrum; conventional spectrum estimation methods; maximum likelihood method of Capon; maximum entropy method; parametric modeling of time series; AR and ARMA spectrum estimation; harmonic decomposition techniques; duality between spectral analysis and array processing; signal and noise subspace methods in array processing.

Higher-order spectral analysis methods and applications. Hatzinakos: This is an introductory-level course for graduate students or practitioners to gain knowledge and hands-on experience in biometric systems and security applications. The following figure shows a continuous-time signal x(t) and a sampled signal x_s(t). When x(t) is multiplied by a periodic impulse train, the sampled signal x_s(t) is obtained. To discretize the signal, the gap between samples must be fixed.

That gap is termed the sampling period T_s, and the sampling frequency f_s = 1/T_s is its reciprocal. The sampling frequency is also simply called the sampling rate: the number of samples taken per second.

For the analog signal to be reconstructable from its digitized version, the sampling rate must be chosen with care: the information in the message signal should neither be lost nor overlapped. Hence a minimum rate, called the Nyquist rate, is defined. Suppose that a signal is band-limited, with no frequency components higher than W hertz.
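As a minimal sketch of these definitions (all numeric values below are illustrative assumptions, not from the course material), the relation between the sampling period T_s, the sampling rate f_s, and the Nyquist rate 2W can be written down directly:

```python
import numpy as np

# Illustrative values (assumptions, not from the course material).
W = 5.0            # highest frequency component in the signal, in Hz
fs = 4 * W         # chosen sampling rate, comfortably above the Nyquist rate 2W
Ts = 1.0 / fs      # sampling period: the fixed gap between samples

# One second of samples x_s[n] = x(n * Ts) of a W-hertz sine
n = np.arange(int(fs))
x_s = np.sin(2 * np.pi * W * n * Ts)

print(f"Nyquist rate = {2 * W} Hz, fs = {fs} Hz, Ts = {Ts} s")
```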


That is, W is the highest frequency. For such a signal, faithful reproduction of the original requires a sampling rate of at least twice the highest frequency. The sampling theorem, also called the Nyquist theorem, gives this sufficient sampling rate in terms of bandwidth for the class of band-limited functions. To understand the theorem, consider a band-limited signal, i.e., one whose spectrum is zero above W. For the continuous-time signal x(t), the band-limited spectrum can be represented as shown in the following figure.

We need a sampling frequency at which no information is lost, even after sampling. For this we have the Nyquist rate: the sampling frequency should be twice the maximum frequency, f_s = 2W. This is the critical sampling rate. If the signal x(t) is sampled above the Nyquist rate, the original signal can be recovered; if it is sampled below the Nyquist rate, the signal cannot be recovered.
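The failure mode below the Nyquist rate can be checked numerically. In this hypothetical example, a 9 Hz cosine sampled at 10 Hz (well below its Nyquist rate of 18 Hz) produces exactly the same samples as a 1 Hz cosine, so the original frequency cannot be recovered from the samples:

```python
import numpy as np

# Illustrative aliasing check: a 9 Hz cosine under-sampled at fs = 10 Hz
# is indistinguishable from a 1 Hz cosine after sampling.
fs = 10.0
n = np.arange(20)                          # two seconds of samples
high = np.cos(2 * np.pi * 9.0 * n / fs)    # under-sampled 9 Hz tone
low = np.cos(2 * np.pi * 1.0 * n / fs)     # its 1 Hz alias

print(np.allclose(high, low))              # the sample sequences coincide
```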

The following figure shows, in the frequency domain, a signal sampled at a rate higher than 2W: it is the Fourier transform of the sampled signal x_s(t).


Here the information is reproduced without any loss: there is no mixing, and hence recovery is possible. The result will be as shown in the above figure. The information is again reproduced without any loss, so this, too, is a good sampling rate.

From the pattern above we can observe that the information overlaps, which leads to mixing and loss of information. This unwanted phenomenon of overlap is called aliasing. In the transmitter section of a PCM system, a low-pass anti-aliasing filter is employed before the sampler to eliminate the unwanted high-frequency components. The filtered signal is then sampled at a rate slightly higher than the Nyquist rate.

Choosing a sampling rate higher than the Nyquist rate also eases the design of the reconstruction filter at the receiver. Fourier series and Fourier transforms are the usual tools for analyzing signals and proving such theorems: the Fourier transform is a powerful mathematical tool that lets us view signals in different domains and analyze them more easily.
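The anti-aliasing step can be sketched as follows. This is an idealized illustration rather than a practical filter design: every component at or above f_s/2 is removed with an ideal low-pass filter implemented via the FFT, and the filtered signal is then sampled at the lower rate f_s. All frequencies and rates here are assumed values for the sketch:

```python
import numpy as np

f_lo = 3.0      # in-band tone we want to keep, in Hz (assumed)
f_hi = 40.0     # unwanted high-frequency tone, in Hz (assumed)
rate = 200.0    # dense rate standing in for the "analog" signal
fs = 20.0       # target sampling rate; only |f| < fs/2 may survive

t = np.arange(int(rate)) / rate                      # one second of signal
x = np.cos(2*np.pi*f_lo*t) + np.cos(2*np.pi*f_hi*t)  # signal + interference

# Ideal low-pass anti-aliasing filter: zero all bins at or above fs/2
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1/rate)
X[freqs >= fs / 2] = 0
x_filt = np.fft.irfft(X, n=len(x))

# Sample the filtered signal at fs: keep every (rate/fs)-th point
step = int(rate // fs)
x_s = x_filt[::step]
```

Without the filtering step, the 40 Hz tone would fold back into the band below fs/2 and corrupt the samples; with it, x_s contains only the 3 Hz tone.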

Digitizing an analog signal involves rounding off values that are approximately equal to the analog values. Sampling chooses a set of points on the analog signal, and each sampled value is then rounded off to a nearby stable value. This process is called quantization.

Analog-to-digital converters perform this function, creating a series of digital values from a given analog signal. The following figure represents an analog signal. To be converted into digital form, this signal must undergo sampling and quantizing. An analog signal is quantized by discretizing its amplitude into a number of quantization levels: quantization represents the sampled amplitude values by a finite set of levels, converting a continuous-amplitude sample into a discrete-amplitude one.

The following figure shows how an analog signal is quantized: the blue line represents the analog signal, while the brown one represents the quantized signal. Both sampling and quantization result in a loss of information. The quality of a quantizer's output depends on the number of quantization levels used. The discrete amplitudes of the quantized output are called representation levels or reconstruction levels.
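A uniform quantizer with a handful of representation levels can be sketched in a few lines (the level count and amplitude range below are illustrative assumptions):

```python
import numpy as np

def quantize_uniform(x, levels, lo=-1.0, hi=1.0):
    """Round each sample to the nearest of `levels` uniformly spaced
    representation levels between lo and hi."""
    step = (hi - lo) / (levels - 1)            # the quantum, or step size
    idx = np.round((np.clip(x, lo, hi) - lo) / step)
    return lo + idx * step

# Eight samples of one period of a sine, quantized to five levels
x = np.sin(2 * np.pi * np.arange(8) / 8)
xq = quantize_uniform(x, levels=5)             # levels: -1, -0.5, 0, 0.5, 1
```

Every output lies on one of the five representation levels, and the quantization error per sample is at most half a step, which is the loss of information mentioned above.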

The spacing between two adjacent representation levels is called a quantum or step size. The following figure shows the resultant quantized signal, which is the digital form of the given analog signal. Quantization in which the quantization levels are uniformly spaced is termed uniform quantization.

Quantization in which the quantization levels are unequal, with a mostly logarithmic relation between them, is termed non-uniform quantization. There are two types of uniform quantization: mid-rise and mid-tread. The following figures show both. The mid-rise type is so called because the origin lies in the middle of a rising part of the staircase-like graph; its quantization levels are even in number. The mid-tread type is so called because the origin lies in the middle of a tread of the staircase-like graph.
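The distinction between the two types can be made concrete with a small sketch (the step size below is an assumed value): a mid-rise quantizer never outputs exactly zero, while a mid-tread quantizer does:

```python
import numpy as np

delta = 0.5  # assumed step size (the quantum)

def mid_rise(x):
    # Output levels at odd multiples of delta/2: zero is NOT a level,
    # because the origin sits on a rise of the staircase.
    return delta * (np.floor(x / delta) + 0.5)

def mid_tread(x):
    # Output levels at integer multiples of delta: zero IS a level,
    # because the origin sits in the middle of a tread.
    return delta * np.round(x / delta)

x = np.array([-0.6, -0.1, 0.0, 0.1, 0.6])
print(mid_rise(x))    # an input of 0.0 maps to +0.25, never exactly 0
print(mid_tread(x))   # an input of 0.0 maps to 0.0
```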