

Courses

Undergraduate and Graduate Courses (3) 
 EE113 DIGITAL SIGNAL PROCESSING (Undergraduate-Level Course)
Reference: A. H. Sayed, Discrete-Time Processing and Filtering, Lecture Notes (distributed by the instructor).
Course objective: To provide students with a thorough introduction to the key concepts and tools they will need to understand and manipulate discrete-time signals and systems, with emphasis on time-domain analysis, transform-domain analysis, and frequency-domain analysis.
Course description: Discrete-time signals and systems, LTI systems, impulse response sequence, linear and circular convolution, solution of difference equations, zero-input and zero-state solutions, bilateral and unilateral z-transforms, transfer functions, Discrete-Time Fourier Transform (DTFT) and properties, frequency response, Discrete Fourier Transform (DFT) and properties, Fast Fourier Transform, Continuous-Time Fourier Transform, sampling and reconstruction, Nyquist's theorem.
Topics Covered
Part A: Time-Domain Techniques
 Motivation for Discrete-Time Processing.
 Fundamental Sequences, Energy, Power.
 Periodic Sequences.
 Discrete-Time Systems.
 Linear Time-Invariant (LTI) Systems.
 Impulse-Response Sequence.
 Linear Convolution.
 Homogeneous Difference Equations.
 Solving Difference Equations.
 Zero-Input and Zero-State Solutions.
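The time-domain topics above lend themselves to quick numerical checks. As a minimal sketch (in Python/NumPy, chosen here purely for illustration), linear convolution with an impulse-response sequence and direct iteration of the corresponding difference equation produce the same output:

```python
import numpy as np

# Linear convolution y[n] = sum_k x[k] h[n-k] of a finite input with the
# impulse-response sequence h of a first-difference system.
x = np.array([1.0, 2.0, 3.0])            # input sequence
h = np.array([1.0, -1.0])                # impulse response of y[n] = x[n] - x[n-1]
y = np.convolve(x, h)                    # output length: len(x) + len(h) - 1 = 4

# The same output follows by iterating the difference equation directly
# (zero initial conditions, input padded with a trailing zero).
x_padded = np.concatenate([x, [0.0]])
y_recursive = np.array([x_padded[n] - (x_padded[n - 1] if n > 0 else 0.0)
                        for n in range(4)])
assert np.allclose(y, y_recursive)       # both views of an LTI system agree
```

The equivalence of the convolution-sum and difference-equation views is exactly the link between the "Linear Convolution" and "Solving Difference Equations" topics above.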
Part B: z-Transform Techniques
 z-Transform.
 Inverse z-Transform.
 Partial Fractions.
 Transfer Functions, Poles, Zeros.
 Unilateral z-Transform.
Part C: Frequency-Domain Techniques
 Discrete-Time Fourier Transform (DTFT): Definition and Convergence.
 Discrete-Time Fourier Transform (DTFT): Properties.
 Frequency Response.
 Discrete Fourier Transform (DFT): Definition.
 Discrete Fourier Transform (DFT): Properties.
 Fast Fourier Transform (FFT).
 Circular Convolution.
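The DFT topics above connect directly to fast filtering: circular convolution is what the DFT diagonalizes, and zero-padding converts it into linear convolution. A small sketch (Python/NumPy, used here only for demonstration):

```python
import numpy as np

# Circular convolution of two length-N sequences equals the inverse DFT of
# the pointwise product of their DFTs (computed here with the FFT).
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))   # length-4 circular

# Zero-padding both sequences to N >= L1 + L2 - 1 points recovers the
# linear convolution of [1,2,3,4] with [1,1] -- the basis of FFT filtering.
N = 5
lin = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h[:2], N)))
assert np.allclose(lin, np.convolve(x, h[:2]))
```

The difference between `circ` and `lin` is the time-domain wrap-around that the zero-padding removes.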
Part D: Sampling Theory
 Continuous-Time Fourier Transform (FT).
 Sampling Theorem.
 Reconstruction Theorem.
 Linking the transforms FT, DTFT, and DFT.
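A compact way to see why the rate condition in Nyquist's theorem matters is aliasing: two tones whose frequencies differ by the sampling rate produce identical sample sequences. A short sketch (Python/NumPy, for demonstration only):

```python
import numpy as np

# Aliasing: after sampling at fs = 10 Hz, a 13-Hz cosine is indistinguishable
# from a 3-Hz cosine, since 13 = 3 + fs.  The condition fs > 2B in the
# sampling theorem rules out this ambiguity for band-limited signals.
fs = 10.0
n = np.arange(20)                          # sample indices
x3 = np.cos(2 * np.pi * 3.0 * n / fs)      # samples of the 3-Hz tone
x13 = np.cos(2 * np.pi * 13.0 * n / fs)    # samples of the 13-Hz tone
assert np.allclose(x3, x13)                # identical sample sequences
```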
Part E: Experimentation and Examples
 Carbon Dating.
 Fibonacci Numbers.
 Digital Oscillators.
 Water Reverberations.
 Diffraction of Light.
 Negative Feedback.
 Eliminating 60-Hz Interference.
 Touch Tone Telephony.
 IIR and FIR Filtering.
 Time and Frequency Representations.
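For instance, the 60-Hz interference project above can be previewed with a second-order FIR notch filter that places zeros on the unit circle at the interference frequency. A minimal sketch in Python/NumPy; the 1 kHz sampling rate is an assumption made only for illustration:

```python
import numpy as np

# Placing zeros at z = e^{+-j w0}, with w0 = 2*pi*60/fs, gives the FIR notch
# H(z) = 1 - 2 cos(w0) z^-1 + z^-2, whose response vanishes exactly at 60 Hz.
fs = 1000.0                                   # assumed sampling rate (Hz)
w0 = 2 * np.pi * 60.0 / fs                    # interference frequency (rad/sample)
b = np.array([1.0, -2.0 * np.cos(w0), 1.0])   # notch-filter coefficients

n = np.arange(2000)
interference = np.cos(w0 * n)                 # sampled 60-Hz tone
y = np.convolve(interference, b)[2:-2]        # steady-state filter output
assert np.max(np.abs(y)) < 1e-9               # the tone is annihilated
```

In practice the deep FIR notch also attenuates nearby frequencies; an IIR variant with poles just inside the zeros narrows the notch, which is one motivation for the IIR/FIR filtering topic above.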
 EE210A ADAPTATION AND LEARNING (Graduate-Level Course)
a) New Video Lectures covering both the Adaptation and Learning components are forthcoming.
b) Watch Video Lectures on the Adaptive Filters component.
References:
 A. H. Sayed, Adaptive Filters, John Wiley & Sons, NJ, ISBN 9780470253885, xxx+786 pp, 2008.
 A. H. Sayed, Online Inference and Learning, Lecture Notes (distributed by the instructor).
Course objective: To provide a unified and thorough treatment of the theories of adaptation and learning in a cohesive and well-motivated manner.
Course description: The study of mechanisms for adaptation and learning from streaming data is a topic of immense practical relevance and deep theoretical challenges. In this course, we take a broad view of the field and pursue a powerful unifying treatment of the subject matter, which also highlights connections with adjacent fields. The presentation covers, in some depth, various aspects related to the topics of single-agent adaptation and machine learning, with application to the theory and practice of both adaptive filters and pattern classifiers/learning from streaming data. Since a thorough understanding of the performance and limitations of adaptive filters and learners/classifiers requires a solid grasp of the fundamentals of estimation and inference theories, the course devotes some effort towards understanding estimation and inference techniques. In particular, the course covers optimal and linear estimation methods and stochastic-gradient algorithms for optimization, adaptation, and learning, including the analysis of their convergence behavior, stability range, and mean-square-error performance metrics. The course also covers various techniques for online learning, including Bayes and naive classifiers, nearest-neighbor rules, decision trees, logistic regression, discriminant analysis, the Perceptron, support vector machines, kernel methods, bagging, boosting, random forests, cross-validation, neural networks, deep learning, convolutional networks, principal component analysis, and independent component analysis. The course considers several examples related to adaptation and learning, including channel estimation, channel equalization, echo cancellation, and pattern classification.
Topics Covered
Part A: Estimation Theory
 Optimal Estimation.
 Vector Estimation.
 Linear Estimation, Regression.
 Design Examples.
 Linear Models.
Part B: Adaptation Theory
 Gradient-Descent Algorithms.
 Stochastic-Gradient Algorithms.
 Least-Squares Theory.
 Recursive Least-Squares.
 Mean-Square-Error Performance.
 Tracking Performance.
 Transient Performance.
 Stability Conditions.
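The stochastic-gradient material above is exemplified by the LMS filter, whose update uses the instantaneous error in place of the true gradient. A minimal sketch (Python/NumPy; the model, step size, and noise level are illustrative choices, not values prescribed by the course):

```python
import numpy as np

# Stochastic-gradient (LMS) adaptation: estimate an unknown parameter vector
# w_o from streaming regressors u_i and noisy measurements
# d(i) = u_i^T w_o + v(i), via  w_i = w_{i-1} + mu * u_i * (d(i) - u_i^T w_{i-1}).
rng = np.random.default_rng(0)
w_o = np.array([0.5, -1.0, 2.0])    # unknown vector to be estimated
mu = 0.02                           # step size (inside the stability range)
w = np.zeros(3)                     # initial estimate

for _ in range(5000):
    u = rng.standard_normal(3)                    # regressor
    d = u @ w_o + 0.01 * rng.standard_normal()    # noisy measurement
    w = w + mu * u * (d - u @ w)                  # LMS update

assert np.allclose(w, w_o, atol=0.05)   # estimate settles near w_o
```

The residual fluctuation around `w_o` after convergence is precisely the mean-square-error performance quantified in the topics above; shrinking `mu` trades tracking speed for a smaller steady-state error.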
Part C: Learning Theory
 Learning and Generalization.
 Bayes classifiers.
 Nearest-Neighbor (NN) Rules.
 Decision Trees.
 Risk Functions.
 Regularization, Sparsity.
 Logistic Regression.
 Discriminant Analysis (LDA, FDA).
 The Perceptron.
 Support Vector Machines (SVM).
 Gradient, Subgradient, Proximal Learning.
 Kernel Methods.
 Bagging and Boosting. Random Forests.
 Cross-Validation.
 Neural Networks. Deep Networks.
 Convolutional Networks.
 Principal Component Analysis (PCA).
 Independent Component Analysis (ICA).
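Among the classifiers listed above, the Perceptron has one of the simplest mistake-driven update rules. A minimal sketch on a toy linearly separable data set (Python/NumPy; the data are purely illustrative):

```python
import numpy as np

# Perceptron learning: for each misclassified sample, i.e. whenever
# y * (w^T x + b) <= 0, update  w <- w + y*x  and  b <- b + y.
# For linearly separable data the number of mistakes is finite.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])        # class labels in {+1, -1}

w, b = np.zeros(2), 0.0
for _ in range(100):                # passes over the data (ample for this set)
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:  # mistake-driven update
            w, b = w + yi * xi, b + yi

# every training sample now sits on the correct side of the hyperplane
assert all(yi * (xi @ w + b) > 0 for xi, yi in zip(X, y))
```

The same separating-hyperplane picture, with a maximum-margin objective added, leads to the support vector machines listed above.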
Part D: Experimentation and Projects (usually selected from):
 Adaptive Channel Estimation.
 Linear and DFE Channel Equalization.
 Acoustic and Line Echo Cancellation.
 OFDM Receivers.
 SVM Learning Machines.
 Boosting and Cross-Validation.
 Discriminant Analysis.
 Neural Networks.
 Deep Learning.
 Convolutional Networks.
 EE210B INFERENCE OVER NETWORKS (Graduate-Level Course)
Watch Video Lectures
References:
 A. H. Sayed, Adaptation, Learning, and Optimization over Networks, Foundations and Trends in Machine Learning, vol. 7, issue 4-5, NOW Publishers, Boston-Delft, 518 pp, 2014. ISBN 9781601988508.
 Additional articles distributed by the instructor.
Course objective: To provide a unified and thorough treatment of the theory of distributed adaptation, optimization, and learning by multiagent systems consisting of nodes interconnected by a graph topology.
Course description: The course deals with the topic of information processing over graphs. It covers results and techniques that relate to the analysis and design of networks that are able to solve optimization, adaptation, and learning problems in an efficient and distributed manner through localized interactions among their agents. The treatment covers three intertwined topics: (a) how to perform distributed optimization over networks; (b) how to perform distributed adaptation over networks; and (c) how to perform distributed learning over networks. In these three domains, the course examines and compares the advantages and limitations of non-cooperative, centralized, and distributed stochastic-gradient solutions. There are many good reasons for the heightened interest in distributed implementations, especially in this day and age when the word “network” has become commonplace, whether one is referring to social networks, power networks, transportation networks, biological networks, or other types of networks. Some of these reasons have to do with the benefits of cooperation in terms of improved performance and improved robustness and resilience to failure. Other reasons relate to privacy and secrecy considerations, where agents may not be comfortable sharing their data with remote fusion centers. In other situations, the data may already be available in dispersed locations, as happens with cloud computing. One may also be interested in learning and extracting information through data mining from Big Data sets. The course devotes considerable effort to quantifying the limits of performance of distributed solutions and to discussing design procedures that can bring forth their potential more fully. The presentation adopts a useful statistical perspective and derives tight performance results that elucidate the mean-square stability, convergence, and steady-state behavior of the learning networks.
The course also illustrates how distributed processing over graphs gives rise to some revealing phenomena due to the coupling effect among the agents. The course overviews such phenomena in the context of adaptive networks, and considers examples related to distributed sensing, intrusion detection, distributed estimation, online adaptation, clustering, network system theory, and machine learning applications.
Topics Covered
Part A: Background Material
 Linear Algebra and Matrix Theory Results.
 Complex Gradients and Complex Hessian Matrices.
 Convexity, Strict Convexity, and Strong Convexity.
 Mean-Value Theorems. Lipschitz Conditions.
Part B: Single-Agent Adaptation and Learning
 Single-Agent Optimization.
 Stochastic-Gradient Optimization.
 Convergence and Stability Properties.
 Mean-Square-Error Performance Properties.
Part C: Centralized Adaptation and Learning
 Batch and Centralized Processing.
 Convergence, Stability, and Performance Properties.
 Comparison to Single-Agent Processing.
Part D: Multi-Agent Network Model
 Graph Properties. Connected and Strongly-Connected Networks.
 Multi-Agent Inference Strategies.
 Limit Point and Pareto Optimality.
 Evolution of Network Dynamics.
Part E: Multi-Agent Network Stability and Performance
 Stability of Network Dynamics.
 Long-Term Error Dynamics.
 Performance of Multi-Agent Networks.
 Benefits of Cooperation.
 Role of Informed Agents.
 Adaptive Combination Strategies.
 Gossip and Asynchronous Strategies.
 Constrained Optimization.
 Proximal Strategies.
 ADMM Strategies.
 Clustering.
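The cooperative strategies listed above can be sketched compactly in their adapt-then-combine (ATC) diffusion form: each agent runs a local LMS step and then averages its neighbors' intermediate estimates through a combination matrix. A minimal sketch (Python/NumPy; the three-agent fully connected topology, step size, and noise level are illustrative assumptions):

```python
import numpy as np

# ATC diffusion LMS over a 3-agent network estimating a common model w_o:
#   adaptation:  psi_k = w_k + mu * u_k * (d_k - u_k^T w_k)   (local LMS)
#   combination: w_k <- sum over neighbors l of a_{lk} * psi_l
rng = np.random.default_rng(1)
w_o = np.array([1.0, -0.5])          # common model all agents estimate
A = np.full((3, 3), 1.0 / 3.0)       # fully connected, uniform (doubly
                                     # stochastic) combination weights
mu = 0.05
W = np.zeros((3, 2))                 # one row of estimates per agent

for _ in range(2000):
    psi = np.empty_like(W)
    for k in range(3):                                # adaptation step
        u = rng.standard_normal(2)
        d = u @ w_o + 0.1 * rng.standard_normal()
        psi[k] = W[k] + mu * u * (d - u @ W[k])
    W = A @ psi                                       # combination step

assert np.allclose(W, np.tile(w_o, (3, 1)), atol=0.1)   # agents agree near w_o
```

The combination step is what couples the agents; comparing this run against three independent (non-cooperative) LMS recursions illustrates the benefits-of-cooperation results covered in Part E.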