
UCLA Adaptive Systems Laboratory


Courses


Undergraduate and Graduate Courses (3)

  1. EE113 DIGITAL SIGNAL PROCESSING (Undergraduate-Level Course)

    Reference: A. H. Sayed, Discrete-Time Processing and Filtering, Lecture Notes (distributed by the instructor).

    Course objective: To provide students with a thorough introduction to the key concepts and tools they need to understand and manipulate discrete-time signals and systems, with emphasis on time-domain, transform-domain, and frequency-domain analysis.

    Course description: Discrete-time signals and systems, LTI systems, impulse response sequence, linear and circular convolution, solution of difference equations, zero-input and zero-state solutions, bilateral and unilateral z-transforms, transfer functions, Discrete-Time Fourier Transform (DTFT) and properties, frequency response, Discrete Fourier Transform (DFT) and properties, Fast Fourier Transform, Continuous-Time Fourier Transform, sampling and reconstruction, Nyquist's theorem.
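
    For illustration, here is a minimal sketch (assuming NumPy; the sequences are made up) that evaluates the linear convolution sum directly and checks it against the library routine:

        import numpy as np

        x = np.array([1.0, 2.0, 3.0])       # input sequence x[n]
        h = np.array([1.0, -1.0, 0.5])      # impulse-response sequence h[n]

        # Direct evaluation of y[n] = sum_k h[k] x[n-k]
        N = len(x) + len(h) - 1
        y = np.zeros(N)
        for n in range(N):
            for k in range(len(h)):
                if 0 <= n - k < len(x):
                    y[n] += h[k] * x[n - k]

        assert np.allclose(y, np.convolve(x, h))   # agrees with np.convolve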

    Topics Covered

    Part A: Time-Domain Techniques
    • Motivation for Discrete-Time Processing.
    • Fundamental Sequences, Energy, Power.
    • Periodic Sequences.
    • Discrete-Time Systems.
    • Linear Time-Invariant (LTI) Systems.
    • Impulse-Response Sequence.
    • Linear Convolution.
    • Homogeneous Difference Equations.
    • Solving Difference Equations.
    • Zero-Input and Zero-State Solutions.
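
    A minimal sketch of these time-domain ideas (assuming NumPy/SciPy; the difference equation and initial condition are made up), showing the zero-input/zero-state decomposition for y[n] - 0.5 y[n-1] = x[n] with y[-1] = 2:

        import numpy as np
        from scipy.signal import lfilter, lfiltic

        b, a = [1.0], [1.0, -0.5]              # difference-equation coefficients
        x = np.ones(10)                        # unit-step input
        zi = lfiltic(b, a, y=[2.0])            # encode y[-1] = 2 as filter state

        y_zi, _ = lfilter(b, a, np.zeros(10), zi=zi)   # zero-input response
        y_zs = lfilter(b, a, x)                        # zero-state response
        y_total, _ = lfilter(b, a, x, zi=zi)           # complete response

        assert np.allclose(y_total, y_zi + y_zs)       # superposition holds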

    Part B: z-Transform Techniques
    • z-Transform.
    • Inverse z-Transform.
    • Partial Fractions.
    • Transfer Functions, Poles, Zeros.
    • Unilateral z-Transform.
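
    A minimal sketch of the z-transform toolkit (assuming SciPy; the transfer function is made up), computing the partial-fraction expansion and pole-zero description of H(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2):

        import numpy as np
        from scipy.signal import residuez, tf2zpk

        b, a = [1.0], [1.0, -1.5, 0.5]
        r, p, k = residuez(b, a)               # H(z) = sum_i r_i / (1 - p_i z^-1)
        zeros, poles, gain = tf2zpk(b, a)      # pole-zero description

        # From the expansion, h[n] = sum_i r_i * p_i**n for n >= 0.
        n = np.arange(8)
        h = sum(ri * pi**n for ri, pi in zip(r, p)).real
        print(np.round(h, 4))                  # 1, 1.5, 1.75, ... i.e. 2 - 0.5**n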

    Part C: Frequency-Domain Techniques
    • Discrete-Time Fourier Transform (DTFT): Definition and Convergence.
    • Discrete-Time Fourier Transform (DTFT): Properties.
    • Frequency Response.
    • Discrete Fourier Transform (DFT): Definition.
    • Discrete Fourier Transform (DFT): Properties.
    • Fast Fourier Transform (FFT).
    • Circular Convolution.
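
    A minimal sketch of the DFT route to convolution (assuming NumPy): multiplying DFTs gives circular convolution, and zero-padding both sequences to length len(x) + len(h) - 1 recovers the linear convolution:

        import numpy as np

        x = np.array([1.0, 2.0, 3.0, 4.0])
        h = np.array([1.0, 1.0, 1.0])

        N = len(x) + len(h) - 1                          # pad length for linear conv
        X = np.fft.fft(x, N)
        H = np.fft.fft(h, N)
        y_fft = np.fft.ifft(X * H).real                  # circular conv of padded sequences

        assert np.allclose(y_fft, np.convolve(x, h))     # matches linear convolution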

    Part D: Sampling Theory
    • Continuous-Time Fourier Transform (FT).
    • Sampling Theorem.
    • Reconstruction Theorem.
    • Linking the transforms FT, DTFT, and DFT.
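
    A minimal sketch of the sampling ideas above (assuming NumPy; the tone frequency and sampling rate are made up), reconstructing a bandlimited signal by sinc interpolation after sampling above the Nyquist rate:

        import numpy as np

        f0, fs = 3.0, 10.0                     # 3 Hz tone, sampled at 10 Hz (> 2*f0)
        n = np.arange(-50, 51)                 # sample indices (truncated sum)
        x = np.cos(2 * np.pi * f0 * n / fs)    # samples x[n] = x_c(n/fs)

        t = 0.123                              # reconstruction instant (seconds)
        x_hat = np.sum(x * np.sinc(fs * t - n))          # sum_n x[n] sinc(fs*t - n)
        print(x_hat, np.cos(2 * np.pi * f0 * t))         # close agreement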

    Part E: Experimentation and Examples
    • Carbon Dating.
    • Fibonacci Numbers.
    • Digital Oscillators.
    • Water Reverberations.
    • Diffraction of Light.
    • Negative Feedback.
    • Eliminating 60 Hz Interference.
    • Touch Tone Telephony.
    • IIR and FIR Filtering.
    • Time and Frequency Representations.
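
    As one illustration from the list above, a minimal sketch (assuming NumPy/SciPy; the sampling rate and quality factor are made up) of eliminating 60 Hz interference with a second-order notch filter:

        import numpy as np
        from scipy.signal import iirnotch, lfilter

        fs = 1000.0                            # sampling rate (Hz)
        t = np.arange(0, 1, 1 / fs)
        clean = np.sin(2 * np.pi * 5 * t)      # desired 5 Hz component
        noisy = clean + 0.5 * np.sin(2 * np.pi * 60 * t)   # add 60 Hz interference

        b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)            # notch centered at 60 Hz
        filtered = lfilter(b, a, noisy)
        print(np.round(np.std(filtered - clean), 3))       # small residual error,
                                                           # mostly startup transient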


  2. EE210A ADAPTATION AND LEARNING (Graduate-Level Course)

    a) New video lectures covering both the adaptation and learning components: coming soon.
    b) Watch the video lectures on the adaptive filters component.

    References:
    1. A. H. Sayed, Adaptive Filters, John Wiley & Sons, NJ, ISBN 978-0-470-25388-5, xxx+786pp, 2008.
    2. A. H. Sayed, Online Inference and Learning, Lecture Notes (distributed by the instructor).

    Course objective: To provide a unified and thorough treatment of the theories of adaptation and learning in a cohesive and well-motivated manner.

    Course description: The study of mechanisms for adaptation and learning from streaming data is a topic of immense practical relevance that poses deep theoretical challenges. In this course, we take a broad view of the field and pursue a unifying treatment of the subject matter, one that also highlights connections with adjacent fields. The presentation covers, in some depth, various aspects of single-agent adaptation and machine learning, with application to the theory and practice of both adaptive filters and pattern classifiers that learn from streaming data. Since a thorough understanding of the performance and limitations of adaptive filters and classifiers requires a solid grasp of the fundamentals of estimation and inference, the course devotes some effort to estimation and inference techniques. In particular, the course covers optimal and linear estimation methods and stochastic-gradient algorithms for optimization, adaptation, and learning, including the analysis of their convergence behavior, stability range, and mean-square-error performance. The course also covers various techniques for online learning, including Bayes and naive Bayes classifiers, nearest-neighbor rules, decision trees, logistic regression, discriminant analysis, the perceptron, support vector machines, kernel methods, bagging, boosting, random forests, cross-validation, neural networks, deep learning, convolutional networks, principal component analysis, and independent component analysis. The course considers several examples related to adaptation and learning, including channel estimation, channel equalization, echo cancellation, and pattern classification.
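
    For illustration, a minimal sketch (assuming NumPy; the unknown channel, step size, and data model are made up) of a stochastic-gradient (LMS) adaptive filter identifying an FIR system from streaming data:

        import numpy as np

        rng = np.random.default_rng(0)
        w_true = np.array([0.5, -0.3, 0.1])    # unknown system to identify
        M, mu = len(w_true), 0.01              # filter order and step size
        w = np.zeros(M)

        for _ in range(5000):
            u = rng.standard_normal(M)         # regression (input) vector
            d = u @ w_true + 0.01 * rng.standard_normal()   # noisy reference signal
            e = d - u @ w                      # a priori estimation error
            w = w + mu * u * e                 # LMS update: stochastic-gradient step

        print(np.round(w, 3))                  # close to w_true = [0.5, -0.3, 0.1]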

    Topics Covered

    Part A: Estimation Theory
    • Optimal Estimation.
    • Vector Estimation.
    • Linear Estimation, Regression.
    • Design Examples.
    • Linear Models.
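
    A minimal sketch of linear estimation (assuming NumPy; the data are synthetic): the least-squares estimate solves the normal equations (H^T H) w = H^T d:

        import numpy as np

        rng = np.random.default_rng(1)
        H = rng.standard_normal((200, 3))      # regression matrix
        w_o = np.array([1.0, -2.0, 0.5])       # model to recover
        d = H @ w_o + 0.1 * rng.standard_normal(200)       # noisy observations

        w_hat = np.linalg.solve(H.T @ H, H.T @ d)          # normal equations
        print(np.round(w_hat, 2))              # approximately [1.0, -2.0, 0.5]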

    Part B: Adaptation Theory
    • Gradient-Descent Algorithms.
    • Stochastic-Gradient Algorithms.
    • Least-Squares Theory.
    • Recursive Least-Squares.
    • Mean-Square-Error Performance.
    • Tracking Performance.
    • Transient Performance.
    • Stability Conditions.
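
    A minimal sketch of the recursive least-squares (RLS) update (assuming NumPy; the model, forgetting factor, and initialization are illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        w_true = np.array([0.5, -0.3, 0.1])
        M, lam, delta = 3, 0.995, 1e2
        w = np.zeros(M)
        P = delta * np.eye(M)                  # large initial P: low confidence in w = 0

        for _ in range(2000):
            u = rng.standard_normal(M)
            d = u @ w_true + 0.01 * rng.standard_normal()
            k = P @ u / (lam + u @ P @ u)      # gain vector
            w = w + k * (d - u @ w)            # update along the a priori error
            P = (P - np.outer(k, u @ P)) / lam # recursion for the inverse correlation

        print(np.round(w, 3))                  # close to w_true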

    Part C: Learning Theory
    • Learning and Generalization.
    • Bayes classifiers.
    • Nearest-Neighbor (NN) Rules.
    • Decision Trees.
    • Risk Functions.
    • Regularization, Sparsity.
    • Logistic Regression.
    • Discriminant Analysis (LDA, FDA).
    • The Perceptron.
    • Support Vector Machines (SVM).
    • Gradient, Sub-gradient, Proximal Learning.
    • Kernel Methods.
    • Bagging and Boosting. Random Forests.
    • Cross-Validation.
    • Neural Networks. Deep Networks.
    • Convolutional Networks.
    • Principal Component Analysis (PCA).
    • Independent Component Analysis (ICA).
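
    A minimal sketch of one learning technique from the list above (assuming NumPy; the two-class data are synthetic): logistic regression trained by gradient descent on the log-loss:

        import numpy as np

        rng = np.random.default_rng(3)
        X = np.vstack([rng.standard_normal((100, 2)) + 2,
                       rng.standard_normal((100, 2)) - 2])     # two Gaussian clouds
        y = np.hstack([np.ones(100), np.zeros(100)])           # class labels

        w, b, mu = np.zeros(2), 0.0, 0.1
        for _ in range(500):
            p = 1 / (1 + np.exp(-(X @ w + b)))                 # sigmoid probabilities
            w -= mu * X.T @ (p - y) / len(y)                   # gradient of log-loss
            b -= mu * np.mean(p - y)

        p = 1 / (1 + np.exp(-(X @ w + b)))                     # final predictions
        print(np.mean((p > 0.5) == y))                         # training accuracy ~ 1.0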

    Part D: Experimentation and Projects (usually selected from):
    • Adaptive Channel Estimation.
    • Linear and DFE Channel Equalization.
    • Acoustic and Line Echo Cancellation.
    • OFDM Receivers.
    • SVM Learning Machines.
    • Boosting and Cross-Validation.
    • Discriminant Analysis.
    • Neural Networks.
    • Deep Learning.
    • Convolutional Networks.


  3. EE210B INFERENCE OVER NETWORKS (Graduate-Level Course)
    Watch Video Lectures

    References:
    1. A. H. Sayed, Adaptation, Learning, and Optimization over Networks, Foundations and Trends in Machine Learning, vol. 7, issue 4-5, NOW Publishers, Boston-Delft, ISBN 978-1-60198-850-8, 518pp, 2014.
    2. Additional articles distributed by the instructor.

    Course objective: To provide a unified and thorough treatment of the theory of distributed adaptation, optimization, and learning by multi-agent systems consisting of nodes interconnected by a graph topology.

    Course description: The course deals with the topic of information processing over graphs. It covers results and techniques that relate to the analysis and design of networks that are able to solve optimization, adaptation, and learning problems in an efficient and distributed manner through localized interactions among their agents. The treatment covers three intertwined topics: (a) how to perform distributed optimization over networks; (b) how to perform distributed adaptation over networks; and (c) how to perform distributed learning over networks. In these three domains, the course examines and compares the advantages and limitations of non-cooperative, centralized, and distributed stochastic-gradient solutions. There are many good reasons for the keen interest in distributed implementations, especially in this day and age when the word “network” has become commonplace, whether one is referring to social networks, power networks, transportation networks, biological networks, or other types of networks. Some of these reasons have to do with the benefits of cooperation in terms of improved performance and improved robustness and resilience to failure. Other reasons relate to privacy and secrecy considerations, where agents may not be comfortable sharing their data with remote fusion centers. In other situations, the data may already be available in dispersed locations, as happens with cloud computing. One may also be interested in learning and extracting information from big data sets through data mining. The course devotes considerable effort to quantifying the performance limits of distributed solutions and to discussing design procedures that bring out their potential more fully. The presentation adopts a useful statistical perspective and derives tight performance results that elucidate the mean-square stability, convergence, and steady-state behavior of the learning networks. The course also illustrates how distributed processing over graphs gives rise to some revealing phenomena due to the coupling effect among the agents. The course overviews such phenomena in the context of adaptive networks, and considers examples related to distributed sensing, intrusion detection, distributed estimation, online adaptation, clustering, network system theory, and machine learning applications.
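
    For illustration, a minimal sketch (assuming NumPy; the 4-agent ring topology, combination weights, step size, and data model are all made up) of a diffusion strategy of the adapt-then-combine type, where each agent runs a local stochastic-gradient step and then averages with its neighbors:

        import numpy as np

        rng = np.random.default_rng(4)
        w_true = np.array([0.5, -0.3])
        N, M, mu = 4, 2, 0.02
        A = np.array([[0.5, 0.25, 0.0, 0.25],  # doubly-stochastic combination
                      [0.25, 0.5, 0.25, 0.0],  # matrix for a ring of 4 agents
                      [0.0, 0.25, 0.5, 0.25],
                      [0.25, 0.0, 0.25, 0.5]])
        W = np.zeros((N, M))                   # one estimate per agent (rows)

        for _ in range(3000):
            psi = np.zeros_like(W)
            for k in range(N):                 # adaptation step at each agent
                u = rng.standard_normal(M)
                d = u @ w_true + 0.05 * rng.standard_normal()
                psi[k] = W[k] + mu * u * (d - u @ W[k])
            W = A @ psi                        # combination step over the graph

        print(np.round(W.mean(axis=0), 3))     # network estimate close to w_true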

    Topics Covered

    Part A: Background Material
    • Linear Algebra and Matrix Theory Results.
    • Complex Gradients and Complex Hessian Matrices.
    • Convexity, Strict Convexity, and Strong Convexity.
    • Mean-Value Theorems. Lipschitz Conditions.

    Part B: Single-Agent Adaptation and Learning
    • Single-Agent Optimization.
    • Stochastic-Gradient Optimization.
    • Convergence and Stability Properties.
    • Mean-Square-Error Performance Properties.
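
    A minimal sketch of the stability issue above (assuming NumPy; the risk matrix is made up): for a gradient recursion on a quadratic risk, the error iterate w_err <- (I - mu R) w_err decays only when mu < 2 / lambda_max(R):

        import numpy as np

        R = np.array([[2.0, 0.5], [0.5, 1.0]])     # Hessian of the quadratic risk
        lam_max = np.linalg.eigvalsh(R).max()
        for mu in (1.0 / lam_max, 2.5 / lam_max):  # inside vs outside the range
            w_err = np.ones(2)
            for _ in range(200):
                w_err = (np.eye(2) - mu * R) @ w_err
            print(mu * lam_max, np.linalg.norm(w_err))   # tiny vs diverging error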

    Part C: Centralized Adaptation and Learning
    • Batch and Centralized Processing.
    • Convergence, Stability, and Performance Properties.
    • Comparison to Single-Agent Processing.

    Part D: Multi-Agent Network Model
    • Graph Properties. Connected and Strongly-Connected Networks.
    • Multi-Agent Inference Strategies.
    • Limit Point and Pareto Optimality.
    • Evolution of Network Dynamics.
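
    A minimal sketch of network dynamics (assuming NumPy; the 4-agent ring and its weights are illustrative): under a doubly-stochastic combination matrix, repeated local averaging drives all agents to a common limit point, here the network-wide mean:

        import numpy as np

        A = np.array([[0.5, 0.25, 0.0, 0.25],
                      [0.25, 0.5, 0.25, 0.0],
                      [0.0, 0.25, 0.5, 0.25],
                      [0.25, 0.0, 0.25, 0.5]])   # strongly-connected ring of 4 agents
        x = np.array([1.0, 5.0, -2.0, 4.0])      # initial states (mean = 2.0)

        for _ in range(100):
            x = A @ x                            # each agent averages its neighbors

        print(np.round(x, 3))                    # all entries near the initial mean 2.0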

    Part E: Multi-Agent Network Stability and Performance
    • Stability of Network Dynamics.
    • Long-Term Error Dynamics.
    • Performance of Multi-Agent Networks.
    • Benefits of Cooperation.
    • Role of Informed Agents.
    • Adaptive Combination Strategies.
    • Gossip and Asynchronous Strategies.
    • Constrained Optimization.
    • Proximal Strategies.
    • ADMM Strategies.
    • Clustering.


