EE4685 Machine learning, a Bayesian perspective

Topics: Mathematical foundation for machine learning algorithms, presented from a statistical (Bayesian) and optimization point of view.

This course provides a solid mathematical foundation for machine learning algorithms, presented from a statistical (Bayesian) and optimization point of view. It is a second, more advanced course, intended to follow an introductory course that covers the general concepts.

Machine Learning is an umbrella term for a range of methods originating in different scientific communities: from electrical engineering (EE), statistical learning, statistical signal processing, adaptive signal processing, image processing and analysis, and system identification and control; and from computer science (CS), pattern recognition, data mining and information retrieval, computer vision, and computational learning. The name “Machine Learning” captures what these disciplines have in common: learning from data in order to make predictions.

What one tries to learn from data is the underlying structure and regularities, via the development of a model, which can then be used to make predictions. To this end, a number of diverse approaches have been developed, ranging from the optimization of cost functions, whose goal is to minimize the deviation between what one observes in the data and what the model predicts, to probabilistic models that attempt to capture the statistical properties of the observed data.
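
To make this contrast concrete (a sketch added for illustration; the symbols are generic and not taken from the course text): given observations (x_i, y_i), i = 1, ..., N, and a parametric model f_θ, the optimization view picks the parameters that minimize a data-fit cost, while the Bayesian view places a prior p(θ) on the parameters and maximizes the posterior:

    \hat{\theta}_{\mathrm{LS}}  = \arg\min_{\theta} \sum_{i=1}^{N} \bigl( y_i - f_\theta(x_i) \bigr)^2
    \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; p(\theta) \prod_{i=1}^{N} p( y_i \mid x_i, \theta )

With a Gaussian likelihood and a flat prior the two estimates coincide; a Laplace prior instead yields an ℓ1-regularized cost, which is precisely the sparsity-as-prior connection made in the first topic below.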

Contents

The course covers 3 introductory chapters and 9 regular chapters of a recently published and rather exhaustive book (see below). The introductory chapters summarize the mathematical language used in the course: conditional probability, distributions, and random processes (as also seen in EE4C03), and review basic tools in estimation (regression) and detection (classification), as seen in ET4386. The course itself then presents four groups of techniques:

  • Sparsity-aware learning: using sparsity as a regularizer in inverse problems; relation to modeling assumptions (priors). Algorithms such as OMP, FISTA, CoSaMP, the elastic net, and total variation (a minimal OMP sketch follows this list).
  • Bayesian learning: learning in reproducing kernel Hilbert spaces; support vector machines; k-means; inference; conjugate priors; the EM algorithm; Gaussian mixture models; non-parametric models; variational bound approximation techniques; sparse Bayesian learning; the relevance vector machine (RVM).
  • Graphical models: Bayesian networks; factor graphs; message-passing algorithms; hidden Markov models; learning graphical models.
  • Neural networks and deep learning: the perceptron; backpropagation; universal approximation; autoencoders; learning deep networks.
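
To give a flavor of the first group, below is a minimal sketch of Orthogonal Matching Pursuit (OMP) in Python/NumPy. The function and demo data are illustrative, assuming a k-sparse vector observed through a random dictionary; they are not taken from the book.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily estimate a k-sparse x with y ~ A @ x."""
        residual = y.copy()
        support = []                      # indices of selected columns (atoms)
        x = np.zeros(A.shape[1])
        for _ in range(k):
            # Select the atom most correlated with the current residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # Re-fit the coefficients on the active support by least squares.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x[support] = coef
        return x

    # Demo: recover a 3-sparse vector from 40 noiseless random measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    A /= np.linalg.norm(A, axis=0)        # normalize columns to unit norm
    x_true = np.zeros(100)
    x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]
    x_hat = omp(A, A @ x_true, k=3)
    print(np.nonzero(x_hat)[0])           # typically recovers the support [5 17 63]

The same greedy structure underlies CoSaMP, while FISTA instead solves the convex ℓ1-relaxed problem with accelerated proximal gradient steps.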

Learning goals

The objectives of the course are as follows:

  • Present the fundamentals of sparse modeling, Bayesian learning, probabilistic graphical models, neural networks and deep learning in a unifying context.
  • For each of these, present compelling examples that demonstrate how the theory is applied in practice. Very often, this centers on translating a problem into a suitable model description, so that it becomes apparent which tools can be applied.

Book

“Machine Learning: A Bayesian and Optimization Perspective” by Sergios Theodoridis, Elsevier Academic Press, 2nd edition (Feb. 2020), hardcover ISBN: 9780128188033

Teachers

dr.ir. Justin Dauwels

Machine learning, with applications to autonomous vehicles and biomedical signal processing

Last modified: 2023-11-03

Details

Credits: 5 EC
Period: 0/0/4/0
Contact: Justin Dauwels