About this course
This course introduces the fundamentals of sparse representations, starting from the theoretical concepts and systematically presenting the field's key achievements, covering both theory and numerical algorithms.
Modeling data is the way we – scientists – believe that information should be explained and handled. Indeed, models play a central role in practically every task in signal and image processing. Sparse representation theory puts forward an emerging model that is both highly effective and universal. Its core idea is the description of the data as a linear combination of a few building blocks – atoms – taken from a pre-defined dictionary of such fundamental elements.
A series of theoretical problems arises in applying this seemingly simple model to real data sources, leading to fascinating new results in linear algebra, approximation theory, optimization, and machine learning. In this course you will learn about these achievements, which serve as the foundations for a revolution that took place in signal and image processing in recent years.
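To make the model concrete, here is a minimal numerical sketch (in Python with NumPy, though the course projects themselves use MATLAB): a signal constructed as a linear combination of just a few atoms from a dictionary. The dictionary size, dimensions, and coefficient values below are illustrative, not taken from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative dictionary: 30 atoms (columns) in a 20-dimensional space.
n, m = 20, 30
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)  # normalize each atom to unit length

# A sparse representation: only 3 of the 30 coefficients are nonzero.
alpha = np.zeros(m)
alpha[[4, 11, 27]] = [1.5, -2.0, 0.7]

# The signal is a linear combination of just those few atoms.
x = D @ alpha

print(np.count_nonzero(alpha))  # 3 nonzeros out of 30 coefficients
```

The central questions of the course begin here: given only x and D, can the sparse vector alpha be recovered, and under what conditions is it unique?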
What you'll learn
- About the fundamental ideas of sparse representation theory – exploring properties such as uniqueness, equivalence, and stability.
- About sparse coding algorithms and the theoretical guarantees on their performance.
The material is covered in two courses:
1. Part 1: Sparse Representations in Signal and Image Processing: Fundamentals.
2. Part 2: Sparse Representations in Image Processing: From Theory to Practice.
While we recommend taking both courses, each can be taken independently of the other. Each course runs for five weeks and includes: (i) knowledge-check questions and discussions, (ii) a series of quizzes, and (iii) MATLAB programming projects. Each course is graded separately, using the average grades of the knowledge-check questions/discussions [K], quizzes [Q], and projects [P]: Final-Grade = 0.1K + 0.5Q + 0.4P.
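As a small worked example of the grading formula (with hypothetical scores, shown in Python for illustration):

```python
def final_grade(K, Q, P):
    """Weighted course grade from knowledge checks (K), quizzes (Q),
    and projects (P), each on a 0-100 scale."""
    return 0.1 * K + 0.5 * Q + 0.4 * P

# Hypothetical scores: K = 90, Q = 80, P = 85.
# 0.1*90 + 0.5*80 + 0.4*85 = 9 + 40 + 34 = 83.
print(round(final_grade(90, 80, 85), 2))  # 83.0
```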
Here is a more detailed list of the topics covered in the first course:
- Overview of Sparseland, including mathematical warm-up and intro to L1-minimization.
- Seeking sparse solutions: the L0 norm and P0 problem.
- Theoretical analysis of the Two-Ortho case of P0, including definitions of Spark and Mutual-Coherence.
- Theoretical analysis of the general case of the P0 problem.
- Greedy pursuit algorithms, including Thresholding (THR) and Orthogonal Matching Pursuit (OMP) and its variants.
- Relaxation pursuit algorithms including Basis Pursuit (BP).
- Theoretical guarantees of pursuit algorithms: THR, OMP and BP.
- Practical tools for solving approximate sparse-coding problems, including the exact solution in the unitary case, the Iterative Re-weighted Least-Squares (IRLS) algorithm, and the Alternating Direction Method of Multipliers (ADMM).
- Theoretical guarantees for approximate solutions, including the definition of the Restricted Isometry Property (RIP) and the stability of pursuit algorithms.
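As a taste of the algorithmic material, here is a minimal, illustrative sketch of Orthogonal Matching Pursuit in Python/NumPy (the course projects themselves use MATLAB; the dictionary and signal below are arbitrary synthetic examples, not course data):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit (sketch): greedily select the atom most
    correlated with the residual, then re-fit by least squares over the
    chosen support. Assumes D has unit-norm columns; k is the sparsity."""
    residual = x.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # Greedy step: atom with the largest correlation to the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Orthogonal projection: least-squares fit on the current support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coeffs
        residual = x - D @ alpha
    return alpha

# Tiny demo on a synthetic signal built from 2 atoms of a random dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
true_alpha = np.zeros(40)
true_alpha[[3, 17]] = [2.0, -1.0]
x = D @ true_alpha
est = omp(D, x, k=2)
print(np.flatnonzero(est))  # typically recovers the true support {3, 17}
```

The theoretical parts of the course establish when such greedy selection is provably guaranteed to succeed, for example via the Mutual-Coherence of D.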
Pursue a Verified Certificate to highlight the knowledge and skills you gain ($99.00).
Official and Verified
Receive an instructor-signed certificate with the institution's logo to verify your achievement and increase your job prospects
Add the certificate to your CV or resume, or post it directly on LinkedIn
Give yourself an additional incentive to complete the course
Support our Mission
EdX, a non-profit, relies on verified certificates to help fund free education for everyone globally
Who can take this course?
Unfortunately, learners from one or more of the following countries or regions will not be able to register for this course: Iran, Cuba and the Crimea region of Ukraine. While edX has sought licenses from the U.S. Office of Foreign Assets Control (OFAC) to offer our courses to learners in these countries and regions, the licenses we have received are not broad enough to allow us to offer this course in all locations. EdX truly regrets that U.S. sanctions prevent us from offering all of our courses to everyone, no matter where they live.