Fast Algorithms for Dense Structured Matrices


Prof. Shiv Chandrasekaran, (main speaker - SC), ECE department, University of California Santa Barbara, Santa Barbara, CA, United States 


Dr. Nithin Govindarajan, (co-lecturer), ESAT, KU Leuven, Leuven, Belgium 


Schedule and place

This 15-hour course will take place in 5 half-day sessions on March 2, 4, 7, 9, and 11, 2022 

at KU Leuven - Huis Bethlehem, Aula Wolfspoort, Schapenstraat 34, 3000 Leuven

Parking is possible at Redingenstraat or Schapenstraat with the code 1702# (valid daily).
Please note that this code is needed both to enter and to leave the parking.

Schedule: 3 hours/day.

March 2 from 9h00 to 12h30
March 4 from 9h00 to 12h30
March 7 from 13h30 to 17h00
March 9 from 13h30 to 17h00
March 11 from 9h00 to 12h30

Each session consists of two lectures of 1.5 hours each.

Teaching method: Face-to-face by default, but remote or dual-mode solutions may be considered if necessary.


This course provides an introduction to the foundations of fast algorithms for structured matrices. After a general overview of Strassen's method for dense matrices and algorithms for sparse matrices, the emphasis of the course will primarily be on algorithms for dense structured matrices. Two structures to be discussed in detail are: (i) the low displacement structure pioneered by Kailath et al.,  and (ii) the low-rank structures pioneered by Dewilde & van der Veen, Eidelman & Gohberg, Hackbusch, Rokhlin & Greengard, and others. We shall discuss the impact of these algorithms in various application domains such as fast solvers for PDEs, integral equations, problems in systems & control, and polynomial arithmetic. Both theory and algorithms will be emphasized in the course.

Prerequisites: Linear algebra, computational linear algebra

Session 1
Date: 09:00-12:30, March 2, 2022
1. Introduction and motivation for fast structured linear algebra (Shiv)
    - Basic notation and complexity of basic linear algebra operations
    - An enlightening example: divide & conquer and Strassen's algorithm
    - Implications and limitations of Strassen's algorithm
    - Iterative methods vs. direct methods
2. Fast structured linear algebra: two important classical examples (Shiv)
    - Example 1 - the DFT matrix and the Cooley-Tukey FFT algorithm
    - Example 2 - sparse matrices
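As a taste of the divide & conquer theme of this session, here is a minimal sketch of Strassen's algorithm (written in Python purely for illustration; the course assignments themselves recommend Julia but allow any language). It assumes square matrices whose size is a power of two and falls back to the classical product at the base case; the function names are ours.

```python
# Strassen's algorithm: multiply two n x n matrices (n a power of two) with
# 7 recursive half-size products instead of 8, giving O(n^log2(7)) work.

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def classical(A, B):
    # Ordinary O(n^3) matrix product, used at the recursion base case.
    m, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(A))]

def strassen(A, B):
    n = len(A)
    if n <= 2:  # crossover to the classical product
        return classical(A, B)
    h = n // 2
    # Split A and B into 2 x 2 blocks.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven products.
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    # Recombine into the four blocks of C = A B.
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

A practical implementation would switch to the classical product at a much larger crossover size; the limitations of this scheme (memory traffic, stability) are discussed in the lecture.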

Session 2
Date: 09:00-12:30, March 4, 2022
1. Toeplitz, Hankel, circulant, and DFT matrices (Nithin)
    - Fast multiplication with circulant matrices using the Cooley-Tukey idea
    - Fast matrix-vector multiply for Toeplitz and Hankel matrices
    - Extending the FFT algorithm to perform any-sized DFT
    - Applications: fast polynomial arithmetic
    - Limitations of FFT: inversion of Toeplitz or Hankel matrices 
2. Introduction to displacement rank theory (Shiv)
    - Definition of low-displacement-rank (LDR) matrices
    - Important examples of LDR matrices
    - Intermezzo: solving the Sylvester equation AX + XB = C
    - Link between the Sylvester equation and Gaussian elimination on LDR matrices
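As a preview of the first lecture of this session, the circulant matvec can be sketched in a few lines of Python (for illustration only; function names are ours). A circulant matrix is diagonalized by the DFT, so y = Cx reduces to two FFTs, a pointwise product, and an inverse FFT, i.e. O(n log n) work. The radix-2 Cooley-Tukey recursion below assumes the length is a power of two.

```python
import cmath

# Radix-2 Cooley-Tukey FFT: split into even/odd-indexed halves and combine.
def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Inverse FFT via the conjugation trick: ifft(X) = conj(fft(conj(X))) / n.
def ifft(X):
    n = len(X)
    conj = fft([z.conjugate() for z in X])
    return [z.conjugate() / n for z in conj]

# y = C x, where C is circulant with first column c, i.e. C[i][j] = c[(i-j) % n].
# Diagonalization by the DFT turns this into a pointwise product in O(n log n).
def circulant_matvec(c, x):
    fc, fx = fft(c), fft(x)
    return ifft([a * b for a, b in zip(fc, fx)])
```

Toeplitz and Hankel matvecs follow by embedding into a circulant of twice the size, as covered in the lecture.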

Session 3
Date: 13:30-17:00, March 7, 2022 
1. Displacement rank theory and the Schur algorithm (Shiv)
    - Description of the Schur algorithm
    - An important case study: the Cauchy matrix
    - (Fast) conversion of Toeplitz, Hankel, and Vandermonde matrices to Cauchy matrices
    - Stability concerns and implementation
    - When does displacement rank theory lead to fast algorithms? 
2. Sequentially semi-separable matrices - I  (Nithin)
    - A motivating example: banded matrices and their inverses
    - Off-diagonal low rank and its algebraic properties
    - More examples of matrices with low off-diagonal rank structure
    - A straightforward fast matrix-vector multiply exploiting low rank
    - Exploiting additional structure: the algebraic relationship between overlapping low-rank blocks
    - The sequentially semi-separable (SSS) representation
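The "straightforward fast matvec" of the second lecture can be previewed with a one-level example (a Python sketch with our own naming; the general SSS case is treated in the next session). Take a block 2 x 2 matrix whose diagonal blocks are diagonal and whose off-diagonal blocks have rank one; each rank-one block is applied as an inner product followed by a scaling, so the whole product costs O(n) rather than O(n^2).

```python
# Fast matvec for A = [ D1        u1 v2^T ]
#                     [ u2 v1^T   D2      ],
# where D1, D2 are diagonal and the off-diagonal blocks are rank one.
def lowrank_blocked_matvec(d1, d2, u1, v1, u2, v2, x):
    n1 = len(d1)
    x1, x2 = x[:n1], x[n1:]
    s2 = sum(v * xi for v, xi in zip(v2, x2))   # inner product v2^T x2
    s1 = sum(v * xi for v, xi in zip(v1, x1))   # inner product v1^T x1
    # Apply diagonal blocks and scale by the rank-one contributions.
    y1 = [d * xi + u * s2 for d, xi, u in zip(d1, x1, u1)]
    y2 = [d * xi + u * s1 for d, xi, u in zip(d2, x2, u2)]
    return y1 + y2
```

The SSS representation extends this idea by also exploiting the algebraic relationship between overlapping low-rank blocks across many levels of the partition.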

Session 4
Date: 13:30-17:00, March 9, 2022 
1. Sequentially semi-separable matrices - II (Shiv)
    - Construction of SSS matrices
    - A fast matrix-vector multiply for SSS matrices
    - Algebra on SSS matrices: addition and multiplication
    - Links to systems theory
2. Sequentially semi-separable matrices - III (Shiv)
    - Fast LU factorization of SSS matrices
    - A fast SSS solver through sparse embedding
    - A fast Toeplitz solver using SSS matrices: the Cauchy connection
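The fast SSS matvec of the first lecture can be illustrated in the simplest possible setting: scalar blocks and scalar (rank-one) generators. This Python sketch uses our own notation, with entries A[i][j] = d[i] on the diagonal, u[i] w[i+1]...w[j-1] v[j] above it, and p[i] r[i-1]...r[j+1] q[j] below it; one backward sweep and one forward sweep give y = Ax in O(n), which is the hallmark of the SSS representation and its link to (time-varying) systems theory.

```python
# O(n) matvec for a matrix in (scalar-generator) SSS form.
def sss_matvec(d, u, w, v, p, r, q, x):
    n = len(x)
    y = [d[i] * x[i] for i in range(n)]
    # Backward sweep: h accumulates the strictly upper-triangular part,
    # h_j = v_j x_j + w_j h_{j+1}, and y_i += u_i h_{i+1}.
    h = 0.0
    for j in range(n - 1, 0, -1):
        h = v[j] * x[j] + w[j] * h
        y[j - 1] += u[j - 1] * h
    # Forward sweep: g accumulates the strictly lower-triangular part,
    # g_j = q_j x_j + r_j g_{j-1}, and y_i += p_i g_{i-1}.
    g = 0.0
    for j in range(n - 1):
        g = q[j] * x[j] + r[j] * g
        y[j + 1] += p[j + 1] * g
    return y
```

In the general SSS representation the generators are small matrices rather than scalars, but the two-sweep structure of the matvec is identical.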

Session 5
Date: 09:00-12:30, March 11, 2022
1. Hierarchically semi-separable matrices - I (Shiv)
    - A motivating example: a one-dimensional electromagnetic problem
    - Well-separability and the Fast Multipole Method (FMM) matrix partitioning
    - Advantages of FMM-like structures in higher-dimensional settings
    - Inverse problems: the need for algebraic closure under inversion
    - Hierarchically semi-separable (HSS) matrices and their algebraic properties
2. Hierarchically semi-separable matrices - II (Shiv)
    - A fast matrix-vector multiply for HSS matrices
    - A fast HSS solver through sparse embedding
    - Summary: key take-aways, open questions, and unaddressed topics
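The well-separability idea behind the FMM and HSS partitionings can be checked numerically: the interaction matrix of the kernel 1/(x - y) between two well-separated clusters of points has rapidly decaying singular values, i.e. low numerical rank. The pure-Python sketch below (our own construction, not course code) estimates the rank with column-pivoted modified Gram-Schmidt as a crude stand-in for the SVD.

```python
import math

# Estimate the numerical rank of A by column-pivoted modified Gram-Schmidt:
# repeatedly peel off the largest remaining column direction until the
# remaining column norms fall below tol times the largest original norm.
def numerical_rank(A, tol):
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    norm0 = max(math.sqrt(sum(c * c for c in col)) for col in cols)
    rank = 0
    for _ in range(min(m, n)):
        piv = max(cols, key=lambda col: sum(c * c for c in col))
        nrm = math.sqrt(sum(c * c for c in piv))
        if nrm <= tol * norm0:
            break
        qcol = [c / nrm for c in piv]
        rank += 1
        # Deflate every column by its component along qcol.
        for col in cols:
            proj = sum(a * b for a, b in zip(qcol, col))
            for i in range(m):
                col[i] -= proj * qcol[i]
    return rank

# Two well-separated clusters: x in [0, 1], y in [2, 3].
n = 50
xs = [i / (n - 1) for i in range(n)]
ys = [2.0 + i / (n - 1) for i in range(n)]
A = [[1.0 / (x - y) for y in ys] for x in xs]
```

Although A is 50 x 50 and dense, its numerical rank at modest tolerances is far smaller; this compressibility of well-separated interactions is exactly what the FMM and HSS structures exploit.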


Course material

To be determined


Students will be graded based on their performance on the course assignments. There will be 5 take-home assignments in total, handed out at the end of every session. These assignments contain a collection of hands-on coding exercises along with some theoretical questions.

It is recommended that the coding assignments be done in the Julia programming language; however, students are free to use the programming language of their choice.