Abstract: The conventional sparse Bayesian learning (SBL) algorithm suffers from high computational complexity. Recently, SBL has been implemented with low complexity based on the approximate message passing (AMP) algorithm. However, this approach is vulnerable to 'difficult' measurement matrices, for which AMP can easily diverge. Damped AMP has been used to alleviate the problem, at the cost of significantly slowing the convergence rate. In this talk, I will introduce a new low-complexity SBL algorithm designed based on AMP with unitary transformation (UTAMP). I will show that, compared with state-of-the-art AMP-based SBL algorithms, the proposed UTAMP-SBL is much more robust and converges much faster, leading to remarkably better performance. In many cases, its performance closely approaches the support-Oracle MMSE bound.
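As a rough illustration of the unitary-transformation idea behind UTAMP, the sketch below (my own minimal NumPy construction, not the speaker's implementation) applies the SVD A = U diag(s) V^H to a linear model y = A x + w and left-multiplies the measurements by U^H. This produces an equivalent model r = Phi x + n with Phi = diag(s) V^H, on which message passing is then run; all variable names here are illustrative.

```python
import numpy as np

# Sketch of the unitary-transform preprocessing assumed in UTAMP-style methods:
# given y = A x + w, compute the SVD A = U diag(s) V^H and form the
# equivalent model r = U^H y = Phi x + U^H w with Phi = diag(s) V^H.
# Because U is unitary, the noise statistics are unchanged, and message
# passing on (r, Phi) is reported to be far less sensitive to a
# 'difficult' measurement matrix A than AMP on (y, A).

rng = np.random.default_rng(0)
M, N = 50, 100                                # measurement/signal dimensions (illustrative)
A = rng.standard_normal((M, N))               # placeholder measurement matrix

x = np.zeros(N)                               # sparse signal with 10 nonzeros
x[rng.choice(N, 10, replace=False)] = rng.standard_normal(10)
y = A @ x + 0.01 * rng.standard_normal(M)     # noisy measurements

U, s, Vh = np.linalg.svd(A, full_matrices=False)
r = U.conj().T @ y                            # transformed measurements U^H y
Phi = s[:, None] * Vh                         # equivalent matrix diag(s) V^H

# The transformed model is exactly equivalent to the original one:
# U @ Phi reconstructs A, so r = Phi x + U^H w holds term by term.
assert np.allclose(U @ Phi, A)
```

The transform is computed once up front (one SVD), so the per-iteration cost of the subsequent message passing is unchanged; the abstract's robustness claim concerns this transformed model.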