# Householder Bidiagonalization in D

Author: Dr Chibisi Chima-Okereke · Created: December 18, 2020 01:18:07 GMT · Published: December 18, 2020 02:56:03 GMT

## Introduction

Bidiagonalization is used in various applications in linear algebra, for example as the first stage in computing the singular value decomposition. It transforms a matrix $$A \in \mathbb{R}^{m \times n}, \quad m \geq n$$ into a bidiagonal matrix, that is, a matrix with non-zero values only on the main diagonal and either the superdiagonal or the subdiagonal, and zeros elsewhere. A previous article implemented the QR decomposition using Householder transformations to introduce zeros in particular parts of the matrix, yielding the upper triangular matrix $$R$$ along with the orthogonal transformation matrix $$Q$$. Householder bidiagonalization uses Householder transformations to transform the matrix $$A$$ into a bidiagonal matrix $$B \in \mathbb{R}^{m \times n}$$ whose bottom $$m - n$$ rows are zero. This is done by generating the appropriate orthogonal transformations such that $$A = U B V^{T}$$, where $$U$$ and $$V$$ are orthogonal matrices formed by accumulating the Householder transformations. This article is a short note presenting an implementation of bidiagonalization in the D programming language. Householder bidiagonalization is also known as Golub-Kahan bidiagonalization, and the two names are used interchangeably in the text. A bidiagonal matrix has one of the forms:

$$B = \begin{pmatrix} a_{1} & b_{1} & \quad & \quad & \quad\\ \quad & a_{2} & b_{2} & \quad & \quad\\ \quad & \quad & \ddots & \ddots & \quad\\ \quad & \quad & \quad & \ddots & b_{n - 1}\\ \quad & \quad & \quad & \quad & a_{n}\\ \end{pmatrix}$$

or:

$$B = \begin{pmatrix} a_{1} & \quad & \quad & \quad & \quad\\ b_{1} & a_{2} & \quad & \quad & \quad\\ \quad & b_{2} & \ddots & \quad & \quad\\ \quad & \quad & \ddots & \ddots & \quad\\ \quad & \quad & \quad & b_{n - 1} & a_{n}\\ \end{pmatrix}$$
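As a quick illustration (a NumPy sketch, not part of the article's D code), the upper-bidiagonal form can be assembled directly from its two diagonals:

```python
import numpy as np

# Build an upper-bidiagonal matrix from its main diagonal a and superdiagonal b
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0])
B = np.diag(a) + np.diag(b, k=1)  # k=1 places b on the superdiagonal
print(B)
```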

The Householder bidiagonalization algorithm is described in standard numerical linear algebra references.

## The Golub-Kahan Bidiagonalization algorithm

The Golub-Kahan (or Householder) Bidiagonalization algorithm is given below:

$$\text{for }k = 1:n$$
$$\quad \mathbf{x} = A[k:m, k]$$
$$\quad \mathbf{u}_{k} = \mathbf{x} + \text{sign}(x_{1})\lVert\mathbf{x}\rVert_{2}\mathbf{e}_{1}$$
$$\quad \mathbf{u}_{k} = \mathbf{u}_{k}/\lVert\mathbf{u}_{k}\rVert_{2}$$
$$\quad A[k:m, k:n] = A[k:m, k:n] - 2\mathbf{u}_{k}(\mathbf{u}_{k}^{T}A[k:m, k:n])$$
$$\quad \text{if}(k \leq n - 2)$$
$$\qquad \mathbf{x} = A[k, (k + 1):n]$$
$$\qquad \mathbf{v}_{k} = \mathbf{x} + \text{sign}(x_{1})\lVert\mathbf{x}\rVert_{2}\mathbf{e}_{1}$$
$$\qquad \mathbf{v}_{k} = \mathbf{v}_{k}/\lVert\mathbf{v}_{k}\rVert_{2}$$
$$\qquad A[k:m, (k + 1):n] = A[k:m, (k + 1):n] - 2(A[k:m, (k + 1):n]\mathbf{v}_{k})\mathbf{v}_{k}^{T}$$
$$\quad end$$
$$end$$

where $$\mathbf{u}_{k}$$ and $$\mathbf{v}_{k}$$ are Householder vectors.
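Before turning to the D implementation, the steps above can be cross-checked with a short sketch in Python/NumPy. This is an illustration only; the helper names `house` and `bidiag` are introduced here and are not the article's code:

```python
import numpy as np

def house(x):
    """Return the normalized Householder vector u_k for the vector x."""
    u = x.astype(float).copy()
    s = 1.0 if x[0] >= 0 else -1.0        # sign(x_1), taking sign(0) as +1
    u[0] += s * np.linalg.norm(x)
    return u / np.linalg.norm(u)

def bidiag(A):
    """Golub-Kahan bidiagonalization: returns B, U, V with A = U @ B @ V.T."""
    B = A.astype(float).copy()
    m, n = B.shape
    U, V = np.eye(m), np.eye(n)
    for k in range(n):
        # Left reflection: zero column k below the diagonal
        u = house(B[k:, k])
        B[k:, k:] -= 2.0 * np.outer(u, u @ B[k:, k:])
        U[:, k:] -= 2.0 * np.outer(U[:, k:] @ u, u)   # accumulate U <- U * H_k
        if k < n - 2:
            # Right reflection: zero row k beyond the superdiagonal
            v = house(B[k, k + 1:])
            B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ v, v)
            V[:, k + 1:] -= 2.0 * np.outer(V[:, k + 1:] @ v, v)  # V <- V * G_k
    return B, U, V

rng = np.random.default_rng(42)
A = rng.random((6, 4))
B, U, V = bidiag(A)
print(np.allclose(U @ B @ V.T, A))  # reconstruction holds
```

Each reflector is applied directly as a rank-one update rather than by forming the full matrix $$I - 2\mathbf{u}\mathbf{u}^{T}$$, which keeps the cost of each step proportional to the submatrix being updated.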

### Implementation of Golub-Kahan Bidiagonalization algorithm

The implementation of the Golub-Kahan Bidiagonalization algorithm in D is given below with comments:

```d
auto bidiag(T)(Matrix!(T) A)
{
    auto B = A.dup;
    immutable auto m = A.nrow;
    immutable auto n = A.ncol;
    auto U = eye!(T)(m);
    auto V = eye!(T)(n);
    for(long k = 0; k < B.ncol; ++k)
    {
        //Create the Householder vector
        auto x = B[k..$, k].array;
        auto u = x.dup;
        u += sign(x)*norm(x);
        T _norm = norm(u);
        imap!((a) => a/_norm)(u);
        //Accumulate the Householder transformation vectors in U
        U = gemm(U, F(u).expand(m));
        auto mtmp = B[k..$, k..$];
        //Apply the Householder transformation to the matrix
        B[k..$, k..$] = mtmp - 2*outer(u, gemv!(Column)(mtmp, u));
        if(k < (n - 2))
        {
            //Create the Householder vector
            x = B[k, (k + 1)..$].array;
            auto v = x.dup;
            v += sign(x)*norm(x);
            _norm = norm(v);
            imap!((a) => a/_norm)(v);
            //Accumulate the Householder transformation vectors in V
            V = gemm(V, F(v).expand(n));
            mtmp = B[k..$, (k + 1)..$];
            //Apply the Householder transformation to the matrix
            B[k..$, (k + 1)..$] = mtmp - 2*outer(gemv!(Row)(mtmp, v), v);
        }
    }
    //Cleanup: zero out entries that are numerically negligible
    imodify!((a) => abs(a) > max(1E-14, typeof(a).epsilon) ? a : 0, double)(B);
    return [B, U, V];
}
```


The expand function is given by:

```d
auto expand(T)(Matrix!(T) x, size_t n)
if(isFloatingPoint!(T))
{
    if(x.ncol != n)
    {
        size_t n0 = n - x.ncol;
        auto y = eye!(T)(n);
        y[n0..$, n0..$] = x;
        return y;
    }
    return x;
}
```
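The role of `expand` is to embed the smaller Householder block into the lower-right corner of a full-size identity matrix, so that it can be accumulated into $$U$$ or $$V$$ by an ordinary matrix product. The same idea in NumPy (an illustrative re-implementation, not the article's D code):

```python
import numpy as np

def expand(x, n):
    """Embed the square matrix x into the lower-right corner of an n x n identity."""
    if x.shape[1] != n:
        n0 = n - x.shape[1]
        y = np.eye(n)
        y[n0:, n0:] = x
        return y
    return x

# Example: a 2 x 2 block embedded in a 4 x 4 identity
x = np.array([[0.0, 1.0], [1.0, 0.0]])
y = expand(x, 4)
print(y)
```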


### Numerical example

The matrix $$A$$ is given below (the data array is supplied in column-major order):

```d
auto A = Matrix!(double)([0.8594598509, 0.8886035203, 0.8149294811, 0.7431045200, 0.8032585254,
                          0.0587533356, 0.7245921139, 0.5380305406, 0.7342256338, 0.6982547215,
                          0.7176400044, 0.0539911194, 0.3670289037, 0.9701228316, 0.8404100032,
                          0.4112932913, 0.3075223914, 0.5798244230, 0.0015286701, 0.7890766996,
                          0.9781337455, 0.2921431712, 0.0432923459, 0.9428416709, 0.9646959945,
                          0.0354323143, 0.4898468039, 0.4513681016, 0.2107982126, 0.4445287671,
                          0.8115565467, 0.7058405790, 0.5527189195, 0.5410537042, 0.9117912347,
                          0.1149175267, 0.8406228190, 0.6040554044, 0.4260203703, 0.2376075180,
                          0.2164094832, 0.1800869710, 0.7479251262, 0.0009715103, 0.8810979640,
                          0.8647838791, 0.5856765260, 0.0127644690, 0.5744975219, 0.1985024847], 10, 5);
writeln("A: ", A);
```

```
A: Matrix(10 x 5)
0.8595    0.7176    0.9781    0.8116    0.2164
0.8886    0.05399   0.2921    0.7058    0.1801
0.8149    0.367     0.04329   0.5527    0.7479
0.7431    0.9701    0.9428    0.5411    0.0009715
0.8033    0.8404    0.9647    0.9118    0.8811
0.05875   0.4113    0.03543   0.1149    0.8648
0.7246    0.3075    0.4898    0.8406    0.5857
0.538     0.5798    0.4514    0.6041    0.01276
0.7342    0.001529  0.2108    0.426     0.5745
0.6983    0.7891    0.4445    0.2376    0.1985
```


The bidiagonalization of the matrix is given as:

```d
auto bd = bidiag(A);
"bd:\n".writeln(bd);
```


output giving B, U, and V:

```
bd:
[Matrix(10 x 5)
-2.288     3.141      0          0          0
0          -1.224     -0.5055    0          0
0          0          0.7179     0.5443     0
0          0          0          0.9904     -0.5413
0          0          0          0          -0.3952
0          0          0          0          0
0          0          0          0          0
0          0          0          0          0
0          0          0          0          0
0          0          0          0          0
, Matrix(10 x 10)
-0.3757    0.1943     -0.02317   -0.2816    -0.2814    -0.2818    -0.5632    -0.5143    -0.01057   -0.03002
-0.3884    -0.4504    0.2443     -0.1733    0.01233    0.5822     -0.1541    0.0419     -0.4206    0.1231
-0.3562    -0.2488    0.0438     0.3774     0.4817     -0.321     0.03563    -0.1346    -0.1492    -0.5386
-0.3248    0.2103     -0.4734    -0.255     -0.1015    0.4115     0.3701     -0.07551   0.1976     -0.447
-0.3511    0.5542     0.2029     0.1251     -0.2274    -0.1419    0.06895    0.5262     -0.3952    -0.05597
-0.02568   0.434      0.02681    0.5851     0.2242     0.4944     -0.2822    -0.2341    0.1439     0.1344
-0.3167    0.1076     0.4652     -0.05997   0.07417    -0.09397   0.5894     -0.3756    0.1953     0.3567
-0.2352    0.1154     0.05038    -0.392     0.5783     -0.03521   -0.2677    0.4074     0.4297     0.1313
-0.3209    -0.3483    0.07769    0.366      -0.4764    -0.02554   -0.0942    0.2653     0.5734     -0.006341
-0.3052    -0.1078    -0.6685    0.1845     0.09752    -0.1842    0.09207    0.02833    -0.1823    0.573
, Matrix(5 x 5)
1          0          0          0          0
0          -0.4831    -0.5842    -0.3689    -0.5378
0          -0.5116    -0.3033    -0.02116   0.8036
0          -0.6025    0.7516     -0.2464    -0.1064
0          -0.3767    -0.04102   0.896      -0.2317
]
```


We can confirm the transformation by reconstituting $$A$$ as $$U B V^{T}$$, where `bd[0]` is $$B$$, `bd[1]` is $$U$$, and `bd[2]` is $$V$$:

```d
"UBV:\n".writeln(gemm!(CblasNoTrans, CblasTrans)(gemm(bd[1], bd[0]), bd[2]));
```

```
UBV:
Matrix(10 x 5)
0.8595    0.7176    0.9781    0.8116    0.2164
0.8886    0.05399   0.2921    0.7058    0.1801
0.8149    0.367     0.04329   0.5527    0.7479
0.7431    0.9701    0.9428    0.5411    0.0009715
0.8033    0.8404    0.9647    0.9118    0.8811
0.05875   0.4113    0.03543   0.1149    0.8648
0.7246    0.3075    0.4898    0.8406    0.5857
0.538     0.5798    0.4514    0.6041    0.01276
0.7342    0.001529  0.2108    0.426     0.5745
0.6983    0.7891    0.4445    0.2376    0.1985
```
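The reconstruction recovers $$A$$ exactly (up to rounding) because each Householder reflector $$H = I - 2\mathbf{u}\mathbf{u}^{T}$$ with $$\lVert\mathbf{u}\rVert_{2} = 1$$ is both symmetric and orthogonal, so the accumulated products $$U$$ and $$V$$ undo the transformations applied to $$A$$. A small self-contained NumPy check of these two properties (illustrative, independent of the D code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)

# Householder vector for x, normalized to unit length
u = x.copy()
u[0] += np.sign(x[0]) * np.linalg.norm(x)
u /= np.linalg.norm(u)
H = np.eye(5) - 2.0 * np.outer(u, u)

# H is orthogonal, and it maps x onto a multiple of e1
print(np.allclose(H @ H.T, np.eye(5)))  # True
print(np.allclose((H @ x)[1:], 0.0))    # True
```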


That’s it! Thank you for reading.