Block Wiedemann algorithm explained

The block Wiedemann algorithm for computing kernel vectors of a matrix over a finite field is a generalization by Don Coppersmith of an algorithm due to Doug Wiedemann.

Wiedemann's algorithm

Let M be an n × n square matrix over some finite field F, let x_{base} be a random vector of length n, and let x = Mx_{base}. Consider the sequence of vectors

S = \left[x, Mx, M^2x, \ldots\right]

obtained by repeatedly multiplying the vector by the matrix M; let y be any other vector of length n, and consider the sequence of finite-field elements

S_y = \left[y \cdot x, y \cdot Mx, y \cdot M^2x, \ldots\right]
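
As a concrete illustration, the two sequences can be written out over a small prime field. The matrix M, the vectors x_base and y, and the field size below are arbitrary choices for the sketch, not taken from the text.

```python
p = 101  # a small prime, so F_p arithmetic is plain integer arithmetic mod p

def mat_vec(M, v):
    """Multiply an n x n matrix by a length-n vector over F_p."""
    return [sum(r * c for r, c in zip(row, v)) % p for row in M]

def dot(u, v):
    """Scalar product of two vectors over F_p."""
    return sum(a * b for a, b in zip(u, v)) % p

M = [[1, 2, 0],          # illustrative matrix over F_101
     [0, 1, 3],
     [4, 0, 1]]
x_base = [5, 7, 9]       # stand-in for the random starting vector
x = mat_vec(M, x_base)   # x = M x_base, as in the text
y = [1, 0, 0]            # any other vector

# S = [x, Mx, M^2 x, ...] and its projection S_y = [y.x, y.Mx, y.M^2 x, ...]
S = [x]
for _ in range(6):
    S.append(mat_vec(M, S[-1]))
S_y = [dot(y, v) for v in S]
print(S_y)
```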

We know that the matrix M has a minimal polynomial; by the Cayley–Hamilton theorem we know that this polynomial is of degree (which we will call n_0) no more than n. Say

\sum_{r=0}^{n_0} p_r M^r = 0.

Then

\sum_{r=0}^{n_0} y \cdot \left(p_r \left(M^r x\right)\right) = 0;

so the minimal polynomial of the matrix annihilates the sequence S and hence S_y.
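
The annihilation property can be checked by hand on a tiny example. The companion matrix below (an illustrative choice, not from the text) has minimal polynomial t^2 - t - 1 over F_7, so the recurrence s_{r+2} = s_{r+1} + s_r must kill S_y for any choice of x and y:

```python
p = 7
M = [[0, 1],   # companion matrix of t^2 - t - 1 over F_7,
     [1, 1]]   # so M^2 = M + I by Cayley-Hamilton

def mat_vec(M, v):
    return [sum(r * c for r, c in zip(row, v)) % p for row in M]

x = [3, 5]     # arbitrary starting vector (plays the role of M x_base)
y = [2, 6]     # arbitrary projection vector

S = [x]
for _ in range(10):
    S.append(mat_vec(M, S[-1]))
S_y = [sum(a * b for a, b in zip(y, v)) % p for v in S]

# The minimal polynomial annihilates S_y: check t^2 - t - 1 term by term.
for r in range(len(S_y) - 2):
    assert (S_y[r + 2] - S_y[r + 1] - S_y[r]) % p == 0
```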

But the Berlekamp–Massey algorithm allows us to calculate relatively efficiently some sequence q_0 \ldots q_L with

\sum_{i=0}^{L} q_i S_y[i+r] = 0 \quad \forall r.

Our hope is that this sequence, which by construction annihilates S_y, actually annihilates S; so we have

\sum_{i=0}^{L} q_i M^i x = 0.

We then take advantage of the initial definition of x to say

M \sum_{i=0}^{L} q_i M^i x_{base} = 0

and so

\sum_{i=0}^{L} q_i M^i x_{base}

is a hopefully non-zero kernel vector of M.
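
Putting the pieces together, a minimal sketch of the scalar algorithm might look as follows. The Berlekamp–Massey routine is a standard one over F_p, and the singular matrix and starting vectors are illustrative assumptions, not part of the original text.

```python
p = 7  # small prime field F_7

def mat_vec(M, v):
    return [sum(r * c for r, c in zip(row, v)) % p for row in M]

def berlekamp_massey(s):
    """Shortest c with s[i] = c[0]*s[i-1] + ... + c[L-1]*s[i-L] (mod p)."""
    ls, cur, lf, ld = [], [], 0, 0
    for i in range(len(s)):
        t = sum(cur[j] * s[i - 1 - j] for j in range(len(cur))) % p
        if (s[i] - t) % p == 0:
            continue
        if not cur:
            cur, lf, ld = [0] * (i + 1), i, (s[i] - t) % p
            continue
        k = (s[i] - t) * pow(ld, p - 2, p) % p
        c = [0] * (i - lf - 1) + [k] + [(-k * x) % p for x in ls]
        c += [0] * (len(cur) - len(c))
        for j in range(len(cur)):
            c[j] = (c[j] + cur[j]) % p
        if i - lf + len(ls) >= len(cur):
            ls, lf, ld = cur, i, (s[i] - t) % p
        cur = c
    return cur

M = [[2, 0, 1],          # singular over F_7: row 3 = row 1 + row 2
     [0, 3, 1],
     [2, 3, 2]]
x_base = [1, 1, 1]
y = [1, 0, 0]

x = mat_vec(M, x_base)
# S_y = [y.x, y.Mx, y.M^2 x, ...]; 2n + 1 terms are plenty for n = 3
S, v = [], x
for _ in range(2 * len(M) + 1):
    S.append(v)
    v = mat_vec(M, v)
S_y = [sum(a * b for a, b in zip(y, w)) % p for w in S]

c = berlekamp_massey(S_y)
# Rewrite the recurrence as q_0*s[r] + ... + q_L*s[r+L] = 0 with q_L = 1.
q = [(-ci) % p for ci in reversed(c)] + [1]

# Kernel vector: w = sum_i q_i M^i x_base  (hopefully non-zero)
w, v = [0] * len(M), x_base
for qi in q:
    w = [(wi + qi * vi) % p for wi, vi in zip(w, v)]
    v = mat_vec(M, v)

print("kernel vector:", w)   # satisfies M w = 0 over F_7
```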

The block Wiedemann (or Coppersmith-Wiedemann) algorithm

The natural implementation of sparse matrix arithmetic on a computer makes it easy to compute the sequence S in parallel for a number of vectors equal to the width of a machine word – indeed, it will normally take no longer to compute for that many vectors than for one. If you have several processors, you can compute the sequence S for a different set of random vectors in parallel on each of them.
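
Over GF(2) the word-parallel trick is easy to sketch: pack one bit from each of 64 random vectors into a machine word, so that a single sparse matrix–vector product (a few XORs per row) advances all 64 sequences at once. The sizes and the random sparse matrix below are assumptions for illustration.

```python
import random

n, W = 8, 64                      # matrix size and machine-word width
random.seed(1)

# Sparse matrix over GF(2): row i is the set of columns holding a 1.
rows = [set(random.sample(range(n), 3)) for _ in range(n)]

# X[j] holds bit j of all 64 packed vectors: bit position k of each
# word corresponds to random vector number k.
X = [random.getrandbits(W) for _ in range(n)]

def mat_vec_packed(rows, X):
    # One XOR per non-zero matrix entry updates the whole 64-vector batch.
    out = []
    for cols in rows:
        acc = 0
        for j in cols:
            acc ^= X[j]
        out.append(acc)
    return out

# The sequence S for all 64 vectors, computed in a single pass.
S = [X]
for _ in range(2 * n):
    S.append(mat_vec_packed(rows, S[-1]))
```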

It turns out, by a generalization of the Berlekamp–Massey algorithm to provide a sequence of small matrices, that you can take the sequence produced for a large number of vectors and generate a kernel vector of the original large matrix. You need to compute

y_i M^t x_j \quad \text{for} \quad i = 0 \ldots i_{max}, \; j = 0 \ldots j_{max}, \; t = 0 \ldots t_{max}

where i_{max}, j_{max}, t_{max} need to satisfy

t_{max} > \frac{d}{i_{max}} + \frac{d}{j_{max}} + O(1)

and the y_i are a series of vectors of length n; but in practice you can take the y_i as a sequence of unit vectors and simply write out the first i_{max} entries in your vectors at each time t.
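
A sketch of the data this step consumes, under the unit-vector choice for y_i described above; all sizes and the random matrix are illustrative assumptions.

```python
import random

p = 101                                   # small prime field, illustrative
n, i_max, j_max, t_max = 6, 2, 2, 14      # illustrative block sizes
random.seed(0)

M = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
xs = [[random.randrange(p) for _ in range(n)] for _ in range(j_max)]

def mat_vec(M, v):
    return [sum(r * c for r, c in zip(row, v)) % p for row in M]

# With y_i the i-th unit vector, y_i . M^t x_j is just entry i of M^t x_j,
# so we only record the first i_max entries of each vector at each step t.
a = []                                    # a[t][j] = first i_max entries of M^t x_j
vs = [v[:] for v in xs]
for t in range(t_max):
    a.append([v[:i_max] for v in vs])
    vs = [mat_vec(M, v) for v in vs]
```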

Invariant Factor Calculation

The block Wiedemann algorithm can be used to calculate the leading invariant factors of the matrix, i.e., the largest blocks of the Frobenius normal form. Given M \in F_q^{n \times n} and U, V \in F_q^{b \times n}, where F_q is a finite field of size q, the probability p that the leading k < b invariant factors of M are preserved in

\sum_{i=0}^{2n-1} U M^i V^T x^i

is

p \geq \begin{cases} 1/64, & \text{if } b = k+1 \text{ and } q = 2 \\ \left(1 - \frac{3}{2^{b-k}}\right)^2 \geq 1/16, & \text{if } b \geq k+2 \text{ and } q = 2 \\ \left(1 - \frac{2}{q^{b-k}}\right)^2 \geq 1/9, & \text{if } b \geq k+1 \text{ and } q > 2 \end{cases}

.[1]
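
As a quick numeric check of these bounds (illustrative only): the q = 2, b ≥ k + 2 case attains its worst value 1/16 at b - k = 2, and the q > 2 case attains 1/9 at q = 3, b - k = 1.

```python
def bound_q2(b_minus_k):
    # the (1 - 3/2^(b-k))^2 bound for b >= k+2, q = 2
    return (1 - 3 / 2 ** b_minus_k) ** 2

def bound_qgt2(q, b_minus_k):
    # the (1 - 2/q^(b-k))^2 bound for b >= k+1, q > 2
    return (1 - 2 / q ** b_minus_k) ** 2

print(bound_q2(2))        # worst case for q = 2: 1/16
print(bound_qgt2(3, 1))   # worst case for q > 2: 1/9
```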

References

Notes and References

  1. Gavin Harrison, Jeremy Johnson, B. David Saunders (2022-01-01). "Probabilistic analysis of block Wiedemann for leading invariant factors". Journal of Symbolic Computation 108: 98–116. doi:10.1016/j.jsc.2021.06.005. ISSN 0747-7171. arXiv:1803.03864.