Abstract
In practical implementations of estimation algorithms,
designers usually have information about the range in
which the unknown variables must lie, either due to physical
constraints (such as power always being nonnegative) or due to
hardware constraints (such as in implementations using fixed-point
arithmetic). In this paper we propose a fast version (i.e., one whose
complexity grows linearly with the filter length) of the
dichotomous coordinate descent recursive least-squares adaptive
filter that can incorporate constraints on the variables. The
constraints can take the form of lower and upper bounds on each
entry of the filter, or of norm bounds. We compare the proposed
algorithm with the recently proposed normalized non-negative
least mean squares (LMS) and projected-gradient normalized
LMS filters, which also include inequality constraints on the
variables.
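The kind of constraint handling the abstract describes can be illustrated with a plain cyclic coordinate descent on the least-squares cost, where each coordinate update is clipped to its box. This is only a hedged sketch of the general idea, not the paper's dichotomous coordinate descent algorithm (which restricts updates to power-of-two step sizes for hardware efficiency); the function name and interface below are illustrative, and `R` is assumed to be a positive-definite autocorrelation matrix.

```python
import numpy as np

def box_constrained_cd(R, p, lo, hi, n_iter=100):
    """Minimize 0.5 * w^T R w - p^T w subject to lo <= w <= hi,
    by cyclic coordinate descent with per-coordinate clipping.

    R  : (N, N) positive-definite autocorrelation matrix
    p  : (N,)   cross-correlation vector
    lo, hi : (N,) element-wise lower/upper bounds on the filter taps
    """
    N = len(p)
    w = np.clip(np.zeros(N), lo, hi)   # feasible starting point
    r = p - R @ w                      # gradient residual, r = p - R w
    for _ in range(n_iter):
        for n in range(N):
            # Unconstrained minimizer along coordinate n, then
            # project back onto [lo[n], hi[n]] by clipping.
            w_new = min(max(w[n] + r[n] / R[n, n], lo[n]), hi[n])
            delta = w_new - w[n]
            if delta != 0.0:
                w[n] = w_new
                r -= delta * R[:, n]   # keep residual consistent
    return w
```

Because each one-dimensional subproblem over an interval is solved exactly by clipping the unconstrained minimizer, the iterate stays feasible at every step, which is what makes coordinate descent a natural fit for box constraints.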
| Original language | English |
|---|---|
| Pages (from-to) | 752-756 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 23 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 6 Apr 2016 |