Variational expectation-maximization training for Gaussian networks

N Nasios, A G Bors

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper introduces the variational expectation-maximization (VEM) algorithm for training Gaussian networks. Hyperparameters model the distributions of the parameters characterizing Gaussian mixture densities. The proposed algorithm employs a hierarchical learning strategy for estimating a set of hyperparameters and the number of Gaussian mixture components. A dual EM algorithm serves as the initialization stage of the VEM-based learning: in the first stage the EM algorithm is applied to the given data set, while in the second stage EM is applied to the distributions of parameters resulting from several runs of the first-stage EM. Appropriate maximum log-likelihood estimators are considered for all the parameter distributions involved.
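The two-stage initialization described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): a standard EM for a 1-D Gaussian mixture is run several times on the data (first stage), and the mixture means collected across runs are then summarized by maximum-likelihood hyperparameter estimates (second stage), which would seed the subsequent variational updates. All function names and the synthetic data set are assumptions for this sketch.

```python
import numpy as np

def em_gmm_1d(x, k, rng, n_iter=50):
    """Standard EM for a 1-D Gaussian mixture with k components (first-stage EM)."""
    n = len(x)
    mu = rng.choice(x, k, replace=False)      # random data points as initial means
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities via log-domain Gaussian densities
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
    return mu, var, pi

rng = np.random.default_rng(0)
# synthetic data from two well-separated Gaussians (assumed test set)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# first stage: several EM runs on the data collect mixture parameters
runs = [em_gmm_1d(x, 2, rng) for _ in range(10)]
all_means = np.sort(np.concatenate([m for m, _, _ in runs]))

# second stage: ML estimates of hyperparameters -- here simply the mean
# and variance of each component's location across runs
half = len(all_means) // 2
hyper_mu = np.array([all_means[:half].mean(), all_means[half:].mean()])
hyper_var = np.array([all_means[:half].var(), all_means[half:].var()])
print(hyper_mu)
```

With well-separated clusters, the second-stage estimates concentrate near the true component means, giving an informed starting point for the VEM hyperparameters rather than a single arbitrary EM solution.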

Original language: English
Title of host publication: 2003 IEEE XIII Workshop on Neural Networks for Signal Processing - NNSP'03
Place of publication: New York
Publisher: IEEE
Pages: 339-348
Number of pages: 10
ISBN (Print): 0-7803-8177-7
Publication status: Published - 2003
Event: 13th IEEE Workshop on Neural Networks for Signal Processing (NNSP 2003), Toulouse
Duration: 17 Sept 2003 - 19 Sept 2003

Conference

Conference: 13th IEEE Workshop on Neural Networks for Signal Processing (NNSP 2003)
City: Toulouse
Period: 17/09/03 - 19/09/03

Keywords

  • EM ALGORITHM
  • MODELS