In today's open networking environments, the assumption that the learning agents joining a system are homogeneous is increasingly unrealistic. This makes effective coordination particularly difficult to learn, especially in the absence of learning agent standards. In this short paper we investigate the problem of learning to coordinate with heterogeneous agents. We show that an agent employing the FMQ algorithm, a recently developed multiagent learning method, can converge to the optimal joint action when teamed up with one or more simple Q-learners. Specifically, we show such convergence in scenarios where simple Q-learners alone are unable to converge to an optimum.
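The FMQ (Frequency Maximum Q-value) heuristic mentioned above can be sketched as follows. This is an illustrative reconstruction of the core idea, biasing action selection by how often an action's best observed reward has recurred, not the paper's code; the action names, step size, FMQ weight `c`, and temperature are assumptions for the sketch.

```python
import math
import random


class FMQAgent:
    """Sketch of an FMQ (Frequency Maximum Q-value) learner for a
    stateless repeated game. Hyperparameter values here are
    illustrative assumptions, not taken from the paper."""

    def __init__(self, actions, alpha=0.1, c=10.0, temp=1.0):
        self.actions = list(actions)
        self.alpha = alpha  # Q-learning step size
        self.c = c          # weight of the FMQ frequency bonus
        self.temp = temp    # Boltzmann temperature
        self.q = {a: 0.0 for a in self.actions}      # running Q estimate
        self.r_max = {a: 0.0 for a in self.actions}  # best reward seen for a
        self.n = {a: 0 for a in self.actions}        # times a was played
        self.n_max = {a: 0 for a in self.actions}    # times r_max(a) occurred

    def ev(self, a):
        # FMQ heuristic value: EV(a) = Q(a) + c * freq(r_max(a)) * r_max(a)
        if self.n[a] == 0:
            return self.q[a]
        return self.q[a] + self.c * (self.n_max[a] / self.n[a]) * self.r_max[a]

    def act(self):
        # Boltzmann selection over the FMQ heuristic values
        prefs = [math.exp(self.ev(a) / self.temp) for a in self.actions]
        z = sum(prefs)
        r, acc = random.random() * z, 0.0
        for a, p in zip(self.actions, prefs):
            acc += p
            if acc >= r:
                return a
        return self.actions[-1]

    def update(self, a, reward):
        # Track how often the best-so-far reward for a has been observed
        self.n[a] += 1
        if reward > self.r_max[a]:
            self.r_max[a], self.n_max[a] = reward, 1
        elif reward == self.r_max[a]:
            self.n_max[a] += 1
        # Standard stateless Q-learning update
        self.q[a] += self.alpha * (reward - self.q[a])
```

A plain Q-learner corresponds to the special case `c = 0`, where `ev(a)` reduces to `q[a]`; the frequency bonus is what pushes the FMQ agent toward actions whose high payoffs recur when the other agents cooperate.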
Publication status: Published - 2005