Abstract
Inductive learning algorithms that have been applied to learning in description logics (DL) have not been as well studied and optimized as the more general class of feature-based learning algorithms. This paper proposes a way to apply feature-based learners to DL learning tasks by presenting a method to compute a feature-vector representation for DL instances. The representation is based on concepts computed by a DL learning algorithm and by a feature generation method that has previously been applied to sequence categorization tasks. We show encouraging empirical test results using the feature-based learning systems Ripper, C5.0, and Naive Bayes.

1 Introduction

Description logics (DL) are a well-studied formalism for the representation of knowledge, for which inductive learning problems have been defined in the past [3, 5]. When presented with a set of labeled examples (i.e., DL individuals), the learning algorithms compute a hypothesis to predict the label of new, previously unseen...
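The core idea summarized above — representing each DL individual as a vector of membership tests against a set of computed concepts — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: the individuals, concepts, and membership test are hypothetical placeholders (real DL membership would be decided by a reasoner over a knowledge base).

```python
def instance_of(individual, concept):
    """Hypothetical membership test: does `individual` satisfy `concept`?
    Here a concept is modeled as a simple predicate over attribute dicts;
    in a DL setting this would be an instance check by a reasoner."""
    return concept(individual)

def feature_vector(individual, concepts):
    """Map a DL individual to a binary feature vector: the i-th entry
    records whether the individual is an instance of the i-th concept."""
    return [1 if instance_of(individual, c) else 0 for c in concepts]

# Toy example: individuals as attribute dictionaries, concepts as predicates.
alice = {"employs": 3, "parent": True}
concepts = [
    lambda ind: ind.get("employs", 0) > 0,   # stand-in for an "Employer" concept
    lambda ind: ind.get("parent", False),    # stand-in for a "Parent" concept
    lambda ind: ind.get("employs", 0) > 10,  # stand-in for a "LargeEmployer" concept
]

print(feature_vector(alice, concepts))  # [1, 1, 0]
```

The resulting vectors can then be handed directly to a feature-based learner such as a decision-tree or Naive Bayes classifier.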
| Original language | Undefined/Unknown |
| --- | --- |
| Publication status | Published - 1999 |