GPU-Accelerated Hypothesis Cover Set Testing for Learning in Logic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

ILP learners are commonly implemented to consider each training example sequentially for every hypothesis tested. Computing the cover set of a hypothesis in this way is costly and introduces a major bottleneck in the learning process. This computation can be implemented far more efficiently through data-level parallelism. Here we propose a GPU-accelerated approach to this task for propositional logic and for a subset of first-order logic. The approach can be combined with one’s strategy of choice for exploring the hypothesis space. At present, the hypothesis language is limited to logic formulae using unary and binary predicates, such as those covered by certain types of description logic. The approach is tested on a commodity GPU and on datasets of up to 200 million training examples, achieving run times below 30 ms per cover set computation.
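
To make the data-level parallelism concrete, below is a minimal, hypothetical CUDA sketch (not taken from the paper) of a cover set count for a single propositional hypothesis. It assumes each training example is encoded as a 64-bit attribute mask and the hypothesis as a mask of required atoms; all names (coverKernel, hypothesis, covered) are illustrative, not the authors' API.

// Hypothetical sketch: counting the cover set of one propositional
// hypothesis on the GPU. Each thread tests one example; an example is
// covered iff every atom the hypothesis requires is set in its mask.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__global__ void coverKernel(const uint64_t *examples, size_t n,
                            uint64_t hypothesis,
                            unsigned long long *covered) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n && (examples[i] & hypothesis) == hypothesis)
        atomicAdd(covered, 1ULL);  // example satisfies all required atoms
}

int main() {
    const size_t n = 1 << 20;  // 1M toy examples for the demo
    uint64_t *d_examples;
    unsigned long long *d_covered, h_covered = 0;
    cudaMalloc(&d_examples, n * sizeof(uint64_t));
    cudaMemset(d_examples, 0xFF, n * sizeof(uint64_t));  // toy data: all atoms true
    cudaMalloc(&d_covered, sizeof(unsigned long long));
    cudaMemset(d_covered, 0, sizeof(unsigned long long));

    uint64_t hypothesis = 0b1011;  // conjunction requiring atoms 0, 1 and 3
    coverKernel<<<(unsigned)((n + 255) / 256), 256>>>(d_examples, n,
                                                      hypothesis, d_covered);
    cudaMemcpy(&h_covered, d_covered, sizeof(h_covered),
               cudaMemcpyDeviceToHost);
    printf("covered %llu of %zu examples\n", h_covered, n);

    cudaFree(d_examples);
    cudaFree(d_covered);
    return 0;
}

Under an encoding like this, one kernel launch tests the hypothesis against every example in parallel; in a tuned implementation the global atomicAdd would typically be replaced by a block-level reduction, and the example masks would stay resident on the GPU across hypotheses so each cover set computation avoids host-device transfers.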
Original language: English
Title of host publication: CEUR Proceedings of the 28th International Conference on Inductive Logic Programming
Publisher: CEUR Workshop Proceedings
Publication status: E-pub ahead of print - 2018