AUTOMATIC MULTILINGUAL LEXICON GENERATION USING WIKIPEDIA AS A RESOURCE

Ahmad R. Shahid, Dimitar Kazakov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes a method for creating a multilingual dictionary by taking the titles of Wikipedia pages in English and finding the titles of the corresponding articles in other languages. The creation of such multilingual dictionaries has become possible as a result of the exponential increase in the amount of multilingual information on the web. Wikipedia is a prime example of such a multilingual source of information on virtually any conceivable topic, edited by its readers. Here, a web crawler is used to traverse Wikipedia by following the links on a given page. The crawler extracts each page's title along with the titles of the corresponding pages in the other target languages. The result is a set of words and phrases that are translations of one another. For efficiency, the visited URLs are organized using hash tables. The resulting lexicon contains 7-tuples covering seven languages: English, German, French, Polish, Bulgarian, Greek and Chinese.
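The pipeline the abstract describes (crawl pages, read off the interlanguage titles, deduplicate via a hash table, keep only complete 7-tuples) can be illustrated with a minimal sketch. The paper's crawler scraped interlanguage links from the HTML of each page; the sketch below instead queries the MediaWiki `langlinks` API endpoint, which exposes the same titles, and uses a Python set as the hash table of visited pages. All names here (the functions, the seed titles, the `limit` parameter) are illustrative assumptions, not the authors' implementation.

```python
import json
import urllib.parse
import urllib.request
from collections import deque

# Target languages from the paper: English plus six translations.
TARGETS = ["de", "fr", "pl", "bg", "el", "zh"]
API = "https://en.wikipedia.org/w/api.php"

def langlinks(title):
    """Fetch the interlanguage titles of one English article via the MediaWiki API."""
    params = urllib.parse.urlencode({
        "action": "query", "prop": "langlinks", "titles": title,
        "lllimit": "500", "format": "json", "formatversion": "2",
    })
    req = urllib.request.Request(f"{API}?{params}",
                                 headers={"User-Agent": "lexicon-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        page = json.load(resp)["query"]["pages"][0]
    return {ll["lang"]: ll["title"] for ll in page.get("langlinks", [])}

def build_lexicon(seed_titles, limit=100):
    """Traverse article titles breadth-first, keeping only pages that have
    counterparts in all six target languages, i.e. complete 7-tuples."""
    seen = set()              # hash table of visited titles, as in the paper
    queue = deque(seed_titles)
    lexicon = []
    while queue and len(lexicon) < limit:
        title = queue.popleft()
        if title in seen:
            continue
        seen.add(title)
        links = langlinks(title)
        if all(lang in links for lang in TARGETS):
            lexicon.append((title, *(links[lang] for lang in TARGETS)))
        # A full crawler would also enqueue the article's outgoing links here.
    return lexicon

if __name__ == "__main__":
    for row in build_lexicon(["Lexicon", "Web crawler"]):
        print(row)
```

A full crawler would grow the traversal beyond the seeds by enqueuing each article's outgoing links; filtering for pages with counterparts in all six target languages is what yields the complete 7-tuples the abstract mentions.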

Original language: English
Title of host publication: ICAART 2009: Proceedings of the International Conference on Agents and Artificial Intelligence
Editors: A. Fred
Place of publication: Setúbal
Publisher: INSTICC (Institute for Systems and Technologies of Information, Control and Communication)
Pages: 357-360
Number of pages: 4
ISBN (Print): 978-989-8111-66-1
Publication status: Published - 2009
Event: 1st International Conference on Agents and Artificial Intelligence, Porto
Duration: 19 Jan 2009 - 21 Jan 2009

Keywords

  • Multilingual Lexicons
  • Web Crawler
  • Wikipedia
  • Natural Language Processing
  • Web mining
  • Data mining
