Abstract
This paper proposes a method for creating a multilingual dictionary by taking the titles of Wikipedia pages in English and finding the titles of the corresponding articles in other languages. The creation of such multilingual dictionaries has become possible thanks to the rapid growth of multilingual information on the web. Wikipedia is a prime example of such a multilingual source of information on virtually any conceivable topic, edited by its readers. Here, a web crawler is used to traverse Wikipedia by following the links on a given page. The crawler extracts the title of each page along with the titles of the corresponding pages in the other targeted languages. The result is a set of words and phrases that are translations of each other. For efficiency, the URLs are organized using hash tables. A lexicon has been constructed that contains 7-tuples corresponding to seven languages: English, German, French, Polish, Bulgarian, Greek and Chinese.
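To make the idea concrete, below is a minimal sketch of how one lexicon entry could be assembled. It uses the MediaWiki langlinks API as a stand-in for the HTML crawler described in the abstract, and the names (`TARGET_LANGS`, `lexicon_entry`, the `seen` dictionary standing in for the paper's hash table) are illustrative assumptions, not the authors' implementation.

```python
"""Hedged sketch: build one 7-tuple lexicon entry from an English Wikipedia title.
The paper's crawler follows page links and parses interlanguage links from HTML;
here the MediaWiki API is used instead, purely for illustration."""
import requests

# Target languages besides English: German, French, Polish, Bulgarian, Greek, Chinese.
TARGET_LANGS = {"de", "fr", "pl", "bg", "el", "zh"}
API_URL = "https://en.wikipedia.org/w/api.php"

def lexicon_entry(english_title: str) -> dict:
    """Return a {language_code: title} mapping for the English page and its counterparts."""
    params = {
        "action": "query",
        "prop": "langlinks",     # interlanguage links of the page
        "titles": english_title,
        "lllimit": "max",
        "format": "json",
    }
    data = requests.get(API_URL, params=params, timeout=10).json()
    entry = {"en": english_title}
    for page in data["query"]["pages"].values():
        for link in page.get("langlinks", []):
            if link["lang"] in TARGET_LANGS:
                entry[link["lang"]] = link["*"]  # article title in the target language
    return entry

# A dictionary keyed by title plays the role of the paper's hash table of
# visited URLs, so each article is processed only once during the crawl.
seen = {}
title = "Computer"
if title not in seen:
    seen[title] = lexicon_entry(title)
print(seen[title])
```

Entries for which an article is missing in some target language simply omit that language code, so a downstream filter could keep only complete 7-tuples.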
Original language | English |
---|---|
Title of host publication | ICAART 2009: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE |
Editors | A Fred |
Place of Publication | SETUBAL |
Publisher | INSTICC-INST SYST TECHNOLOGIES INFORMATION CONTROL & COMMUNICATION |
Pages | 357-360 |
Number of pages | 4 |
ISBN (Print) | 978-989-8111-66-1 |
Publication status | Published - 2009 |
Event | 1st International Conference on Agents and Artificial Intelligence, Porto, 19 Jan 2009 → 21 Jan 2009 |
Conference
Conference | 1st International Conference on Agents and Artificial Intelligence |
---|---|
City | Porto |
Period | 19/01/09 → 21/01/09 |
Keywords
- Multilingual Lexicons
- Web Crawler
- Wikipedia
- Natural Language Processing
- Web Mining
- Data Mining