Abstract
Fake news is widely regarded as a threat to democracy, journalism, and economies, spreading mainly through social media and online websites. To detect fake news, we propose two models that check the factuality of a claim against relevant pieces of evidence. In this paper, the stance of each relevant piece of evidence toward a given claim is detected first; the factuality verdict is then decided by aggregating all available stances together with salient syntactic and semantic features.
We propose two models that help distinguish fake news from reliable content. The first is a multi-channel LSTM-CNN with attention, whose input merges numeric features with syntactic and semantic features. The second implements word-level and clause-level attention networks to capture the importance of the words within each clause and of the clauses within each evidence sentence. Additional features, such as tree kernels and semantic similarity metrics, guide this second model during stance detection. For evaluation, the PERSPECTRUM data set is used for the stance detection task, while the DLEF corpus is used for the factuality checking task. Our empirical results show that combining stance detection with factuality checking improves verification of a claim's veracity. The assessment demonstrates that accuracy improves when attention is focused on each segment (clause) rather than on each sentence as a whole, so the proposed word-level and clause-level attention networks prove more effective than the multi-channel LSTM-CNN.
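The hierarchical attention idea above (word-level attention builds a clause vector, clause-level attention builds the evidence representation) can be sketched with plain dot-product attention. This is a minimal illustrative sketch, not the paper's actual architecture: the toy 2-d embeddings, the claim-as-query choice, and the scoring function are all assumptions.

```python
import math

def softmax(scores):
    # Numerically stable softmax over raw attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors, query):
    # Dot-product attention: score each vector against the query,
    # normalize the scores, and return the weighted sum (context vector).
    scores = [sum(v_i * q_i for v_i, q_i in zip(v, query)) for v in vectors]
    weights = softmax(scores)
    dim = len(vectors[0])
    context = [sum(w * v[d] for w, v in zip(weights, vectors)) for d in range(dim)]
    return context, weights

# Toy 2-d word embeddings: two clauses from one evidence sentence.
clauses = [
    [[1.0, 0.0], [0.8, 0.2]],   # clause 1: words aligned with the claim
    [[0.0, 1.0], [0.1, 0.9]],   # clause 2: words orthogonal to the claim
]
claim = [1.0, 0.0]  # query vector derived from the claim (assumed)

# Word-level attention produces one vector per clause...
clause_vecs = [attend(words, claim)[0] for words in clauses]
# ...then clause-level attention produces the evidence representation,
# with weights showing which clause matters most for this claim.
evidence_vec, clause_weights = attend(clause_vecs, claim)
```

Here the clause whose words align with the claim receives the larger clause-level weight, which is the segment-level focus the abstract credits for the accuracy gain.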
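The final step the abstract describes, deciding factuality from the aggregation of all available stances, can be sketched as a confidence-weighted vote. The label set, the weighting scheme, and the tie-breaking rule here are illustrative assumptions, not the paper's actual aggregation procedure.

```python
from collections import Counter

def aggregate_stances(stances):
    """Decide a claim's factuality from per-evidence stance predictions.

    stances: list of (label, confidence) pairs, where label is
    'support' or 'oppose' and confidence lies in [0, 1].
    Returns 'true' if the confidence-weighted support mass wins,
    'false' if the opposing mass wins, 'unverified' on a tie or
    when no evidence is available.  (Hypothetical scheme.)
    """
    mass = Counter()
    for label, confidence in stances:
        mass[label] += confidence
    if mass['support'] > mass['oppose']:
        return 'true'
    if mass['oppose'] > mass['support']:
        return 'false'
    return 'unverified'

verdict = aggregate_stances([('support', 0.9), ('support', 0.4), ('oppose', 0.3)])
```

Weighting by confidence rather than counting votes lets one strongly supporting piece of evidence outweigh several weak opposing ones, which matches the intuition of aggregating stances together with other salience signals.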
| Original language | English |
|---|---|
| Pages (from-to) | 1-17 |
| Journal | International Journal of Advanced Studies in Computer Science and Engineering |
| Volume | 9 |
| Issue number | 3 |
| Publication status | Published - 31 Mar 2020 |
Keywords
- stance detection
- factuality checking
- deep learning
- tree kernel
- semantic similarity