Reliability of Health Information on the Internet: An Examination of Experts' Ratings

Research output: Contribution to journal › Article › peer-review

Standard

Reliability of Health Information on the Internet: An Examination of Experts' Ratings. / Craigie, Mark; Loader, Brian; Burrows, Roger; Muncer, Steven.

In: JOURNAL OF MEDICAL INTERNET RESEARCH, Vol. 4, No. 1, 2002, p. e2.

Research output: Contribution to journal › Article › peer-review

Harvard

Craigie, M, Loader, B, Burrows, R & Muncer, S 2002, 'Reliability of Health Information on the Internet: An Examination of Experts' Ratings', JOURNAL OF MEDICAL INTERNET RESEARCH, vol. 4, no. 1, e2. https://doi.org/10.2196/jmir.4.1.e2

APA

Craigie, M., Loader, B., Burrows, R., & Muncer, S. (2002). Reliability of Health Information on the Internet: An Examination of Experts' Ratings. JOURNAL OF MEDICAL INTERNET RESEARCH, 4(1), e2. https://doi.org/10.2196/jmir.4.1.e2

Vancouver

Craigie M, Loader B, Burrows R, Muncer S. Reliability of Health Information on the Internet: An Examination of Experts' Ratings. JOURNAL OF MEDICAL INTERNET RESEARCH. 2002;4(1):e2. https://doi.org/10.2196/jmir.4.1.e2

Author

Craigie, Mark ; Loader, Brian ; Burrows, Roger ; Muncer, Steven. / Reliability of Health Information on the Internet: An Examination of Experts' Ratings. In: JOURNAL OF MEDICAL INTERNET RESEARCH. 2002 ; Vol. 4, No. 1. p. e2.

BibTeX

@article{bf3e138f51bc4fb0a0c535c23ce14b97,
title = "Reliability of Health Information on the Internet: An Examination of Experts' Ratings",
abstract = "Background: The use of medical experts in rating the content of health-related sites on the Internet has flourished in recent years. In this research, it has been common practice to use a single medical expert to rate the content of the Web sites. In many cases, the expert has rated the Internet health information as poor, and even potentially dangerous. However, one problem with this approach is that there is no guarantee that other medical experts will rate the sites in a similar manner. Objectives: The aim was to assess the reliability of medical experts' judgments of threads in an Internet newsgroup related to a common disease. A secondary aim was to show the limitations of commonly-used statistics for measuring reliability (eg, kappa). Method: The participants in this study were 5 medical doctors, who worked in a specialist unit dedicated to the treatment of the disease. They each rated the information contained in newsgroup threads using a 6-point scale designed by the experts themselves. Their ratings were analyzed for reliability using a number of statistics: Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha. Results: Reliability was absent for ratings of questions, and low for ratings of responses. The various measures of reliability used gave conflicting results. No measure produced high reliability. Conclusions: The medical experts showed a low agreement when rating the postings from the newsgroup. Hence, it is important to test inter-rater reliability in research assessing the accuracy and quality of health-related information on the Internet. A discussion of the different measures of agreement that could be used reveals that the choice of statistic can be problematic. It is therefore important to consider the assumptions underlying a measure of reliability before using it. Often, more than one measure will be needed for {"}triangulation{"} purposes.",
keywords = "Newsgroup, Internet, rating information, reliability, reproducibility of results, statistics, quality control",
author = "Mark Craigie and Brian Loader and Roger Burrows and Steven Muncer",
year = "2002",
doi = "10.2196/jmir.4.1.e2",
language = "English",
volume = "4",
pages = "e2",
journal = "JOURNAL OF MEDICAL INTERNET RESEARCH",
issn = "1438-8871",
publisher = "Journal of Medical Internet Research",
number = "1",

}

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Reliability of Health Information on the Internet: An Examination of Experts' Ratings

AU - Craigie, Mark

AU - Loader, Brian

AU - Burrows, Roger

AU - Muncer, Steven

PY - 2002

Y1 - 2002

N2 - Background: The use of medical experts in rating the content of health-related sites on the Internet has flourished in recent years. In this research, it has been common practice to use a single medical expert to rate the content of the Web sites. In many cases, the expert has rated the Internet health information as poor, and even potentially dangerous. However, one problem with this approach is that there is no guarantee that other medical experts will rate the sites in a similar manner. Objectives: The aim was to assess the reliability of medical experts' judgments of threads in an Internet newsgroup related to a common disease. A secondary aim was to show the limitations of commonly-used statistics for measuring reliability (eg, kappa). Method: The participants in this study were 5 medical doctors, who worked in a specialist unit dedicated to the treatment of the disease. They each rated the information contained in newsgroup threads using a 6-point scale designed by the experts themselves. Their ratings were analyzed for reliability using a number of statistics: Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha. Results: Reliability was absent for ratings of questions, and low for ratings of responses. The various measures of reliability used gave conflicting results. No measure produced high reliability. Conclusions: The medical experts showed a low agreement when rating the postings from the newsgroup. Hence, it is important to test inter-rater reliability in research assessing the accuracy and quality of health-related information on the Internet. A discussion of the different measures of agreement that could be used reveals that the choice of statistic can be problematic. It is therefore important to consider the assumptions underlying a measure of reliability before using it. Often, more than one measure will be needed for "triangulation" purposes.

AB - Background: The use of medical experts in rating the content of health-related sites on the Internet has flourished in recent years. In this research, it has been common practice to use a single medical expert to rate the content of the Web sites. In many cases, the expert has rated the Internet health information as poor, and even potentially dangerous. However, one problem with this approach is that there is no guarantee that other medical experts will rate the sites in a similar manner. Objectives: The aim was to assess the reliability of medical experts' judgments of threads in an Internet newsgroup related to a common disease. A secondary aim was to show the limitations of commonly-used statistics for measuring reliability (eg, kappa). Method: The participants in this study were 5 medical doctors, who worked in a specialist unit dedicated to the treatment of the disease. They each rated the information contained in newsgroup threads using a 6-point scale designed by the experts themselves. Their ratings were analyzed for reliability using a number of statistics: Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha. Results: Reliability was absent for ratings of questions, and low for ratings of responses. The various measures of reliability used gave conflicting results. No measure produced high reliability. Conclusions: The medical experts showed a low agreement when rating the postings from the newsgroup. Hence, it is important to test inter-rater reliability in research assessing the accuracy and quality of health-related information on the Internet. A discussion of the different measures of agreement that could be used reveals that the choice of statistic can be problematic. It is therefore important to consider the assumptions underlying a measure of reliability before using it. Often, more than one measure will be needed for "triangulation" purposes.

KW - Newsgroup

KW - Internet

KW - rating information

KW - reliability

KW - reproducibility of results

KW - statistics

KW - quality control

U2 - 10.2196/jmir.4.1.e2

DO - 10.2196/jmir.4.1.e2

M3 - Article

VL - 4

SP - e2

JO - JOURNAL OF MEDICAL INTERNET RESEARCH

JF - JOURNAL OF MEDICAL INTERNET RESEARCH

SN - 1438-8871

IS - 1

ER -
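
Note on the agreement statistics named in the abstract: the article reports Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha for five raters scoring newsgroup postings on a 6-point scale. The listing below is a minimal illustrative sketch, not code from the article, showing how three of these statistics might be computed in Python for a small, invented ratings matrix (Goodman and Kruskal's gamma is omitted; the ratings values and layout are hypothetical).

# Illustrative sketch only (not from the article). Hypothetical ratings:
# rows = newsgroup postings, columns = 5 raters, values = 6-point scale (1-6).
import numpy as np
from itertools import combinations
from scipy.stats import rankdata
from sklearn.metrics import cohen_kappa_score

ratings = np.array([
    [4, 5, 3, 4, 4],
    [2, 2, 1, 3, 2],
    [6, 5, 6, 6, 5],
    [3, 4, 2, 3, 3],
    [1, 2, 2, 1, 1],
])  # invented example data
n_items, n_raters = ratings.shape

# Cohen's kappa is defined for pairs of raters; average it over all pairs.
kappas = [cohen_kappa_score(ratings[:, i], ratings[:, j])
          for i, j in combinations(range(n_raters), 2)]
print("mean pairwise Cohen's kappa:", np.mean(kappas))

# Kendall's W (coefficient of concordance): rank the postings within each
# rater, then compare the spread of the rank sums with its maximum possible
# value. (No tie correction is applied in this sketch.)
ranks = np.apply_along_axis(rankdata, 0, ratings)
rank_sums = ranks.sum(axis=1)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (n_raters ** 2 * (n_items ** 3 - n_items))
print("Kendall's W:", W)

# Cronbach's alpha, treating the raters as "items" and the postings as "cases".
rater_var_sum = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = (n_raters / (n_raters - 1)) * (1 - rater_var_sum / total_var)
print("Cronbach's alpha:", alpha)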