Lay Users' Evaluations of the Credibility of Medical Information on the Web
In the first study we asked carefully selected respondents with varying levels of Internet skills to evaluate the credibility of over 140 Polish web pages related to 16 popular medical searches. We gathered respondents' declarations regarding topic expertise and measured their psychological characteristics. We compared behaviour in two situations: browsing Web content and searching it for specific answers. We analysed how these evaluations differ and when they are more accurate, i.e. more in line with expert evaluations of the same set of web pages.
In the second study we used the Reconcile platform to gather credibility ratings for a set of 190 medical websites in English. We compared the behaviour of lay users when they made supported, unsupported, and partially supported decisions. The support offered consisted of a suggested system rating based on expert evaluations and the distribution of community evaluations. We also studied the effect of reversed support, in which the suggestions made by the system were opposite to the expert evaluations.
We learned that lay users exhibit an evaluation bias. They make decisions heuristically, relying strongly on preconceptions and using only a small subset of cues. They are easily persuaded by a web page's message, although they do show some resistance to inaccurate suggestions made by a support system. Moreover, we note that a second-order digital divide can be observed in credibility evaluations. This divide cuts across other dimensions of social inequality but can be tied to the general level of Internet skills.