
Diarrhea-predominant irritable bowel syndrome: creation of an electronic version of a patient-reported outcome instrument by conversion from a pen-and-paper version and evaluation of their equivalence

Overview of attention for article published in Patient Related Outcome Measures, July 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

  • 2 X users
  • 1 Facebook page

Citations

  • 3 Dimensions

Readers on

  • 26 Mendeley

Title: Diarrhea-predominant irritable bowel syndrome: creation of an electronic version of a patient-reported outcome instrument by conversion from a pen-and-paper version and evaluation of their equivalence
Published in: Patient Related Outcome Measures, July 2017
DOI: 10.2147/prom.s126605
Pubmed ID:
Authors: Leticia Delgado-Herrera, Benjamin Banderas, Oluwafunke Ojo, Ritesh Kothari, Bernhardt Zeiher

Abstract

Subjects with diarrhea-predominant irritable bowel syndrome (IBS-D) experience abdominal cramping, bloating, pressure, and pain. Because there are no clinical biomarkers of IBS-D severity, evaluation of the benefits of clinical therapy depends on valid and reliable symptom assessments. A patient-reported outcome (PRO) instrument comprising two questionnaires, the IBS-D Daily Symptom Diary and the IBS-D Symptom Event Log, has been developed for use in clinical trials and real-world settings. This program aimed to support conversion of the instrument from pen-and-paper to electronic format. Digital (Android/iOS) technology and a traditional mode-of-administration study in the target population were used to migrate the validated IBS-D PRO pen-and-paper measure to an electronic format. Equivalence interviews, conducted in three waves, each had three parts: 1) conceptual equivalence testing between formats, 2) cognitive debriefing of the electronic version's report history, and 3) usability evaluation of the electronic version. After each interview wave, preliminary analyses were conducted and modifications were made to the electronic version before the next wave; final revisions were based on a full analysis of the equivalence interviews. The final analysis evaluated subjects' ability to read, understand, and provide meaningful responses to the instruments in both formats. Responses were classified according to conceptual equivalence between formats, and mobile-format usability was assessed with a questionnaire and open-ended probes. Equivalence interviews (n=25) demonstrated conceptual equivalence between formats. Cognitive debriefing of the mobile application showed that some subjects had difficulty with font/screen visibility and with understanding or reading some report-history charts and summary screens. To address these difficulties, minor revisions were made, and landscape orientation and zoom-in/zoom-out features were incorporated. This study indicates that the two administration modes are conceptually equivalent; because the formats are conceptually equivalent, the psychometric reliability established for the pen-and-paper version extends to the electronic version. Subjects found that both mobile applications (Android/iOS) offered many advantages over the paper version, such as real-time assessment of their experience.

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 26 Mendeley readers of this research output.

Geographical breakdown

Country  Count  As %
Unknown     26  100%

Demographic breakdown

Readers by professional status    Count  As %
Student > Ph.D. Student               8   31%
Student > Bachelor                    5   19%
Student > Master                      3   12%
Researcher                            3   12%
Student > Doctoral Student            2    8%
Other                                 3   12%
Unknown                               2    8%

Readers by discipline             Count  As %
Medicine and Dentistry                7   27%
Computer Science                      5   19%
Nursing and Health Professions        3   12%
Psychology                            3   12%
Arts and Humanities                   1    4%
Other                                 3   12%
Unknown                               4   15%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 13 February 2018.
  • All research outputs: #14,918,049 of 25,382,440 outputs
  • Outputs from Patient Related Outcome Measures: #75 of 196 outputs
  • Outputs of similar age: #165,956 of 326,871 outputs
  • Outputs of similar age from Patient Related Outcome Measures: #1 of 2 outputs

Altmetric has tracked 25,382,440 research outputs across all sources so far. This one is in the 40th percentile – i.e., 40% of other outputs scored the same or lower than it.
So far, Altmetric has tracked 196 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 7.7. This one has received more attention than average, scoring higher than 60% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 326,871 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 48th percentile – i.e., 48% of its contemporaries scored the same or lower than it.
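For readers unfamiliar with how a figure like "48th percentile" is derived, the short sketch below reproduces the arithmetic described above: the percentage of comparison outputs whose score is the same as or lower than this one's. The function name and the toy list of contemporary scores are illustrative assumptions only, not Altmetric's actual data or rounding rules.

    # Minimal sketch of the percentile-rank arithmetic described above.
    # The score list is made up for illustration; Altmetric's real comparison
    # set (e.g. the 326,871 outputs of similar age) is not reproduced here.
    def percentile_rank(score, comparison_scores):
        """Percentage of comparison outputs scoring the same as or lower than `score`."""
        same_or_lower = sum(1 for s in comparison_scores if s <= score)
        return 100.0 * same_or_lower / len(comparison_scores)

    # Toy example: an Attention Score of 3 among ten hypothetical contemporaries.
    contemporaries = [0, 1, 1, 2, 3, 5, 8, 12, 20, 40]
    print(round(percentile_rank(3, contemporaries)))  # prints 50 for this toy data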
We're also able to compare this research output to 2 others from the same source that were published within six weeks on either side of this one. This one has scored higher than all of them.