Evidence Based Medicine on FHIR Implementation Guide, published by HL7 International / Clinical Decision Support. This guide is not an authorized publication; it is the continuous build for version 2.0.0-ballot built by the FHIR (HL7® FHIR® Standard) CI Build. This version is based on the current content of https://github.com/HL7/ebm/ and changes regularly. See the Directory of published versions.
Official URL: https://fevir.net/resources/CodeSystem/181513 | Version: 2.0.0-ballot
Active as of 2024-12-13 | Computable Name: Sevco_example_for_ebmonfhir_ig
Other Identifiers: FEvIR Object Identifier (Uniform Resource Identifier): https://fevir.net/FOI/181513; OID: 2.16.840.1.113883.4.642.40.44.16.3
Copyright/Legal: https://creativecommons.org/licenses/by-sa/4.0/; copyright holder is Scientific Knowledge Accelerator Foundation
This code system was copied as a snapshot from the version being used for active development of the Scientific Evidence Code System (SEVCO). It is not yet released for its intended use and may not be stable. This resource may be used to support the examples in the EBMonFHIR Implementation Guide; published versions of the code system (when ready) will be published as separate resources with stable identifiers.
Support of examples in the EBMonFHIR Implementation Guide prior to its final publication.
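For orientation only, the sketch below (a Python script that prints a FHIR CodeableConcept as JSON) shows one way a code from this draft system could be carried in FHIR data. The choice of SEVCO:01003 and the element such a concept would populate (for example, a study-design classifier on an Evidence or ResearchStudy resource) are illustrative assumptions, not requirements of this guide.

# Minimal sketch, assuming a consumer wants to record a SEVCO study-design code.
# The system URL, code, and display come from the concept table below; everything else is illustrative.
import json

study_design = {
    "coding": [
        {
            "system": "https://fevir.net/resources/CodeSystem/181513",
            "code": "SEVCO:01003",
            "display": "randomized assignment"
        }
    ],
    "text": "randomized assignment"
}

print(json.dumps(study_design, indent=2))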
This code system is referenced in the content logical definition of the following value sets:
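For context, a value set's content logical definition can pull in this code system by its canonical URL. The sketch below is a generic illustration only and is not one of the value sets referenced by this guide; the ValueSet url shown is a placeholder.

# Hypothetical sketch: a ValueSet whose compose includes every code from this code system.
import json

example_value_set = {
    "resourceType": "ValueSet",
    "url": "https://example.org/fhir/ValueSet/sevco-example-codes",  # placeholder URL, not defined by this guide
    "status": "draft",
    "compose": {
        "include": [
            {"system": "https://fevir.net/resources/CodeSystem/181513"}
        ]
    }
}

print(json.dumps(example_value_set, indent=2))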
Generated Narrative: CodeSystem 181513
version: 47; Last updated: 2024-12-16 14:13:46+0000
Properties
This code system defines the following properties for its concepts:
Name | Code | Type | Description |
comment | comment | string | Comment for application |
editors | editors | string | Term/Definition Editors |
approval | approval | string | Expert Working Group Agreement |
negative-vote | negative-vote | string | Expert Working Group Disagreement |
expert-comments | expert-comments | string | Expert Working Group Comments |
external-definitions | external-definitions | string | Externally Mapped Definitions |
open-for-voting | open-for-voting | dateTime | Open for Voting |
change-for-vote | change-for-vote | string | Proposed Change for Future Vote |
multiple-parents | multiple-parents | string | Has more than one parent term (IS-A relationship) |
statistical-purpose | statistical-purpose | string | Statistical purpose |
deprecated | deprecated | string | Deprecated |
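To illustrate how these properties surface in the resource itself, the sketch below (placeholder values; only two of the properties above are shown) declares properties on the CodeSystem and attaches one of them to a concept from the table that follows.

# Minimal sketch of CodeSystem.property declarations and a concept-level property value.
# The approval value is truncated for illustration; see the concept table for the full text.
import json

code_system_fragment = {
    "resourceType": "CodeSystem",
    "url": "https://fevir.net/resources/CodeSystem/181513",
    "caseSensitive": False,
    "property": [
        {"code": "approval", "type": "string", "description": "Expert Working Group Agreement"},
        {"code": "open-for-voting", "type": "dateTime", "description": "Open for Voting"}
    ],
    "concept": [
        {
            "code": "SEVCO:01000",
            "display": "study design",
            "property": [
                {"code": "approval", "valueString": "9/9 as of 4/26/2021"}  # truncated example value
            ]
        }
    ]
}

print(json.dumps(code_system_fragment, indent=2))

String-typed properties use valueString and dateTime-typed properties use valueDateTime on each concept, matching the types declared above.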
Concepts
This case-insensitive code system, https://fevir.net/resources/CodeSystem/181513, defines the following codes in an Is-A hierarchy:
Lvl | Code | Display | Definition | comment | editors | approval | negative-vote | expert-comments | external-definitions | change-for-vote | multiple-parents | statistical-purpose | deprecated | Finnish (fi) |
1 | SEVCO:01000 | study design | A plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Asiyah Lin, Mario Tristan, Neeraj Ojha | 9/9 as of 4/26/2021: Eric Harvey, Bhagvan Kommadi, KM Saif-Ur-Rahman, Paola Rosati, Jesús López-Alcalde, Tatyana Shamliyan, Sorana D. Bolboaca, Asiyah Lin, Eric Au | 2021-04-12 Vote 9-2 on "Study design=A plan specification for how and what kinds of data are gathered or used to generate or test a hypothesis", Bhagvan Kommadi, Jesús López-Alcalde, Sorana D. Bolboaca, Tatyana Shamliyan, Asiyah Lin, Philippe Rocca-Serra, Eric Au, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Eric Harvey 2021-04-06 vote 8-1 on "Study Design = A plan specification for how and what kinds of data will be gathered as part of an investigation to generate or test a hypothesis" by Tatyana Shamliyan, Paola Rosati, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Eric Harvey, KM Saif-Ur-Rahman, Asiyah Lin, Brian S. Alper | ||||||||
2 | SEVCO:01001 | interventional research | A study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by the investigator to evaluate a response in the dependent variable (an effect or outcome). | We acknowledge that interventional study design and interventional study may not be exact synonyms of interventional research, but interventional research could be used to encompass both design and implementation of the design | Mario Tristan, Joanne Dehnbostel, Harold Lehmann, Khalid Shahin, Brian S. Alper | 12/12 as of 5/31/2021: Eric Harvey, Bhagvan Kommadi, Brian Alper, Sebastien Bailly, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Paola Rosati, Sorana D. Bolboaca, Asiyah Lin, Leo Orozco, Erfan Shamsoddin | 2021-05-17 vote 6-2 on "Interventional research = In a prospective study, an independent variable is manipulated or assigned by the investigator to evaluate a response or outcome (the dependent variable)." by Eric Harvey, Bhagvan Kommadi, Paola Rosati, KM Saif-Ur-Rahman, Ahmad Sofi-Mahmudi, Jesus Lopez-Alcalde, Sorana D. Bolboaca, Harold Lehmann, 2021-05-24 vote 10-1 on Interventional research="A study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by the investigator to evaluate a response in the dependent variable (an effect or outcome)." by Alejandro Piscoya, Philippe Rocca-Serra, KM Saif-Ur-Rahman, Eric Harvey, Harold Lehmann, Bhagvan Kommadi, Sorana D. Bolboaca, Jesús López-Alcalde, Paola Rosati, Tatyana Shamliyan, Brian Alper | I would avoid the term prospective study, as this term is ambiguous. Suggested change to "A study in whichi the independent variable is prospectively manipulated or assigned by the invesigator…" Manipulate = to control, manipulate or influence suggestion to delete "the dependent variable" which mixes language of analysis vs. design with "response" 5-24-2021 No major disagreement with the definition but uneasy to have 'intervention study' as (unspecified) synonym as doing so convey that a plan (the study design) is the same as the execution of the plan (the study). The same applies to 'Primary research...) I think that we need to clarify the goals: Experiments examine cause-and-effect relationship by measuring outcomes when a particular factor (exposure, intervention, independent variable) is manipulated and controlled during and after experiment (inference). I think that we should clarify the subjects of experiments: consent people or animals | ||||||
3 | SEVCO:01003 | randomized assignment | An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by random chance to separate groups. | Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson | 8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte | |||||||||
4 | SEVCO:01006 | simple randomization | A randomized assignment in which each participant has the same prespecified likelihood of being assigned to a group as all other participants, independent of the assignment of any other participant. | Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson | 8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte | |||||||||
4 | SEVCO:01007 | stratified randomization | A randomized assignment in which participants are stratified into groups based on prognostic variables and then randomized into balanced treatment groups | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Khalid Shahin | 8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte | |||||||||
4 | SEVCO:01008 | block randomization | A randomized assignment in which a pre-specified number of subjects is assigned to a block containing the same pre-specified ratio of group assignments in random order. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Khalid Shahin | 7/7 as of 7/26/2021: Mario Tristan, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Leo Orozco, Janice Tufte | 2021-07-19 vote 7-1 on "A randomized assignment in which a pre-specified number of subjects is assigned to a block containing the same pre-specified number of balanced group assignments in random order" by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte | I think I'm hung up on the word "balanced". Does allocation in block design need to be balanced? Couldn't a block design allocate subjects to treatment arms in a 2:1, or other "unbalanced" ratio? | |||||||
4 | SEVCO:01009 | adaptive randomization | A randomized assignment in which a participant’s group assignment probability is adjusted based on any factor such that the likelihood of assignment is not the same for all participants. | Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson | 9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya | 2021-07-19 vote 7-1 on "A randomized assignment in which a participant’s group assignment probability is adjusted based on any factor such that the likelihood of assignment is not the same for all participants." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte, 2021-07-26 vote 6-1 by Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan | I deem this kind of adaptation could determine conflict of interests or a new kind of bias. I disagree with adding an adaptive randomization as a new term 7-26-21 comment: Again, why and for what you wish to maintain this term? I think the term adaptive randomization risks a severe selection bias. In ethical terms, I deem there is no justification to proceed with such a methodology in clinical trials. | |||||||
3 | SEVCO:01005 | non-randomized assignment | An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by methods other than random chance to separate groups. | Brian S. Alper, Joanne Dehnbostel, Michael Panzer, Janice Tufte, Erfan Shamsoddin, Ellen Jepson, Khalid Shahin | 9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya | 2021-07-19 vote 6-2 on "An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by methods other than random chance to separate groups." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana D. Bolboaca, Janice Tufte, 2021-07-26 vote 6-1 by Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan | In this case, if the patients choose which is the arm they want to be in it would be ok to insert this term. I presume therefore that if the choice is made by the researchers they offer a clear justification for it in the protocol. As written, this category would include all quasi-randomized designs. If this is the intent, fine. If this was not the intent, perhaps we could change "..randomized.." to "..randomized or quasi-randomized.." 7-26-21 comment: We usually have started the definitions by saying "A xxx assignment that..." (see previous ones in this page). That is, we define the assignment. However, for "Non-Randomized Assignment" we start by saying "An interventional study design..." I propose to describe the "assignment" (avoid starting by defining the study design itself) |||||||
4 | SEVCO:01004 | quasi-randomized assignment | An interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation. | Quasi-random methods of allocation include allocation by alternate order of entry, date of birth, day of the week, month of the year, or medical record number | Brian S. Alper, Joanne Dehnbostel, Michael Panzer, Janice Tufte, Erfan Shamsoddin, Ellen Jepson, Khalid Shahin | 7/7 as of 7/26/2021: Mario Tristan, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Leo Orozco, Janice Tufte | 2021-07-19 vote 6-2 on "An interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte | Which is the difference between this quasi-randomized assignment and the adaptive randomization? It is unclear why we should insert these two terms in the glossary. I would specify in the definition that quasi-randomisation is a non-random method of allocation | ||||||
3 | SEVCO:01029 | clinical trial | Interventional research in which one or more healthcare-related actions (i.e., a diagnostic, prognostic, therapeutic, preventive or screening method or intervention) is evaluated for effects on health-related biomedical or behavioral processes and/or outcomes. | Some definitions for "clinical trial" include human subject research for effects on human health outcomes. The term "human" was not added to this definition because a study design with animal subjects for effects on animal health outcomes to inform veterinary care would be considered a clinical trial. However, a study design with animal subjects to inform human health outcomes would not be considered a clinical trial. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Paul Whaley | 2021-12-14 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Robin Ann Yurk, Janice Tufte, Paul Whaley, Brian S. Alper | 2021-11-30 vote 7-1 by Alejandro Piscoya, Mario Tristan, Robin Ann Yurk, Muhammad Afzal, Paola Rosati, Paul Whaley, Janice Tufte, Jesus Lopez-Alcalde 2021-12-07 vote 4-1 by Mario Tristan, Robin Ann Yurk, Janice Tufte, Joanne Dehnbostel, CP Ooi | 2021-11-30 comments: (We should include the classical definition for Phase lV Field Trials of Health Interventions: A Toolbox. 3rd edition. Smith PG, Morrow RH, Ross DA, editors. Oxford (UK): OUP Oxford; 2015 Jun 1.https://www.ncbi.nlm.nih.gov/books/NBK305508/), Instead of "methods" I would use the term "interventions". I also miss the term "prognostic" as they are not diagnostic or screening. Besides, it would be important to highlight that the clinical trial is done in humans 2021-12-07 comment: A clinical trial is a type of research that studies new tests and treatments and evaluates their effects on human health outcomes. The medical intervention can be drugs, cells and other biological products, surgical procedures, radiological procedures, devices, behavioural treatments and preventive care. | NIH Clinical Trial Definition = A research study[1] in which one or more human subjects[2] are prospectively assigned[3] to one or more interventions[4] (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.[5] [4]An intervention is defined as a manipulation of the subject or subject’s environment for the purpose of modifying one or more health-related biomedical or behavioral processes and/or endpoints. Examples include: drugs/small molecules/compounds; biologics; devices; procedures (e.g., surgical techniques); delivery systems (e.g., telemedicine, face-to-face interviews); strategies to change health-related behavior (e.g., diet, cognitive therapy, exercise, development of new habits); treatment strategies; prevention strategies; and, diagnostic strategies. from https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-015.html | |||||
4 | SEVCO:01041 | pragmatic clinical trial | A clinical trial conducted under conditions of routine clinical practice. | "Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations. The pragmatic trial, on the other hand, is designed to test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. The research question under investigation is whether an intervention actually works in real life." (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181997/) | Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Kenneth Wilkins, Harold Lehmann | 2021-12-07 vote 5-0 by Mario Tristan, Robin Ann Yurk, Janice Tufte, CP Ooi, Joanne Dehnbostel | 2021-11-30 vote 5-1 by Alejandro Piscoya, Robin Ann Yurk, Muhammad Afzal, Paul Whaley, Janice Tufte, Jesus Lopez-Alcalde | 2021-11-30 comments: (The definition in the current form is fine however the last part may be thought like; where "everyday" means day-to-day clinical practice wherein the conditions are not modified for the conduct of the research.), Suggested alternative: = A clinical trial designed to test the effects of an intervention under everyday conditions, where "everyday conditions" means clinical conditions are not modified for the conduct of the research | NCIt: Pragmatic Trial = A study designed to test the effectiveness of an intervention in a broad routine clinical practice. Term used to describe a clinical study designed to examine the benefits of a product under real world conditions. UMLS: Works about randomized clinical trials that compare interventions in clinical settings and which look at a range of effectiveness outcomes and impacts. CDISC Glossary: pragmatic trial = Term used to describe a clinical study designed to examine the benefits of a product under real world conditions. EDDA: pragmatic clinical trial = Randomized clinical trials that compare interventions in clinical settings and which look at a range of effectiveness outcomes and impacts. [MeSH_2015] SCO: pragmatic trial = A study designed to test the effectiveness of an intervention in a broad routine clinical practice. "Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations. The pragmatic trial, on the other hand, is designed to test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. The research question under investigation is whether an intervention actually works in real life." (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181997/) | |||||
4 | SEVCO:01038 | expanded access study | A clinical trial that provides a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. | Expanded Access studies include individual-patient investigational new drug (IND), treatment IND, compassionate use, emergency use or continued access. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann | 2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Brian S. Alper, Paola Rosati, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya | 2022-02-15 comment: Define IND acronym under comment for application under individual patient IND, treatment IND | from CTO: Expanded Access Study Studies that provide a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. Expanded Access studies include individual-patient IND, treatment IND, compassionate use, emergency use or continued access. An investigational drug product (including biological product) available through expanded access for patients who do not qualify for enrollment in a clinical trial. Expanded Access includes all expanded access types under section 561 of the Federal Food, Drug, and Cosmetic Act: (1) for individual patients, including emergency use; (2) for intermediate-size patient populations; and (3) under a treatment IND or treatment protocol. from NCIt: Expanded Access Study Studies that provide a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. Expanded Access studies include individual-patient IND, treatment IND, compassionate use, emergency use or continued access. also Compassionate Treatment (compassionate use trial, expanded access trial, pre-approval access) Providing experimental therapies to very sick individuals even though they don't meet the critera for inclusion in a trial. A way to provide an investigational therapy to a patient who is not eligible to receive that therapy in a clinical trial, but who has a serious or life-threatening illness for which other treatments are not available. Compassionate use trials allow patients to receive promising but not yet fully studied or approved cancer therapies when no other treatment option exists. A potential pathway for a patient with an immediately life-threatening condition or serious disease or condition to gain access to an investigational medical product (drug, biologic, or medical device) for treatment outside of clinical trials when no comparable or satisfactory alternative therapy options are available. NOTE: The intent is treatment, as opposed to research. Individual, Intermediate-size, and Widespread Use Expanded Access, also Emergency IND, are all programs administered under FDA guidelines. Additionally, the US Right-to-Try Act, which is independent of FDA, expands access. 
[FDA Expanded Access: Information for Physicians] from EDDA: compassionate use trial (expanded access trial, compassionate treatment) Providing experimental therapies to very sick individuals even though they don't meet the critera for inclusion in a trial. [NCI 2014_12E] Providing an investigational therapy to a patient who is not eligible to receive that therapy in a clinical trial, but who has a serious or life-threatening illness for which other treatments are not available. Compassionate use trials allow patients to receive promising but not yet fully studied or approved therapies when no other treatment option exists. Also called expanded access trial. [MeSH 2014_2014_02_10] shared as a comment: Expanded access is the use of an investigational new drug, biologics, and medical devices used to diagnose, monitor, or treat patients with serious diseases or conditions for which there are no comparable or satisfactory therapy options available outside of clinical trials. (USA FDA) | ||||||
4 | SEVCO:01030 | phase 1 trial | A clinical trial to gather initial evidence in humans to support further investigation of an intervention. | Phase 1 trials are often the first step in testing a new treatment in humans and may include safety assessment, measurement of metabolism and pharmacologic actions of a drug in humans, or the side effects associated with increasing doses. Phase 1 studies often include between 20 and 80 subjects, and often involve healthy subjects. | Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel | 2022-01-11 vote 7-0 by Harold Lehmann, Jesus Lopez-Alcalde, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk | 2022-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, janice tufte, Paola Rosati, C P Ooi, Paul Whaley, Joanne Dehnbostel | 2022-01-04 comments: Perhaps adding the following may improve the clarity "It may include testing the best way to give a new treatment (for example, by mouth, infusion into a vein, or injection)". "providing the initial investigation" sounds a bit vague compared to the other trial phase definitions. Also, can a trial really "provide an investigation"? Maybe suggest changing to "in which xxx is investigated", where "xxx" is a tighter definition of what "the initial" is referring to. 2022-01-11 comment: I would suggest not adding how many subjects are typically involved, maybe state that these usually have very small sample sizes. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377 | https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes: § 312.21 Phases of an investigation. An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows: .... Phase 1. (1) Phase 1 includes the initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid, Phase 2 studies. The total number of subjects and patients included in Phase 1 studies varies with the drug, but is generally in the range of 20 to 80. (2) Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. from CTO: Phase I trial (phase I study, early-stage clinical trial, phase I protocol, phase I clinical trial, trial phase 1) A clinical research protocol designed to test a new biomedical intervention in a small group of people for the first time. 
A Phase I trial can be to establish the toxicity of a new treatment with escalating intensity of the treatment administered and/or to determine the side effects of a new treatment for a particular indication in subjects. Includes initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients. The initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. NOTE: These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid, Phase 2 studies. The total number of subjects and patients included in Phase I studies varies with the drug, but is generally in the range of 20 to 80. Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. [After FDA CDER Handbook, ICH E8] (CDISC glossary) The first step in testing a new treatment in humans. These studies test the best way to give a new treatment (for example, by mouth, intravenous infusion, or injection) and the best dose. The dose is usually increased a little at a time in order to find the highest dose that does not cause harmful side effects. Because little is known about the possible risks and benefits of the treatments being tested, phase I trials usually include only a small number of patients who have not been helped by other treatments. The initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. NOTE: These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid Phase 2 studies. The total number of subjects and patients included in Phase 1 studies varies with the drug, but is generally in the range of 20 to 80. Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. [after FDA CDER handbook, ICH E8] from SCO: phase I trial not independently defined from NCIt: same as CTO from OCRe: A Phase 1 trial includes initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients. from EDDA: A clinical research protocol designed to test a new biomedical intervention in a small group of people for the first time. 
A Phase I trial can be to establish the toxicity of a new treatment with escalating intensity of the treatment administered and/or to determine the side effects of a new treatment for a particular indication in subjects. [NCI 2014_12E] Studies performed to evaluate the safety of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques in healthy subjects and to determine the safe dosage range (if appropriate). These tests also are used to determine pharmacologic and pharmacokinetic properties (toxicity, metabolism, absorption, elimination, and preferred route of administration). They involve a small number of persons and usually last about 1 year. This concept includes phase I studies conducted both in the U.S. and in other countries. [MeSH 2014_2014_02_10] from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021 4.3.1 Human Pharmacology The protection of study participants should always be the first priority when designing early clinical studies, especially for the initial administration of an investigational product to humans (usually referred to as phase 1). These studies may be conducted in healthy volunteer participants or in a selected population of patients who have the condition or the disease, depending on drug properties and the objectives of the development programme. These studies typically address one or a combination of the following aspects: 4.3.1.1 Estimation of Initial Safety and Tolerability The initial and subsequent administration of a drug to humans is usually intended to determine the tolerability of the dose range expected to be evaluated in later clinical studies and to determine the nature of adverse reactions that can be expected. These studies typically include both single and multiple dose administration. 4.3.1.2 Pharmacokinetics Characterisation of a drug's absorption, distribution, metabolism, and excretion continues throughout the development programme, but the preliminary characterisation is an essential early goal. Pharmacokinetic studies are particularly important to assess the clearance of the drug and to anticipate possible accumulation of parent drug or metabolites, interactions with metabolic enzymes and transporters, and potential drug-drug interactions. Some pharmacokinetic studies are commonly conducted in later phases to answer more specialised questions. For orally administered drugs, the study of food effects on bioavailability is important to inform the dosing instructions in relation to food. Obtaining pharmacokinetic information in sub-populations with potentially different metabolism or excretion, such as patients with renal or hepatic impairment, geriatric patients, children, and ethnic subgroups should be considered (ICH E4 Dose-Response Studies, E7 Clinical Trials in Geriatric Population, E11, and E5, respectively). 4.3.1.3 Pharmacodynamics & Early Measurement of Drug Activity Depending on the drug and the endpoint of interest, pharmacodynamic studies and studies relating drug levels to response (PK/PD studies) may be conducted in healthy volunteer participants or in patients with the condition or disease. If there is an appropriate measure, pharmacodynamic data can provide early estimates of activity and efficacy and may guide the dosage and dose regimen in later studies. 
from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf 3.1.3.1 Phase I (Most typical kind of study: Human Pharmacology) Phase I starts with the initial administration of an investigational new drug into humans. Although human pharmacology studies are typically identified with Phase I, they may also be indicated at other points in the development sequence. Studies in this phase of development usually have non-therapeutic objectives and may be conducted in healthy volunteer subjects or certain types of patients, e.g. patients with mild hypertension. Drugs with significant potential toxicity, e.g. cytotoxic drugs, are usually studied in patients. Studies in this phase can be open, baseline controlled or may use randomisation and blinding, to improve the validity of observations. Studies conducted in Phase I typically involve one or a combination of the following aspects: a) Estimation of Initial Safety and Tolerability The initial and subsequent administration of an investigational new drug into humans is usually intended to determine the tolerability of the dose range expected to be needed for later clinical studies and to determine the nature of adverse reactions that can be expected. These studies typically include both single and multiple dose administration. b) Pharmacokinetics Characterisation of a drug's absorption, distribution, metabolism, and excretion continues throughout the development plan. Their preliminary characterisation is an important goal of Phase I. Pharmacokinetics may be assessed via separate studies or as a part of efficacy, safety and tolerance studies. Pharmacokinetic studies are particularly important to assess the clearance of the drug and to anticipate possible accumulation of parent drug or metabolites and potential drug-drug interactions. Some pharmacokinetic studies are commonly conducted in later phases to answer more specialised questions. For many orally administered drugs, especially modified release products, the study of food effects on bioavailability is important. Obtaining pharmacokinetic information in sub-populations such as patients with impaired elimination (renal or hepatic failure), the elderly, children, women and ethnic subgroups should be considered. Drug-drug interaction studies are important for many drugs; these are generally performed in phases beyond Phase I but studies in animals and in vitro studies of metabolism and potential interactions may lead to doing such studies earlier. c) Assessment of Pharmacodynamics Depending on the drug and the endpoint studied, pharmacodynamic studies and studies relating drug blood levels to response (PK/PD studies) may be conducted in healthy volunteer subjects or in patients with the target disease. In patients, if there is an appropriate measure, pharmacodynamic data can provide early estimates of activity and potential efficacy and may guide the dosage and dose regimen in later studies. d) Early Measurement of Drug Activity Preliminary studies of activity or potential therapeutic benefit may be conducted in Phase I as a secondary objective. Such studies are generally performed in later phases but may be appropriate when drug activity is readily measurable with a short duration of drug exposure in patients at this early stage. | |||||
5 | SEVCO:01031 | exploratory investigational new drug study | A clinical trial that is conducted early in phase 1, involves very limited human exposure, and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies). | According to the original FDA guidance, such exploratory IND studies are conducted prior to the traditional dose escalation, safety, and tolerance studies that ordinarily initiate a clinical drug development program. The duration of dosing in an exploratory IND study is expected to be limited (e.g., 7 days). A type of clinical trial that involves low dosage and short duration of drug exposure for a limited number of study participants with the intent of gathering preliminary data on the mechanism of action, pharmacodynamics, pharmacokinetics, or bioavailability of promising therapeutic candidate agents in human subjects. Less official terms (phase 0 trial, pre-clinical trial) have been used to describe a clinical trial that uses an investigational agent that has never previously given to humans or for which there is extremely limited human experience. A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations). | Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Olga Vovk | 2022-02-01 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | 2022-01-25 vote 8-1 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde | 2022-01-25 comments: I had to read a couple of times and check the hierarchy to appreciate this definition, but I agree. For a later version of SEVCO, we probably should put citations ("original FDA guidance") into the documentation. concern over the use of the term 'phase 1' in the definition and the presence of an Alternative term 'phase 0 study`. | the original source at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/exploratory-ind-studies FDA GUIDANCE DOCUMENT Exploratory IND Studies Guidance for Industry, Investigators, and Reviewers JANUARY 2006 investigational new drug (IND) For the purposes of this guidance the phrase exploratory IND study is intended to describe a clinical trial that is conducted early in phase 1, involves very limited human exposure, and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies). Such exploratory IND studies are conducted prior to the traditional dose escalation, safety, and tolerance studies that ordinarily initiate a clinical drug development program. The duration of dosing in an exploratory IND study is expected to be limited (e.g., 7 days). from CTO: Early Phase I clinical trial (Phase 0 trial, Phase 0 clinical trial, Pre-Clinical Trial) A clinical trial that is at an Early Phase i or Phase 0, which is designed to use an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited. [def-source: NCI] Exploratory trials, involving very limited human exposure, with no therapeutic or diagnostic intent (e.g., screening studies, microdose studies). 
(Formerly listed as "Phase 0") A clinical trial that is at Early Phase 1 or Phase 0 from SCO: not included from NCIt: Preferred Name: Exploratory Investigational New Drug Study Definition: A type of clinical trial that involves low dosage and short duration of drug exposure for a limited number of study participants with the intent of gathering preliminary data on the mechanism of action, pharmacodynamics, pharmacokinetics, or bioavailability of promising therapeutic candidate agents in human subjects. CDISC-GLOSS Definition: A clinical study that is conducted early in Phase 1; involves very limited human exposure and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies) [FDA Guidance for industry, investigators, and Reviewers: exploratory IND studies, January 2006] See also Phase 0. First-in-Human Study = A type of phase 1 clinical trial in which the test product is administered to human beings for the first time. Phase 0 Trial = Pre-Clinical Trial = A clinical trial that uses an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited. First-in-human trials, in a small number of subjects, that are conducted before Phase 1 trials and are intended to assess new candidate therapeutic and imaging agents. The study agent is administered at a low dose for a limited time, and there is no therapeutic or diagnostic intent. NOTE: FDA Guidance for Industry, Investigators, and Reviewers: Exploratory IND Studies, January 2006 classifies such studies as Phase 1. NOTE: A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations). [Improving the Quality of Cancer Clinical Trials: Workshop summary-Proceedings of the National Cancer Policy Forum Workshop, improving the Quality of Cancer Clinical Trials (Washington, DC, Oct 2007)] (CDISC glossary) First-in-human trials, in a small number of subjects, that are conducted before Phase 1 trials and are intended to assess new candidate therapeutic and imaging agents. The study agent is administered at a low dose for a limited time, and there is no therapeutic or diagnostic intent. NOTE: FDA Guidance for Industry, Investigators, and Reviewers: Exploratory IND Studies, January 2006 classifies such studies as Phase 1. NOTE: A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations). [Improving the Quality of Cancer Clinical Trials: Workshop summary-Proceedings of the National Cancer Policy Forum Workshop, improving the Quality of Cancer Clinical Trials (Washington, DC, Oct 2007)] ) from OCRe: Phase 0 = A Phase 0 trial is an exploratory trial involving very limited human exposure, with no therapeutic or diagnostic intent (e.g., screening study, microdose study). from EDDA: pre-clinical trial = phase 0 trial = A clinical trial that uses an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. 
Adverse event reporting in Phase 0 trials is expedited. [NCI 2014_12E] | |||||
4 | SEVCO:01032 | phase 1/phase 2 trial | A clinical trial with a component meeting the definition of phase 1 trial and a component meeting the definition of phase 2 trial. | A phase 1 trial is a clinical trial to gather initial evidence in humans to support further investigation of an intervention. A phase 2 trial is a clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use. | Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte | 2022-01-25 vote 9-0 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde | 2022-01-18 vote 3-2 by Harold Lehmann, Paul Harris, Robin Ann Yurk, raradhikaag@gmail.com, Paul Whaley | 2022-01-18 comments: Does it matter that the Term has Arabic numerals and the Definition, Roman? Consider adding a comment for application to improve definition interpretation with individual term definitions for Phase I/Phase 2 trial Not sure I quite understand what the "separate sets of design parameters with" phrase means here? | from CTO: phase I/II trial (trial phase 1/2, trial phase 1-2) Trials that are a combination of phases 1 and 2. A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol. A class of clinical study that combines elements characteristic of traditional Phase I and Phase II trials. See also Phase I, Phase II. A trial to study the safety, dosage levels, and response to a new treatment. from SCO: phase I/II trial (trial phase 1/2, trial phase 1-2) A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol. from NCIt: same as CTO from OCRe: not included from EDDA: phase I/II trial (trial phase 1/2, trial phase 1-2) A class of clinical study that combines elements characteristic of traditional Phase I and Phase II trials. See also Phase I, Phase II. [NCIT_14.08d] [Contributing_Source_CDISC] A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol. [NCIT_14.08d] A trial to study the safety, dosage levels, and response to a new treatment. [NCIT_14.08d] | |||||
4 | SEVCO:01033 | phase 2 trial | A clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use. | Phase 2 trials are typically controlled clinical studies conducted to evaluate the effectiveness of the intervention for a particular indication and to determine the common short-term side effects and risks associated with the intervention. Phase 2 trials may have a goal of determining the dose(s) or regimen(s) for Phase 3 trials. Phase 2 studies usually include no more than several hundred subjects. | Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Harold Lehmann | 2022-01-11 vote 7-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk | 2021-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, janice tufte, Paola Rosati, C P Ooi, Joanne Dehnbostel, Paul Whaley | 2021-01-04 comments: The first part of the definition is ok. In the second part, I would suggest to change with "An insufficient evidence for the intervention tested or the desired patients' number failure could occur thus impeding regulatory approval for clinical use" Comment Suggestion to add to comment for term from extracted from notes-3.1.3.2: An important goal for this phase is to determine the dose(s) and regimen for Phase III trials. Early studies in this phase often utilize dose escalation designs (see ICH E4) to give an early estimate of dose response and later studies may confirm the dose response relationship for the indication in question by using recognized parallel dose-response designs (could also be deferred to phase III) Minor change - the phrasing is a little awkward, suggest "gather evidence about the effectiveness and safety of an intervention in patients with the disease or condition under study, but not sufficient...". I am not sure the comment for application is fully consistent with the definitions (what about safety?). 2022-01-11 comment: I would suggest not adding how many subjects are typically involved, maybe state that these usually have small sample sizes. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377 | https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes: § 312.21 Phases of an investigation. An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows: .... Phase 2. Phase 2 includes the controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. 
from CTO: Phase II trial A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. Includes controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in participants with the disease or condition under study and to determine the common short-term side effects and risks. Phase 2. Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. NOTE: Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. [After FDA CDER Handbook, ICH E8] (CDISC glossary) A study to test whether a new treatment has an anticancer effect (for example, whether it shrinks a tumor or improves blood test results) and whether it works against a certain type of cancer. Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. NOTE: Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. [after FDA CDER handbook, ICH E8] from SCO: phase II trial not independently defined from NCIt: same as CTO from OCRe: A Phase 2 trial includes controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks. from EDDA: A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. [NCI 2014_12E] Studies that are usually controlled to assess the effectiveness and dosage (if appropriate) of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques. These studies are performed on several hundred volunteers, including a limited number of patients with the target disease or disorder, and last about two years. This concept includes phase II studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10] A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. [NCI 2014_12E] Studies that are usually controlled to assess the effectiveness and dosage (if appropriate) of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques. 
These studies are performed on several hundred volunteers, including a limited number of patients with the target disease or disorder, and last about two years. This concept includes phase II studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10] from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021 After initial clinical studies provide sufficient information on safety, clinical pharmacology and dose, exploratory and confirmatory studies (usually referred to as phases 2 and 3, respectively) are conducted to further evaluate both the safety and efficacy of the drug. Exploratory studies are designed to investigate safety and efficacy in a selected population of patients for whom the drug is intended. Additionally, these studies aim to refine the effective dose(s) and regimen, refine the definition of the targeted population, provide a more robust safety profile for the drug, and include evaluation of potential study endpoints for subsequent studies. Exploratory studies may provide information on the identification and determination of factors that affect the treatment effect and, possibly combined with modelling and simulation, serve to support the design of later confirmatory studies. from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf 3.1.3.2 Phase II (Most typical kind of study: Therapeutic Exploratory) Phase II is usually considered to start with the initiation of studies in which the primary objective is to explore therapeutic efficacy in patients. Initial therapeutic exploratory studies may use a variety of study designs, including concurrent controls and comparisons with baseline status. Subsequent trials are usually randomised and concurrently controlled to evaluate the efficacy of the drug and its safety for a particular therapeutic indication. Studies in Phase II are typically conducted in a group of patients who are selected by relatively narrow criteria, leading to a relatively homogeneous population and are closely monitored. An important goal for this phase is to determine the dose(s) and regimen for Phase III trials. Early studies in this phase often utilise dose escalation designs (see ICH E4) to give an early estimate of dose response and later studies may confirm the dose response relationship for the indication in question by using recognised parallel dose-response designs (could also be deferred to phase III). Confirmatory dose response studies may be conducted in Phase II or left for Phase III. Doses used in Phase II are usually but not always less than the highest doses used in Phase†I. Additional objectives of clinical trials conducted in Phase II may include evaluation of potential study endpoints, therapeutic regimens (including concomitant medications) and target populations (e.g. mild versus severe disease) for further study in Phase II or III. These objectives may be served by exploratory analyses, examining subsets of data and by including multiple endpoints in trials. | |||||
4 | SEVCO:01034 | phase 2/phase 3 trial | A clinical trial with a component meeting the definition of phase 2 trial and a component meeting the definition of phase 3 trial. | A phase 2 trial is a clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use. A phase 3 trial is a clinical trial to gather the evidence of effectiveness and safety of an intervention, intended to provide an adequate basis for regulatory approval for clinical use. | Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte | 2022-02-08 vote 7-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper, Paul Whaley, Sunu Alice Cherian | 2022-01-18 vote 2-3 by Harold Lehmann, Paul Harris, Robin Ann Yurk, raradhikaag@gmail.com, Paul Whaley 2022-01-25 vote 9-1 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde, Paul Whaley 2022-02-01 vote 4-1 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | 2022-01-18 comments: Phase 2/3 trials determine efficacy of a new biomedical intervention i.e. whether it works as intended in a larger group of study participants, and monitor adverse effects so that the intervention may be used safely. Consider adding a comment for application to improve definition interpretation with individual term definitions for Phase 2/Phase 3 trial. Not sure I quite understand what the "separate sets of design parameters with" phrase means here? 2022-01-25 comment: As already pointed out, to me these definitions seem incongruent and lack specification of the outcomes used, namely core clinical outcomes relevant for patients. Are phase 2 and phase 3 trials designed to gather evidence of 'effectiveness' and safety or 'efficacy' and monitor adverse effects of a new biomedical intervention? For what outcome? The three sentences proposed in the comment for application of this code seem to overlap the two terms (i.e. is efficacy still the right term used for trials, or is it effectiveness, commonly used for prospective observational studies?). I think it is important to justify why the two terms are used for clinical trial designs. 2022-02-01 comment: To me this definition has no clear meaning. As you are working and struggling so hard to define and clarify the scientific evidence code system, I wish to participate in the meeting to discuss this tricky definition with you. If you agree, please, let me know. | from CTO: phase II/III trial (trial phase 2/3, trial phase 2-3) Trials that are a combination of phases 2 and 3. A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen. A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. from SCO: not included from NCIt: phase II/III trial (trial phase 2/3, trial phase 2-3) A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen. A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. 
from OCRe: not included from EDDA: phase II/III trial (trial phase 2/3, trial phase 2-3) A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. [NCIT_14.08d] A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. [NCIT_14.08d] [Contributing_Source_CDISC] A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen. [NCIT_14.08d] "Designs that combine phase II and III functions (ie, phase II/III designs) have separate sets of design parameters that correspond to their phase II and III components." -- Korn EL et al. Design Issues in Randomized Phase II/III Trials. J Clin Oncol 2012 https://ascopubs.org/doi/full/10.1200/JCO.2011.38.5732. https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3295562&blobtype=pdf | |||||
4 | SEVCO:01035 | phase 3 trial | A clinical trial to gather the evidence of effectiveness and safety of an intervention, intended to provide an adequate basis for regulatory approval for clinical use. | Phase 3 trials are typically conducted after preliminary evidence suggests effectiveness and usually have the primary objective to demonstrate or confirm therapeutic benefit compared to placebo or a standard treatment. Phase 3 studies usually include from several hundred to several thousand subjects. Study endpoints for phase 3 trials should be clinically relevant or of adequate surrogacy for predicting clinical effects. | Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte, Kenneth Wilkins, Harold Lehmann | 2022-01-18 vote 6-0 by Harold Lehmann, Paul Harris, Robin Ann Yurk, Paola Rosati, raradhikaag@gmail.com, Paul Whaley | 2021-12-21 vote 2-2 by Robin Ann Yurk, C P Ooi, Janice Tufte, Paul Whaley 2022-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Paola Rosati, C P Ooi, Joanne Dehnbostel, Paul Whaley 2022-01-11 vote 6-1 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk | 2021-12-21 comments: Note: consider adding the following comments from comments from previous reviewers to improve interpretation. 3.1.3.3 “Phase III usually is considered to begin with the initiation of studies in which the primary objective is to demonstrate, or confirm therapeutic benefit.” EDDA: “Comparative studies to verify the effectiveness of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques determined in phase II studies…. A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo” .................. I think the pieces are there but the phrasing is difficult to parse. 2022-01-04 comments: Perhaps adding "compared with a standard treatment" may improve the clarity. Minor change - the phrasing is a little awkward, suggest "gather evidence about the effectiveness and safety of an intervention that is needed...". 2022-01-11 comments: I would suggest not adding how many subjects are typically involved. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377 Minor change for consistency with other trial definitions: "A clinical trial to gather evidence of effectiveness and safety of an intervention, that is intended to provide an adequate basis for regulatory approval for clinical use." | https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes: § 312.21 Phases of an investigation. An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows: .... Phase 3. Phase 3 studies are expanded controlled and uncontrolled trials. 
They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather the additional information about effectiveness and safety that is needed to evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. Phase 3 studies usually include from several hundred to several thousand subjects. from CTO: Phase III trial Includes trials conducted after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug. A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo. Phase 3. Studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather the additional information about effectiveness and safety that is needed to confirm efficacy and evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. NOTE: Phase 3 studies usually include from several hundred to several thousand subjects. [After FDA CDER Handbook, ICH E8] (CDISC glossary) A study to compare the results of people taking a new treatment with the results of people taking the standard treatment (for example, which group has better survival rates or fewer side effects). In most cases, studies move into phase III only after a treatment seems to work in phases I and II. Phase III trials may include hundreds of people. Studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained and are intended to gather the additional information about effectiveness and safety that is needed to confirm efficacy and evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. NOTE: Phase 3 studies usually include from several hundred to several thousand subjects. [after FDA CDER handbook, ICH E8] from SCO: A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo. 
from NCIt: Phase III trial (Phase III Clinical Trial; Phase III Trial; phase 3; Trial Phase 3; PHASE III TRIAL; phase III trial; Phase III Trials; 3; Phase 3 Study; Clinical Trials, Phase III; Phase III Study; Phase III Protocol) A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo. from OCRe: A Phase 3 trial includes expanded controlled and uncontrolled trials after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug and provide an adequate basis for physician labeling. from EDDA: Comparative studies to verify the effectiveness of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques determined in phase II studies. During these trials, patients are monitored closely by physicians to identify any adverse reactions from long-term use. These studies are performed on groups of patients large enough to identify clinically significant responses and usually last about three years. This concept includes phase III studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10] A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo. [NCI 2014_12E] from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021 After initial clinical studies provide sufficient information on safety, clinical pharmacology and dose, exploratory and confirmatory studies (usually referred to as phases 2 and 3, respectively) are conducted to further evaluate both the safety and efficacy of the drug. Confirmatory studies are designed to confirm the preliminary evidence accumulated in earlier clinical studies that a drug is safe and effective for use for the intended indication and recipient population. These studies are often intended to provide an adequate basis for marketing approval, and to support adequate instructions for use of the drug and official product information. They aim to evaluate the drug in participants with or at risk of the condition or disease who represent those who will receive the drug once approved. This may include investigating subgroups of patients with frequently occurring or potentially relevant comorbidities (e.g., cardiovascular disease, diabetes, hepatic and renal impairment) to characterise the safe and effective use of the drug in patients with these conditions. Confirmatory studies may evaluate the efficacy and safety of more than one dose or the use of the drug in different stages of disease or in combination with one or more other drugs. 
If the intent is to administer a drug for a long period of time, then studies involving extended exposure to the drug should be conducted (ICH E1 Clinical Safety for Drugs used in Long-Term Treatment). Irrespective of the intended duration of administration, the duration of effect of the drug will also inform the duration of follow-up. Study endpoints selected for confirmatory studies should be clinically relevant and reflect disease burden or be of adequate surrogacy for predicting disease burden or sequelae. from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf 3.1.3.3 Phase III (Most typical kind of study: Therapeutic Confirmatory) Phase III usually is considered to begin with the initiation of studies in which the primary objective is to demonstrate, or confirm therapeutic benefit. Studies in Phase III are designed to confirm the preliminary evidence accumulated in Phase II that a drug is safe and effective for use in the intended indication and recipient population. These studies are intended to provide an adequate basis for marketing approval. Studies in Phase III may also further explore the dose-response relationship, or explore the drug's use in wider populations, in different stages of disease, or in combination with another drug. For drugs intended to be administered for long periods, trials involving extended exposure to the drug are ordinarily conducted in Phase III, although they may be started in Phase II (see ICH E1). ICH E1 and ICH E7 describe the overall clinical safety database considerations for chronically administered drugs and drugs used in the elderly. These studies carried out in Phase III complete the information needed to support adequate instructions for use of the drug (official product information). | |||||
4 | SEVCO:01036 | post-marketing study | A clinical trial to gather additional evidence of effectiveness and safety of an intervention for an already approved clinical use. | Post-marketing studies (phase IV trials) are often used to evaluate adverse effects that were not apparent in phase III trials, and may involve thousands of patients. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Paul Whaley | 2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Paola Rosati, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Joanne Dehnbostel | 2022-02-15 comment: Maybe add hyphen between "already" and "approved" | from CTO: Phase IV Trial (Phase IV Study, Phase IV clinical trial, phase 4 study, phase 4 trial, trial phase 4) Studies of FDA-approved drugs to delineate additional information including the drug's risks, benefits, and optimal use. A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed. After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial. Post approval studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [after FDA CDER handbook, ICH E8] Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] (CDISC glossary) from SCO: not included from NCIt: Phase IV Trial (Phase IV Study, Phase IV clinical trial, phase 4 study, phase 4 trial, trial phase 4) A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed. After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial. Post approval studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. 
NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [after FDA CDER handbook, ICH E8] Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] (CDISC glossary) from OCRe: A Phase 4 study monitors FDA-approved drug to delineate additional information including the drug's risks, benefits, and optimal use. from EDDA: A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed. [NCIT_14.08d] After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial. [NCIT_14.08d] Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] [Contributing Source_CDISC] [NCIT_14.08d] | ||||||
2 | SEVCO:01002 | observational research | A study design in which the independent variables (exposures or interventions) are not prospectively assigned or modified by the investigator. | We acknowledge that observational study design and observational study may not be exact synonyms of observational research, but observational research could be used to encompass both design and implementation of the design. In the context of coding study design factors, observational research is commonly used to denote non-interventional research. | Mario Tristan, Joanne Dehnbostel, Harold Lehmann, Khalid Shahin, Brian S. Alper | 8/8 as of 6/7/2021: Asiyah Lin, KM Saif-Ur-Rahman, Harold Lehmann, Sebastien Bailly, Bhagvan Kommadi, Mario Tristan, Leo Orozco, Ahmad Sofi-Mahmudi | 2021-05-17 vote 5-3 on "Observational research = In a prospective or retrospective study, an independent variable is measured but not manipulated by the investigator to evaluate a response or outcome (the dependent variable)." by Eric Harvey, Bhagvan Kommadi, Paola Rosati, KM Saif-Ur-Rahman, Ahmad Sofi-Mahmudi, Jesus Lopez-Alcalde, Sorana D. Bolboaca, Harold Lehmann, 2021-05-24 vote 8-3 on Observational research="A study design in which the variables (exposures, interventions, and outcomes) are not prospectively assigned or modified by the investigator." by Alejandro Piscoya, Philippe Rocca-Serra, KM Saif-Ur-Rahman, Eric Harvey, Harold Lehmann, Bhagvan Kommadi, Sorana D. Bolboaca, Jesús López-Alcalde, Paola Rosati, Tatyana Shamliyan, Brian Alper, 2021-05-31 vote 11-1 on Observational research="A study design in which the independent variables (exposures or interventions) are not prospectively assigned or modified by the investigator." by Eric Harvey, Bhagvan Kommadi, Brian Alper, Sebastien Bailly, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Paola Rosati, Sorana D. Bolboaca, Asiyah Lin, Leo Orozco, Erfan Shamsoddin | I dislike the term "manipulated" in the definition -- suggest change to: In a prospective or retrospective study, without any specific intervention assigned to participants, an investigator observes and measures an intervention or procedure (the independent variable) to assess or learn more about an effect or outcome (the dependent variable). "In a prospective or retrospective study, an independent variable (a predictor) is observed or measured by the investigator to evaluate a response or an outcome (the dependent variable)." I would delete "in a prospective or retrospective study" as it could be ambispective 5-24-2021 similar comment about the synonyms assigned the class (conflating plan/design) with the object realised by executing a plan I think that the outcomes are never assigned or modified by the investigator (they are measured). Thus, to be consistent with the definition of interve…suggest to remove "outcomes" from ( ) (is there a semantic difference between "are not" and "none is"?) I suggest to clarify the goal as drawing causal inferences from the observed association between exposure and outcomes 5-31-2021 comment The suggested definition is a non-interventional study definition. Not sure if a non-interventional is fully equivalent to observational studies | ||||||
3 | SEVCO:01037 | post-marketing surveillance study | An observational study to identify adverse events related to the use of an approved clinical intervention. | Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Harold Lehmann | 2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Brian S. Alper, Paola Rosati, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Joanne Dehnbostel | 2022-02-15 comments: Alternative terms could be: Post-marketing evaluation study, (Do we need to connect the "approval" to an indication? | from CTO: not included from SCO: not included from NCIt: Postmarketing Surveillance Programs to identify adverse events that did not appear during the drug approval process. Ongoing safety monitoring of marketed drugs. See also Phase 4 studies, Phase 5 studies. also Phase V Trial (phase 5, trial phase 5) Postmarketing surveillance is sometimes referred to as Phase V. See outcomes research. from OCRe: not included from EDDA: postmarketing evaluation study (post-marketing product surveillance) Surveillance of drugs, devices, appliances, etc., for efficacy or adverse effects, after they have been released for general sale. [MeSH 2014_2014_02_10] | |||||||
2 | SEVCO:01010 | comparative study design | A study design in which two or more groups are compared. | Brian S. Alper, Joanne Dehnbostel, Ellen Jepson, Kenneth Wilkins, Mario Tristan | 9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya | |||||||||
3 | SEVCO:01011 | parallel cohort design | A comparative study design in which the groups are compared concurrently and participants are expected to remain in the groups being compared for the entire duration of participation in the study. | Brian S. Alper, Joanne Dehnbostel, Ellen Jepson, Kenneth Wilkins, Mario Tristan, Harold Lehmann | 9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya | |||||||||
3 | SEVCO:01012 | crossover cohort design | A comparative study design in which participants receive two or more alternative exposures during separate periods of time. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | 8/9 as of 8/9/2021: voting on "A comparative study design in which participants receive two or more alternative exposures during separate periods of time." by Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya | 8/9/21 comment: It's not clear from this definition that each group of participants receives the same 2 or more exposures, but not in the same time sequence | |||||||
4 | SEVCO:01024 | controlled crossover cohort design | A crossover cohort design in which two or more cohorts have different orders of exposures. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
4 | SEVCO:01025 | single-arm crossover design | A crossover cohort design in which all participants are in a single cohort with the same order of exposures. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
3 | SEVCO:01013 | case control design | A comparative study design in which the groups being compared are defined by outcome presence (case) or absence (control). | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
3 | SEVCO:01014 | matching for comparison | A comparative study design in which individual participants in different groups being compared are paired or matched into sets based on selected attributes for within-set analysis. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
4 | SEVCO:01020 | family study design | A matched study design in which related or non-related family members are compared. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan | |||||||||
5 | SEVCO:01021 | twin study design | A family study design in which twin siblings are compared. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan | |||||||||
3 | SEVCO:01015 | cluster as unit of allocation | A comparative study design in which participants are allocated to exposures (interventions) by their membership in groups (called clusters) rather than by individualized assignments. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
2 | SEVCO:01023 | non-comparative study design | A study design with no comparisons between groups with different exposures and no comparisons between groups with different outcomes. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
3 | SEVCO:01016 | uncontrolled cohort design | A non-comparative study design in which two or more participants are evaluated in a single group (or cohort). | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
3 | SEVCO:01017 | case report | A non-comparative study design in which a single participant is evaluated. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer | 7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya | |||||||||
2 | SEVCO:01022 | population-based design | A study design in which the unit of observation is a population or community. | The term ‘population-based study’ is generally used for an observational comparative study design in which populations are compared. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Harold Lehmann | 5/5 as of 10/18/2021: Cheow Peng Ooi, Janice Tufte, Robin Ann Yurk, Eric Harvey, Joanne Dehnbostel | ||||||||
3 | SEVCO:01044 | ecological design | A study design in which the unit of observation is a population or community defined by social relationships or physical surroundings. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Harold Lehmann | 5/5 as of 10/18/2021: Cheow Peng Ooi, Janice Tufte, Robin Ann Yurk, Eric Harvey, Joanne Dehnbostel | 6 to 1 in 2021-09-20 vote with 7 participants (Ecological design = A comparative study design in which populations are compared. An ecologic study is a non individual-human study in which the unit of observation is a population or community.) - Robin Ann Yurk, Janice Tufte, Eric Harvey, Jesus Lopez-Alcalde, Mario Tristan, Sorana D Bolboaca, Paola Rosati, 8 to 1 vote on 2021-09-27 with 9 participants (Ecological design [Population-based design, Ecologic study, Population study] = A comparative study design in which populations are compared. An ecologic study is a non-individual study in which the unit of observation is a population or community.) - Jesus Lopez-Alcalde, Asiyah Lin, Eric Harvey, Bhagvan Kommadi, Alejandro Piscoya, Robin Ann Yurk, Mario Tristan, Paola Rosati, Janice Tufte | 2021-09-20 comment: I miss here the explicit declaration that ecological studies are observational. A cluster trial can randomise communities and is not an ecological study. Besides, and I may be wrong, but an ecological study may include non-humans, for example, ecological study of air contamination levels in Spain compared to Italy. 2021-09-27 comment: The differences of ecologic studies and other population based studies are not reflected. consider adding "Variables in an ecologic analysis may be aggregate measures, environmental measures, or global measures." | |||||||
1 | SEVCO:00998 | study design process | A specification of a sequence of actions for a component or part of a study design. | Study design is defined as a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel | 2022-03-22 vote 5-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk | ||||||||
2 | SEVCO:01027 | cross sectional data collection | A study design process in which data is collected at a single point in time. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan | The word "feature" was added to the definition on March 7, 2022 to match the change in hierarchical terms. | ||||||||
2 | SEVCO:01028 | longitudinal data collection | A study design process in which data is collected at two or more points in time. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan | The word "feature" was added to the definition on March 7, 2022 to match the change in hierarchical terms. | ||||||||
3 | SEVCO:01018 | time series design | A longitudinal data collection which includes a set of time-ordered observations. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan | |||||||||
4 | SEVCO:01019 | before and after comparison | A time series design which includes comparisons of observations before and after an event or exposure. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan | |||||||||
2 | SEVCO:01045 | primary data collection | A study design process in which the data are recorded and collected during the study for the purpose of the same study. | The study design process includes the source and method for data collection. When the data are collected for original research to answer the original research questions, this is called primary data collection. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Mario Tristan | 2022-03-29 vote 6-0 by Paul Whaley, Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Brian S. Alper, Cauê Monaco | 2022-02-22 vote 7-1 by Paola Rosati, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A, Rebecca Baker, Robin Ann Yurk, Janice Tufte, Harold Lehmann 2022-03-01 vote 3-3 by Joanne Dehnbostel, Robin Ann Yurk, Paul Whaley, Nisha Mathew, Paola Rosati, Sunu Alice Cherian 2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 5-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk, nelle.stocquart | 2022-02-22 comments: Definition: Data recorded and collected during the study. For parallelism with "secondary data collection," perhaps write, "for the purpose of the current study."2022-03-01 comments: A data collection technique in which the data are collected and recorded during the study for the purpose of the same study. For the term definition---I would edit so it reads...A study design in which the data are collected and recorded to answer a new research question. Data collection is not study design, it can called as a technique A data collection technique in which data is recorded and collected during the study for the purpose of the same study. 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process "in which" sounds strange for a "feature." ("Color is a feature in which..." does not sound right.) Perhaps a...feature regarding how data are recorded..."? 2022-03-22 comment: Suggest modify definition or create a comment for application so it reads: A Study design method in which the data are collected for original research to answer new research questions. | ||||||
2 | SEVCO:01026 | real world data collection | A study design process in which the study data are obtained from a source of data collected during a routine process in the natural environment rather than using a process designed or controlled by the researcher. | Real world data collection occurs when the study uses data obtained from a source that was not created for research as a primary purpose. A study can involve both primary data collection (with some data collected by a process created for the purpose of the study investigation) and real world data collection (with some data collected from a process created for a routine business or operational purpose). If a study involves both primary data collection and real world data collection, both terms can be applied. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Mario Tristan | 2022-05-06 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte | 2022-02-22 vote 7-1 by Paola Rosati, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A, Rebecca Baker, Robin Ann Yurk, Janice Tufte, Harold Lehmann 2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-2 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk, nelle.stocquart 2022-03-29 vote 4-1 by Paul Whaley, Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco 2022-04-05 vote 6-0 by Cauê Monaco, Harold Lehmann, Mario Tristan, Robin Ann Yurk, Jesus Lopez-Alcalde, Nisha Mathew THEN THE TERM CHANGED to Real World Data Collection 2022-04-19 vote 3-1 by Cauê Monaco, Robin Ann Yurk, Jesus Lopez-Alcalde, Harold Lehmann | 2022-02-22 comment: Definition: Data gathered from studies, surveys, experiments that have been done by other people for other studies 2022-03-15 comment: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process 2022-03-22 comments: The proposed definition only details the source of the data but not the data collection itself. I propose: "A study design process in which the data are collected from data collected for a purpose other than the current study". Suggest modify definition or create a comment for application so it reads. A study design method in which the previously collected data is used to answer new and additional research questions. Some example of the types of studies are retrospective study etc.. 2022-03-29 comment: In the comment for application--suggestion. Delete phrase When data are collected. I would combined sentence When data are used in the form of analysis and interpretation from original research to answer additional research questions separate from the original research. 2022-04-12 comments: For Term definition: Suggest revising definition to A study design process in which the study data are obtained from data collected for recording data for business purposes. Comment for application: Add this statement, There are different categories of research such as business research, marketing research, insurance research etc. "data are obtained from data collected" may be changed to "data are obtained from a source for data collection" 2022-04-19 comment: Suggest edit the term definition. The Alternative term and comment for application are fine. There are different kinds of research business research that can be classified as real world data. 
The term definition should read....A study design in which the study data processes are obtained from a natural environment rather than controlled research. | ||||||
3 | SEVCO:01039 | real world data collection from healthcare records | Real world data collection from data obtained routinely for a purpose of recording healthcare delivery in a record controlled by a healthcare professional. | This term is used when the original data collection (primary data collection) is done for the purpose of delivering professional healthcare services. The secondary use of this data (sometimes called 'real world data') for research is then called secondary data collection. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo | 2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Robin Ann Yurk, Muhammad Afzal | 2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart 2022-03-29 vote 3-1 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco 2022-04-05 vote 6-1 by Cauê Monaco, Paola Rosati, Harold Lehmann, Mario Tristan, Robin Ann Yurk, Jesus Lopez-Alcalde, Nisha Mathew | 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process would proposed, "...for the purpose..." (as in primary data collection) Seems like we should add that the original data is then used for a secondary research purpose in the definition, not only explain in Alternative terms 2022-03-22 comment: ídem: Proposal: "A study design process in which the data are collected from data collected for a purpose of recording healthcare delivery in a record controlled by a healthcare professional."2022-03-29 comment: "medical records" and "health records" seem to be much more widely used expressions than "healthcare delivery records"2022-04-05 comments: Suggest make a comment or distinction in the term definition that the primary data collected is categorized as real world data for the purpose of delivering professional healthcare services. The data set can be used for secondary data collection. | ||||||
3 | SEVCO:01050 | real world data collection from personal health records | Real world data collection from data obtained routinely for a purpose of recording data related to personal health in a record controlled by the person, guardian, or caretaker. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo | 2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal | 2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart 2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper THEN TERM CHANGED 2022-04-05 | 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process Might suggest "the purpose," again add in the definition that the original data is then used for a secondary purpose 2022-03-22 comment: Ídem: Proposal: "A study design process in which the data are collected from data collected for a purpose of recording data related to personal health in a record controlled by the person, guardian, or caretaker." | |||||||
3 | SEVCO:01040 | real world data collection from healthcare financing records | Real world data collection from data obtained routinely for a purpose of recording healthcare financing. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal | 2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart 2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper THEN TERM CHANGED 2022-04-05 | 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process "the purpose"add original financial data is then used for secondary analysis etc 2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of recording healthcare financing" | |||||||
3 | SEVCO:01048 | real world data collection from testing procedures | Real world data collection from data obtained routinely for a purpose of testing, such as diagnostic testing or screening examination. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal | 2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart 2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper THEN TERM CHANGED 2022-04-05 | 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process "the purpose"and then used for secondary research purposes 2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of testing, such as diagnostic testing or screening examination" | |||||||
4 | SEVCO:01046 | real world data collection from monitoring procedures | Real world data collection from data obtained routinely for a purpose of repeated testing. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal | 2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart 2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper THEN TERM CHANGED 2022-04-05 | 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process "the purpose"2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of repeated testing." | |||||||
2 | SEVCO:01049 | secondary data collection from prior research | A study design process in which the data are collected from data obtained during a different study than the current study. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan | 2022-03-29 vote 5-0 by Mario Tristan, Paul Whaley, Cauê Monaco, Joanne Dehnbostel, Harold Lehmann | 2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart | 2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process 2022-03-22 comments: Ídem. "A study design process in which the data are collected from data collected during a different study than the current study"When does this recording happen? | |||||||
2 | SEVCO:01042 | secondary data collection from a registry | A study design process in which the data are collected from a system organized to obtain and maintain uniform data for discovery and analysis, and this system is organized prior to the current study. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo | 2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper | 2022-03-15 vote 3-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann 2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart | 2022-03-15 comments: the term discovery is not suitable. Can we have some other term? ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process 2022-03-22 comments: Ídem. "A study design process in which the data are collected from data collected in a system organized to obtain and maintain uniform data for discovery and analysis"The definition needs to be more, When did this happen? Before the study starts? | "For the purposes of this guide, a patient registry is an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes" -- in https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/registries-guide-3rd-edition_research.pdf | ||||||
2 | SEVCO:01047 | DEPRECATED: mixed primary and secondary data collection | A study design process in which some data are originally recorded and collected for the purpose of the study and some data are originally recorded and collected for a purpose other than the study. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Khalid Shahin | DEPRECATED: We decided 2022-03-08 to drop the term as it can be handled by coding 2 or more other terms. | |||||||||
2 | SEVCO:01051 | multisite data collection | A study design process in which data are collected from two or more geographic locations. | For studies conducted across multiple contexts (administrative or logistical) that are distinct from geographic locations, potentially introducing greater variability beyond multisite data collection, use the term Multicentric. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan | 2022-05-10 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte | 2022-05-06 vote 6-1 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte | 2022-04-26 comment: As stated, this term has too much overlap with "Multicentric" Why do we need this term? | ||||||
2 | SEVCO:01086 | quantitative analysis | A study design process in which data are analyzed with mathematical or statistical methods and formulas. | The distinction of quantitative vs. qualitative analysis refers to whether mathematical processing is involved, whether or not the analysis includes numerical variables. Processing a categorical variable (e.g. values of happy, sad, or jealous as a response to "How are you feeling?") to produce numerical results (e.g. 30% happy, 50% sad, 20% surprised) would be classified as a Quantitative analysis. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann | 2022-05-17 vote 8-1 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde 2022-05-24 vote 5-1 by Robin Ann Yurk, nelle.stocquart@kce.fgov.be, Eric M Harvey, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde | 2022-05-17 comments: Suggest including examples of quantitative analysis so as to improve your definition as there are many categories of quantitative methods: ie survey methods, logistic regression,...etc Quantitative and qualitative have categorical results I believe 2022-05-24 comment: An analytic approach using statistical methods and formulas to report the data for interpretation 2022-05-26 comment: I would leave the description of a qualitative analysis out of the comment for application | ||||||
2 | SEVCO:01087 | qualitative analysis | A study design process in which data are analyzed, without primary reliance on mathematical or statistical techniques, by coding and organizing data to provide interpretation or understanding of experiences or hypotheses. | The distinction of quantitative vs. qualitative analysis refers to whether mathematical processing is involved, regardless of whether the analysis includes numerical variables. Processing a categorical variable (e.g. values of happy, sad, or jealous as a response to "How are you feeling?") to produce numerical results (e.g. 30% happy, 50% sad, 20% jealous) would be classified as a Quantitative analysis. Processing the transcripts of interviews to categorize phrases and report themes identified across interviews would be classified as a Qualitative analysis. Qualitative analysis techniques may include phenomenology development from categorical codes, and may result in discovery or creation of theories that are unattainable through quantitative analysis. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann | 2022-05-17 vote 5-4 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde 2022-05-24 vote 4-1 by Robin Ann Yurk, Eric M Harvey, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde | 2022-05-17 comments: What about ordinal data such as low/medium/high? I think I would view that as qualitative. Suggest revise definition to include examples of the analysis methods as Alternative terms or comment for applications: ie focus groups. There are many new software tools which apply quantitative methods to qualitative studies. Quantitative and qualitative have categorical results I believe Disagree - (Sorry, maybe you already know my comment here ;>). From my experience, qualitative analysis produces more than descriptive or categorical results, and uses a range of essential complex methodologies for producing unattainable results from trials. Some methods are inductive, others are deductive, or a mix of both. This modify the results achievable. For example, phenomenology from categorical codes produces new understanding of people's lived experiences (deemed robust, even from a small but convenient sample of people), whereas grounded theory, from descriptive and categorical data results, discovers or creates novel theories, crucial for subsequent research scrutiny, even for a trial. I would suggest to define qualitative analysis differently = A study design process in which data, analysed and coded to produce descriptive and categorical results, lead to new understanding of people's lived experiences or new theories, unattainable from quantitative studies, essential for future trials. In my opinion, descriptive numerical results come from quantitative analysis also (for example, incidence of SARS-COV2 per 100.000 habitants). I am not an expert in qualitative research but I guess it tackles phenomenons which can be observed but not measured. 2022-05-24 comment: Qualitative analysis provide a description or summary to understand exploratory experiences and patterns, themes in the data which can provide the framework for additional data interpretation through other analysis such as quantitative analysis. An example of a qualitative method is focus groups. Technology exists such as natural language processing or other software to report the analysis. 2022-05-26 comment: I would leave the description for a quantitative analysis out of the definition. I would also delete the example of feelings as this can be quantified through satisfaction research which is a quantitative analysis. I would give an example of focus groups or nature language processing. The method involves identifying themes in narrative text. | ||||||
2 | SEVCO:01060 | blinding of study participants | A study design process in which study participants are not informed of their intervention assignment. | Masking of study participants involves actions to conceal information that could lead to their awareness of their intervention assignment, such as provision of placebo or simulated interventions that mimic the target interventions. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte | 2022-08-23 vote 6-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey, Robin Ann Yurk | ||||||||
2 | SEVCO:01061 | blinding of intervention providers | A study design process in which the people administering the intervention are not informed of the intervention assignment. | Masking of intervention providers involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants, such as provision of placebo interventions that mimic the target interventions. The terms 'double-blinding' and 'triple-blinding' are not clearly and consistently defined terms but typically suggest blinding of intervention providers. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins | 2022-08-23 vote 6-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey, Robin Ann Yurk | ||||||||
2 | SEVCO:01062 | blinding of outcome assessors | A study design process in which the people determining the outcome are not informed of the intervention assignment. | Masking of outcome assessors involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants to minimize the influence of such awareness on the determination of outcome measurement values. The terms 'triple-blinding' and 'quadruple-blinding' are not clearly and consistently defined terms but may suggest blinding of outcome assessors. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins | 2022-08-23 vote 5-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey | ||||||||
2 | SEVCO:01063 | blinding of data analysts | A study design process in which the people managing or processing the data and statistical analysis are not informed of the intervention assignment. | The term 'data analysts' is meant to include any person who works with the data at any point between data collection and the reporting of analyzed results. Masking of data analysts involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants, such as noninformative labeling used to represent the study groups. The terms 'triple-blinding' and 'quadruple-blinding' are not clearly and consistently defined terms but may suggest blinding of data analysts. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins | 2022-08-23 vote 5-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey | ||||||||
2 | SEVCO:01064 | allocation concealment | A study design process in which all parties influencing study enrollment and allocation to study groups are unaware of the group assignment for the study participant at the time of enrollment and allocation. | Allocation concealment occurs before and during the enrollment process and refers to limiting awareness of assignment during the process of recruitment and assignment to groups. Other blinding and masking terms refer to limiting awareness of the assignment during and after enrollment. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann | 2022-08-30 vote 8-0 by Janice Tufte, nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Cauê Monaco, Eric Harvey | ||||||||
1 | SEVCO:00999 | study design feature | An aspect or characteristic of a study design. | Study design is defined as a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis. | Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte | 2022-03-29 vote 8-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk, nelle.stocquart | 2022-03-15 vote 7-0 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte, Robin Ann Yurk (but then the definition changed with the creation of Study Design Process) 2022-03-22 vote 5-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart, Robin Ann Yurk | 2022-03-15 comments: I agree with the definition but feel this is meta-vocabulary that helps us talk about elements of study design that are not part of the code system itself. So I don't know if it should be included in the code system as a code, or if we should be considering some other means for defining these terms (e.g. in documentation or guidance about SEVCO). not a fan of the synonym "study design factor" as it could cause confusion with 'Study Factor", Independent Variable. How different Study Design is from Study Protocol? "Study design planned process" could cover the following subtypes For the comment for application include ...as a technical plan specification.... 2022-03-22 comment: The definition of "Study design" seems to exclude the "statistical analysis". Am I right? | ||||||
2 | SEVCO:01043 | multicentric | A study design feature in which two or more institutions are responsible for the conduct of the study. | This term may be used for studies conducted across multiple contexts (administrative or logistical) that are distinct from geographic locations, potentially introducing greater variability beyond multisite data collection. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-05-06 vote 6-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann | 2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Robin Ann Yurk 2022-04-26 vote 3-0 by Eric M Harvey, Robin Ann Yurk, Mario Tristan | 2022-03-15 comments: Suggest add to multiple contexts (reserach) a multicenter study is_a study. 'multicentric' would be a subtype of study_design_feature. a concern here is that the current definition conflates 2 entities: a study and a characteristic of that study. At the end of the day, it depends on how the modeling will be made, e.g <study> <has_some_study_design_feature> <type of study_design_feature> Or should it be "Multicenter data collection" ? | ||||||
2 | SEVCO:01052 | includes patient-reported outcome | A study design feature in which one or more outcomes are reported directly from the patient without interpretation by a clinician or researcher. | Examples of patient-reported outcomes include symptoms, pain, quality of life, satisfaction with care, adherence to treatment, and perceived value of treatment. Data collection methods including surveys and interviews may obtain patient-reported outcomes. Reports derived from wearable devices would not typically include patient-reported outcomes. Such data may be coded with 'Real world data collection from monitoring procedures' (SEVCO:01046). | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2022-05-06 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte | 2022-04-26 vote 3-0 by Eric M Harvey, Robin Ann Yurk, Mario Tristan | 2022-04-26 comment: Suggest adding to the comment for application: data methods to collect Patient Reported Outcomes such as survey data. 2022-05-06 comments: Perhaps direct the reader to "Patient generated health data" or whatever else is the SEVCO term for "wearables" or other data sources (e.g., bluetooth scale). Suggest adding to the comment for application: data methods to collect Patient Reported Outcomes such as survey data. 2022-06-07 preferred term changed from "Patient-reported outcome" to "Includes patient-reported outcome" to maintain consistency with sibling concepts | The U.S. Food and Drug Administration (FDA) defines a patient-reported outcome (PRO) as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else [1].” -- from https://dcricollab.dcri.duke.edu/sites/NIHKR/KR/PRO%20Resource%20Chapter.pdf | |||||
2 | SEVCO:01053 | includes patient-centered outcome | A study design feature in which one or more measures are outcomes that patients directly care about, i.e. outcomes that are directly related to patients' experience of their life. | In healthcare research, outcomes are effects on patients or populations, including changes to health status, behavior, or knowledge as well as patient satisfaction and quality of life. A patient-centered outcome qualifies the type of outcome as that which patients directly care about, i.e. outcomes that are directly related to patients' experience of their life. Examples of patient-centered outcomes include mortality, morbidity, symptoms, and quality of life. Some use 'clinical outcome' as synonymous with 'patient-centered outcome' while some use 'clinical outcome' to represent outcomes that would be assessed as part of healthcare practice. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Khalid Shahin | 2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann | 2022-05-06 vote 5-2 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte 2022-05-17 vote 7-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, Janice Tufte | 2022-04-26 comment: Suggest Adding to comment for application: Population Statistics such as mortality, morbidity. Development of clinical outcomes is based on using a framework such as the Donabedian model: Structure, Process, Outcomes where outcomes have some relationship to structural or process measures in clinical care..... 2022-05-06 comment: I understand the goal of "quantity or quality of life," but I think it's too abstract--and limiting ("Quantity of life" is limited to life expectancy). I haven't reviewed other definitions, but the flavor is, "outcomes that patients care about." ("Function" is left off the list of "examples", albeit there is a large overlap with "morbidity," "symptoms," and "quality of life.") (See the Comments for Application for Surrogate Outcome!) 2022-05-17 comments: The definition seems to be in the comment: "A clinical outcome qualifies the type of outcome as that which patients directly care about." The definition as proposed doesn't really make sense to me. The definition, Alternative terms and comment for application are correct. However, it is more specific to patient reported outcomes. Clinical outcomes are more broad and also includes: physiologic measures, condition specific measures.....etc. Clinical outcomes can be structural, process or outcomes in the donabedian framework and or combined as composite outcomes. While patient centered outcomes are typically considered clinical outcomes, they also indicate the observed outcomes by the clinician but not so much by the patient. 2022-05-26 comment: Suggest revise term definition so it is more inclusive or all healthcare or clinical outcomes, such as mortality, morbidity, physiologic measures, symptoms, experiences. The term is not a study design but a measure. For example: A healthcare measure which captures results from healthcare populations, settings structures, processes, and patients directly related to their care with healthcare settings, people, providers, and interventions. Insert other Alternative terms: Morbidity, Mortality, Symptoms, Experience of Care, Health Status, Quality of life. Suggest delete Patient Oriented Outcome, Patient Important Outcome, Patient Relevant Outcome, Patient Centered OUtcome, Includes clinical outcomes, | ||||||
2 | SEVCO:01054 | includes disease-oriented outcome | A study design feature in which one or more measures are outcomes that relate to a health or illness condition but are not outcomes which patients directly care about. | In healthcare research, outcomes are effects on patients or populations, including changes to health status, behavior, or knowledge as well as patient satisfaction and quality of life. A patient-centered outcome qualifies the type of outcome as that which patients directly care about. Examples of patient-centered outcomes include mortality, morbidity, symptoms, and quality of life. A disease-oriented outcome qualifies the type of outcome as that which patients do not directly care about. Examples of disease-oriented outcomes include laboratory test measurements, imaging study findings, and calculated risk estimates. In this context, disease-oriented outcomes may be used as surrogate outcomes or proxy outcomes for ultimate effects on patient-centered outcomes, but do not provide direct evidence of effects on patient-centered outcomes. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Khalid Shahin | 2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann | 2022-05-06 vote 4-2 by Mario Tristan, Robin Ann Yurk, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte 2022-05-17 vote 6-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde | 2022-05-06 comments: I would refer to whatever "clinical outcome" is defined as: "...indirect measures of clinical outcomes". The Comment for Application is redefining Clinical Outcome. I would spend that space pointing out that some surrogates are predictive (e.g., cholesterol levels, for MIs) and others are after the fact (e.g., sales of orange juice for treating the flu). Suggestion--look at the wikipedia definition, then explore other mapping definitions. The current term definition and comment for application need improvement. "In clinical trials, a surrogate endpoint is a measure of effect of a specific treatment that may correlate with a real clinical endpoint but does not necessarily have a guaranteed relationship. The National Institutes of Health defines surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint". wikipedia... 2022-05-17 comments: Maybe edit to "An indirect measure of quantity or quality of life, presumed or believed to have an effect on clinical outcomes."I would focus on revising and define surrogate first and then include a broad definition, not just specific to clinical outcomes. 2022-05-26 comment: Surrogate Outcome is a proxy measure for capturing the outcome of interest. Alternative Terms: delete disease oriented and surrogate outcome measure. Suggest add: Proxy Outcome Measure. Comment for application: Delete first 3 sentences. Edit the last sentence so it reads: A surrogate outcome is a measure which captures an approximate measure. Examples of surrogate outcomes includes survey measures rating scales for a child by the parent or teacher. Geriatric rating scales from paid or professional caregivers for a seriously ill or geriatric patient are other examples. | ||||||
2 | SEVCO:01085 | includes process measure | A study design feature in which one or more outcomes are actions or behaviors of a healthcare professional or care team. | A process outcome measure is a measure of change in actions or behaviors conducted in the process of healthcare delivery or clinical care, such as obtaining laboratory tests or referrals for follow-up care. | Brian S. Alper, Joanne Dehnbostel, Khalid Shahin, Kenneth Wilkins | 2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann | 2022-05-17 vote 7-1 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde 2022-05-17 vote 8-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, nelle.stocquart@kce.fgov.be, Mario Tristan 2022-05-31 vote 8-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, nelle.stocquart@kce.fgov.be, Mario Tristan | 2022-05-17 comment: Process Measure is included in the donabedian framework of structure, process, outcomes. Do you want to define just for healthcare process measure versus keep the definition broad to include such as a series of steps or tasks providing a measurement pathway for any industry and the examples in healthcare processes are.... 2022-05-24 comments: repeat 2022-05-17 comment plus: you need to provide more info, it is not clear as such 2022-05-31 comments: you need to provide more info, it is not clear as such Add comment for application with examples: A process measures captures the steps to care such as Lab test orders, Referrals....The literature defines a process measure in the donabedian framework of structure, process, outcomes. | ||||||
2 | SEVCO:01089 | study goal | A study design feature specifying the intent of the study. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel | 2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal | 2022-06-21 comment: Another alternate term could be "Study Objective" | ||||||||
3 | SEVCO:01096 | evaluation goal | A study goal to assess the efficiency, effectiveness, and impact of a given program, process, person or piece of equipment. | Intended to include all forms of evaluation study. (Child concepts for program, process, personnel and equipment evaluations may be added later.) | Kenneth Wilkins, Joanne Dehnbostel | 2022-07-12 vote 6-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey, Janice Tufte | Medical Subject Heading (MESH): this heading is used as a Publication Type; for original report of the conduct or results of a specific evaluation study; a different heading EVALUATION STUDIES AS TOPIC is used for general design, methodology, economics, etc. of evaluation studies Scope Note Works consisting of studies determining the effectiveness or utility of processes, personnel, and equipment. https://meshb.nlm.nih.gov/record/ui?ui=D023362 | |||||||
3 | SEVCO:01097 | derivation goal | A study goal with the intent to generate a predictive algorithm. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey | |||||||||
3 | SEVCO:01098 | validation goal | A study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. | Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann | 2022-07-26 vote 6-0 by Jesus Lopez-Alcalde, Harold Lehmann, Paola Rosati, Eric Harvey, Janice Tufte, Mario Tristan | 2022-07-19 vote 8-1 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte | 2022-07-19 comment: "Relevance" is a value judgment that is not the hallmark of a validation study. (It requires elicitation of this judgment from experts or potential users.) Accuracy, while difficult to measure, is certainly a validation aspiration (goal). Thus, validation of instruments assesses their sensitivity and specificity (measures of "accuracy"). Perhaps a broader goal is "performance", which would include accuracy but also applicability across sites or other external contexts. Also, typo: "*from* the source used..." | https://meshb.nlm.nih.gov/record/ui?ui=D023361 MeSH Heading: Validation Study Annotation: This heading is used as a Publication Type for original report of the conduct or results of a specific validation study. A different heading VALIDATION STUDIES AS TOPIC is used for general design, methodology, economics, etc. of validation studies. CATALOGER: Do not use Scope Note: Works consisting of research using processes by which the reliability and relevance of a procedure for a specific purpose are established. Entry Term(s): Validation Studies | |||||
3 | SEVCO:01088 | comparison goal | A study design feature in which the study intent is to compare two or more interventions or exposures. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel | 2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal | MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment. Year introduced: 2018 Do not include MeSH terms found below this term in the MeSH hierarchy. Tree Number(s): V03.175.250.500.500.125 MeSH Unique ID: D000073843 Entry Terms: Non-Inferiority Trial Noninferiority Trial Superiority Trial Equivalence Clinical Trial | ||||||||
4 | SEVCO:01091 | comparative effectiveness goal | A study design feature in which the study intent is to compare two or more interventions with respect to benefits and/or harms. | In 2009, the Institute of Medicine committee defined comparative effectiveness research (CER) as: "Comparative effectiveness research is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels." | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal | Defining comparative effectiveness research (CER) was the first order of business for the Institute of Medicine Committee on Initial Priorities for CER. The Institute of Medicine committee approached the task of defining CER by identifying the common theme in the 6 extant definitions. The definition follows: "Comparative effectiveness research is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels." https://pubmed.ncbi.nlm.nih.gov/20473202/ | |||||||
5 | SEVCO:01090 | comparative efficacy goal | A study design feature in which the study intent is to compare two or more interventions with respect to effectiveness in ideal conditions. | Efficacy is defined as effectiveness in ideal conditions. In this context, an efficacy goal is a type of effectiveness goal. Efficacy is used to distinguish the context from effectiveness in 'real-world' settings. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2022-06-28 vote 8-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Harold Lehmann, Muhammad Afzal, Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey | 2022-06-28 comment: what does "in ideal conditions" really mean? is it necessary ? | |||||||
5 | SEVCO:01092 | comparative safety goal | A study design feature in which the study intent is to compare two or more interventions with respect to harms. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2022-06-28 vote 8-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Harold Lehmann, Muhammad Afzal, Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey | 2022-06-28 comment: no need to be "in ideal conditions" ? see related comment on. "comparative efficacy goal" class textual definition | ||||||||
4 | SEVCO:01093 | equivalence goal | A study goal with the intent to compare two or more interventions or exposures and determine that any difference in effects is within a prespecified range representing absence of a meaningful difference. | An Equivalence Goal is only applicable with a Comparative study design. The prespecified range representing absence of a meaningful difference may be defined with an equivalence margin. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey | 2022-07-12 vote 4-2 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Janice Tufte, Harold Lehmann, Eric Harvey | 2022-07-12 comments: harmonize the definition to match the pattern used for the other terms. e.g. "evaluation goal" is a study goal in which the objective is to assess the efficience, effectivement and impact of a given process, process, person or piece of equipment' so Equivalence Goal is a study goal in which the study intent is to compare two or more interventions or exposures and determine that any difference in effects is within a prespecified range representing absence of a meaningful difference I think this definition is unclear. Is the equivalence goal an aim of a study? Which kind of study? My understanding is: Given a prespecified range (of results?) showing an absence of a meaningful (for which kind of subjects/previous research?) difference between two interventions/exposures, the equivalence goal assesses that there is no difference in effects. Is this the meaning of this definition? Which kind of study could give a valid result in terms of equivalence? An RCT? | MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment. Year introduced: 2018 Do not include MeSH terms found below this term in the MeSH hierarchy. Tree Number(s): V03.175.250.500.500.125 MeSH Unique ID: D000073843 Entry Terms: Non-Inferiority Trial Noninferiority Trial Superiority Trial Equivalence Clinical Trial | |||||
4 | SEVCO:01094 | non-inferiority goal | A study goal with the intent to compare two or more interventions or exposures and determine that any difference in effects is below a prespecified value representing a threshold between a meaningful difference and absence of a meaningful difference. | A Non-inferiority Goal is only applicable with a Comparative study design. The threshold between a meaningful difference and absence of a meaningful difference may be called a non-inferiority margin. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey | 2022-07-12 vote 3-2 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey | 2022-07-12 comments: so Non-Inferiorty Goal is a study goal in which.... I have the same doubts already given for the equivalence goal | MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment. Year introduced: 2018 Do not include MeSH terms found below this term in the MeSH hierarchy. Tree Number(s): V03.175.250.500.500.125 MeSH Unique ID: D000073843 Entry Terms: Non-Inferiority Trial Noninferiority Trial Superiority Trial Equivalence Clinical Trial | |||||
4 | SEVCO:01095 | superiority goal | A study goal with the intent to compare two or more interventions or exposures and detect a difference in effects. | A Superiority Goal is only applicable with a Comparative study design. A superiority study goal may be exploratory (to detect a difference) or confirmatory (to establish that a difference exists with a degree of certainty). A superiority goal is not the opposite of a non-inferiority goal. A superiority goal uses a threshold of zero difference while a non-inferiority goal uses a threshold of a meaningful difference. Some superiority comparisons are conducted following determination of non-inferiority. Placebo-controlled trials are typically superiority studies. Superiority, as commonly used, is 'statistical superiority,' with null used as the threshold of effect. An approach representing 'clinical superiority' would use the non-inferiority margin as the threshold of effect. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Harold Lehmann | 2022-07-19 vote 9-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte | 2022-07-12 vote 4-1 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey | 2022-07-12 comment: so Superiority Goal is a study goal in which... 2022-07-19 comment: alter definition to "...and detect *meaningful* difference in effects" (in order to be consistent with Equivalence and Non-inferiority Study Goals') | MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment. Year introduced: 2018 Do not include MeSH terms found below this term in the MeSH hierarchy. Tree Number(s): V03.175.250.500.500.125 MeSH Unique ID: D000073843 Entry Terms: Non-Inferiority Trial Noninferiority Trial Superiority Trial Equivalence Clinical Trial | |||||
2 | SEVCO:01100 | allocation ratio | A study design feature describing the intended relative proportion of assignment across groups. | The allocation ratio may be expressed as Treatment:Control, e.g., 2:1, or, in the case of two treatment arms and one control, as 2:2:1. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2023-04-10 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Eric Harvey, Janice Tufte, Jesus Lopez-Alcalde | 2023-04-10 comment: Perhaps add to Comment for Application something like, "The allocation ratio is usually expressed as Treatment:Control, e.g., 2:1 or 2:2:1, in the case of two treatment arms." | |||||||
1 | SEVCO:00001 | bias | A systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). | Brian S. Alper, Philippe Rocca-Serra, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 8/8 as of 2021-02-26: Harold Lehmann, Khalid Shahin, Eric Harvey, Jesús López-Alcalde, Joanne Dehnbostel, Muhammad Afzal, Paola Rosati, Eric Au, 5/5 for second sentence as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte | |||||||||
2 | SEVCO:00002 | selection bias | A bias resulting from methods used to select subjects or data, factors that influence initial study participation, or differences between the study sample and the population of interest | Selection bias can occur before the study starts (inherent in the study protocol) or after the study starts (during study execution). | Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel | 8/8 as of 3/5/2021 Eric Au, Alejandro Piscoya, Mario Tristan, Brian Alper, Zbys Fedorowicz, Bhagvan Kommadi, Eric Harvey, Muhammad Afzal | ||||||||
3 | SEVCO:00003 | participant selection bias | A selection bias resulting from methods used to select participating subjects, factors that influence initial study participation, or differences between the study participants and the population of interest | Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel | 10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau | 2021-03-08 vote 7-2 on "A selection bias where key characteristics of the participants differ systematically from the population of interest." by Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel, 2021-03-19 vote 10-1 on "A bias resulting from methods used to select participating subjects, factors that influence initial study participation, or differences between the study participants and the population of interest" by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin | ||||||||
4 | SEVCO:00004 | inappropriate selection criteria | A selection bias resulting from inclusion and exclusion criteria used to select participating subjects that could result in differences between the study participants and the population of interest. | Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel | 10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau | 2021-03-19 vote 9-2 on "A bias resulting from inclusion and exclusion criteria used to select participating subjects that could make the included participants unrepresentative of the population of interest." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin | ||||||||
4 | SEVCO:00005 | inappropriate sampling strategy | A selection bias resulting from the sampling frame, sampling procedure, or methods used to recruit participating subjects that could result in differences between the study participants and the population of interest. | Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel | 10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau | 2021-03-19 vote 9-2 on "A bias resulting from the sample frame, sampling procedure, or methods used to recruit participating subjects that could make the included participants unrepresentative of the population of interest." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra | ||||||||
5 | SEVCO:00014 | inappropriate data source for participant selection | Participant selection bias due to inappropriate data source for sampling frame. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Muhammad Afzal, Bhagvan Kommadi | 6/6 as of 4/12/2021: KM Saif-Ur-Rahman, Bhagvan Kommadi, Joanne Dehnbostel, Paola Rosati, Jesús López-Alcalde, Tatyana Shamliyan | |||||||||
4 | SEVCO:00006 | non-representative sample | A selection bias due to differences between the included participants and the population of interest that distorts the research results (estimation of effect, association, or inference), limiting external validity or applicability. | Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel | 10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau | 2021-03-19 vote 10-1 on "Differences between the included participants and the population of interest that distorts the research results (estimation of effect, association, or inference), limiting external validity or applicability." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra | ||||||||
5 | SEVCO:00008 | inadequate enrollment of eligible subjects | A selection bias in which insufficient enrollment of eligible subjects results in differences (recognized or unrecognized) between the included participants and the population of interest that distorts the research results. | Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Marc Duteau, Khalid Shahin, Asiyah Yu Lin, Muhammad Afzal, Tatyana Shamliyan | 11/11 as of 3/29/2021: Alejandro Piscoya, Eric Harvey, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Eric Au, Joanne Dehnbostel, Marc Duteau, Brian S. Alper, Jesús López-Alcalde, Tatyana Shamliyan, Paola Rosati | 2021-03-26 vote 8-2 on "Inadequate enrollment = A selection bias due to a rate of study entry among eligible subjects that is not sufficient for the included sample to be considered representative of the population of interest." by Harold Lehmann, Tatyana Shamliyan, Muhammad Afzal, Eric Au, Paola Rosati, Mario Tristan, Alejandro Piscoya, Bhagvan Kommadi, Jesús López-Alcalde, Eric Harvey | ||||||||
5 | SEVCO:00012 | non-representative sample due to timing or duration of exposure | A selection bias in which the timing or duration of exposure influences the outcome, and the timing or duration of exposure in the sample does not represent that of the population of interest. This selection bias may occur when the selection for study participation is not coincident with the initiation of the exposure or intervention under investigation. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin | 9/9 as of 4/9/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Paola Rosati, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan | |||||||||
6 | SEVCO:00013 | depletion of susceptibles | A non-representative sample due to exclusion of susceptible participants who have already had an outcome due to prior exposure. For example, the inclusion of prevalent users of a medication misrepresents the initial adverse effects rate by excluding persons who do not tolerate the medication. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin | 9/9 as of 4/9/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Paola Rosati, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan | |||||||||
4 | SEVCO:00009 | post-baseline factors influence enrollment selection | A selection bias in which factors observed after study entry, baseline, or start of follow-up influence enrollment | Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Marc Duteau, Khalid Shahin, Asiyah Yu Lin, Harold Lehmann, Mario Tristan | 9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan | |||||||||
5 | SEVCO:00212 | participant selection bias due to early study termination | A selection bias due to premature closure of study enrollment. | 'Early termination bias affecting enrollment' is a type of 'Post-baseline factors influence enrollment selection' which is defined as 'A selection bias in which factors observed after study entry, baseline, or start of follow-up influence enrollment.' To express bias related to making the decision to terminate a study, use 'Early Study Termination Bias'. | Brian S. Alper, Harold Lehmann, Paul Whaley, Kenneth Wilkins, Muhammad Afzal | 2022-04-08 vote 12-0 by Muhammad Afzal, Paul Whaley, Mario Tristan, Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, nelle.stocquart, nisha mathew, Harold Lehmann, Cauê Monaco | 2022-03-25 vote 7-1 by Muhammad Afzal, Paul Whaley, Mario Tristan, Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk | 2022-03-25 comment: Recommend simplifying the term and then add your test to the term definition. For example edit term to Early Study Termination Bias. Term definition should read. Selection Bias due to premature closing of a study enrollment for the participants.... | ||||||
4 | SEVCO:00010 | factor associated with exposure influences enrollment selection | A selection bias in which a factor associated with the exposure under investigation influences study enrollment | Brian S. Alper, Joanne Dehnbostel, Khalid Shahin, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Muhammad Afzal | 9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan | |||||||||
4 | SEVCO:00011 | factor associated with outcome influences enrollment selection | A selection bias in which a factor associated with the outcome under investigation influences study enrollment | Brian S. Alper, Joanne Dehnbostel, Khalid Shahin, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Muhammad Afzal | 9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan | |||||||||
3 | SEVCO:00015 | study selection bias | A selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Muhammad Afzal, Philippe Rocca-Serra | 6/6 as of 4/26/2021: Eric Harvey, Bhagvan Kommadi, Harold Lehmann, Mario Tristan, Jesús López-Alcalde, Tatyana Shamliyan 2024-04-19 vote 8-0 by Cauê Monaco, Janice Tufte, Homa Keshavarz, Sheyu Li, Khalid Shahin, Lenny Vasanthan, Harold Lehmann, Eric Harvey | |||||||||
4 | SEVCO:00262 | bias in study eligibility criteria | A study selection bias specific to the inclusion and exclusion criteria. | If the study eligibility criteria (inclusion and exclusion criteria for study selection) result in a dataset that is non-representative of the population of interest, then the criteria introduce systematic error. A study selection bias is a selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann | 2024-02-02 vote 5-0 by Brian S. Alper, Harold Lehmann, Xing Song, Cauê Monaco, Eric Harvey | ||||||||
5 | SEVCO:00273 | study eligibility criteria not prespecified | A bias in study eligibility criteria in which the criteria are not stated before the study selection process occurs. | Failure to specify the study eligibility criteria before evaluating studies for selection can lead to a situation in which the data discovered during the study selection process influences the criteria for selection in a way that introduces systematic error. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Homa Keshavarz | 2024-02-16 vote 6-0 by Javier Bracchiglione, Lenny Vasanthan, Xing Son, Brian S. Alper, Harold Lehmann, Eric Harvey | from ROBIS 1.1 Did the review adhere to pre-defined objectives and eligibility criteria? A systematic review should begin with a clearly focused question or objective which is reflected in the criteria used for deciding whether studies are eligible for inclusion. Details that should be specified a priori in a review protocol vary according to review type, but should generally include the study designs, study participants, and types of interventions/exposures that are eligible. If outcomes or outcome domains are to form part of the eligibility criteria, this should be stated clearly. Any exclusions should also be pre-specified. Where a protocol providing this information is available, the answer to this question would be “Yes”. Where no protocol is available but information about pre-defined objectives and detailed eligibility criteria are supplied, and there is good reason to believe that these were specified in advance and adhered to throughout the review, assessors can consider answer this question “Probably Yes”. Any post hoc changes to the eligibility criteria or outcomes must keep faith with the objectives of the review, and be properly justified and documented. In the absence of a pre-published protocol, where information about pre-defined objectives and eligibility criteria are only available post hoc in the review publication, unless there is some reason to believe that these details were specified in advance and adhered to from the start of the review, this question should be answered “Probably No”. Where all or some of these details are missing, this question should be answered “No”. | |||||||
5 | SEVCO:00274 | study eligibility criteria not appropriate for review question | A bias in study eligibility criteria due to a mismatch between the inclusion and exclusion criteria and the research question. | The mismatch between the inclusion and exclusion criteria and the research question may relate to differences in the population, exposures, or outcomes studied from those of interest. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Homa Keshavarz, Xing Song | 2024-03-08 vote 5-0: Lenny Vasanthan, Harold Lehmann, Janice Tufte, Javier Bracchiglione, Eric Harvey | 2024-02-16 vote 5-1 by Javier Bracchiglione, Lenny Vasanthan, Xing Song, Brian S. Alper, Harold Lehmann, Eric Harvey | 2024-02-16 comment: I would simplify the definition to: "A bias in study eligibility criteria derived from a mismatch between the inclusion and exclusion criteria, and the research question, that could result in an inappropriate selection of studies" | from ROBIS 1.2 Were the eligibility criteria appropriate for the review question? The eligibility criteria should stem from the review question and should provide sufficient detail to enable judgement about whether the studies that are included are appropriate to the question. The information required is likely to vary by topic. For example, in order to judge appropriateness, the assessor might need a clear description of the population in terms of the age range and diagnosis of the study participants, the setting in which the study was conducted, the dose of a drug, or the frequency of exposure. To answer this question the assessor is likely to require some content knowledge. | |||||
5 | SEVCO:00275 | study eligibility criteria ambiguous | A bias in study eligibility criteria due to unclear specification. | Eligibility criteria that are not sufficiently described to enable reproduction of study selection can introduce systematic error. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Homa Keshavarz | 2024-02-16 vote 5-0 by Lenny Vasanthan, Xing Son, Brian S. Alper, Harold Lehmann, Eric Harvey | from ROBIS 1.3 Were eligibility criteria unambiguous? Specific information about the characteristics of eligible studies must be provided, as far as possible avoiding any ambiguities about the types of study, population, interventions, comparators and outcomes. Criteria should be sufficiently detailed that the review could be replicated using the criteria specified. A number of important details are commonly missing from the eligibility criteria in systematic reviews. For example, details about the diagnosis of study participants. Diagnosis might be made using a number of different methods, some of which might be more valid or accurate than others. Review authors should have decided in advance which diagnostic methods are appropriate to their review question in order to avoid introducing potential biases during the review process. Similarly, specific details about interventions/exposures and comparators must be provided, including characteristics such as medication dose, frequency of administration, concurrent treatments, and so on. The assessor is likely to require some content knowledge to answer this question, but where specific queries remain about the stated eligibility criteria, “No” or “Probably No” judgements can usually be made. | |||||||
5 | SEVCO:00276 | study eligibility criteria limits for study characteristics not appropriate | A bias in study eligibility criteria that is specific to restrictions based on characteristics of the study design, conduct, or findings. | Any restrictions applied on the basis of study characteristics should not introduce a bias in study eligibility criteria. Examples of such restrictions may include criteria based on study size, study design, study quality, or date when the study was conducted. In the ROBIS tool used for risk of bias assessment of systematic reviews, there is a question (1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate?) that is different from the one which refers to whether the eligibility criteria are appropriate to the review question. Therefore, a separate term is available in SEVCO. | Brian S. Alper, Harold Lehmann, Xing Song, Joanne Dehnbostel, Kenneth Wilkins, Homa Keshavarz | 2024-03-08 vote 5-0 Lenny Vasanhan, Harold Lehmann, Eric Harvey, Janice Tufte, Homa Kashavarz | 2024-02-23 vote 5-1 by Homa Keshavarz, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan, Xing Song, Eric Harvey | 2024-02-23 comment: As it is, study characteristics could refer to clinical characteristics (e.g. age), which could indeed be appropriate. I would state that it refers to methodological characteristics. | from ROBIS 1.4 Were all restrictions in eligibility criteria based on study characteristics appropriate? Any restrictions applied on the basis of study characteristics must be clearly described and a sound rationale provided. These details will enable assessors to judge whether such restrictions were appropriate. Examples might be the study design, the date the study was published, the size of the study, some measure of study quality, and available outcomes measures. This question is different from the one above which refers to whether the eligibility criteria are appropriate to the review question. Where sufficient information is available, and the assessor is reasonably satisfied that the restrictions are appropriate, this question can be answered “Yes or “Probably Yes”. Where restrictions around study characteristics are not justified and there is insufficient information to judge whether these restrictions are appropriate, this question should be answered “Probably No” or “No”. Where eligibility criteria are sufficiently detailed, and no restrictions around study characteristics are explicitly reported, it can be assumed that none were imposed, and the question should be answered “Yes”. | |||||
5 | SEVCO:00277 | study eligibility criteria limits for study report characteristics not appropriate | A bias in study eligibility criteria that is specific to restrictions on the status, structure, language, or accessibility of the study report. | Examples of study report characteristics include publication status (including preprints and unpublished data), format, language, and availability of data, as well as the date of publication. The ROBIS tool used for assessing risk of bias of systematic reviews includes a signaling question '1.5 Were any restrictions in eligibility criteria based on sources of information appropriate?' | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Homa Keshavarz | 2024-02-23 vote 5-0 by Homa Keshavarz, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan, Eric Harvey | 2024-02-16 comment: (in comments for application, there is a dot after the parenthesis that needs to be removed) | from ROBIS 1.5 Were any restrictions in eligibility criteria based on sources of information appropriate? Any restrictions applied on the basis of sources of information must be clearly described and a sound rationale provided. These details will enable assessors to judge whether such restrictions were appropriate. Examples might be the publication status or format, language, and availability of data. This question is different from the question in domain 2 which is about restricting searches. Where eligibility criteria are sufficiently detailed, but no restrictions based on sources of information are explicitly reported, it must be assumed that none were imposed, and the question should be answered “Yes”. | ||||||
4 | SEVCO:00269 | language bias | A bias in search strategy or study eligibility criteria that results from restrictions regarding the language of the study report. | Limiting the study reports included in a systematic review by language may result in an incomplete view of the truly available evidence. The terms 'search strategy limits for study report characteristics not appropriate' (https://fevir.net/resources/CodeSystem/27270#SEVCO:00266) and 'study eligibility criteria limits for study report characteristics not appropriate' (https://fevir.net/resources/CodeSystem/27270#SEVCO:00277) are used to describe types of study selection bias that results from restrictions regarding the study report, distinguishing different steps in the search and selection process. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Kenneth Wilkins | 05-03-2024 5-0 by Homa Keshavarz, Eric Harvey, Janice Tufte, Lenny Vasanthan, Harold Lehmann | 05-03-2024 "what is interesting about this term is that language in terms of terms changes and actually can begin to mean something other than the original intent- later original term might mean something else down the road" | https://catalogofbias.org/biases/language-bias/ | ||||||
4 | SEVCO:00270 | geography bias | A bias in search strategy or study eligibility criteria that results from restrictions regarding the geographic origin of the research. | Limiting the study reports included in a systematic review by country of origin may result in an incomplete view of the truly available evidence. The geographic origin of the research may refer to the location of the research participants, the investigators, or their associated organizations. The terms 'search strategy limits for study report characteristics not appropriate' (https://fevir.net/resources/CodeSystem/27270#SEVCO:00266), 'study eligibility criteria limits for study characteristics not appropriate' (https://fevir.net/resources/CodeSystem/27270#SEVCO:00276), and 'study eligibility criteria limits for study report characteristics not appropriate' (https://fevir.net/resources/CodeSystem/27270#SEVCO:00277) are used to describe types of study selection bias that results from restrictions regarding the study report, distinguishing different steps in the search and selection process. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel | 05-03-2024 5-0 by Homa Keshavarz, Eric Harvey, Janice Tufte, Lenny Vasanthan, Harold Lehmann | The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis (https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-019-0088-0) includes "geographic bias...are biased by the geographic origin of the research" and "geographic bias, such as the role of institutional affiliation, country of origin..." and "geographic bias, i.e., local, regional, national, or international" | |||||||
4 | SEVCO:00272 | publication bias | A study selection bias in which the publicly available studies are not representative of all conducted studies. | Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. The term 'studies' means evidence or research results in any form where such studies would meet the study eligibility criteria without consideration of criteria regarding the form of publication. The phrase 'publicly available studies' means the studies are available to the broad academic community and the public through established distribution channels in any form, including forms with restricted access. Established distribution channels include peer-reviewed journals, books, conference proceedings, dissertations, reports by governmental or research organizations, preprints, and study registries. Publication bias often leads to an overestimate in the effect in favor of the study hypothesis, because studies with statistically significant positive results are more likely to be publicly available. The terms <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> and <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00024" target="_blank">selective reporting bias</a> are used to describe biases in study reports, i.e., reporting bias. To avoid confusion between biases in study reports and biases in study selection, when either 'reporting bias' or 'selective reporting bias' are used as alternative for 'publication bias', the term is appended to 'study selection bias due to' as shown below: 	• study selection bias due to reporting bias 	• study selection bias due to selective reporting bias | Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin | 2024-05-10 vote 10-0 by Saphia Mokrane, Sheyu Li, Harold Lehmann, Brian S. Alper, Homa Keshavarz, Lenny Vasanthan, Cauê Monaco, Jennifer Hunter, Eric Harvey, Khalid Shahin | 2024-03-29 vote 7-1 by Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan, Cauê Monaco, Eric Harvey, Jennifer Hunter 2024-04-12 vote 6-1 by Sheyu Li, Jennifer Hunter, Janice Tufte, Eric Harvey, Homa Keshavarz, Lenny Vasanthan, Harold Lehmann 2024-04-19 vote 8-1 by Janice Tufte, Homa Keshavarz, Sheyu Li, Khalid Shahin, Lenny Vasanthan, Harold Lehmann, Eric Harvey, Jennifer Hunter, Cauê Monaco 2024-04-26 vote 4-1 by Homa Keshavarz, Sean Grant, Lenny Vasanthan, Jennifer Hunter, Eric Harvey 2024-05-03 vote 6-1 by Harold Lehmann, Lenny Vasanthan, Jennifer Hunter, janice Tufte, Eric Harvey, Homa Keshavarz, Khalid Shahin | 2024-03-29 comment: The current definition does not reflect the information of 'publication', which is the core of the term. I understand that it is not good to use the same word in the definition and the term itself. Nevertheless, the word available could be vague - studies could be available in registration website only but not published. For different authors, the 'availability' of the studies are different. A suggested definition: A study selection bias in which the published studies are not representative of the conducted studies. alternatively: A study selection bias in which the studies available in the literature database are not representative of all conducted studies. 2024-04-12 comment: Publication bias arises from the failure to identify all studies that have been conducted, either published (i.e., publicly available) or unpublished. 
Typically, publication bias leads to an overestimate in the effect in favour of the study hypothesis. This is because studies with statistically significant positive results are more likely to be publicly available or published in English-language journals. 2024-04-19 comment: Consider adding some alternate terms used by Cochrane: non-reporting bias; bias due to missing results. https://training.cochrane.org/handbook/current/chapter-07#section-7-1 2024-04-26 comments: 1) Suggest revising second paragraph slightly to "Publication bias often (but not always) leads to..." 2N) The last paragraph/sentence in application is long and difficult to read. Here is a suggestion: The terms reporting bias and selective reporting bias are used to describe biases in study reports. To avoid ambiguous use as alternative terms for publication bias, the terms are appended to 'study selection bias due to'. 2024-05-03 comments: The last paragraph is still very difficult to read, and I understand what we are trying to communicate. What about this suggestion? The terms reporting bias and selective reporting bias are used to describe biases in study reports. To avoid confusion between biases in study reports and biases in study selection, when either 'reporting bias' or 'selective reporting bias' are used as alternative for 'publication bias', the term is appended to 'study selection bias due to'. | ||||||
4 | SEVCO:00395 | bias in search strategy | A study selection bias specific to the strategy used to identify potentially eligible studies. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Homa Keshavarz, Muhammad Afzal, Joanne Dehnbostel | 2024-02-23 vote 6-0 by Khalid Shahin, Homa Keshavarz, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan, Eric Harvey | 2024-02-23 comment: Although I am not 100% sure if this qualifies as "study selection" bias | ||||||||
5 | SEVCO:00263 | database search sources inadequate | A bias in search strategy in which the electronic sources are not sufficient to find the studies available in electronic sources. | The set of databases (electronic sources) expected to include the studies of interest will vary with the review topic. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco, Homa Keshavarz | Expert working group agreement 2024-03-01 5-0 vote by Eric Harvey, Harold Lehmann, Khalid Shahin, Javier Bracchiglione, Homa Keshavarz | from ROBIS 2.1 Did the search include an appropriate range of databases/electronic sources for published and unpublished reports? The assessor needs to judge what constitutes an appropriate range of databases. This will vary according to review topic. It is anticipated that at a minimum a MEDLINE and EMBASE search would be conducted. Searches of material published as conference reports should also be considered along with a search of research registers. Guidance on the appropriate range of databases can be found in SR guidance such as the Cochrane Handbook,5 or from the Centre for Reviews and Dissemination (CRD) website (http://www.york.ac.uk/inst/crd/finding_studies_systematic_reviews.htm) | |||||||
5 | SEVCO:00264 | non-database search sources inadequate | A bias in search strategy in which the sources other than electronic database sources are not sufficient to find the studies available. | The set of sources other than databases (electronic sources) expected to include the studies of interest will vary with the review topic. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco, Homa Keshavarz | 2024-03-08 6-0 Eric Harvey, Harold Lehmann, Javier Bracchiglione, Janice Tufte, Homa Keshavarz, Lenny Vasanthan | 2024-03-08 "I wonder about globally and developing nations where CHW workers are collecting data perhaps not electronically", "Similar to other comments global data might be collected on paper - qualitative - observational studies" | from ROBIS 2.2 Were methods additional to database searching used to identify relevant reports? Additional methods such as citation searches, contacting experts, reference checking, handsearching etc. should have been performed. | ||||||
5 | SEVCO:00265 | search strategy not sensitive | A bias in search strategy in which the search terms and combinations of search terms are not sufficient to find the studies available. | The search terms and combinations of search terms expected to include the studies of interest will vary with the review topic. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco, Homa Keshavarz | 2024-03-08 vote 5-0 Eric Harvey, Harold Lehmann, Javier Bracchiglione, Homa Keshavarz, Lenny Vasanthan | from ROBIS 2.3 Were the terms and structure of the search strategy likely to retrieve as many eligible studies as possible? A full search strategy showing all the search terms used, in sufficient detail to replicate the search, is required to be able to fully judge this question. If only limited details are provided, such as a list of search terms with no indication of how these are combined, assessors may be able to make a “Probably Yes” or “Probably No” judgment. Assessors should consider whether the search strategy included an appropriate range of terms for the topic, whether a combination of controlled terms (such as Medical Subject Headings (MeSH) for Medline) and words in the title and abstract were used, and whether any filters applied were appropriate. For example, for DTA reviews the use of filters has been shown to miss relevant studies and so this question should be answered as No for a strategy that includes such filters. Guidance on the critical appraisal of search strategies can be found in the PRESS Evidence-Based Checklist (http://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/7402). | |||||||
5 | SEVCO:00266 | search strategy limits for study report characteristics not appropriate | A bias resulting from search strategy criteria implemented due to practical considerations for implementation, including properties of a report or its accessibility. | A bias in search strategy that is specific to restrictions on the date, status, structure, language, or accessibility of the study report. Accessibility refers to where a resource is available and how one gains access to it. | Harold Lehmann, Kenneth Wilkins, Homa Keshavarz, Khalid Shahin, Joanne Dehnbostel | 2024-03-15 vote 5-0 by Xing Song, Lenny Vasanthan, Harold Lehmann, Homa Keshavarz, Eric Harvey | from ROBIS 2.4 Were restrictions based on date, publication format, or language appropriate? If no restrictions were applied to the search strategy then this question should be answered as Yes. This is different from the question in domain 1 (1.5) which is about restriction to selection criteria. Information is required on all three components of this question (i.e. date, publication format and language) to be able to fully judge this item. Restriction of papers based on language (e.g. restriction to English language articles) or publication format (e.g. restriction to full text published studies) is rarely (if ever) appropriate, and so if any such restrictions were applied then this question should usually be answered as “No”. Restrictions on date may be appropriate but should be supported by a clearly described rationale for this question to be answered as “Yes”. For example, if a medication or test was not available before a certain date then it is reasonable to only start searches from the date at which the medication or test first became available. | |||||||
4 | SEVCO:00267 | misapplication of study eligibility criteria | A study selection bias due to inappropriate implementation of the study inclusion and exclusion criteria. | Sheyu Li, Ken Wilkins, Muhammad Afzal, Homa Keshavarz, Joanne Dehnbostel | 2024-04-12 vote 6-0 by Harold Lehmann, Sheyu Li, Jennifer Hunter, Janice Tufte, Homa Keshavarz, Eric Harvey | 2024-03-29 vote 7-1 by Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Javier Bracchiglione, Jennifer Hunter, Homa Keshavarz, Lenny Vasanthan, Eric Harvey | 2024-03-29 comment: The title/name needs work. Maybe something like: Bias due to non-adherence to eligibility criteria? | |||||||
4 | SEVCO:00345 | bias related to selection of the studies for synthesis | A study selection bias due to inappropriate choice of studies included in the synthesis. | A potential use of this term is when studies meeting the review criteria were available but not included in the review. Another potential use of this term is when studies were included in the overall review but were not included in a specific meta-analysis or specific synthesis. If the studies selected for synthesis do not match the available studies, there is a risk of distorted results which constitutes bias. The term 'bias related to selection of the studies for synthesis' matches the ROBIS signaling question 4.1 'Did the synthesis include all studies that it should?' If a study was selected for the overall review but the data was not extracted from the study for a specific analysis or synthesis, then use [bias related to selection of the data for synthesis](https://fevir.net/resources/CodeSystem/27270#SEVCO:00352) | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Airton Stein, Harold Lehmann | 2024-08-23 vote 9-0 by Carlos Alva-Diaz, Elma OMERAGIC, Lenny Vasanthan, Harold Lehmann, Philippe Rocca-Serra, Eric Harvey, Sean Grant, Airton Tetelbom Stein, Homa Keshavarz | 2024-05-17 vote 5-0 by Saphia Mokrane, Lenny Vasanthan, Sheyu Li, Eric Harvey, Harold Lehmann 2024-05-24 vote 7-0 by Homa Keshavarz, Sheyu Li, Eric Harvey, Lenny Vasanthan, Harold Lehmann, Janice Tufte, Saphia Mokrane 2024-06-07 vote 6-2 by Sean Grant, Saphia Mokrane, Sheyu Li, Lenny Vasanthan, Harold Lehmann, Eric Harvey, Carlos Alva-Diaz, Kailei Nong 2024-06-14 vote 7-1 by Yaowaluk Ngoenwiwatkul, Homa Keshavarz, Sean Grant, Sheyu Li, Eric Harvey, Lenny Vasanthan, Harold Lehmann, Janice Tufte 2024-06-21 vote 8-0 by Cauê Monaco, Lenny Vasanthan, Homa Keshavarz, Yaowaluk Ngoenwiwatkul, Harold Lehmann, Sean Grant, Eric Harvey, Carlos Alva-Diaz BUT THEN THE TERM CHANGED from 'data' to 'studies' 2024-08-09 vote 7-1 by Brian S. Alper, Harold Lehmann, Homa Keshavarz, Sheyu Li, Sean Grant, Eric Harvey, Lenny Vasanthan, Airton Tetelbom Stein 2024-08-16 vote 5-1 by Cauê Monaco, Bhagvan Kommadi, Jennifer Hunter, Eric Harvey, Harold Lehmann, Airton Tetelbom Stein | 2024-05-17 comment: (There could be a link to "Selection bias", making the point that this term is the equivalent for synthesis studies.) 2024-05-24 comment: Hmm. Maybe change, "typical use" to "typical risk"? 2024-06-07 comments re: "synthesis missing eligible studies" = "A synthesis bias in which eligible studies were not included in the evidence synthesis." 1N) The term itself is new to me: is there a more established term? re: "synthesis missing eligible studies" = "A synthesis bias in which eligible studies were not included in the evidence synthesis." 2N) It is confusing regarding synthesis. Evidence synthesis includes quantitative synthesis (meta-analysis typically) and qualitative synthesis. The comments may mean that a study is included in a qualitative synthesis but not a quantitative synthesis. It can be true, but why? Typically it can be some synthetic gap, e.g., the unavailability of zero events in a meta-analysis. The definition can be revised as A synthesis bias in which eligible studies were not included in the some parts of evidence syntheses.
2024-06-14 comment re: "bias related to selection of the data for synthesis" = "A synthesis bias due to inappropriate choice of data included in the synthesis before the synthesis is applied." 1N) I understand the concept though I am not clear how this is a "bias"? "Inappropriate" choice just sounds like poor study execution rather than a bias. 2024-08-09 comment re: "bias related to selection of the studies for synthesis" = "A synthesis bias due to inappropriate choice of studies included in the synthesis before the synthesis is applied." 1N) What is the difference between this bias and selection bias? Is it related to some technical barriers such as zero event issue? The synthesis have to opt out the zero event trials because of the restriction of the statistical methods. 2024-08-16 comment re: "bias related to selection of the studies for synthesis" = "A study selection bias due to inappropriate choice of studies included in the synthesis." 1N) I fail to see how the comments for application differ from bias related to the selection of the data for synthesis. There seems to be considerable overlap. Referring to a "specific meta-analysis" may be the issue, as this requires the selection of specific results from the included studies. Additionally, a meta-analysis is only one example of how results can be synthesized, and it does not account for other data (e.g., qualitative). Perhaps something like the following might work: "A potential use of this term is when studies were included in the overall review, yet none of the study results or findings (quantitative or qualitative) were analyzed or synthesized. I'm unsure that "and appropriate for synthesis" is correct. For instance, an included study may have measured the outcome of interest but not reported the results in a way that can be used in the planned meta-analysis (e.g., only the p value is reported, or no SDM/SE is reported). Even though the results cannot be used (i.e., are not appropriate) for the meta-analysis, there is still a risk of distorted results which constitutes bias. Perhaps I am mistaken, and this bias only refers to errors of judgement by the reviewers. 2024-08-23 comment re: "bias related to selection of the studies for synthesis" = "A study selection bias due to inappropriate choice of studies included in the synthesis." 1Y) I could suggest a petit change in 'comment for application'..."If the selected for synthesis do not match the available studies, there is a risk of distorted results which constitutes bias." | ROBIS 4.1 Did the synthesis include all studies that it should? |||||
4 | SEVCO:00268 | bias in study selection process | A study selection bias due to an inadequate process for screening and/or evaluating potentially eligible studies. | An adequate process for screening and evaluating potentially eligible studies should generally include at least two independent reviewers for any steps that involve subjective judgment. Any step involving subjective judgment may introduce systematic distortions into the research findings. | Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin | 2024-04-19 vote 9-0 by Cauê Monaco, Sheyu Li, Jennifer Hunter, Janice Tufte, Eric Harvey, Homa Keshavarz, Lenny Vasanthan, Harold Lehmann, Khalid Shahin | 2024-03-29 vote 6-1 by Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan, Eric Harvey, Jennifer Hunter 2024-04-12 vote 6-2 by Cauê Monaco, Sheyu Li, Jennifer Hunter, Janice Tufte, Eric Harvey, Homa Keshavarz, Lenny Vasanthan, Harold Lehmann | 2024-03-29 comments: 1) Although the comment for application may be questionable for some specific cases. 2N) The title/name needs more work. Perhaps something like: Study screening bias? Or just: Screening bias? 2024-04-12 comments: 1N) The term 'bias' refers to systematic error but back-to-back check by different reviewers reduce only random error. I do not think the definition and comment are in line. For my own experience, there is little room for bias during the study selection. 2N) Suggest replace "and" with "or", as I am assuming that screening refers to T&A screening and evaluating refers to full text inclusion assessment. The bias could arise from one step only. Consider adding some alternative terms e.g., bias in study selection, bias in selection of studies (that maps to ROBIS), screening bias, and/or study screening bias. 3) Re Comment for Application: There are other strategies, besides 2 independent readers, but I suppose the word, "generally," addresses my concern. | from ROBIS 2.5 Were efforts made to minimise errors in selection of studies? Both the process of screening titles and abstracts and of assessing full text studies for inclusion are covered by this question. Information on both are required to be able to fully judge this item. For an answer of “Yes”, titles and abstracts should be screened independently by at least two reviewers and full text inclusion assessment should involve at least two reviewers (either independently or with one performing the assessment and the second checking the decision). | |||||
2 | SEVCO:00016 | confounding covariate bias | A situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. | Association of any two variables includes direct associations and indirect associations through each of the variables having direct associations with a third variable. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Philippe Rocca-Serra, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi | 2023-07-14 vote 5-0 by Paul Whaley, Harold Lehmann, Cauê Monaco, Jesus Lopez-Alcalde, Paola Rosati | 2021-05-07 vote 4-2 on "Comparator Bias = A bias resulting from differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi 2021-05-10 vote 11-1 on "Confounding Covariate Bias = A bias resulting from differences in covariates (variables other than the exposure and outcome) between the groups being compared." by Eric Harvey, KM Saif-Ur-Rahman, Janice Tufte, Bhagvan Kommadi, Paola Rosati, Alejandro Piscoya, Harold Lehmann, Ahmad Sofi-Mahmudi, Eric Au, Jesus Lopez-Alcalde, Tatyana Shamliyan, Joanne Dehnbostel AGREEMENT VOTE 8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati On 2023-06-16 the Steering Group corrected a technical error in the definition (between A or B ... corrected to ... between A and B), and added a Comment for Application, so re-opened the term for vote. | A bias resulting from differences (other than in variables directly involved in the analysis) between the groups being compared. ---led to --- Which differences do you mean between the groups? This definition seems unclear. Defining a Comparator bias means to address some possible specific explanation. Or it is preferable to delete this bias. The definition is for selection bias resulting from nonrandom allocation of participants to interventions. Random allocation of trial participants to interventions would reduce this bias. Comparator selection would not. A bias resulting from differences in covariates (variables other than the exposure and outcome) between the groups being compared -- led to I agree with the definition but I suggest detailing that the covariate is associated to the outcome | ||||||
3 | SEVCO:00032 | allocation bias | A confounding covariate bias resulting from methods for assignment of the independent variable by the investigator to evaluate a response or outcome. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi | 8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati | 2021-05-07 vote 5-1 on "Comparator Selection Bias = A comparator bias resulting from methods for selection of or allocation to groups for comparative analysis that have the potential to introduce differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi, 2021-05-10 vote 11-1 on "Allocation Bias = A confounding covariate bias resulting from methods for assignment of exposures in an interventional study." by Eric Harvey, KM Saif-Ur-Rahman, Janice Tufte, Bhagvan Kommadi, Paola Rosati, Alejandro Piscoya, Harold Lehmann, Ahmad Sofi-Mahmudi, Eric Au, Jesus Lopez-Alcalde, Tatyana Shamliyan, Joanne Dehnbostel | A comparator bias resulting from methods for selection of or allocation to groups for comparative analysis that have the potential to introduce differences (other than in variables directly involved in the analysis) between the groups being compared. -- led to--- Selection of comparators would not reduce differences between compared groups. A confounding covariate bias resulting from methods for assignment of exposures in an interventional study. --led to-- In my opinion, in an interventional study the investigator assigns the intervention, not the exposures. The differences in the covariates results from the methods for the assignment of the intervention. For example not concealed allocation. | |||||||
4 | SEVCO:00031 | inadequate allocation concealment | An allocation bias resulting from awareness of the assigned intervention before study enrolment and intervention assignment | Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra | 10/10 as of 6/11/2021: Names not captured | |||||||||
4 | SEVCO:00278 | bias due to non-randomized allocation | An allocation bias resulting from a process of assigning participants or subjects to different groups or conditions which is not random. | A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. Allocation bias is defined as a confounding covariate bias resulting from *methods for assignment* of the independent variable by the investigator to evaluate a response or outcome. Methods for assignment that are not random may introduce confounding with measured or unmeasured variables. Non-random methods of generation of an allocation sequence may introduce a confounding covariate bias through associations with one or more non-random variables related to sequence generation. A non-random allocation sequence may be described as a predictable sequence in mathematical terms. The SEVCO term [Quasi-Randomized assignment](https://fevir.net/resources/CodeSystem/27270#SEVCO:01004) is defined as an interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation. Although Quasi-Randomized assignment is "intended to produce similar baseline groups" the term is classified as a type of [Non-randomized assignment](https://fevir.net/resources/CodeSystem/27270#SEVCO:01005). Examples of non-random methods (which may be called 'partially randomized' or 'quasi-random') include every other participant, day of the week, even/odd identification number, birth date, etc. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2023-07-28 vote 5-0 by Brian S. Alper, Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann | 2023-05-12 vote 4-1 by Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Jesus Lopez-Alcalde, Harold Lehmann 2023-05-26 vote 5-1 by Harold Lehmann, Jesus Lopez-Alcalde, Sunu Alice Cherian, Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel 2023-06-09 vote 4-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian, Harold Lehmann 2023-06-16 vote 3-2 by Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann, Paola Rosati 2023-07-14 vote 4-1 by Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco, Paola Rosati | 2023-05-12 comment: Does this term definition actually define "inadequate" = potentially predictable sequence? seems like an inappropriate allocation bias perhaps 2023-05-26 comment: Definition: Methods of allocating study participants to treatment comparison groups that are not random, but are intended to produce similar groups. Alternative terms: Quasi random allocation 2023-06-09 comment: The comment for application describes "unrecognised associations", but the definition talks about "potentially predictable", which implies exploiting a recognised association to break blinding. I am not sure it can be both of these. 2023-06-16 comments: Type of bias that arises in research studies when the process of assigning participants or subjects to different groups or conditions is not random.
I think I remember my original concern now - in the definition, the problem is not that the sequence is predictable, it is that the sequence is associated with another variable, thus introducing this other variable as a confounder. Unless it is about the investigator being able to break blinding, in which case the concept of the sequence being predictable is important. 2023-07-14 comment: I think non-random methods are those clearly non-random, such as allocation by provider's preferences. However, quasi-random methods are those that apply a method that attempts to be random but that it isn't. Example: day of the week. 2023-07-28 comment: For consistency, should we call it, "Confounding Bias due to non-randomized allocation"? | ||||||
3 | SEVCO:00033 | comparator selection bias | A confounding covariate bias resulting from methods used to select participating subjects, or factors that influence study participation, for the comparator group. | This situation is more commonly related to observational research. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi | 8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati | ||||||||
3 | SEVCO:00034 | confounding difference | A confounding covariate bias in which the unequal distribution of a potentially distorting variable is recognized. | The potentially distorting variable is a covariate, and not the exposure or the outcome. Even if adjusted for in the analysis, a risk of bias can be present. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Philippe Rocca-Serra | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | 2021-05-07 vote 5-1 on "Recognized Difference with Potential for Confounding = A comparator bias resulting from known differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi, 2021-05-24 vote 6-1 on "A confounding covariate bias in which the unequal distribution of a potentially distorting variable is recognized." by Harold Lehmann, Eric Harvey, KM Saif-Ur-Rahman, Bhagvan Kommadi, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde | A comparator bias resulting from known differences (other than in variables directly involved in the analysis) between the groups being compared. -- led to-- This definition seems tricky. If you find any difference between groups that can go astray with analysis you simply address the potential for confounding explicitly in the discussion section of your protocol/paper. The potential for confounding needs to be considered in the protocol, and specifically addressed in the post-analysis to avoid any further bias. The term comparator bias is misleading since differences between groups would not be reduced by selecting different comparators. If this is recognized and adjusted for, is it still a bias? Seems that we need to address this circumstance. | ||||||
3 | SEVCO:00280 | confounding by time of observation | A confounding covariate bias in which the distorting variable is the time at which the outcome is measured or observed. | A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. The time at which the outcome is measured or observed may be absolute (e.g. a specific date) or relative (e.g. 3 months after study enrollment). To understand "confounding by time of observation" consider the following example: An observational study is comparing patients with asthma taking Superdrug and patients with asthma not taking Superdrug. The outcome of interest is mortality. The patients taking Superdrug are observed for their full duration of exposure to Superdrug. For comparison, the control group not receiving Superdrug is measured during a 1-year calendar period. For the mortality outcome comparing Superdrug vs. no Superdrug, the time of observation for the control group is consistently 1 year but for the Superdrug group the time of observation varies for each patient. This comparison is confounded by the time of observation. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Paul Whaley | 2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Harold Lehmann, Mario Tristan | 2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian 2023-06-16 vote 4-1 by Paola Rosati, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann 2023-07-14 vote 7-0 by Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco THEN REOPENED 2023-08-04 due to comment that suggests removing parenthetical from definition | 2023-06-09 comment: The comment for application is not sufficiently informative. I am also not sure I understand what the definition means - what is the importance of recognition of unequal distribution of follow-up time? 2023-06-16 comments: A confounding that occurs when the relationship between an exposure or intervention and an outcome is confounded by the time at which the outcome is measured or observed. Alternate terms: time-varying confounding Comment for application: This occurs when both the exposure and the outcome change over time, and there are other time-dependent factors that influence the outcome The Comment for Application seems to be repeating the definition of the parent term. I though we usually add details specific to the current term. | ||||||
3 | SEVCO:00281 | lead time bias | A confounding covariate bias in which the distorting variable is the length of time that the participant has had the condition of interest at study enrollment. | A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. A lead time bias is often manifest as a distortion overestimating the apparent time surviving with a disease caused by bringing forward the time of its diagnosis (https://catalogofbias.org/biases/lead-time-bias/). Lead time bias is a type of bias that occurs in medical screening or diagnostic tests when the early detection of a disease or condition artificially appears to improve survival or prognosis, even if it does not actually provide a true benefit in terms of overall health outcomes. Lead time refers to the amount of time between the detection of a disease through early screening or diagnostic testing and the time when the disease would have been clinically detected without screening. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Paul Whaley | 2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Harold Lehmann | 2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian 2023-06-16 vote 3-2 by Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann, Paola Rosati 2023-07-14 vote 6-0 by Muhammad Afzal, Joanne Dehnbostel, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco THEN REOPENED 2023-08-04 due to comment that suggests removing parenthetical from definition | 2023-06-09 comment: I am not sure I can successfully parse the syntax of the definition. While I think I understand what is meant, I feel it could be phrased more clearly. 2023-06-16 comments: Lead time bias is a type of bias that occurs in medical screening or diagnostic tests when the early detection of a disease or condition artificially appears to improve survival or prognosis, even if it does not actually provide a true benefit in terms of overall health outcomes Comment for application: Lead time refers to the amount of time between the detection of a disease through early screening or diagnostic testing and the time when the disease would have been clinically detected without screening. This definition seems difficult to understand: does it convey that lead time bias is related to the potentially distorting variable of the length of time chosen in the study in which some participants could have confounding differences between their diagnosis of the condition of interest and the time of enrolment? I have some problem in understanding, sorry. | Lead time bias A distortion overestimating the apparent time surviving with a disease caused by bringing forward the time of its diagnosis https://catalogofbias.org/biases/lead-time-bias/ | |||||
3 | SEVCO:00282 | confounding influencing adherence to intervention | A confounding covariate bias in which the distorting variable is associated with deviations from the intended intervention. | A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. For 'Confounding influencing adherence to intervention', the association of the distorting variable and the exposure is specific to deviations from the intended exposure (intended intervention). Deviations from the intended intervention may include deviations from the intervention protocol or lack of adherence. Lack of adherence includes imperfect compliance, cessation of intervention, crossovers to the comparator intervention and switches to another active intervention. The term 'Confounding influencing adherence to intervention' is distinct from 'Performance Bias' (including 'Nonadherence of participants' or 'Imbalance in deviations from intended interventions') in that an additional variable (the distorting variable or confounding covariate) is acting as a confounder, while the 'Performance Bias' may occur with or without any differences in a third variable. | Brian Alper, Joanne Dehnbostel, Harold Lehmann, Paul Whaley, Kenneth Wilkins | 2023-09-29 vote 5-0 by Joanne Dehnbostel, Harold Lehmann, Paul Whaley, Eric Harvey, Mario Tristan | 2023-07-28 vote 3-1 by Eric Harvey, Harold Lehmann, Jesus Lopez-Alcalde, Paul Whaley 2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition 2023-08-11 vote 3-1 by Mario Tristan, Cauê Monaco, Eric Harvey, Joanne Dehnbostel | 2023-06-02 comment from steering group: need to see the background to ROBINS-I to understand context for this term 2023-07-28 comment: I think the definition is good but the comment for application should specifically address this term and not just duplicate the definition of confounding covariate bias. 2023-08-11 comment: Is this the same as compliance bias, or compliance bias ("https://catalogofbias.org/biases/compliance-bias/") is a subtype of this? If "compliance bias" is a synonym, should be added as such. If not, should be added as a separate term | trigger question from ROBINS-I: 1.3. Were intervention discontinuations or switches likely to be related to factors that are prognostic for the outcome? | |||||
3 | SEVCO:00284 | confounding by indication | A confounding covariate bias in which the distorting variable is the reason for receiving an exposure. | A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. The term 'indication' is derived from the medical community, in which the reason that an intervention is provided is called the indication. A reason for not providing an intervention may be called a 'contraindication' rather than 'indication to not provide'. For example, people exposed to chemotherapy have higher mortality. This observation can easily be confounded by people exposed to chemotherapy having a higher rate of cancer (as the reason for receiving the chemotherapy). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Joanne Dehnbostel | 2023-05-12 vote 5-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann BUT THEN TERM CHANGED WITH HIERARCHY CHANGE on 2023-06-30 2023-07-14 vote 2-1 by Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann 2023-07-28 vote 3-1 by Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann 2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition | 2023-05-12 comment: For Comment for Application, I thought we usually put the definition of the parent term first, and the comments about this child. So I would arrange the current 1, 2, 3 paragraphs as 2, 1, 3. And I think what is now the first paragraph should start with, "A confounding different bias..." 2023-07-01 comment: I would add to the definition "or lack of". Thus: "A confounding covariate bias in which the confounder (distorting variable) is the reason for (or for lack of) an intended exposure. 2023-07-14 comment: I think the definition is good but the comment for application should specifically address this term in more detail than providing a definition for "indication". It is a complex concept and I am not sure I understand what is happening with this bias. 2023-07-28 comment: I still feel that an example of how the reason for receiving an exposure can end up being a confounder would be helpful. | Confounding by indication A distortion that modifies an association between an exposure and an outcome, caused by the presence of an indication for the exposure that is the true cause of the outcome. from https://catalogofbias.org/biases/confounding-by-indication/ | |||||
3 | SEVCO:00388 | confounding by contraindication | A confounding covariate bias in which the distorting variable is the reason for not receiving an exposure. | A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. The term 'indication' is derived from the medical community, in which the reason that an intervention is provided is called the indication. A reason for not providing an intervention may be called a 'contraindication' rather than 'indication to not provide'. For example, people with cancer exposed to surgery for curative resection have lower mortality than other people with cancer. This observation can easily be confounded by people exposed to surgery for curative resection having a lower rate of metastatic cancer (which is a contraindication to such a surgery). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco | 2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Joanne Dehnbostel | 2023-07-28 vote 3-1 by Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann 2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition | 2023-07-28 comment: I still feel that an example of how the reason for receiving an exposure can end up being a confounder would be helpful. | ||||||
3 | SEVCO:00390 | time-varying confounding affected by past exposure | A confounding covariate bias in which the distorting variable is itself influenced by the exposure. | Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared. To distinguish "confounding by time of observation" from "time-varying confounding affected by past exposure" consider the following example: An observational study is comparing patients with asthma taking Superdrug and patients with asthma not taking Superdrug. The outcome of interest is mortality, both for association with the dose of Superdrug and compared to not receiving Superdrug. For comparison, the control group not receiving Superdrug is measured during a 1-year calendar period. For the mortality outcome comparing Superdrug vs. no Superdrug, the time of observation for the control group is consistently 1 year but for the Superdrug group the time of observation varies for each patient. This comparison is confounded by the time of observation. For the mortality outcome comparing high-dose vs. low-dose Superdrug, the confounding variable of asthma exacerbation rate is complicated in several ways. First, the asthma exacerbation rate is associated with the outcome (mortality) independent from the effects of Superdrug. Second, the asthma exacerbation rate may influence the exposure (the dose of Superdrug, which is increased if asthma exacerbations are frequent) and the exposure (higher dose of Superdrug) may influence the confounder (reducing the asthma exacerbation rate). This comparison of high-dose vs. low-dose Superdrug for effects on mortality is distorted by time-varying confounding affected by past exposure. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Caue Monaco | 2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Harold Lehmann, Mario Tristan | 2023-09-01 comment (with No vote): This term seems unnecessary. Describes a bias rarely seen. | |||||||
2 | SEVCO:00017 | performance bias | A bias resulting from differences between the received exposure and the intended exposure. | Such differences could be the administration of additional interventions that are inconsistent with the study protocol, or non-adherence by the interventionalists or study participants to their assigned intervention. Such differences may occur based on assignment to intervention or may occur due to adherence to intervention. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Philippe Rocca-Serra | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | 2021-05-24 vote 5-2 on "A bias resulting from differences between the received exposure and the intended exposure. Such differences could be the administration of additional interventions that are inconsistent with the study protocol, or non-adherence by the interventionalists or study participants to their assigned intervention. " by Harold Lehmann, Eric Harvey, KM Saif-Ur-Rahman, Bhagvan Kommadi, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde | Definition of performance bias should be modified. Performance bias should involve the blinding at participant level and implementer level in the definition. I would add that the differences must be present between the study arms. In an RCT with an active control (for example drug A vs drug B) both study arms may have had low adherence but if these deviations from the protocol occurred homogeneously across arms the effect estimate may not be distorted (biased). As a reviewer, I would not penalise this estimate due to high risk of performance bias. So, concerning the definition, I would propose "A bias resulting from differences across the study arms between the [...]" | ||||||
3 | SEVCO:00035 | inadequate blinding of participants | A performance bias due to awareness of the allocated intervention by participants | Inadequate blinding of participants is applied when there is awareness of assigned intervention AFTER intervention assignment. If there is awareness BEFORE study enrolment and intervention assignment, this would be Inadequate allocation concealment. The term "Inadequate blinding of participants" is used to denote the TYPE of bias. Separate terms for the RATING of risk of bias are used to report the likelihood of the presence and influence of the type of bias. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi, Philippe Rocca-Serra | 8/8 as of 6/14/2021: Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco, | 2021-06-07 vote 7-1 on "Inadequate blinding of participants = A performance bias due to awareness of the allocated intervention by participants" by KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati, 2021-06-11 vote 9-1 on same | Need to distinguish blinding of intervention from blinding of allocation. Inadequate blinding of participants does not always imply bias. Besides, it can also imply detection bias in patient-reported outcomes. | ||||||
3 | SEVCO:00036 | inadequate blinding of intervention deliverers | A performance bias due to awareness of the allocated intervention by individuals providing or delivering the intervention | Inadequate blinding of intervention deliverers is applied when there is awareness of assigned intervention AFTER intervention assignment. If there is awareness BEFORE study enrolment and intervention assignment, this would be Inadequate allocation concealment. The term noted here is used to denote the TYPE of bias. Separate terms for the RATING of risk of bias are used to report the likelihood of the presence and influence of the type of bias. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi, Philippe Rocca-Serra | 8/8 as of 6/14/2021: Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco, | 2021-06-07 vote 7-1 on "Inadequate blinding of participants = A performance bias due to awareness of the allocated intervention by individuals providing or delivering the intervention" by KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati, 2021-06-11 vote 9-1 on same | Need to distinguish blinding of intervention from blinding of allocation; Should we use the term interventionalist or interventionist? Inadequate blinding of intervention deliverers does not always imply Performance bias | ||||||
3 | SEVCO:00037 | deviation from study intervention protocol | A performance bias in which the intervention received differs from the intervention specified in the study protocol | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | |||||||||
3 | SEVCO:00038 | deviation from standard of care | A performance bias in which the intervention or exposure received differs from the usual practice or expected care | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | |||||||||
3 | SEVCO:00039 | nonadherence of implementation | A performance bias in which the intervention deliverers do not completely adhere to the expected intervention | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | interventionist vs. intervention deliverer | ||||||||
3 | SEVCO:00040 | nonadherence of participants | A performance bias in which the participants do not completely adhere to the expected intervention or exposure | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | is known or unknown | ||||||||
3 | SEVCO:00041 | imbalance in deviations from intended intervention | A performance bias in which the degree of performance bias is unequally distributed between groups being compared | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi | 8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | |||||||||
2 | SEVCO:00019 | attrition bias | A bias due to absence of expected participation or data collection after selection for study inclusion. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Erfan Shamsoddin | 13/13 as of 6/18/2021: Eric Au, Harold Lehmann, Erfan Shamsoddin, Ahmad Sofi-Mahmudi, Mario Tristan, Eric Harvey, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati | 2021-06-14 vote 7-1 on "Attrition Bias = A bias due to absence of expected participation or data collection after study enrollment." by Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco | The phrase "after study enrolment" might be confusing. Does enrolment apply to retrospective observational studies? | |||||||
3 | SEVCO:00286 | attrition bias due to participant attrition | A bias due to absence of expected participation due to participant dropout, withdrawal or non-participation after selection for study inclusion. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2023-05-12 vote 6-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel , Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | 2023-04-28 comment: I would not detail that 20%: it is misleading and not evidence-based | ||||||||
3 | SEVCO:00287 | attrition bias due to missing data | A bias due to data loss or absence of data collection from participants after selection for study inclusion. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2023-05-12 vote 6-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel , Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | 2023-04-28 comment: I would not detail that 20%: it is misleading and not evidence-based | ||||||||
4 | SEVCO:00386 | attrition bias due to missing outcome data | An attrition bias due to missing data specific to the dependent variable. | In a situation of repeated measures outcomes, attrition bias due to missing outcome data can occur if one or more measurements are missing. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin | 2023-06-16 vote 5-0 by Joanne Dehnbostel, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann | 2023-05-19 5-1 Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Joanne Dehnbostel 2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Harold Lehmann | The information in the parentheses, "(or data on an independent variable)," is unclear in its intended meaning. To improve clarity, we could revise the definition. 2023-06-09 comment: The definition is too difficult to parse, and probably too similar to the preferred term. The comment for application is also very difficult to read. | ||||||
4 | SEVCO:00288 | attrition bias due to missing exposure data | An attrition bias due to missing data specific to the independent variable(s) of primary interest, such as exposure or intervention. | If coding a bias related to the classification of exposure, misclassification of exposure may be coded as Exposure Detection Bias, but if the data is excluded from analysis it may then be coded as Attrition bias due to missing exposure data. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Paul Whaley | 2023-06-09 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Harold Lehmann, Jesus Lopez-Alcalde, Sunu Alice Cherian | 2023-05-19 5-1 Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Joanne Dehnbostel | The information in the parentheses, "(or data on an independent variable)," is unclear in its intended meaning. To improve clarity, we could revise the definition. 2023-06-09 comment: This needs a comment for application, but the definition is clearer than for "attrition bias due to missing outcome data". | ||||||
4 | SEVCO:00289 | attrition bias due to missing modifier data | An attrition bias due to missing data specific to a confounder or effect modifier | The term modifier is intended to be broad, including variables used for modeling interactions, stratification factors to account for effect modification, or other variables such as mediators that need to be accounted for when modeling the relationship between the outcome and exposure. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin | 2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | ||||||||
4 | SEVCO:00387 | attrition bias due to missing data about attrition | An attrition bias due to missing data specific to the extent of or reasons for missing data. | Attrition bias due to missing data is defined as a bias due to data loss or absence of data collection from participants after selection for study inclusion. Data about the amount of missing data and data about the reasons for missing data are types of data that can also be missing. For example, in a time-to-event study, the reason a participant is censored might be missing and missing such data may interfere with distinguishing informative from non-informative censoring. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2023-06-16 vote 5-0 by Joanne Dehnbostel, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann | 2023-06-16 comments: Time-to-event should be hyphenated Funnily enough, this came up straight after our call in relation to another bias project I am working on, so I would consider this addition useful! | |||||||
3 | SEVCO:00290 | imbalance in missing data | An attrition bias in which the degree of missing data is unequally distributed between groups being compared. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | |||||||||
3 | SEVCO:00291 | inadequate response rate | An attrition bias in which the reason for absence of data collection is a low response rate to data collection surveys. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | |||||||||
2 | SEVCO:00020 | detection bias | A bias due to distortions in any process involved in the determination of the recorded values for a variable. | Detection of the value of the variable comprises three processes involved in the determination of the recorded values for the variable: ascertainment (providing the opportunity for assessment), assessment (measurement and/or classification), and documentation (recording of data values for analysis). | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-01-28 vote 9-0 by Mario Tristan, Janice Tufte, Robin Ann Yurk, Brian S. Alper, C P Ooi, Harold Lehmann, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley | 2021-06-14 vote 7-1 on "Detection Bias = A bias due to distortions in how variable values (data) are determined (measured, classified or ascertained)." by, Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco AGREEMENT REACHED 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper --- for DEFINITION OF: A bias due to distortions in how variable values (data) are determined. COMMENT FOR APPLICATION: Determination may include ascertainment or assessment (classification or measurement). 2022-10-14 vote 3-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris 2022-01-21 vote 6-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra | We need to state that this bias relates to the "outcome" -- The ROB-1 uses the term "outcome assessment" as an alternative for detection bias. The ROBINS-1 says that "Non-differential misclassification is unrelated to the outcome and will usually bias the estimated effect of intervention towards the null". Still though, this leads to inadvertent deviations in the outcome assessment. I would suggest at least stating that this bias relates to outcome assessment. I remember Joanne saying that we will add a few "child concepts" later on and if that is the case here, then it is fine. Nevertheless, the RoB2 suggests not to use these terms to prevent "confusion" and does not actually agree with these sub-classifications (the first page of the introduction section). Alternative terms according to (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5130591/): "Observer Bias", "Ascertainment Bias", or "Assessment Bias" 8/27/21 comment: Detection bias is not included in the list of the more problematic Cochrane ROB1 domains; however, Jørgensen et al. (Systematic Reviews, 2016) describe all the domains of ROB1 as "frequently implemented in a non-recommended way". The description in general is clear. 2022-10-14 comments: Do we need "Outcome Detection Bias" in addition to "Detection Bias"? Blinding or masking may be used to reduce the risk of distorted outcome measurement(s). 2022-01-21 comment: I am not sure whether to vote yes or no: I understand the definition because I have been following our discussions and it is consistent with the bias model we have developed, but I worry that this definition may not be consistently understood or applied by a user of SEVCO - I feel there is too much unspoken metaphysical baggage that is coherent and correct but not useful. 2022-01-28 comment: Not perfect but good enough to live with. Could maybe improve on ascertainment component of the comment for application. | ||||||
3 | SEVCO:00042 | outcome detection bias | A detection bias due to distortions in how an outcome is determined. | Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Erfan Shamsoddin, Muhammad Afzal, Kenneth Wilkins | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | |||||||||
4 | SEVCO:00047 | cognitive interpretive bias for outcome determination | An outcome detection bias due to the subjective nature of human interpretation. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper, 7/7 for Alternative terms on 9/24/21: Janice/Brian/Eric/Paola/Jesus/Bhagvan/Mario | 8/27/21 comment: This bias is difficult to manage and avoid it. | ||||||||
5 | SEVCO:00048 | bias due to lack of masking for outcome determination | A cognitive interpretive bias for outcome determination due to awareness of the participant's status with respect to the exposure of interest. | Lack of blinding or masking is not automatically a bias, but if awareness of exposure status systematically distorts the outcome determination then a 'Bias due to lack of masking for outcome determination' exists. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Paul Whaley, Kenneth Wilkins | 2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper | 8/27/2021 vote 9-1 on "Lack of blinding during outcome assessment = A cognitive interpretive bias for outcome determination due to the outcome assessor’s awareness of the participant's status with respect to the exposure of interest." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper earlier term approved 5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte | 2021-08-27 comment: In my opinion "lack of blinding during outcome assessment" does not always imply bias for outcome determination (for example, for hard outcomes, such as analytic parameters, or all-cause mortality) 2022-03-18 comment: I would consider editing the term definition to ...lack of blinding. | ||||||
5 | SEVCO:00049 | observer bias for outcome determination | A cognitive interpretive bias for outcome determination due to subjective interpretations in the process of observing and recording information. | Multiple types of bias can overlap. Observer bias is different than lack of blinding with respect to the exposure. Observer bias is about the influence of the observer's interpretation of what they are observing, whether or not the observer is aware of the participant's exposure. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel | 5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte, | 8/27/2021 vote 9-1 on "Observer bias for outcome determination = A cognitive interpretive bias for outcome determination due to subjective interpretations in the process of observing and recording information." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | 2021-08-27 comment: This situation seems to be covered by "Lack of blinding for outcome determination" and "Outcome ascertainment bias". I would suggest deleting this term to remove the overlap. | ||||||
6 | SEVCO:00052 | confirmation bias for outcome determination | An observer bias for outcome determination due to previous opinions or knowledge of a subject’s prior exposures or assessments. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan | 5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte, | |||||||||
5 | SEVCO:00050 | recall bias for outcome determination | A cognitive interpretive bias for outcome determination due to differences in accuracy or completeness of recall of past events or experiences. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | |||||||||
5 | SEVCO:00051 | apprehension bias for outcome determination | A cognitive interpretive bias for outcome determination due to a study participant's responding or behaving differently when aware of being observed. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Mario Tristan | 5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte, | 8/27/2021 vote 8-2 on "Apprehension bias for outcome determination = A cognitive interpretive bias for outcome determination due to study participants’ awareness of being observed resulting in different responses or behaviors." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | 2021-08-27 comments: A cognitive interpretive bias for outcome determination due to study participants’ awareness of being observed and resulting in different responses or behaviors. (just a slight rewording - the existing wording doesn't read well to me) This definition seems to refer to performance bias. The key is that [...] results in different responses or behaviours concerning the outcome determination. | |||||||
5 | SEVCO:00053 | hypothetical assessment bias for outcome determination | A cognitive interpretive bias for outcome determination due to a difference between an individual’s report of an imagined or hypothetical response from their actual response. The response may be a behavior or valuation. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan | 7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan | |||||||||
5 | SEVCO:00054 | mimicry bias for outcome determination | A cognitive interpretive bias for outcome determination due to a misinterpretation of observations that resemble the outcome. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel | 7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan | |||||||||
5 | SEVCO:00057 | unacceptability bias for outcome determination | A cognitive interpretive bias for outcome determination due to distortions in response, response values, or recording of responses resulting from perception of the social unacceptability of an outcome. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Ken Wilkins, Lisa Schilling | 5/5 as of 10/1/21: , Joanne Dehnbostel, Brian S. Alper, Eric Harvey, Alejandro Piscoya, Bhagvan Kommadi, | |||||||||
4 | SEVCO:00058 | outcome ascertainment bias | An outcome detection bias due to distortions in how the data are collected. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | |||||||||
5 | SEVCO:00097 | nonrepresentative observation period for outcome of interest | An outcome ascertainment bias due to differences in the period used for observation of the outcome and the period for the outcome of interest. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 10/29/2021 vote 6-0 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte | 10/15/2021 vote 5-2 on "Inappropriate follow up period for outcome of interest = An outcome ascertainment bias due to differences in the time period used for observation of the outcome and the true time period for outcome occurrence." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Misaligned follow up period for outcome of interest = An outcome ascertainment bias due to differences in the time period used for observation of the outcome and the true time period for outcome occurrence." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey | 2021-10-15 comments: I wonder if we need to address interim analysis validity. What about adding to inappropriate 'unreliable'?; Change word Inappropriate to Different 2021-10-25 comments: It is unclear what do you mean with 'and the true time period for outcome occurrence', On the other hand, I propose using 'period' instead of 'time period' | |||||||
5 | SEVCO:00098 | nonrepresentative context for outcome ascertainment | An outcome ascertainment bias due to differences in the context in which the outcome is observed and the intended context for the outcome of interest. | This term is used when the context used for outcome ascertainment is incorrect, insensitive, or nonspecific. If the context (whether representative or not) is applied inconsistently, then use the term "Inconsistency in outcome ascertainment" | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper | 10/15/2021 vote 6-1 on "Unreliable method for outcome ascertainment = An outcome ascertainment bias due to methods of data collection that result in inconsistent data values." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Undependable method for outcome of interest = An outcome ascertainment bias due methods of data collection that result in inconsistent or incorrect data values." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey 10/29/2021 vote 5-1 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte 11/22/2021 vote 6-1 2021-12-03 vote for priort term 7-0 by Philippe Rocca-Serra, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley, Joanne Dehnbostel, C Ooi | 2021-10-15 comments: the word Unreliable is misleading as more applicable to measurement error than bias 2021-10-25 comments: I do not fully understand the difference between the second and the third definitions 2021-10-29 comments: Suggest Incorrect or inconsistent method. 2021-11-22 comments: The term 'inconsistent' may be more appropriate -- steering group discussion to move the "Comment for application" property higher on the page and see if this comment will resolve the concern | ||||||
5 | SEVCO:00099 | inconsistency in outcome ascertainment | An outcome ascertainment bias due to differences within or between groups in how the data are collected. | This term is used when the context (whether representative or not) is applied inconsistently. If the context used for outcome ascertainment is incorrect, insensitive, or nonspecific, then use the term "Nonrepresentative context for outcome ascertainment" | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley, Harold Lehmann | 2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper | 10/15/2021 vote 6-1 on "Imbalance in application of outcome ascertainment = An outcome ascertainment bias due to differences within or between groups in how the data are collected." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Imbalance in application of outcome ascertainment = An outcome ascertainment bias due to differences within or between groups in how the data are collected." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey 10/29/2021 vote on prior term 6-0 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte | 2021-10-15 comments: Imbalance is misleading as more applicable to measurement error? 2021-10-25 comment: Suggestion, replace imbalance with Variation or Heterogeneity 2021-10-29 comment: Alternative Terms: Variation or Heterogeneity --> converted 2021-10-29 to suggested addition of Alternative term "Variation in application of outcome ascertainment" by Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Harold Lehmann, Mario Tristan, Bhagvan Kommadi | ||||||
4 | SEVCO:00059 | outcome measurement bias | An outcome detection bias due to distortions in how the observed outcomes are measured. | If one is addressing a bias in the instruments or processes used to measure the observed outcome, use Outcome Measurement Bias. If one is addressing how the measured outcome is categorized, use Outcome Classification Bias. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra | PRIOR AGREEMENT 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper WITH DEFINITION: An outcome detection bias due to distortions in how the data are measured. | 2022-01-11 comment: Outcome Measurement Bias has a similar term definition as Outcome Classification Bias. May need to add an additional comment for application from T&O discussion. | ||||||
5 | SEVCO:00100 | inappropriate method for outcome measurement | An outcome measurement bias due to use of an incorrect method or protocol. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey | 10/15/2021 vote 6-1 on "Outcome measurement method inappropriate = An outcome measurement bias due to use of an incorrect method or protocol." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 2-2 on "Outcome measurement method inappropriate = An outcome measurement bias due to use of an incorrect method or protocol." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey | 2021-10-15 comments: I would change word inappropriate to different as the bias is from difference in comparison not flaws or errors in scientific methods. 2021-10-25 comments: suggest replace with incorrect method; Should not be 'Inappropriate outcome measurement method' (instead of placing the adjective at the end?) 2022-03-11 Preferred term revised (and Alternative term added) to match corresponding changes in Exposure Detection Bias) | |||||||
5 | SEVCO:00101 | insensitive measure bias for outcome determination | An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present. | Use of an inadequately sensitive outcome measure is likely to result in false negative findings. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey | 10/15/2021 vote 6-1 on "Insensitive measure bias for outcome determination = An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Insensitive measure bias for outcome determination =An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey | 2021-10-15 comments: Change word Insensitive to Sensitivity measure bias as double negative in phrase 2021-10-25 comment: False Negative measure Bias or Unreliable measure bias | ||||||
5 | SEVCO:00211 | nonspecific measure bias for outcome determination | An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent. | Use of an inadequately specific outcome measure is likely to result in false positive findings. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey | 10/15/2021 vote 6-1 on "Nonspecific measure bias for outcome determination = An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Nonspecific measure bias for outcome determination = An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey | 2021-10-15 comments: I would change to Specificity measurement bias. Remove word falsely from the definition as it implies problems with scientific methods 2021-10-25 comment: Suggest use False Positive Measure Bias | ||||||
5 | SEVCO:00102 | DEPRECATED: unclear outcome measurement method | An outcome measurement bias due to use of a method that is not reported with sufficient clarity and detail such that measurement could be replicated. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 10/15/2021 vote 5-2 on "Outcome measurement method unclear = An outcome measurement bias due to use of a method that is not reported with sufficient clarity and detail such that measurement could be replicated." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Outcome measurement method unclear = An outcome measurement bias due to use of a method that is not reported with sufficient clarity and detail such that measurement could be replicated." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey 2021-11-05 vote 6-1 on "Unclear outcome measurement method = An outcome measurement bias due to use of a method that is not reported with sufficient clarity and detail such that measurement could be replicated." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey | 2021-10-15 comments: I suggest to delete unclear and use 'unreliable' as the measurement could be replicated; I would eliminate this as this is a reviewers criticism of the design which suggests flawed methods. 2021-10-25 comment: Should not be 'Unclear measurement method' (instead of placing the adjective at the end?) 2021-11-05 comment: No. This does not look like a bias, it seems to be more of a shortcoming in reported methods, that may or may not be biased depending on the actual methods that were used, but have been under-specified in the documentation; I would suggest removing this as a bias. | 2021-11-05 This term was deprecated with the concept that one can use a different term for the type of Bias and apply a Rating of Factor Presence term such as Presence or absence of factor unclear. Decision made in COKA ROB Terminology and Tooling WG by Brian Alper, Paul Whaley, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Khalid Shahin, Muhammad Afzal, Bhagvan Kommadi | |||||||
5 | SEVCO:00103 | inappropriate application of method for outcome measurement | An outcome measurement bias due to inappropriate application of the method or protocol. | An inappropriate application of the method or protocol suggests error is introduced by the process of measurement, as distinct from the method or protocol used for measurement (which would be an Inappropriate method for outcome measurement). | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-04-08 vote 11-1 (no rationale provided for the negative vote) by Muhammad Afzal, Paul Whaley, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, nelle.stocquart, nisha mathew, Harold Lehmann, Cauê Monaco | 10/15/2021 vote 6-1 on "Outcome measurement conduct inappropriate = An outcome measurement bias due to incorrect application of the method or protocol." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 2-2 on "Outcome measurement conduct inappropriate = An outcome measurement bias due to incorrect application of the method or protocol." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey 2021-11-05 vote 6-1 on "Inappropriate outcome measurement conduct = An outcome measurement bias due to incorrect application of the method or protocol." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey 2022-03-18 vote 4-1 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper 2022-03-25 vote 7-1 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Philippe Rocca-Serra, Brian S. Alper, Jesus Lopez-Alcalde, Paul Whaley, Muhammad Afzal | 2021-10-15 comments: I would eliminate this definition - as suggests flawed study design.. 2021-10-25 comments: replace inappropriate with incorrect; Should not be 'Inappropriate outcome measurement conduct' (instead of placing the adjective at the end?) 2021-11-05 comment: There is enormous overlap with this term and "Inappropriate outcome measurement method", so this one should be eliminated As of 2021-11-05 this term is not being prepared for vote. The current ROB tools do not distinguish the inappropriate conduct (used in QUADAS-2) from inadequate method (used in most other ROB tools) in the same tool, so the demand for this term is uncertain and thus not applied for version 1 of the Code System. On 2022-03-11 we revised this term to match corresponding changes that passed for Exposure Detection Bias. 2022-03-18 comment: Suggest edit Alternative term from conduct to process 2022-03-25 comment: Recommend edit term definition so it reads: Outcome Measurement method Bias. Suggest reviewing your complete taxonomy of terms and identify similarities or duplicate terms and potentially integrating terms by keeping as primary term versus adding to alternate term for prior vote with similar term definition or statements. | ||||||
5 | SEVCO:00104 | inconsistency in outcome measurement | An outcome measurement bias due to differences within groups in how the observed outcomes are measured. | "How the observed outcomes are measured" may refer to the methods applied for measurement or the application of those methods. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley, Robin Ann Yurk, Harold Lehmann | 2022-01-21 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra | 10/15/2021 vote 6-1 on "Imbalance in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Imbalance in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey 2021-11-05 vote 6-1 on "Inconsistency in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey 2021-11-22 vote 3-2 on "Inconsistency in application of outcome measurement" = "An outcome measurement bias due to differences within or between groups in how the data are measured." 2021-12-10 vote 5-1 by Joanne Dehnbostel, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley PRIOR AGREEMENT 2021-12-17 vote 6-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Mario Tristan, C P Ooi, Jesus Lopez-Alcalde FOR DEFINITION: An outcome measurement bias due to differences within groups in how the data are measured. AND COMMENT FOR APPLICATION: "How the data are measured" may refer to the methods applied for data measurement or the application of those methods. | 2021-10-15 comments: I would eliminate this definition 2021-10-25 comment: Replace Imbalance with Heterogeneity 2021-11-05 comment: This is a specific type of "Inappropriate outcome measurement method" so this term should be moved into that position or eliminated (are we really going to describe all of the inappropriate methods?) [discussed in COKA WG and noted that ROB2 has separate questions 4.1 and 4.2 for these terms so we need to support that] 2021-11-22 comments: "The wording 'inconsistent method of outcome measurement' may better reflect the definition" and "May be pedantic, but is it data that are measured, or the outcome as a variable (that results in data)? I also wonder if we mean differences within groups - some variation would be expected, but what matters is if the variation results in systematic error in measuring the variable between groups. If we feel that e.g. a study design where two different ways of measuring outcome were implemented within groups, but this did not lead to bias across the exposure and control arms, then I would vote yes (pending clarification of "data")." 2021-12-10 comment: It seems to not quite be correctly written. The two choices for definition are differently phrased ("application of methods" / "methods applied") even though I think they are supposed to refer to across groups or within groups, but both refer to within groups, so I am not sure how to interpret this. 2022-01-21 comment: As a comment: Is this term redundant, if the two child terms are the complete set of options for inconsistency in outcome measurement? | ||||||
6 | SEVCO:00243 | inconsistency in instruments used for outcome measurement | An outcome measurement bias due to differences within groups in the instruments used for measurement. | Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement" which may include protocols, techniques, and variations in context. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-01-21 vote 7-0 by Andrew Beck, Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra | 2021-12-17 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde 2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan | 2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ? methods applied) 2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data. | ||||||
6 | SEVCO:00244 | inconsistency in processes used for outcome measurement | An outcome measurement bias due to differences within groups in the processes by which the instruments are used for measurement. | The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-01-21 vote 7-0 by Andrew Beck, Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra | 2021-12-17 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde 2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan | 2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ? methods applied) 2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data. | ||||||
5 | SEVCO:00240 | imbalance in outcome measurement | An outcome measurement bias due to differences between groups in how the observed outcomes are measured. | "How the observed outcomes are measured" may refer to the methods applied for measurement or the application of those methods. | Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Robin Ann Yurk, Janice Tufte, Harold Lehmann, Mario Tristan, Kenneth Wilkins, Muhammad Afzal | 2022-01-21 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra | 2021-12-10 vote 5-0 by Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley but steering group decided to make changes consistent with changes to Inconsistency in outcome measurement. PRIOR AGREEMENT 2021-12-17 vote 5-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati FOR DEFINITION: An outcome measurement bias due to differences between groups in how the data are measured. WITH COMMENT FOR APPLICATION: "How the data are measured" may refer to the methods applied for data measurement or the application of those methods. | 2021-12-10 comment: Referring back to my comment on the inconsistency in method, I realise I hadn't read it quite right. In both cases, they maybe aren't quite as easy to parse as would be ideal but I can't think of a better definition. Maybe a use note to refer to how the terms are similar and clarify when one vs. the other should be used? 2022-01-21 comments: The term definition and comment is the same for Inconsistency in outcome measurement bias. Suggest combining the two terms by listing one as an Alternative term. (yellow highlighting in messaging applied to show the differences in the terms) As a comment: Is this term redundant, if the two child terms are the complete set of options for inconsistency in outcome measurement? | ||||||
6 | SEVCO:00245 | imbalance in instruments used for outcome measurement | An outcome measurement bias due to differences between groups in the instruments used for measurement. | Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement" which may include protocols, techniques, and variations in context. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra, Andrew Beck | 2021-12-17 vote 4-1 by Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde 2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan | 2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ? methods applied) 2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data. | ||||||
6 | SEVCO:00246 | imbalance in processes used for outcome measurement | An outcome measurement bias due to differences between groups in the processes by which the instruments are used for measurement. | The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra, Andrew Beck | 2021-12-17 vote 4-1 by Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Robin Ann Yurk 2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan | 2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ? methods applied) 2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data. | ||||||
4 | SEVCO:00060 | outcome classification bias | An outcome detection bias due to distortions in how the observed outcomes are classified. | If one is addressing a bias in the instruments or processes used to measure the observed outcome, use Outcome Measurement Bias. If one is addressing how the measured outcome is categorized, use Outcome Classification Bias. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra | PRIOR AGREEMENT 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper, 7/7 for renaming on 9/24/21: Janice/Brian/Eric/Paola/Jesus/Bhagvan/Mario FOR DEFINITION: An outcome detection bias due to distortions in how the data are classified. | 2022-01-11 comment: Outcome Classification Bias has a similar term definition as Outcome Measurement Bias. May need to add an additional comment for application from T&O discussion. | ||||||
5 | SEVCO:00061 | outcome classification system bias | An outcome classification bias resulting from the definition or threshold used for outcome classification. | An outcome classification system bias suggests an internal validity problem in which the definition or threshold used for outcome classification does not represent the outcome of interest. If considering an external validity problem, the "Wrong question bias" (term not yet defined) may be used. An outcome classification system bias is present when there are differences between the outcome of interest and the definition or threshold used for outcome classification. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan | 5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte, | ||||||||
6 | SEVCO:00105 | nonrepresentative definition for outcome classification | An outcome classification system bias due to a mismatch between the outcome of interest and the definition or threshold used for outcome measurement. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2021-12-10 vote 5-0 by Paul Whaley, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde | 10/15/2021 vote 6-1 on "Nonrepresentative definition for outcome classification = An outcome classification system bias due to a definition or threshold that does not represent the outcome of interest." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 2021-11-29 vote 6-1 on "Nonrepresentative definition for outcome classification" = "An outcome classification system bias due to a definition or threshold that does not represent the outcome of interest." by Harold Lehmann, Paul Whaley, Janice Tufte, C P Ooi, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk | 2021-10-15 comments: I would eliminate this definition 2021-11-29 comments: ("represent in its entirety" instead? A definition could *partially* represent the outcome of interest, so perhaps we want to make clear that this bias is invoked only for something that is more than "partial"?) "Represent" feels ambiguous, would it be useful to clarify what is meant here? Is it that it includes outcomes in addition to that of interest, and/or excludes outcomes that are of interest? Maybe that doesn't make things clearer. | |||||||
7 | SEVCO:00108 | surrogate marker bias for outcome classification | An outcome classification system bias due to use of a definition that is a proxy for the outcome rather than direct observation of the outcome. | | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2021-12-10 vote 5-0 by Paul Whaley, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde | 10/15/2021 vote 6-1 on "Surrogate marker bias for outcome classification = A nonrepresentative definition for outcome classification due to use of a factor associated with the outcome rather than a direct observation of the outcome." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 2021-11-29 vote 3-3 on "Surrogate marker bias for outcome classification" = "A nonrepresentative definition for outcome classification due to use of a proxy for the outcome rather than a direct observation of the outcome." | 2021-10-15 comments: I would edit the definition: An outcome classification system bias due to use of a definition that is proxy rather than direct observation of the outcome. {{Definition changed as result of this comment}} 2021-11-29 comments: The 10/15 comments stated that the definition should start with "An outcome classification system bias...."; but this definition does not. A little pickier, I might say, "result from use of a definition" rather than "due to". The latter sounds like the bias will always occur; the former, that there is a bias as a result, in this instance. I'm not sure I fully understand this definition. A surrogate would generally be used in place of an outcome that cannot readily be observed in a research setting. I am not sure how this can be a classification error (the surrogate is what the surrogate is). I can, however, see how it could be an error in inference (assuming that because the exposure affects the surrogate, then the exposure also affects the outcome of actual interest). Is this a helpful way of thinking about this, or would it just be over-complicating matters? This suggested definition is more appropriate: An outcome classification system bias due to use of a definition that is proxy rather than direct observation of the outcome | ||||||
6 | SEVCO:00106 | post-hoc definition of outcome | An outcome classification system bias due to defining the outcome after interacting with the study data. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-01-07 vote 9-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde, Harold Lehmann, Joanne Dehnbostel, Mario Tristan | 10/15/2021 vote 6-1 on "Definition not prespecified for outcome classification = An outcome classification system bias due to absence of a predetermined definition." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Definition not prespecified for outcome classification = An outcome classification system bias due to absence of a predetermined definition." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey 2021-12-03 vote 5-2 by Harold Lehmann, Paul Whaley, Janice Tufte, C P Ooi, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk 2021-12-10 vote 2-2 by Paul Whaley, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde | 2021-10-15 comments: I would eliminate this definition as suggest flawed study design 2021-10-25 comment: I would phrase 'Not prespecified definition for outcome classification' 2021-12-03 comments: It feels uninformative to define "not prespecified" as "not predetermined". I wonder if "predetermined" can be clarified - presumably, the issue here is that the outcome is defined post-hoc, after data collection, so that outcome ends up being defined around the data rather than specified in advance of conduct of the research. // Rephrasing to this 'No prespecified definition for outcome classification' may be clearer and easier to understand. 2021-12-10 comments: Consider removing term. As methods are permitted to be revised for a variety of reasons with new definitions but would be described in methods or a revised protocol. If truly post-hoc after a data set is closed then there are different issues for discussion. /// Suggest changing "due to determination of the outcome definition" to "due to outcome being defined" | |||||||
6 | SEVCO:00107 | deprecated: unclear definition for outcome classification | An outcome classification system bias due to use of a definition that is not reported with sufficient clarity and detail such that classification could be replicated. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 10/15/2021 vote 5-2 on "Definition unclear for outcome classification = An outcome classification system bias due to use of a definition that is not reported with sufficient clarity and detail such that classification could be replicated." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper 10/25/21 vote 3-1 on "Definition unclear for outcome classification = An outcome classification system bias due to use of a definition that is not reported with sufficient clarity and detail such that classification could be replicated." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey | 2021-10-15 comments: As above detail, I suggest to change unclear with 'unreliable'; I would eliminate this definition as suggest flawed study design. 2021-10-25 comment: I would phrase 'Unclear definition for outcome classification' | 2021-11-12 This term was deprecated with the concept that one can use a different term for the type of Bias and apply a Rating of Factor Presence term such as Presence or absence of factor unclear. Decision made in COKA ROB Terminology and Tooling WG by Brian Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Mario Tristan | |||||||
5 | SEVCO:00062 | outcome classification process bias | An outcome classification bias resulting from the application of the method used for outcome classification. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan | 7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan | 4-1 vote as of 9/17/2021 regarding Outcome Classification Process Bias (SEVCO:00062) (Classification process bias for outcome determination) [Draft Term] = An outcome misclassification bias resulting from the application of the method used for outcome classification.: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte, | comment: "This might be related to outcome classification bias (child relationship)" | |||||||
5 | SEVCO:00063 | incorporation bias for outcome determination | An outcome classification bias due to the inclusion of the exposure under investigation in the method or process used for outcome classification. | In predictive model research, incorporation bias for outcome determination occurs if the predictor (explanatory variable) is included in the outcome definition. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan | 5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte, | ||||||||
3 | SEVCO:00043 | exposure detection bias | A detection bias due to distortions in how an exposure of interest is determined. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Erfan Shamsoddin, Muhammad Afzal, Kenneth Wilkins | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | ||||||||
4 | SEVCO:00055 | cognitive interpretive bias for exposure determination | An exposure detection bias due to the subjective nature of human interpretation. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The human interpretation can be that of the observer or participant. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins | 2022-02-04 vote 6-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | 2022-02-04 comment: Consistency of phrasing with other definitions ("bias due to distortions in..."), need comment for application. | |||||||
5 | SEVCO:00056 | bias due to lack of masking for exposure determination | A cognitive interpretive bias for exposure determination due to awareness of the participant's status with respect to the outcome of interest or other relevant exposures. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Lack of blinding is not automatically a bias, but if awareness of some data systematically distorts the exposure determination then a 'Bias due to lack of masking for exposure determination' exists. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Paul Whaley | 2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley | 2022-02-04 vote 5-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper 2022-02-11 vote 8-1 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Janice Tufte, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde 2022-02-18 vote 10-3 by Rebecca Baker, Brian S. Alper, Mario Tristan, Paul Whaley, Sunu Alice Cherian, Janice Tufte, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Joanne Dehnbostel,Sumalatha A | 2022-02-04 comment: Is it just awareness of the participant's status with respect solely to the outcome of interest? I could imagine being aware of e.g. socioeconomic status rather than outcome, and this potentially having an influence on exposure assessment. Blinding I think is supposed to be to as many characteristics of the participant as possible. 2022-02-11 comment: This definition assumes that the "Lack of blinding for exposure determination" always associates bias, which may not be the case. For example, if we want to assess the role of sex as a prognostic factor for ICU admission, the participant may not be blinded but this does not cause bias in his/her prognostic factor determination (sex) 2022-02-18 comments: As "lack of blinding" is contributing to but not the bias itself, perhaps rename to Awareness bias for exposure determination This definition assumes that the "Lack of blinding for exposure determination" always associates bias, which may not be the case. For example, if we want to assess the role of sex as a prognostic factor for ICU admission, the participant may not be blinded but this does not cause bias in his/her prognostic factor determination (sex) Not much difference between existing and new terminology 2022-02-25 comment: Suggest removing Lack of blinding during exposure assessment from Alternative term and just list the other 3 Alternative terms. The comment is based on your comment for application description. | ||||||
5 | SEVCO:00238 | observer bias for exposure determination | A cognitive interpretive bias for exposure determination due to subjective interpretations in the process of observing and recording information. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Multiple types of bias can overlap. Observer bias is different than lack of blinding with respect to the outcome. Observer bias is about the influence of the observer's interpretation of what they are observing, whether or not the observer is aware of the participant's outcome. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel | 2022-02-04 vote 6-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | ||||||||
6 | SEVCO:00239 | confirmation bias for exposure determination | An observer bias for exposure determination due to previous opinions or knowledge of a subject’s prior exposures or assessments. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan | 2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | ||||||||
5 | SEVCO:00214 | recall bias for exposure determination | A cognitive interpretive bias for exposure determination due to differences in accuracy or completeness of recall of past events or experiences. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel | 2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley | ||||||||
5 | SEVCO:00215 | apprehension bias for exposure determination | A cognitive interpretive bias for exposure determination due to a study participant's responding or behaving differently when aware of being observed. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Mario Tristan | 2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley | 2022-02-04 comment: What about using Hawthorne Effect for term definition and Apprehension Bias for Alternative term | |||||||
5 | SEVCO:00216 | hypothetical assessment bias for exposure determination | A cognitive interpretive bias for exposure determination due to a difference between an individual’s report of an imagined or hypothetical response from their actual response. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The response may be a behavior or valuation. An individual's response to "What would you do?" or "What would you have done?" (an imagined or hypothetical response) may be different than the individual's response to "What did you do?" or observation of the individual's behavior (a reporting of an actual response). This bias is relevant for preference studies. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan | 2022-02-11 vote 9-0 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Janice Tufte | 2022-02-04 vote 4-1 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley | 2022-02-04 comments: Is there a spelling error in Subjunctivity? A minor issue - would the sentence "The response may be a behavior or valuation." be better placed in the comment for application (otherwise, would vote yes) 2022-02-11 comment: I would add a comment for application for the word hypothetical | ||||||
5 | SEVCO:00217 | mimicry bias for exposure determination | A cognitive interpretive bias for exposure determination due to a misinterpretation of observations that resemble the exposure. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Other terms (Exposure Ascertainment Bias, Exposure Measurement Bias, Exposure Classification Bias) may be used to describe the process in Exposure Detection in which the bias occurs. The term 'Mimicry bias for exposure determination' is used to represent the type of cognitive interpretive bias occurring in this process. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins | 2022-02-18 vote 11-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A | 2022-02-18 comments: Suggest insert Alternative term: Duplicate I'm not quite sure this is clear enough, though I don't have any concrete suggestions for improvement. It might be that I am not familiar enough with the issue in question to interpret the definition. Reading around this a bit, it resembles a misclassification type bias (for a given set of observations, the observer takes X to be cause when the true cause is Y). Given our model for bias (see our flow diagram), might it be better defined in those terms? -- RESOLVED IN GROUP DISCUSSION | |||||||
5 | SEVCO:00218 | unacceptability bias for exposure determination | A cognitive interpretive bias for exposure determination due to distortions in response, response values, or recording of responses resulting from perception of the social unacceptability of an exposure. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Ken Wilkins, Lisa Schilling | 2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley | ||||||||
4 | SEVCO:00219 | exposure ascertainment bias | An exposure detection bias due to distortions in how the data are collected. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte | 2022-02-04 comment: Suggest modify Alternative term to Data Collection Bias | |||||||
5 | SEVCO:00220 | nonrepresentative observation period for exposure of interest | An exposure ascertainment bias due to differences in the time period used for observation of the exposure and the intended time period for the exposure of interest. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-11 vote 9-0 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Janice Tufte | 2022-02-04 vote 4-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte | 2022-02-04 comment: I think this is about right but it could perhaps be tidied up a bit, e.g. using "time period" in both instances of "period" | ||||||
5 | SEVCO:00221 | nonrepresentative context for exposure ascertainment | An exposure ascertainment bias due to differences in the context in which the exposure is observed and the intended context for the exposure of interest. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. This term is used when the context used for exposure ascertainment is incorrect, insensitive, or nonspecific. If the context (whether representative or not) is applied inconsistently, then use the term "Inconsistency in exposure ascertainment" | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley | 2022-02-18 vote 10-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A | 2022-02-18 comments: Comment for application. I would delete sentence: If the method (whether dependable or undependable) is applied inconsistently then use the term inconsistency in application of exposure of ascertainment. I'm not sure if "undependable" is the word we really want to use. Also, (1) no method for exposure ascertainment will give a strictly "correct" result, (2) inconsistency can result in random error and imprecision, not necessarily bias, (3) we are presumably worried about consistency over- or under-reading of a measurement method compared to some (possibly hypothetical) gold standard? Overall, it feels like there is more to discuss here. 2022-02-25 comment: I would delete or edit the current Alternative term and replace with insensitive, or nonspecific context for exposure ascertainment. | ||||||
5 | SEVCO:00222 | inconsistency in exposure ascertainment | An exposure ascertainment bias due to differences within or between groups in how the data are collected. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. This term is used when the context (whether representative or not) is applied inconsistently. If the context used for exposure ascertainment is incorrect, insensitive, or nonspecific, then use the term "Nonrepresentative context for exposure ascertainment" | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley | 2022-02-18 vote 8-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A | 2022-02-18 comments: I would add comment for application from previous term. If the method (whether dependable or undependable) is applied inconsistently then use the term inconsistency in application of exposure of ascertainment. I don't really understand the term "Inconsistency in application of exposure ascertainment" - I am not clear what the nouns and verbs actually are here, nor what they refer to. I have been involved in the discussion of the underlying bias model and I still don't grasp the meaning here. 2022-02-25 comment: I would remove Alternative term. | ||||||
4 | SEVCO:00223 | exposure measurement bias | An exposure detection bias due to distortions in how the observed exposures are measured. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. If one is addressing a bias in the instruments or processes used to measure the observed exposure, use Exposure Measurement Bias. If one is addressing how the measured exposure is categorized, use Exposure Classification Bias. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte | ||||||||
5 | SEVCO:00224 | inappropriate method for exposure measurement | An exposure measurement bias due to use of an incorrect method or protocol. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley | 2022-02-25 vote 11-2 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew | 2022-02-25 comments: I would list measurement methods as examples under comment for application, such as pharma, survey... I am not sure of the difference between this bias and "Undependable method for exposure ascertainment" bias. It also seems to me that "inappropriate" is a subjective term so I am not sure how it should be applied. [Side note: in the ballot, it might be useful to have terms arranged as they are in the SEVCO hierarchy, as this might be causing some of the confusion I am experiencing.] The previous term convey almost similar meaning | ||||||
5 | SEVCO:00225 | insensitive measure bias for exposure determination | An exposure measurement bias due to use of a method that does not reliably detect the exposure when the exposure is present. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Use of an inadequately sensitive exposure measure is likely to result in false negative findings. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-25 vote 13-0 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew | 2022-02-18 comments: Suggest use term as Sensitivity Measure bias for exposure determination and insensitive measure bias for exposure determination for alternate term. "Sensitivity" is not, in my experience, viewed exclusively in terms of measurement. Some experimental models cannot show the exposure (or outcome) because they are incapable of it, however it is measured in situ. For example, if the exposure was measured via presence of a metabolite, but the participant was not able to produce the metabolite, then the experiment would be insensitive regardless of measurement method. I am not sure this affects us here, but does it suggest a need for us to handle sensitivity in a comprehensive fashion? (Perhaps also specificity?) As a side note, defining sensitivity well could be important for progress on risk of bias assessment methods used by EPA, who currently have assessment of "sensitivity" as a separate issue entirely outside of risk of bias assessment. NEGATIVE VOTE CHANGED TO POSITIVE DURING DISCUSSION 2022-02-25 | |||||||
5 | SEVCO:00226 | nonspecific measure bias for exposure determination | An exposure measurement bias due to use of a method that falsely detects the exposure when the exposure is absent. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Use of an inadequately specific exposure measure is likely to result in false positive findings. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-04 vote 5-0 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde | 2022-02-04 comment: Suggest use Specificity measure bias for exposure determination and non-specific measure bias for exposure determination for Alternative term. | |||||||
5 | SEVCO:00228 | inappropriate application of method for exposure measurement | An exposure measurement bias due to inappropriate application of the method or protocol. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. An inappropriate application of the method or protocol suggests error is introduced by the process of measurement, as distinct from the method or protocol used for measurement (which would be an Inappropriate method for exposure measurement). | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley | 2022-02-25 vote 12-1 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew | 2022-02-11 comments: Add alternate term: Incorrect application of exposure measurement bias. I think this is OK, but the term should be rewritten so it is easier to read and understand what it means (the syntax is awkward, as it could be read as one adjective and three nouns) 2022-03-11 comment: In documenting this, and the "inappropriate method for exposure measurement", I think it would be helpful to document what we mean by e.g. "method" vs. "application of method". I feel these are meta-terms like "study design feature" that are part of the scaffolding of SEVCO, but not part of SEVCO itself. | Noted for Outcome Detection Bias: As of 2021-11-05 this term is not being prepared for vote. The current ROB tools do not distinguish the inappropriate conduct (used in QUADAS-2) from inadequate method (used in most other ROB tools) in the same tool, so the demand for this term is uncertain and thus not applied for version 1 of the Code System. | |||||
5 | SEVCO:00229 | inconsistency in exposure measurement | An exposure measurement bias due to differences within groups in how the observed exposures are measured. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. "How the observed exposures are measured" may refer to the methods applied for measurement or the application of those methods. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-11 vote 9-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Paul Whaley, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
6 | SEVCO:00247 | inconsistency in instruments used for exposure measurement | An exposure measurement bias due to differences within groups in the instruments for measurement. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement", which may include protocols, techniques, and variations in context. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
6 | SEVCO:00248 | inconsistency in processes used for exposure measurement | An exposure measurement bias due to differences within groups in the processes by which the instruments are used for measurement. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
5 | SEVCO:00241 | imbalance in exposure measurement | An exposure measurement bias due to differences between groups in how the observed exposures are measured. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. "How the observed exposures are measured" may refer to the methods applied for data measurement or the application of those methods. | Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Robin Ann Yurk, Janice Tufte, Harold Lehmann | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
6 | SEVCO:00249 | imbalance in instruments used for exposure measurement | An exposure measurement bias due to differences between groups in the instruments used for measurement. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement", which may include protocols, techniques, and variations in context. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
6 | SEVCO:00250 | imbalance in processes used for exposure measurement | An exposure measurement bias due to differences between groups in the processes by which the instruments are used for measurement. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies. | Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
4 | SEVCO:00230 | exposure classification bias | An exposure detection bias due to distortions in how the observed exposures are classified. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. If one is addressing a bias in the instruments or processes used to measure the observed exposure, use Exposure Measurement Bias. If one is addressing how the measured exposure is categorized, use Exposure Classification Bias. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer | ||||||||
5 | SEVCO:00231 | exposure definition bias | An exposure classification bias resulting from the definition or threshold used for exposure classification. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. An exposure definition bias suggests an internal validity problem in which the definition or threshold used for exposure classification does not represent the exposure of interest. If considering an external validity problem, the "Wrong question bias" (term not yet defined) may be used. An exposure definition bias is present when there are differences between the exposure of interest and the definition or threshold used for exposure classification. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley | 2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but comment discussion led to new term] | 2022-02-11 comments: Suggest Alternative term: threshold bias for exposure determination. Suggest remove sentence on external validity problem.... In the comments, "term not yet identified", should be flagged for later replacement. | ||||||
6 | SEVCO:00232 | nonrepresentative definition for exposure classification | An exposure definition bias due to a mismatch between the exposure of interest and the definition or threshold used for exposure measurement. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-02-18 vote 6-0 by Joanne Dehnbostel, Alejandro Piscoya, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term] | 2022-02-11 comment: Should there be a hyphen between "classification" and "system"? (Is it a system(s) bias or a classification-system bias?) (I think this question applies to several definitions) | ||||||
7 | SEVCO:00233 | surrogate marker bias for exposure classification | An exposure definition bias due to use of a definition that is a proxy for the exposure rather than direct observation of the exposure. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term] | 2022-02-11 comment: Suggest add Alternative term: proxy bias for exposure classification system. | ||||||
6 | SEVCO:00234 | post-hoc definition of exposure | An exposure definition bias due to definition of the exposure after interacting with the study data. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley | 2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term] | |||||||
5 | SEVCO:00236 | classification process bias for exposure determination | An exposure classification bias resulting from the application of the method used for exposure classification. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. A classification process bias for exposure determination suggests error is introduced by the process of classification, as distinct from the definition or threshold used (which would be an Exposure Definition Bias). | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley | 2022-02-18 vote 6-0 by Joanne Dehnbostel, Sumalatha A, Robin Ann Yurk, Janice Tufte, Harold Lehmann, Paul Whaley | 2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but term changed to parallel changes to Exposure Definition Bias] | 2022-02-11 comments: I would provide an example such as survey severity classification example of a method. (Inconsistent capitalization) | ||||||
5 | SEVCO:00237 | incorporation bias for exposure determination | An exposure classification bias due to the inclusion of the outcome or other relevant exposures under investigation in the method or process used for exposure classification. | The exposure of interest can be an intervention or a prognostic factor, depending on the research context. If the statistical analysis assumes independence of two variables, but one variable incorporates the other variable in its definition, the assumption will be false and the result will be distorted. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Paul Whaley | 2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley | 2022-02-25 vote 11-2 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew | 2022-02-18 comments: Needs an Alternative term or new term definition. I.e. Inclusion Bias for exposure definition for the term. Alternative term; eligibility bias for exposure determination Definitely needs a comment for application, I can't picture what this means! | ||||||
3 | SEVCO:00044 | confounder detection bias | A detection bias due to distortions in how the data for a potential confounder are determined. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | |||||||||
3 | SEVCO:00045 | detection bias related to the reference standard | A detection bias due to distortions in how the reference standard result is determined. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | |||||||||
3 | SEVCO:00046 | detection bias related to the index test | A detection bias due to distortions in how the index test result is determined. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal | 5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte, | 8/27/2021 vote 8-1 on "Detection Bias related to the index test (Bias for index text result determination) = A detection bias due to distortions in how the index text result is determined." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper | 2021-08-27 comment: I think the word "text" should be "test" in the Alternative term and definition. Please consider broadening this term and definition to include distortions in how the index event is determined | |||||||
3 | SEVCO:00383 | data entry bias | A detection bias due to differences between measured values and recorded values. | Data Entry Bias may include distorted results due to errors in transcription, translation, or transposition between the measured value and the recorded value, or between a recorded value and a subsequent recording of the value. | Brian S. Alper, Harold Lehmann, Janice Tufte, Muhammad Afzal, Kenneth Wilkins | 2022-08-26 vote 7-0 by nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey | ||||||||
3 | SEVCO:00389 | inappropriate time interval between predictor assessment and outcome determination | A detection bias involving the time interval between the observation of the predictor and outcome, where the interval used by the study differs from the interval assumed by the predictive model. | Nonrepresentative observation period for outcome of interest is defined as an outcome ascertainment bias due to differences in the period used for observation of the outcome and the period for the outcome of interest. Nonrepresentative observation period for exposure of interest is defined as an exposure ascertainment bias due to differences in the time period used for observation of the exposure and the intended time period for the exposure of interest. In the context of predictive modeling, the time interval between the exposure (predictor) and the outcome should be representative of the time interval of interest. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2023-10-20 vote 5-0 by Muhammad Afzal, Eric Harvey, Harold Lehmann, Louis Leff, Joanne Dehnbostel | 2023-10-06 vote 3-1 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley | 2023-10-06 comment: Two problems: (1) I am not sure how the definition equates to the term - in the term, it is about inappropriate time interval, but in the definition it is about the time interval not being that which is intended and representative of application of model. (2) I don't understand what is meant by the phrase "the intended time interval between the predictor and outcome that is representative of the application of the predictive model" - there are too many concepts all at once here, I think? | ||||||
2 | SEVCO:00021 | analysis bias | A bias related to the analytic process applied to the data. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra, Muhammad Afzal, Kenneth Wilkins | 6/6 as of 8/15/2021: Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal, Eric Harvey | |||||||||
3 | SEVCO:00022 | bias related to selection of the analysis | An analysis bias due to inappropriate choice of analysis methods before the analysis is applied. | An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra, Muhammad Afzal, Kenneth Wilkins | 6/6 as of 8/15/2021: Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal, Eric Harvey | ROBIS 4.2 Were all pre-defined analyses reported or departures explained? |||||||
4 | SEVCO:00376 | bias related to selection of the data for analysis | An analysis bias due to inappropriate choice of data included in the analysis before the analysis is applied. | An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel | 2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan, Paola Rosati | ||||||||
5 | SEVCO:00213 | bias due to post-baseline factors influencing selection of the data for analysis | A bias related to selection of the data analysis based on participant characteristics observed after study enrollment. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel | 2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan, Paola Rosati | ROBINS-I 2.1. Was selection of participants into the study (or into the analysis) based on participant characteristics observed after the start of intervention? | ||||||||
5 | SEVCO:00312 | missing or inadequate intention-to-treat analysis | A bias related to selection of the data analysis in which data are not completely analyzed according to the original assignment to comparison groups in an interventional study. | An intention-to-treat analysis may be defined as analysis of all randomized subjects according to their assigned intervention rather than according to the intervention actually received. There is considerable variation in reported studies with respect to the use of the terms 'intention-to-treat analysis' and 'modified intention-to-treat analysis', but if the risk of bias assessment suggests an insufficient accounting for all participants as intended, then one may report 'Inadequate intention-to-treat analysis'. In non-randomized studies, this term may be used to denote missing or inadequate analysis according to the intended treatment, e.g. prescribed medication vs. taken medication. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel | 2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde | 2022-05-13 vote 4-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati 2022-05-20 vote 9-1 by Joanne Dehnbostel, nelle.stocquart@kce.fgov.be, Eric M Harvey, Jesus Lopez-Alcalde, Paul Whaley, Robin Ann Yurk, Harold Lehmann, raradhikaag@gmail.com, Mario Tristan, Paola Rosati 2022-05-27 vote 4-1 by Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann | 2022-05-13 comment: Instead of defining "Inadequate intention-to-treat analysis" why not defining waht "intention-to-treat analysis" is? 2022-05-20 comment: Suggest change term name to Intention to Treat Analysis and remove word inadequate from the term as this term includes the limitation of the analysis in the definition. 2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules. Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}} | ||||||
5 | SEVCO:00313 | missing or inadequate per-protocol analysis | A bias related to selection of the data analysis in which data are not completely analyzed according to the study protocol. | A per-protocol analysis may be defined as analysis of participants according to adherence to the assigned intervention (the 'treatment protocol') and/or according to adherence to the data collection protocol. Adherence may refer to adherence by the study participants or study personnel. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Paul Whaley, Harold Lehmann, Muhammad Afzal | 2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde | 2022-05-13 vote 4-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati 2022-05-20 vote 7-3 by Joanne Dehnbostel, nelle.stocquart@kce.fgov.be, Eric M Harvey, Jesus Lopez-Alcalde, Paul Whaley, Robin Ann Yurk, Harold Lehmann, raradhikaag@gmail.com, Mario Tristan, Paola Rosati 2022-05-27 vote 4-1 by Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann | 2022-05-20 comments: I do not fully agree with this definition. I propose following the Cochrane Handbook: Naïve ‘per-protocol’ analysis: analysis restricted to individuals who adhered to their assigned interventions. Moreover, there is another analysis that is often biased: ‘As-treated’ analysis: analysis in which participants are analysed according to the intervention they actually received, even if their randomized allocation was to a different treatment group I would present these as different analyses (not as synonims) https://training.cochrane.org/handbook/current/chapter-08 ------ I think I see what the definition is saying but it is rather hard to parse. re: "Inadequate per-protocol analysis" = "A bias related to selection of the data analysis in which data are not completely analyzed according to the assignment to comparison groups according to the interventions received." Suggest edit term so it reads per protocol analysis and remove the word inadequate. This type of analysis includes the bias in the term already 2022-05-13 comment: Instead of defining "Inadequate per-protocol analysis" why not defining what "per-protocol anlysis" is? 2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules. Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}} | ||||||
5 | SEVCO:00381 | missing or inadequate as-treated analysis | A bias related to selection of the data analysis in which data are not completely analyzed according to the interventions actually received. | An as-treated analysis may be defined as analysis of subjects according to the intervention actually received rather than their assigned intervention. | Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel | 2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde | 2022-05-20 comments (from precursor term of Inadequate per-protocol analysis): I do not fully agree with this definition. I propose following the Cochrane Handbook: Naïve ‘per-protocol’ analysis: analysis restricted to individuals who adhered to their assigned interventions. Moreover, there is another analysis that is often biased: ‘As-treated’ analysis: analysis in which participants are analysed according to the intervention they actually received, even if their randomized allocation was to a different treatment group I would present these as different analyses (not as synonims) https://training.cochrane.org/handbook/current/chapter-08 ------ I think I see what the definition is saying but it is rather hard to parse. re: "Inadequate per-protocol analysis" = "A bias related to selection of the data analysis in which data are not completely analyzed according to the assignment to comparison groups according to the interventions received." Suggest edit term so it reads per protocol analysis and remove the word inadequate. This type of analysis includes the bias in the term already 2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules. Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}} | |||||||
4 | SEVCO:00377 | bias related to selection of the variables for analysis | An analysis bias due to inappropriate choice of variables included in the analysis before the analysis is applied. | An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal | 2022-05-13 vote 5-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati | ||||||||
5 | SEVCO:00292 | bias related to selection of the variables for adjustment for confounding | An analysis bias due to inappropriate choice of the variables for adjustment for confounding before the analysis is applied. | An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal | 2022-05-13 vote 5-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati | This term was determined to also match 'Post-intervention confounding different (draft) Code: SEVCO:00283' which was originally derived from the trigger question from ROBINS-I: 1.6. Did the authors control for any post-intervention variables that could have been affected by the intervention? Detailed analysis found this to be more about improper control of 'confounding variables' that were not truly confounding variables. | |||||||
6 | SEVCO:00299 | bias controlling for time-varying confounding | A bias related to selection of the variables for adjustment for confounding in which the confounding is time-dependent. | An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Muhammad Afzal | 2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte | ||||||||
6 | SEVCO:00301 | inadequate adherence effect analysis | A bias related to selection of the variables for adjustment for confounding by adherence. | An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal | 2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte | ||||||||
5 | SEVCO:00302 | predictors included in outcome definition | An analysis bias due to inappropriate choice of the variables for estimation of association in which one variable is incorporated in the definition of the other variable. | Predictors are also called covariates, risk indicators, prognostic factors, determinants, index test results, or independent variables (https://www.acpjournals.org/doi/10.7326/M18-1377). If a predictor in the model forms part of the definition or assessment of the outcome that the model predicts, the association between predictor and outcome will likely be overestimated, and estimates of model performance will be optimistic; in diagnostic research, this problem is generally called incorporation bias. (https://www.acpjournals.org/doi/10.7326/M18-1377) When this type of analysis bias is applied to predictive model analyses (in which the predictor is the exposure of interest), this type of bias is equivalent to "Incorporation bias for outcome determination" [SEVCO:00063] | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | 2022-07-29 comment: should "incorporation bias" be added as 'Alternative term' ? | |||||||
5 | SEVCO:00319 | bias related to selection of predictors based on univariable analysis | An analysis bias due to inappropriate choice of the predictor variables for estimation of association in which predictors are selected based on statistically significant univariable associations (without adjustment for other predictors). | Predictors are also called covariates, risk indicators, prognostic factors, determinants, index test results, or independent variables (https://www.acpjournals.org/doi/10.7326/M18-1377). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | PROBAST (https://www.acpjournals.org/doi/10.7326/M18-1377).: 4.5 Was selection of predictors based on univariable analysis avoided? (Model development studies only) A data set will often have many features that could be used as candidate predictors, and in many studies researchers want to reduce the number of predictors during model development to produce a simpler model. In a univariable analysis, individual predictors are tested for their association with the outcome. Researchers often select the predictors with a statistically significant univariable association (for example, P < 0.05) for inclusion in the development of a final prediction model. This method can lead to incorrect predictor selection because predictors are chosen on the basis of their statistical significance as a single predictor rather than in context with other predictors (49, 50, 191). Bias occurs when univariable modeling results in omission of variables from the model, because some predictors are important only after adjustment for other predictors, known from previous research to be important, did not reach statistical significance in the particular development set (for example, due to small sample size). Also, predictors may be selected on the basis of a spurious (accidental) association with the outcome in the development set. A better approach to decide on omitting, combining, or including candidate predictors in multivariable modeling is to use nonstatistical methods—that is, methods without any statistical univariable pretesting of the associations between candidate predictors and outcome. Better methods include those based on existing knowledge of previously established predictors in combination with the reliability, consistency, applicability, availability, and costs of predictor measurement relevant to the targeted setting. Well-established predictors and those with clinical credibility should be included and retained in a prediction model regardless of any statistical significance (49, 50, 192). Alternatively, some statistical methods that are not based on prior statistical tests between predictor and outcome can be used to reduce the number of modeled predictors (for example, principal components analysis). | |||||||
4 | SEVCO:00378 | bias related to selection of the analytic framework | An analysis bias due to inappropriate choice of the analytic framework before the analysis is applied. | An analytic framework is the model, scaffolding, or organizational representation of concepts used in analyzing the data. The concepts included in an analytic framework may involve data, variables, formulas, assumptions, and adjustments. | Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal | 2022-05-27 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann, Joanne Dehnbostel | 2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte [[but then the term changed in webmeeting 2022-05-13]] 2022-05-20 vote 4-2 by Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann | 2022-05-20 comments: seems to be entirely too much overlap with the "inappropriate analytic framework" term I like this term and definition but I am not sure it is adequately differentiated from "inappropriate analytical framework". I think the term needs changing in some way. | ||||||
5 | SEVCO:00297 | inappropriate statistical model | A bias related to selection of the analytic framework in which the analytic model does not match the dataset characteristics or does not match the intention of the analysis. | A bias related to selection of the analytic framework is defined as an analysis bias due to inappropriate choice of the analytic framework before the analysis is applied. An inappropriate statistical model may include one in which there is a mismatch between the realities of the data and the assumptions required for the analytic model. Complexities in the data may include univariate concerns (e.g. skewness or outliers) and multivariate concerns (e.g. curvilinearity, co-linearity, or latent associations between variables). | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel | 2022-05-27 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann, Joanne Dehnbostel | 2022-05-20 vote 5-1 by Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann | 2022-05-20 comment: I like this term and definition but I am not sure it is adequately differentiated from "Bias related to selection of the analytic framework". I think the term needs changing in some way. 2022-09-30 Steering Group change to Comment to application: comment added to this term instead of creating a new term for 'Inappropriate handling of complexities in the data' | ||||||
6 | SEVCO:00375 | inappropriate modeling of censoring | An inappropriate statistical model due to inappropriate accounting for ranges of potential observation in which data observation is not possible. | An inappropriate statistical model is a bias related to selection of the analytic framework in which the analytic model does not match the dataset characteristics or does not match the intention of the analysis. The "ranges of potential observation" may include periods of time (temporal ranges within which observation may occur), or ranges of detection with a measurement instrument (ranges of values that could be observed). The concept of ranges of potential observation in which data observation is "not possible" may include impossibility due to physical realities (such as timing after competing risks or measurement instruments with limited ranges of detection) or impossibility due to administrative decisions (such as the observation period defined by the study protocol). | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2022-10-20 vote 7-0 by Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Mario Tristan, Paul Whaley, Janice Tufte, Eric Harvey | PROBAST 4.6 Were complexities in the data (e.g. censoring, competing risks, sampling of controls) accounted for appropriately? | |||||||
5 | SEVCO:00316 | bias due to selection of the statistical significance threshold | An analysis bias resulting from selection of an inappropriate threshold for statistical significance. | The statistical significance threshold is part of the analytic framework. A bias related to selection of the analytic framework is defined as an analysis bias due to inappropriate choice of the analytic framework before the analysis is applied. In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Paul Whaley | 2022-06-24 vote 5-0 by Mario Tristan, Harold Lehmann, Eric Harvey, Janice Tufte, Louis Leff | 2022-06-10 vote 5-1 by Brian S. Alper, Robin Ann Yurk, Paola Rosati, Mario Tristan, Harold Lehmann, Eric M Harvey 2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati | 2022-06-10 comment: Consider editing the term definition to just Statistical significance threshold. For the Alternative term remove word bias. For the comment for application remove the first sentence about bias. 2022-06-17 comments: I think I get it, but it is a bit tortured and I wonder if a normal user would interpret it correctly or understand it? I am not sure we can rephrase the concept name making it more compact like "Statistical significance threshold selection bias" | ||||||
6 | SEVCO:00317 | bias related to multiple comparison adjustment | An analysis bias resulting from selection of a threshold for statistical significance which does not appropriately account for the effect of multiple comparisons on the statistical probability related to the result. | This bias may cause inappropriate rejection of the null hypothesis due to an unmodified threshold for significance in the face of multiple comparisons. This bias may also occur when adjustment for multiple comparisons is inappropriately applied and leads to failure to reject the null hypothesis. A bias due to selection of the statistical significance threshold is defined as an analysis bias resulting from selection of an inappropriate threshold for statistical significance. In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Paul Whaley | 2022-06-24 vote 5-0 by Muhammad Afzal, Mario Tristan, Harold Lehmann, Eric Harvey, Louis Leff | 2022-06-10 vote 3-2 by Brian S. Alper, Robin Ann Yurk, Paola Rosati, Mario Tristan, Harold Lehmann 2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati | 2022-06-10 comments: The measure does not have a statistical probability, the finding or result has a statistical probability. Change definition to "A statistical significance threshold selection bias in which the threshold for statistical significance does not account for the effect of multiple comparisons on the statistical probability related to the result."Is this a bias or just an incomplete analysis due to data requirements needed to compute the multiple comparison adjustment. 2022-06-17 comment: Looking at the significance threshold bias terms, the other two refer to selection of the analytic framework, but this one does not. Is there a reason for that? | ||||||
6 | SEVCO:00382 | mismatch of significance threshold and purpose | An analysis bias resulting from selection of a threshold for statistical significance which is inappropriate due to a mismatch between (1) how the statistical probability related to the result is determined and (2) the purpose for categorizing the result as statistically significant. | A threshold used for variable selection in regression analysis is often more liberal than a threshold used in hypothesis testing. Similarly a situation regarding safety may tolerate a higher chance of false positive findings so significance threshold may be higher. Some factors to consider include sample size, power of the test, and expected losses from Type I and Type II errors. In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Paul Whaley | 2022-06-24 vote 5-0 by Muhammad Afzal, Mario Tristan, Harold Lehmann, Eric Harvey, Janice Tufte | 2022-06-10 vote 2-2 by Brian S. Alper, Robin Ann Yurk, Mario Tristan, Harold Lehmann 2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati | 2022-06-10 comments: A mismatch can occur even if the purpose was taken into account. As the term name "Mismatch of significance threshold and purpose" is a match for the definition of the parent term (Statistical significance threshold selection bias) there is a question of whether this term is needed. Receiver operator curves are traditionally a statistic used to represent the continuum of cut point for the threshold value. The Sensitivity and Specificity can be calculated to evaluate the validity of the threshold cut point. 2022-06-17 comment: Add "Bias related to..." at beginning for consistency with others. What work is "selection of the analytic framework" doing in this definition? | How to Choose the Level of Significance: A Pedagogical Note -- The level of significance should be chosen with careful consideration of the key factors such as the sample size, power of the test, and expected losses from Type I and II errors. While the conventional levels may still serve as practical benchmarks, they should not be adopted mindlessly and mechanically for every application. (https://mpra.ub.uni-muenchen.de/66373/1/MPRA_paper_66373.pdf) | |||||
5 | SEVCO:00304 | immortal time bias | A bias related to selection of the analytic framework in which an outcome variable includes an observation period during which the outcome could not have occurred. | Consider a study in which a sample is followed from 2000 to 2010. Mortality during this time period is the outcome, and receipt of Superdrug is the exposure. --If 20 people received Superdrug in 2009 and 5 of them died in the subsequent year, the mortality with Superdrug is 25%. --If 20 people never received Superdrug and 1 of them died each year, then by 2010 the mortality without Superdrug is 50%. Interpreting this result as Superdrug having a 50% relative risk reduction for mortality would be biased (distorted) by not accounting for the 9 years of time (immortal time) that the Superdrug recipients must have survived to be able to receive Superdrug in 2009. If the outcome variable were defined as mortality 2009-2010, there would be no bias and the result would be a 150% relative risk increase. If the outcome variable were defined as mortality 2000-2010, there is an immortal time bias (the Superdrug recipients could not have died before receiving Superdrug). | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte | 2022-07-22 vote 7-0 by Mario Tristan, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey | 2022-07-15 vote 5-1 by Mario Tristan, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Janice Tufte | 2022-07-15 comment: Why the need to specify "in a larger sample" in the second sentence, since there is no assumption about size of the sample in the first assertion? | Catalog of bias: A distortion that modifies an association between an exposure and an outcome, caused when a cohort study is designed so that follow-up includes a period of time where participants in the exposed group cannot experience the outcome and are essentially 'immortal'. in https://academic.oup.com/aje/article/167/4/492/233064 : Immortal time refers to a span of time in the observation or follow-up period of a cohort during which the outcome under study could not have occurred (13, 14). It usually occurs with the passing of time before a subject initiates a given exposure. While a subject is not truly immortal during this time span, the subject necessarily had to remain event free until start of exposure to be classified as exposed. An incorrect consideration of this unexposed time period in the design or analysis will lead to immortal time bias. in JAMA https://jamanetwork.com/journals/jama/article-abstract/2776315 Such studies may be subject to immortal time bias, meaning that, during the period of observation, there is some interval during which the outcome event cannot occur in https://watermark.silverchair.com/dyab157.pdf In particular, incorrect handling of follow-up times in terms of exposure status in the analysis of such studies may introduce immortal time bias (ITB) in favour of the exposed group.2,3 Immortal time refers to a period of time in which, by design, participants in the exposed group cannot experience the outcome. This often happens in pharmacoepidemiologic studies in which treatment is prescribed at variable times (with delay) after disease diagnosis. The bias occurs when the exposed group is considered to be exposed during their entire follow-up time (even during periods in which they are theoretically unexposed) or their unexposed follow-up times are discarded.2,3 | |||||
5 | SEVCO:00293 | inadequate sample size | A bias related to selection of the analytic framework in which the sample size invalidates the assumptions of the analytic framework. | An example of 'Inadequate sample size' is a finding of no effect with inadequate power to detect an effect. Another example of 'Inadequate sample size' is use of a parametric analysis with low numbers, which invalidates the assumptions for use of a parametric analysis. | Brian S. Alper, Harold Lehmann, Janice Tufte, Joanne Dehnbostel, Mario Tristan, Khalid Shahin | 2022-07-22 vote 7-0 by Mario Tristan, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey | ||||||||
3 | SEVCO:00294 | bias related to execution of the analysis | An analysis bias due to inappropriate decisions pertaining to preparation of data for analysis and/or conduct of the analysis. | "Bias related to selection of the analysis" is used when the wrong analysis is done (the analysis is planned wrongly). "Bias in processing of data" is used when the analysis is done wrong (the analysis is executed wrongly). | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Paul Whaley, Yuan Gao | 2022-11-04 vote 5-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann, Eric Harvey | ||||||||
4 | SEVCO:00305 | incomplete analysis | An analysis bias due to absence of a component of the analytic process. | Missing components may include addressing missing data, addressing potential confounders, checking model assumptions, or robustness checks for model misspecification. | Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Khalid Shahin | 2022-09-09 vote 6-0 by Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Khalid Shahin, Janice Tufte, Eric Harvey | 2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey 2022-08-19 vote 5-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra 2022-08-25 vote 8-1 by nisha mathew, Jesus Lopez-Alcalde, Cauê Monaco, Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra | 2022-08-12 comment: Ambiguous as to whether the data is incomplete or the analytic process incomplete. Also seems to be ambiguous as to whether the analysis is of a selected subset of the existing data (thus relating to selection bias?), or of data that is not representative of the totality of theoretically available data (thus relating to external validity?). 2022-08-19 comment: tension between bias and process. Shouldn't it be "incomplete analysis related bias"? omission seems to indicate a wilful act. "absence" may be more neutral when considering a 'canonical / state of the art / standardised ' protocol. "An analysis bias due to absence of a component deemed necessary in a state-of- art (possibly regulator-approved ) analytic process." | ||||||
4 | SEVCO:00306 | inappropriate handling of uninterpretable data | An analysis bias due to omission of uninterpretable values, or their replacement with inappropriate values. | Inappropriate values may include use of non-representative imputation treating uninterpretable data like missing data. In evaluation of diagnostic tests, omission of or inappropriate classification of test results would be Inappropriate handling of uninterpretable data. | Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal | 2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew | 2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey | 2022-08-12 comment: I'm not sure I would understand the definition if I had not read the term, suggest rephrasing - "omission of accommodation for" is perhaps the problem part. | ||||||
4 | SEVCO:00307 | inappropriate handling of missing data | An analysis bias due to use of non-representative values in place of missing data. | Handling of missing data may address data missing at levels of single observations or groupings by encounter, participant, site, or subpopulation. | Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins | 2022-08-12 vote 5-0 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey | ||||||||
4 | SEVCO:00308 | inappropriate handling of variables | An analysis bias due to processing a variable in an incorrect role or with an incorrect datatype. | Typical variable roles are population, exposure, confounder, and outcome. A variable datatype may be numerical (continuous or discrete) or categorical (ordinal or nominal). | Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins | 2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew | Consider types to include Inappropriate handling of confounders, and Inappropriate handling of measurement error | |||||||
4 | SEVCO:00300 | bias in adjustment for selection bias | An analysis bias due to inappropriate application of adjustment techniques for correction of bias in the selection of participants for analysis. | Bias in the selection of participants for analysis could occur due to Participant Selection Bias (SEVCO:00003) or participant-level Bias related to selection of the data for analysis (SEVCO:00376). "It is in principle possible to correct for selection biases, for example by using inverse probability weights to create a pseudo-population in which the selection bias has been removed, or by modelling the distributions of the missing participants or follow up times and outcome events and including them using missing data methodology." (Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354; PMCID: PMC5062054. Supplementary Table A.) | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal | 2022-09-30 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Janice Tufte, Eric Harve, Morufu Olalekan Raimi | "It is in principle possible to correct for selection biases, for example by using inverse probability weights to create a pseudo-population in which the selection bias has been removed, or by modelling the distributions of the missing participants or follow up times and outcome events and including them using missing data methodology." (Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354; PMCID: PMC5062054. Supplementary Table A.) | |||||||
4 | SEVCO:00309 | data transition bias | An analysis bias due to differences between recorded data and data used for analysis. | Data Transition Bias may include distorted results due to errors in transcription, translation, erroneous mapping, or transposition between the recorded data (values, labels, and other metadata) and the data used for analysis. Data Transition Bias may occur due to any problem encountered during the Extraction, Transformation, and Loading (ETL) process in data exchange. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 2022-11-04 vote 5-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann, Eric Harvey | ||||||||
4 | SEVCO:00311 | inappropriate handling of missing confounder data | An analysis bias due to use of non-representative values in place of missing data for variables in the role of confounder. | Handling of missing confounder data may address data missing at levels of single observations or groupings by encounter, participant, site, or subpopulation. Inappropriate handling of missing confounder data can result in misleading adjusted analyses. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Kenneth Wilkins | 2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew | ||||||||
4 | SEVCO:00298 | computational implementation bias | An analysis bias due to miscalculations in the processing of the data. | This bias is intended to cover a broad range of errors in curating the data and performing the calculations specified or implied by the analytic plan, including but not limited to: memory allocation and other environmental specifications, data ingestion pipeline, statistical package choice and vetting, and syntax, semantics and logic of coding. This bias applies to both manual and computer-based computation. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Khalid Shahin, Muhammad Afzal, Neeraj Ojha | 2022-09-09 vote 6-0 by Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Khalid Shahin, Janice Tufte, Eric Harvey | 2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey 2022-08-19 vote 4-2 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra | 2022-08-12 comment: Not sure about including data entry errors among errors in software code - the latter is a computational error, the former is not. Also, the definition does not specify computational processing. 2022-08-19 comment: the class label is ambiguous: is it "computation error caused bias" or it is 'contradictions caused bias? The latter term does not add clarity. Also, only data entry errors resulting from computational errors would fall under this type of bias, but not direct entry of values. | ||||||
3 | SEVCO:00324 | reported analysis not following pre-specified analysis plan | An analysis bias in which the reported analysis does not match the pre-specified analysis plan. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2023-03-10 vote 5-0 by Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde | |||||||||
3 | SEVCO:00303 | collider bias | An analysis bias in which an estimation of association between two variables is distorted by controlling for a third variable affected by both variables of interest (or factors causing the variables of interest). | Collider bias occurs when an exposure and outcome (or factors causing these) each influence a common third variable and that variable or collider is controlled for by design or analysis. In contrast, confounding occurs when an exposure and outcome have a shared common cause that is not controlled for. (JAMA 2022 Mar 14 https://jamanetwork.com/journals/jama/fullarticle/2790247) The "third variable" affected by both variables of interest can also be a "third variable" affected by an "intermediary variable" which is affected by both variables of interest. An analysis bias is defined as a bias related to the analytic process applied to the data. A bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Paul Whaley | 2022-07-08 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Janice Tufte, Harold Lehmann | 2022-07-01 vote 3-2 by Harold Lehmann, Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Philippe Rocca-Serra | 2022-07-01 comments: Is this the same as a confounding variable? If not, please differentiate. the first comment seems a bit confusing: does collider bias occurs when the study design controls for a variable which is influenced by both the exposure and the outcome? I'm not sure this is correct. My understanding is that collision comes into play when effect modifiers are treated as confounders (and possibly when confounders are treated as modifiers? I don't know if it is symmetric). This reads as though it is an analysis unadjusted for confounders, with the factor causing both the cause and effect variables. Confounding: A < B > C and A > C Modification: A > B > C and A > C Collision: Conditioning on B under modification rather than confounding. | A structural classification of bias distinguishes between biases resulting from conditioning on common effects (“selection bias”) --- A Structural Approach to Selection Bias, https://journals.lww.com/epidem/Fulltext/2004/09000/A_Structural_Approach_to_Selection_Bias.20.aspx Collider bias occurs when an exposure and outcome (or factors causing these) each influence a common third variable and that variable or collider is controlled for by design or analysis. In contrast, confounding occurs when an exposure and outcome have a shared common cause that is not controlled for. -- JAMA 2022 Mar 14 https://jamanetwork.com/journals/jama/fullarticle/2790247 https://catalogofbias.org/biases/collider-bias/ Collider bias = A distortion that modifies an association between an exposure and outcome, caused by attempts to control for a common effect of the exposure and outcome | |||||
3 | SEVCO:00314 | preliminary analysis bias | An analysis bias related to analysis of data before the complete dataset is available. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2022-12-23 vote 6-0 by Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey | |||||||||
3 | SEVCO:00295 | data-dredging bias | An analysis bias involving use of data analyses that are not pre-specified and fully disclosed, to select analyses with desirable results. | Types of data analysis that lead to data-dredging bias include but are not limited to repeated subgroup analyses, repeated adjusted analyses, repeated analyses with different analytic models, and repeated analyses across many outcomes for many variations of defining outcomes, any of which can be done to select ("cherry-pick") the analyses that provide a desired result. The desired result may be statistically significant findings or other specific results. The terms "p-hacking" and "Fishing expedition" are commonly used terms to describe data-dredging practices that lead to bias and are often used to imply bias. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, Paul Whaley, Kenneth Wilkins | 2022-12-23 vote 6-0 by Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey | 2022-12-09 votes 4-0 by Yuan Gao, Mario Tristan, Eric Harvey, Harold Lehmann 2022-12-16 votes 6-1 by Philippe Rocca-Serra, Janice Tufte, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann | 2022-12-09 comment: Ioannidis, J. P. A. (2019) P values linked to null hypothesis significance testing (NHST) is the most widely (mis)used method of statistical inference. Empirical data suggest that across the biomedical literature (1990–2015), when abstracts use P values 96% of them have P values of 0.05 or less. The same percentage (96%) applies for full-text articles. 2022-12-16 comments: Delete comma in definition (before "that"). p-hacking and fishing expedition aren't synonyms but data processes leading to bias. "p-hacking induced bias" maybe | from Catalog of Bias (https://catalogofbias.org/biases/data-dredging-bias/): Data-dredging bias = A distortion that arises from presenting the results of unplanned statistical tests as if they were a fully prespecified course of analyses. from BMJ Evidence-Based Medicine (https://ebm.bmj.com/content/27/4/209): Background: what is data dredging bias? Data-dredging bias encompasses a number of more specific questionable practices (eg, fishing, p-hacking) all of which involve probing data using unplanned analyses and then reporting salient results without accurately describing the processes by which the results were generated. from Wikipedia (https://en.wikipedia.org/wiki/Data_dredging): Data dredging (also known as data snooping or p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives. | |||||
3 | SEVCO:00348 | inadequate sensitivity analysis | An analysis bias due to inadequate approach to determine the implications of modeling assumptions, missing data, or distorted data for the interpretation of research results. | Sensitivity analysis is the process of accounting for the implications of missing or distorted data or modeling assumptions. The purpose of sensitivity analysis is to assess the robustness of findings given plausible variations in the context. Methods of sensitivity analysis to account for missing data include but are not limited to best-case scenario, worst-case scenario, and last-observation-carried-forward. Methods of sensitivity analysis to account for distorted data include but are not limited to intention-to-treat analysis, per-protocol analysis, and completer analysis. Methods of sensitivity analysis to account for modeling assumptions include but are not limited to variations in prior probabilities and changes in the statistical model. The targets and types of sensitivity analyses needed depend on the research question, the research design, the data, modeling assumptions (both verifiable and unverifiable from the data), and the context of the results interpretation. The adequacy of sensitivity analysis is assessed on the basis of the targets and types of sensitivity analyses reported. The term 'inadequate sensitivity analysis' matches the ROBIS signaling question 4.5 'Were the findings robust, e.g. as demonstrated through funnel plot or sensitivity analyses?' A funnel plot may be used to detect missing data due to publication bias. Although funnel plot asymmetry has been equated with publication bias, the funnel plot displays a tendency for the intervention effects estimated in smaller studies to differ from those estimated in larger studies, and such small-study effects may be due to reasons other than publication bias. (Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629-634.) Consider also [inadequate accounting for heterogeneity](https://fevir.net/resources/CodeSystem/27270#SEVCO:00347). An inadequate sensitivity analysis does not result in a bias in the effect estimates, but may result in a bias in the interpretations derived from the effect estimates. | Brian S. Alper, Kenneth Wilkins, Saphia Mokrane, Homa Keshavarz, Joanne Dehnbostel, Harold Lehmann, Airton Stein | 2024-07-26 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | 2024-07-05 vote 4-1 by C P Ooi, Sheyu Li, Lenny Vasanthan, Eric Harvey, Harold Lehmann 2024-07-12 vote 6-1 by Paul Whaley, Cauê Monaco, Sheyu Li, Philippe Rocca-Serra, Lenny Vasanthan, Saphia Mokrane, Eric Harvey 2024-07-19 vote 5-1 by Carlos Alva-Diaz, Sheyu Li, Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Eric Harvey | 2024-07-05 comments re: "inadequate sensitivity analysis" = "A synthesis bias due to inadequate approach to determine the magnitude or implications of missing or distorted data."1N) My understanding of the main purpose of sensitivity analysis can be testing the robustness of the findings and determining its impact factors. 2) (The paragraph on funnel plot sounds more textbooky/prescriptive information than we have generally provided, but I'm not against including it.) 2024-07-12 comment re: "inadequate sensitivity analysis" = "An analysis bias due to inadequate approach to determine the implications of missing data, distorted data, or modeling assumptions."1N) Need explanation regarding the term 'inadequate', which reflects clinical impact of this inadequate. 2024-07-19 comments re: "inadequate sensitivity analysis" = "An analysis bias due to inadequate approach to determine the implications of missing data, distorted data, or modeling assumptions."1N) The sensitivity analyses achieves multiple targets including but not limited to the quoted ones. But all try to make sure the robustness of the findings given heterogeneous research approaches. For the definition, it is good to quote the definition of sensitivity analysis, especially its purpose. Also, inadequate sensitivity analysis is rather a bias, but a failure to reach to purpose of sensitivity analysis. 2Y) I am not suggesting changes, but I had to ask chatGPT today about sensitivity analysis. This was its response: Sensitivity analysis is a technique used in the context of analyzing real-world data to assess how the results of a study or model are affected by changes in the input parameters or assumptions. It helps to understand the robustness of the findings and identify which variables have the most significant impact on the outcomes. Sensitivity analysis is particularly useful in fields such as economics, healthcare, environmental science, and engineering, where models often rely on uncertain or variable data inputs. Key Aspects of Sensitivity Analysis Purpose: To evaluate the stability and reliability of the results. To identify critical variables that significantly influence the outcomes. To assess the impact of uncertainty in input parameters on the conclusions. Applications: In healthcare, sensitivity analysis can be used to determine how different clinical assumptions affect health outcomes or cost-effectiveness in medical studies. In environmental science, it can help assess the effect of varying environmental parameters on pollution models. In economics, it evaluates how changes in economic indicators impact financial models or forecasts. Types of Sensitivity Analysis: Deterministic Sensitivity Analysis: This involves systematically changing one parameter at a time while keeping others constant to observe the effect on the outcome. Probabilistic Sensitivity Analysis: This involves changing multiple parameters simultaneously, often using statistical distributions to model uncertainty and variability in the inputs. Scenario Analysis: Examining different possible future scenarios by varying several parameters together to understand potential outcomes under different conditions. Local Sensitivity Analysis: Focuses on small changes around a baseline value of the parameters to assess local impact. Global Sensitivity Analysis: Assesses the impact of varying all parameters over their entire range to understand the overall influence on the model. re: "inadequate sensitivity analysis" = "An analysis bias due to inadequate approach to determine the implications of missing data, distorted data, or modeling assumptions." | ROBIS 4.5 Were the findings robust, e.g. as demonstrated through funnel plot or sensitivity analyses? | |||||
3 | SEVCO:00322 | final model not corresponding to multivariable analysis | An analysis bias in which the predictors and coefficients in the final model do not match the predictors and coefficients reported in the multivariable analysis. | This type of bias is applicable to model development studies and model selection within other study designs. | Kenneth Wilkins, Brian S. Alper | 2023-12-08 vote 5-0 by Brian S. Alper, Harold Lehmann, Javier Bracchiglione, Yasser Sami Amer, Eric Harvey | from PROBAST: 4.9 Do predictors and their assigned weights in the final model correspond to the results from the reported multivariable analysis? (Model development studies only) Predictors and coefficients of the final developed model, including intercept or baseline components, should be fully reported to allow others to correctly apply the model to other individuals. Mismatch between the presented final model and the reported results from the multivariable analysis (such as the intercept and predictor coefficients) is frequent. A review of prediction models in cancer in 2010 found that only 13 of 38 final prediction model equations (34%) used the same predictors and coefficients as the final presented multivariable analyses, 8 used the same predictors but different coefficients, 11 used neither the same coefficients nor the same predictors, and 6 used an unclear method to derive the final prediction model from the presented results of the multivariable analysis (121). Bias can arise when the presented final model and the results reported from the multivariable analysis do not match. One way this can occur is when nonsignificant predictors are dropped from a larger model to arrive at a final presented model but the predictor coefficients from the larger model are used to define the final model, which are no longer correct. When predictors are dropped from a larger model, it is important to reestimate all predictor coefficients of the smaller model because the latter has become the final model. These newly estimated predictor coefficients are likely different even if nonsignificant or irrelevant predictors from the larger model are dropped. When a study reports a final model in which both predictors and regression coefficients correspond to the reported results of the multivariable regression analysis or model, this question should be answered as Y. If the final model is based only on a selection of predictors from the reported multivariable regression analysis without refitting the smaller model, it should be answered as N or PN. When no information is given on the multivariable modeling from which predictors and regression coefficients are derived, it should be answered as NI. This signaling question is not about detecting improper methods of selecting predictors for the final model; such methods are addressed in signaling question 4.5. | |||||||
3 | SEVCO:00310 | cognitive interpretive bias affecting analysis | A bias related to the analytic process due to the subjective nature of human interpretation. | The Cognitive Interpretive Bias affecting analysis can be mitigated by masking the analyst as to the assignments for the groups, and by specification of the analysis prior to data availability. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte | 2022-11-18 vote 6-0 by Mahnoor Ahmed, Yuan Gao, Harold Lehmann, Jesus Lopez-Alcalde, Paul Whaley, Eric Harvey | ||||||||
4 | SEVCO:00379 | cognitive interpretive bias affecting analysis selection | A bias related to selection of the analysis due to the subjective nature of human interpretation. | Bias related to selection of the analysis is defined as an analysis bias due to inappropriate choice of analysis methods before the analysis is applied. The Cognitive Interpretive Bias affecting analysis selection can be mitigated by masking the analyst as to the assignments for the groups, and by specification of the analysis prior to data availability. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel | 2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | ||||||||
5 | SEVCO:00315 | availability bias affecting analysis selection | A Cognitive Interpretive Bias due to the use of information which is most readily available, rather than information which is most representative, affecting analysis selection. | Selection of inappropriate data or variables for analysis is an availability bias when the appropriate data or variables are not readily available to the analyst and therefore the appropriate analysis is not selected. Selection of an inappropriate analysis due to familiarity with the analytic techniques is an availability bias when the appropriate technique is unfamiliar and therefore not selected. The term "Availability bias affecting analysis selection" is about selection of the analysis and not about missing data. | Brian S. Alper, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins | 2022-08-26 vote 7-0 by nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey | 2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey 2022-08-19 vote 5-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra | 2022-08-12 comment: Clarify as to whether this is exclusively about cognitive availability? Seems ambiguous in current phrasing. Would suggest comment for application to make clear specific circumstances in which this applies. 2022-08-19 comment: The definition is ambiguous about whether limits on access to the information is cognitive (e.g. familiarity) or otherwise. Also, the definition specifies "information" when the thing being selected is a technique for analysing information. | Catalogue of Bias: Availability bias A distortion that arises from the use of information which is most readily available, rather than that which is necessarily most representative. | |||||
4 | SEVCO:00380 | cognitive interpretive bias affecting execution of the analysis | A bias in processing of data due to the subjective nature of human interpretation. | Bias in processing of data is defined as an analysis bias due to inappropriate decisions pertaining to preparation of data for analysis and/or conduct of the analysis. This bias may be mitigated by the partial masking or blinding of the individuals conducting the analysis. | Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Paul Whaley, Yuan Gao, Harold Lehmann, Brian S. Alper | 2022-12-02 vote 6-0 by Mario Tristan, Yuan Gao, Mahnoor Ahmed, Muhammad Afzal, Janice Tufte, Eric Harvey | ||||||||
5 | SEVCO:00296 | lack of blinding of data analysts | A cognitive interpretive bias affecting execution of the analysis due to the analyst's awareness of the participants' status with respect to the variables defining the comparison groups. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Joanne Dehnbostel, Paul Whaley | 2022-12-02 vote 5-0 by Mario Tristan, Mahnoor Ahmed, Muhammad Afzal, Janice Tufte, Eric Harvey | 2022-12-02 comment: Should it be participants' statuses --- EWG discussion notes that "status" can be used for the plural | ||||||||
3 | SEVCO:00392 | inappropriate weighting bias | An analysis bias in which the weights used in model construction do not align with the target of estimation or estimand. | This bias often occurs with the omission of sampling weights in a model or in the process of trying to mitigate misrepresentation of a population due to sampling. One example is use of an unweighted model with National Health and Nutrition Examination Survey (NHANES) data. This bias also occurs when attempting to reweight imbalanced classes in a model to make them representative of the source population, if the reweighting drives estimation away from the target. | Brian S. Alper, Kenneth Wilkins | 2023-10-13 vote 6-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte | ||||||||
3 | SEVCO:00026 | synthesis bias | A bias in the conduct of an analysis combining two or more studies or datasets. | A synthesis bias results from methods used to select, manipulate or interpret data for evidence synthesis. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 2024-05-17 vote 5-0 by Saphia Mokrane, Lenny Vasanthan, Sheyu Li, Eric Harvey, Harold Lehmann | ||||||||
4 | SEVCO:00346 | bias related to selection of the analytic framework for synthesis | A synthesis bias resulting from an analytic approach that is not suitable for the included studies. | The term 'synthesis bias related to selection of the analytic framework' used for a systematic review or synthesis is equivalent to the term [bias related to selection of the analytic framework](#SEVCO:00378) used for a single study. An analytic framework is the model, scaffolding, or organizational representation of concepts used in analyzing the data. The concepts included in an analytic framework may involve data, variables, formulas, assumptions, and adjustments. If the analytic framework selected for synthesis does not match the data and research question, there is a risk of distorted results which constitutes bias. The term 'bias related to selection of the analytic framework for synthesis' matches the ROBIS signaling question 4.3 'Was the synthesis appropriate given the nature and similarity in the research questions, study designs and outcomes across included studies?' | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel | 2024-06-21 vote 8-0 by Cauê Monaco, Lenny Vasanthan, Homa Keshavarz, Yaowaluk Ngoenwiwatkul, Harold Lehmann, Sean Grant, Eric Harvey, Carlos Alva-Diaz | 2024-06-07 vote 5-1 by Sean Grant, Sheyu Li, Lenny Vasanthan, Harold Lehmann, Eric Harvey, Carlos Alva-Diaz 2024-06-14 vote 8-1 by Carlos Alva-Diaz, Yaowaluk Ngoenwiwatkul, Homa Keshavarz, Sean Grant, Sheyu Li, Eric Harvey, Lenny Vasanthan, Harold Lehmann, Janice Tufte | 2024-06-07 comment: The term itself is new to me: is there a more established term? 2024-06-14 comments re: "bias related to selection of the analytic framework for synthesis" = "A synthesis bias resulting from an analytic approach that is not suitable for the included studies."1N) As above, I understand the concept though I am not clear how this is a "bias"? "Not suitable" also sounds like poor study execution rather than a bias. 2) Have we been consistent in other Comments for application in matching the signaling questions? | ROBIS 4.3 Was the synthesis appropriate given the nature and similarity in the research questions, study designs and outcomes across included studies? | |||||
4 | SEVCO:00347 | inadequate accounting for heterogeneity | A synthesis bias due to inadequate approach to determine the magnitude, cause, or implications of variation among studies. | Adequate accounting for variation among studies includes measuring the variation among studies, determining if substantial variation is systematic or random, and addressing the implications of substantial variation if present. The term 'inadequate accounting for heterogeneity' matches the ROBIS signaling question 4.4 'Was between-study variation (heterogeneity) minimal or addressed in the synthesis?' [Bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00001) is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). [Synthesis bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00026) is defined as a bias in the conduct of an analysis combining two or more studies or datasets. An inadequate approach to the accounting for heterogeneity in the conduct of an evidence synthesis can introduce or obscure a systematic distortion in research results. | Brian S. Alper, Harold Lehmann, Homa Keshavarz, Joanne Dehnbostel | 2024-07-05 vote 6-0 by Cauê Monaco, C P Ooi, Sheyu Li, Lenny Vasanthan, Eric Harvey, Harold Lehmann | 2024-06-21 vote 5-2 by Cauê Monaco, Lenny Vasanthan, Homa Keshavarz, Yaowaluk Ngoenwiwatkul, Harold Lehmann, Sean Grant, Eric Harvey 2024-06-28 vote 6-1 by Harold Lehmann, Lenny Vasanthan, Homa Keshavarz, Sean Grant, Philippe Rocca-Serra, Eric Harvey, Sheyu Li | 2024-06-21 comments re: "bias related to accounting for heterogeneity" = "A synthesis bias due to inadequate accounting for variation among studies."1N) Missing text: "The term " matches..."2N) Not clear how this is a "bias" rather than a low-quality review? 2024-06-28 comment re: "inadequate accounting for heterogeneity" = "A synthesis bias due to inadequate approach to determine the magnitude, cause, or implications of variation among studies."This again does not sound like a "bias" to me. Perhaps it contributes to a bias, though it sounds more like poor execution of an evidence synthesis rather than a "bias" | ROBIS 4.4 Was between-study variation (heterogeneity) minimal or addressed in the synthesis? | |||||
4 | SEVCO:00349 | inadequate accounting for bias in constituent studies | A synthesis bias due to inadequate approach to determine the risks of bias in the studies selected for synthesis or the implications of those risks. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Homa Keshavarz, Airton Stein | 2024-07-26 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | ROBIS 3.4 Was risk of bias (or methodological quality) formally assessed using appropriate criteria? ROBIS 3.5 Were efforts made to minimise error in risk of bias assessment? ROBIS 4.6 Were biases in primary studies minimal or addressed in the synthesis? | ||||||||
5 | SEVCO:00353 | inadequate criteria for methodologic quality assessment | A synthesis bias due to inadequate criteria to determine the risks of bias in the studies selected for synthesis. | The term 'inadequate criteria for methodologic quality assessment' matches the ROBIS signaling question 3.4 'Was risk of bias (or methodological quality) formally assessed using appropriate criteria?' | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Homa Keshavarz, Airton Stein | 2024-07-26 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | ROBIS 3.4 Was risk of bias (or methodological quality) formally assessed using appropriate criteria? | |||||||
5 | SEVCO:00354 | inadequate process for methodologic quality assessment | A synthesis bias due to inadequate process to determine the risks of bias in the studies selected for synthesis. | The term 'inadequate process for methodologic quality assessment' matches the ROBIS signaling question 3.5 'Were efforts made to minimise error in risk of bias assessment?' | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Homa Keshavarz, Airton Stein | 2024-07-26 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | ROBIS 3.5 Were efforts made to minimise error in risk of bias assessment? | |||||||
5 | SEVCO:00396 | inadequate adjusting for bias in constituent studies | A synthesis bias due to inadequate approach to determine the implications of bias in the studies selected for synthesis. | The term 'inadequate adjusting for bias in constituent studies' matches the ROBIS signaling question 4.6 'Were biases in primary studies minimal or addressed in the synthesis?' | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Homa Keshavarz, Airton Stein | 2024-07-26 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | ROBIS 4.6 Were biases in primary studies minimal or addressed in the synthesis? | |||||||
4 | SEVCO:00369 | inadequate process for data extraction | A synthesis bias due to inadequate process to select and abstract the data from the included studies or datasets. | The term 'inadequate process for data extraction' matches the ROBIS signaling question 3.1 'Were efforts made to minimise error in data collection?' | Brian S. Alper, Harold Lehmann, Airton Stein, Joanne Dehnbostel | 2024-08-02 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Airton Tetelbom Stein, Eric Harvey | ROBIS 3.1 Were efforts made to minimise error in data collection? | |||||||
4 | SEVCO:00351 | inadequate study characteristics available for results interpretation | A synthesis bias due to inadequate availability of information regarding design, data collection, analysis or reporting for the study results or datasets collected for synthesis. | The term 'inadequate study characteristics available for results interpretation' matches the ROBIS signaling question 3.2 'Were sufficient study characteristics available for both review authors and readers to be able to interpret the results?' | Brian S. Alper, Harold Lehmann, Airton Stein, Joanne Dehnbostel | 2024-08-02 vote 5-0 by Harold Lehmann, Homa Keshavarz, Lenny Vasanthan, Airton Tetelbom Stein, Eric Harvey | ROBIS 3.2 Were sufficient study characteristics available for both review authors and readers to be able to interpret the results? | |||||||
4 | SEVCO:00352 | bias related to selection of the data for synthesis | A synthesis bias due to inappropriate choice of study results included in the synthesis. | Selection of study results may occur before or after data extraction and transformation. The process of extracting results from a study involves identifying, collecting, and recording the findings (quantitative or qualitative) that are relevant for synthesis. The process of transformation may include unit of measure changes, other data harmonization, or statistical transformations so the data can be integrated into a meta-analysis. Another reason for not including a relevant study result is the failure to contact the study investigators for missing data. If the data selected for synthesis does not match the data that is available, there is a risk of distorted results which constitutes bias. The term 'bias related to selection of the data for synthesis' matches the ROBIS signaling question 3.3 'Were all relevant study results collected for use in the synthesis?' | Brian S. Alper, Harold Lehmann, Airton Stein | 2024-08-23 vote 9-0 by Carlos Alva-Diaz, Elma OMERAGIC, Lenny Vasanthan, Harold Lehmann, Philippe Rocca-Serra, Eric Harvey, Sean Grant, Airton Tetelbom Stein, Homa Keshavarz | 2024-08-09 vote 6-1 by Harold Lehmann, Homa Keshavarz, Sheyu Li, Sean Grant, Eric Harvey, Lenny Vasanthan, Airton Tetelbom Stein 2024-08-16 vote 5-1 by Cauê Monaco, Bhagvan Kommadi, Jennifer Hunter, Eric Harvey, Harold Lehmann, Airton Tetelbom Stein | 2024-08-09 comment re: "bias related to selection of the data for synthesis" = "A synthesis bias due to inappropriate choice of study results included in the synthesis before the synthesis is applied."1N) Once again, what is the difference between this bias and selection bias? Similar comments with above. 2024-08-16 comment re: "bias related to selection of the data for synthesis" = "A synthesis bias due to inappropriate choice of study results included in the synthesis."1N) I'm unsure that "and appropriate for synthesis" is correct. For instance, an included study may have measured the outcome of interest but not reported the results in a way that can be used in the planned meta-analysis (e.g., only the p value is reported, or no SDM/SE is reported). Even though the results cannot be used (i.e., are not appropriate) for the meta-analysis, there is still a risk of distorted results which constitutes bias. Perhaps I am mistaken, and this bias only refers to errors of judgement by the reviewers. Do we need to mention some examples relevant to selection bias arising after data extraction? e.g., failing to undertake appropriate statistical transformations so that the reported results can be used in a meta-analysis. Another reason for not including a relevant study result is the failure to contact the study investigators for missing data. Does this need to be mentioned here or is it covered under a different term? If this term is supposed to map to ROBIS 3.3., then this is another example of not collecting relevant results. 2024-08-23 comment re: "bias related to selection of the data for synthesis" = "A synthesis bias due to inappropriate choice of study results included in the synthesis."1Y) It might be good to mention conditions when the study selection happens after data extraction and transformation. One question - if anyone is working on an update of a review and if the previous author team havent selected the appropriate data in their included studies and the current team have published an update without updating the study results of the previously included studies, how do we handle that here? | ROBIS 3.3 Were all relevant study results collected for use in the synthesis? | |||||
3 | SEVCO:00320 | inappropriate evaluation of predictive model performance measures | An analysis bias in which the method for analysis of a performance measure (such as calibration or discrimination) is not adequate or suitable for the predictive model. | According to PROBAST explanation, to fully gauge the predictive performance of a model, reviewers must assess both model calibration and discrimination (such as the c-index) addressing the entire range of the model-predicted probabilities. (https://www.acpjournals.org/doi/10.7326/M18-1377) | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte | ||||||||
4 | SEVCO:00393 | inappropriate evaluation of calibration of predictive model | An analysis bias in which the method for analysis of calibration is not adequate or suitable for the predictive model. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte | 2023-10-06 comment: Is the bias because an analyst prefers one model over another when there might be a more appropriate one ( perhaps the analyst is not familiar with?) | ||||||||
4 | SEVCO:00394 | inappropriate evaluation of discrimination of predictive model | An analysis bias in which the method for analysis of discrimination is not adequate or suitable for the predictive model. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2023-10-13 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley | |||||||||
4 | SEVCO:00321 | model overfitting | An analysis bias, specific to predictive model development studies, in which strategies to mitigate overfitting are not adequately applied. | Predictive model performance measures (calibration and discrimination) may be misinterpreted if there are no strategies to mitigate overfitting. This applies to development studies without external validation studies. Strategies to mitigate overfitting may include penalization/regularization, k-fold cross validation, train-test/validation split, etc. From the PROBAST explanation (https://www.acpjournals.org/doi/10.7326/M18-1377): "quantifying the predictive performance of a model on the same data from which the model was developed (apparent performance) tends to give optimistic estimates of performance due to overfitting—that is, the model is too much adapted to the development data set. This optimism is higher when any of the following are present: too few outcome events in total, too few outcome events relative to the number of candidate predictors (small EPV), dichotomization of continuous predictors, use of predictor selection strategies based on univariable analyses, or use of traditional stepwise predictor selection strategies (for example, forward or backward selection) in multivariable analysis in small data sets (small EPV)" | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2023-10-20 vote 5-0 by Muhammad Afzal, Eric Harvey, Harold Lehmann, Louis Leff, Joanne Dehnbostel | 2023-10-06 vote 3-1 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Janice Tufte | 2023-10-06 comments: I am not sure about having a preferred term that actually consists of two terms - overfit and optimism. Is one a synonym of the other? Optimism- being too over optimistic and fitting things into the model that really were not defined early on? (adding inappropriate data that can skew the outcomes?) | ||||||
2 | SEVCO:00023 | reporting bias | A bias due to distortions in the selection of or representation of information in study results or research findings. | Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao | 2022-10-21 vote 7-0 by Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Mario Tristan, Brian Alper, Janice Tufte, Eric Harvey | CoB: Reporting biases = A systematic distortion that arises from the selective disclosure or withholding of information by parties involved in the design, conduct, analysis, or dissemination of a study or research findings (https://catalogofbias.org/biases/reporting-biases/) also notes: The Dictionary of Epidemiology defines reporting bias as the “selective revelation or suppression of information (e.g., about past medical history, smoking, sexual experiences) or of study results.” The Cochrane Handbook states it arises “when the dissemination of research findings is influenced by the nature and direction of results.” The James Lind Library states “biased reporting of research occurs when the direction or statistical significance of results influence whether and how research is reported.” QUIPS: The Statistical Analysis and Reporting domain addresses the appropriateness of the study’s statistical analysis and completeness of reporting. It helps the assessor judge whether results are likely to be spurious or biased because of analysis or reporting. To make this judgment, the assessor considers the data presented to determine the adequacy of the analytic strategy and model-building process and investigates concerns about selective reporting. Selective reporting is an important issue in prognostic factor reviews because studies commonly report only factors positively associated with outcomes. A study would be considered to have low risk of bias if the statistical analysis is appropriate for the data, statistical assumptions are satisfied, and all primary outcomes are reported. ROB2 = This domain addresses bias that arises because the reported result is selected (based on its direction, magnitude or statistical significance) from among multiple intervention effect estimates that were calculated by the trial investigators. We call this bias in selection of the reported result. Consideration of risk of bias requires distinction between: • An outcome domain. This is a state or endpoint of interest, irrespective of how it is measured (e.g. severity of depression); • An outcome measurement. This is a specific way in which an outcome domain is measured (e.g. measurement of depression using the Hamilton rating scale 6 weeks after starting intervention); and • An outcome analysis. This is a specific result obtained by analysing one or more outcome measurements (e.g. the difference in mean change in Hamilton rating scale scores from baseline to 6 weeks between experimental and comparator groups). This domain does not address bias due to selective non-reporting (or incomplete reporting) of outcome domains that were measured and analysed by the trial investigators (115). For example, deaths of trial participants may be recorded by the trialists, but the reports of the trial might contain no mortality data, or state only that the intervention effect estimate for mortality was not statistically significant. Such bias puts the result of a synthesis at risk because results are omitted based on their direction, magnitude or statistical significance. It should therefore be addressed at the review level, as part of an integrated assessment of the risk of reporting bias (116). ROBINS-I = Bias in selection of the reported result |
3 | SEVCO:00024 | selective reporting bias | A reporting bias due to inappropriate selection of the results or research findings that are reported. | A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. | Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao, Janice Tufte | 2023-01-06 vote 5-0 by Harold Lehmann, Yuan Gao, Janice Tufte, Eric Harvey, Mario Tristan | MASTER-31. There was no discernible data dredging or selective reporting of the outcomes | |||||||
4 | SEVCO:00330 | selective outcome reporting | A selective reporting bias due to inappropriate selection of which outcomes are reported within results or research findings. | A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Muhammad Afzal | 2023-01-13 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Eric Harvey | ||||||||
5 | SEVCO:00336 | selective outcome measure reporting | A selective reporting bias due to inappropriate selection of which outcome measures are reported for an outcome. | Selective outcome measure reporting may be considered a type of selective outcome reporting in which the measurement method for determination of the outcome is interpreted as a distinct outcome. A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Paul Whaley | 2023-01-13 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Eric Harvey | ||||||||
4 | SEVCO:00331 | selective subgroup reporting | A selective reporting bias due to inappropriate selection of subsets of groups of participants for which results or research findings are reported. | A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. Selective subgroup reporting relates to choice of attributes of participants within cohorts, for example reporting limited to male patients. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Paul Whaley, Joanne Dehnbostel | 2023-01-27 vote 7-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Paul Whaley, Eric Harvey | ||||||||
4 | SEVCO:00331a | selective comparison reporting | A selective reporting bias due to inappropriate selection of comparison groups for which results or research findings are reported. | A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. Selective comparison reporting relates to choice of cohort definitions, for example an intention-to-treat analysis (as-randomized analysis) vs. an as-treated analysis. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Paul Whaley, Joanne Dehnbostel | 2023-01-27 vote 6-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Eric Harvey | 2023-01-20 vote 2-1 by Yuan Gao, Paul Whaley, Eric Harvey | 2023-01-20 comment: I don't see enough of a connection between the term (selective comparison) and the definition, which does not seem to talk about comparisons. | ||||||
4 | SEVCO:00333 | selective analysis reporting from repeated analyses at multiple times | A selective reporting bias due to inappropriate selection of which analyses are reported for an outcome that was analyzed at multiple points in time in a longitudinal study. | A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. | Brian S. Alper, Paul Whaley, Janice Tufte, Joanne Dehnbostel | 2023-01-27 vote 6-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Eric Harvey | ||||||||
4 | SEVCO:00334 | selective analysis reporting from multiple analytic models | A selective reporting bias due to inappropriate selection of which analyses are reported for an outcome that was analyzed in multiple ways. | A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. Adjustment reporting bias, or selective reporting of adjusted estimates, is a type of selective analysis reporting from multiple analytic models. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins | 2023-02-10 vote 6-0 by Cauê Monaco, Paul Whaley, Janice Tufte, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey | ||||||||
4 | SEVCO:00335 | selective threshold reporting bias | A selective reporting bias due to inappropriate selection of which thresholds (used for definitions of the variables) are reported. | A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported. A <a href="https://fevir.net/resources/CodeSystem/27270#SEVCO:00023" target="_blank">reporting bias</a> is a bias due to distortions in the selection of or representation of information in study results or research findings. | Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2023-02-24 vote 7-0 by Harold Lehmann, Yasser Sami Amer, Mario Tristan, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey | 2023-02-10 vote 5-1 by Cauê Monaco, Paul Whaley, Janice Tufte, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey | 2023-02-10 comment: I'm not clear how the definition relates specifically to reporting bias. | ||||||
3 | SEVCO:00025 | cognitive interpretive bias in reporting | A distortion in the representation of study results or research findings due to the subjective nature of human interpretation. | Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings. Cognitive interpretive bias in reporting is about interpretation of the results rather than the choice of which results are presented (which would be Selective Reporting Bias). Cognitive interpretive biases in reporting include selective theory reporting, confirmation bias, bias of rhetoric, novelty bias, popularity bias, and positive results bias. | Brian S. Alper, Paul Whaley, Harold Lehmann, Janice Tufte, Joanne Dehnbostel | 2023-02-24 vote 7-0 by Harold Lehmann, Yasser Sami Amer, Mario Tristan, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey | 2023-02-10 vote 4-1 by Cauê Monaco, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey | 2023-02-10 comment: I think the definition is sound but the comment for application should be extended to make it clearer that this is about interpretation of the results rather than the choice of which results are presented. | CoB: Spin bias = The intentional or unintentional distorted interpretation of research results, unjustifiably suggesting favourable or unfavourable findings that can result in misleading conclusions (https://catalogofbias.org/biases/spin-bias/) | |||||
4 | SEVCO:00338 | interpretation of results not addressing potential for bias | A cognitive interpretive bias in reporting whereby the reported interpretation of results does not adequately address potential for bias. | Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings. Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation. Interpretation of results not addressing potential for bias occurs when there is an absence of risk of bias assessment or incomplete inclusion of a risk of bias assessment in the interpretation of findings. | Brian S. Alper, Paul Whaley, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel | 2023-03-03 vote 6-0 by A.G. Radhika, Cauê Monaco, Janice Tufte, Harold Lehmann, Yasser Sami Amer, Eric Harvey | ||||||||
4 | SEVCO:00328 | results emphasized based on statistical significance | A cognitive interpretive bias in reporting whereby results with statistical significance are given exaggerated attention. | This bias may occur in several ways. Results may be interpreted as "positive" or "conclusive" if below the significance threshold and "negative" or "inconclusive" if above the significance threshold without proper interpretation of the meaning of the significance threshold. Results may be selectively emphasized in overall summarization of the results based on whether or not they are under the significance threshold. Results may be interpreted based on statistical significance instead of clinical significance, or results may misrepresent statistical significance and clinical significance as synonymous. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2023-04-07 vote 5-0 by Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Harold Lehmann | 2023-04-07 comments: I support this term as written, although I would suggest that we consider adding that assessment of statistical significance without assessing clinical significance often leads to this bias. I might suggest adding to Comment for application: "Another mis-interpretation is when statistical significance is confused with clinical significance." | |||||||
4 | SEVCO:00340 | confirmation bias in reporting | A cognitive interpretive bias in reporting due to the influence of an individual’s ideas, beliefs or hypotheses. | Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings. Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2023-03-10 vote 8-0 by Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde, A.G. Radhika, Janice Tufte, Eric Harvey, Cauê Monaco | ||||||||
4 | SEVCO:00329 | external validity bias | A cognitive interpretive bias in reporting due to a mismatch between what the observed data represent and the results that were reported. | Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings. Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation. In the assessment of systematic reviews, this type of bias can be phrased as "Relevance of studies to research question not appropriately considered". | Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Muhammad Afzal | 2023-04-14 by Janice Tufte, Eric Harvey, Harold Lehmann, Joanne Dehnbostel, Jesus Lopez-Alcalde | derived from ROBIS https://www.bristol.ac.uk/media-library/sites/social-community-medicine/robis/ROBIS%201.2%20Clean.pdf | |||||||
3 | SEVCO:00327 | early dissemination bias | A reporting bias due to publication or reporting of results or research findings that change in subsequent reports. | One form of Early dissemination bias is the reporting of results in preprints or early versions during the peer review and publication process not matching the subsequent reports. Another form of Early dissemination bias is the reporting of interim results (even if fully peer reviewed) when a study is ongoing and more data will be analyzed for the final results. This bias may result from failure to disclose that the results are preliminary or subject to change. This definition is not meant to indicate that preprints are inherently biased. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Paul Whaley | 2023-04-14 by Janice Tufte, Eric Harvey, Harold Lehmann, Jesus Lopez-Alcalde, Joanne Dehnbostel | 2023-04-07 vote 3-1 by Eric Harvey, Harold Lehmann, Paul Whaley, Jesus Lopez-Alcalde | 2023-04-14 comments: Should we make clear in Comment for Application that preprints represent *potential* bias, because preprinting does not prima facie mean bias? It seems to me that the bias falls where the results do not carefully convey that they are preliminary or early AND not to be read as final results - maybe could be wordsmithed. Do you mean someone is reporting without full disclosure? 2023-04-07 comments: I would suggest "One form of potential Premature...", since prima facie, premature reporting does not *have* to be biased. I feel that "reporting bias" has the same issue of being semantically loaded as "publication bias" - the problem is premature dissemination of results, via reporting them, publishing them, putting them in a press release, etc. So maybe "premature dissemination bias" could be considered as the preferred term? And then we could even consider "early dissemination bias" as that feels more objective than "premature", now that it is phrased this way. | ||||||
3 | SEVCO:00384 | fabrication bias | A reporting bias resulting from intentional misrepresentation of any part of the study. | Examples include plagiarism, unjustified authorship, data manipulation, and intentional misrepresentation of figures and charts. Applying this code is a serious allegation of wrongdoing. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2023-04-21 vote 5-0 by Brian S. Alper, Janice Tufte, Harold Lehmann, Cauê Monaco, Eric Harvey | ||||||||
3 | SEVCO:00359 | unsubstantiated interpretation of results | A reporting bias in which the interpretation of results is not adequately supported by the data collected. | Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings. | Brian S. Alper, Joanne Dehnbostel, Caue Monaco, Li Wang, Harold Lehmann | 2023-12-22 vote 5-0 Eric Harvey, Caue Monaco, Janice Tufte, Paul Whaley, Joanne Dehnbostel | ||||||||
3 | SEVCO:00271 | one-sided reference bias | A reporting bias in which included citations are limited to those that represent only some of the perspectives. | The term "perspective" covers a variety of contexts, such as a side of an argument or point of view. | Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Sheyu Li, Janice Tufte, Harold Lehmann | 2024-05-31 vote 5-0 by Saphia Mokrane, Lenny Vasanthan, Sheyu Li, Eric Harvey, Homa Keshavarz | 2024-05-17 vote 4-1 by Saphia Mokrane, Lenny Vasanthan, Sheyu Li, Eric Harvey, Harold Lehmann 2024-05-24 vote 7-0 by Homa Keshavarz, Sheyu Li, Eric Harvey, Lenny Vasanthan, Harold Lehmann, Janice Tufte, Saphia Mokrane | 2024-05-17 comment: "Perspective" needs to be clarified in the Comment for application. For instance, "perspective" could mean patient vs provider vs society OR in favor of drug A vs in favor of drug B. [In general, considering that there's always discussion about a term, there should be some Comment for application.] 2024-05-24 comment: Can we consider rephrasing the definition to - A study selection bias in which included studies are limited to those that represent "only " one perspective | Catalogue of Bias (https://catalogofbias.org/biases/one-sided-reference-bias/) One-sided reference bias When authors restrict their references to only those works that support their position. One-sided reference bias occurs when a study author cites only publications that demonstrate one side of the picture of available evidence. This bias may arise when researchers cite publications that support their preconceptions or hypotheses, ignoring evidence that does not support their view. This can happen in any study report, but a particular problem arises when this occurs in literature reviews, which are supposed to represent a comprehensive collection of all relevant information, along with description and appraisal of quality and content. The result can be a misrepresentation of the current totality of evidence and can lead to spurious claims or needless additional research. Catalogue of Bias Collaboration, Spencer EA, Brassey J, Heneghan C. One-sided reference bias. In: Catalogue of Bias 2017 https://www.catalogofbias.org/biases/one-sided-reference-bias | |||||
3 | SEVCO:00325 | inadequate reporting of methods | A reporting bias due to insufficient reporting of methods to determine the validity of the results. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2023-03-31 vote 5-0 by Harold Lehmann, Eric Harvey, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde | 2023-03-17 vote on "Inadequate Reporting Bias" 2-1 by Eric Harvey, Jesus Lopez-Alcalde, Janice Tufte 2023-03-17 comment on "Inadequate Reporting Bias": Inadequate reporting of methods is covered by another term. Recommend changing this term to "inadequate reporting of results" or deleting this term if terms covering "reporting results biases" have already been established. | ||||||||
3 | SEVCO:00326 | inadequate explanation of participant withdrawals | A reporting bias due to insufficient reporting of reasons for withdrawals of participants after study enrollment. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte | 2023-03-31 vote 5-0 by Harold Lehmann, Eric Harvey, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde | 2023-03-31 comment: Somewhere in this entry should be a link to the "withdrawal" SEVCO term. Or terms. | ||||||||
2 | SEVCO:00028 | qualitative research bias | A bias specific to the design, conduct, analysis or reporting of qualitative research. | Qualitative research is a research approach that studies subjective aspects of social phenomena, human behavior, and human perception. Qualitative research may encompass any non-quantitative method of analysis. Qualitative research often explores the meaning individuals or groups assign to concepts. Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco, Li Wang | 2023-12-22 vote 5-0 Caue Monaco, Eric Harvey, Janice Tufte, Paul Whaley, Joanne Dehnbostel | 2023-12-08 vote 5-1 by Cauê Monaco, Harold Lehmann, Javier Bracchiglione, Janice Tufte, Yasser Sami Amer, Eric Harvey | 2023-12-08 comments: Are the initial letters of each word really meant to be capitalized? Or are they this way by error? As SEVCO states, bias relates to differences between the reported results and the actuality (the truth, the estimand). Qualitative research usually does not adopt a positivist approach, therefore, it does not assume there is necessarily one truth to be found. I think at some point (maybe in comments for application) this should be described. 2023-12-15 comment: Qualitative research covers more than, "subjective aspects of social phenomenon and human behavior". For instance, I'm not sure this Comment covers usability or pain perception. How about, "human perception"? | MMAT = “Qualitative research is an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem” (Creswell, 2013b, p. 3). | |||||
3 | SEVCO:00356 | bias in qualitative research design | A qualitative research bias in which the qualitative approach used in a study is not appropriate for the research question and problem. | Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. The qualitative approach used in a study should be appropriate for the research question and problem. Common qualitative research approaches include (this list is not exhaustive): Ethnography - The aim of the study is to describe and interpret the shared cultural behavior of a group of individuals. Phenomenology - The study focuses on the subjective experiences and interpretations of a phenomenon encountered by individuals. Narrative research - The study analyzes life experiences of an individual or a group. Grounded theory - Generation of theory from data in the process of conducting research (data collection occurs first). Case study - In-depth exploration and/or explanation of issues intrinsic to a particular case. A case can be anything from a decision-making process, to a person, an organization, or a country. Qualitative description - There is no specific methodology, but a qualitative data collection and analysis, e.g., in-depth interviews or focus groups, and hybrid thematic analysis (inductive and deductive). Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf | Brian S. Alper, Li Wang, Caue Monaco | 2024-01-19 vote 5-0 by Harold Lehmann, Brian S. Alper, Homa Keshavarz, Javier Bracchiglione, Eric Harvey | 2024-01-05 vote 4-1 by Javier Bracchiglione, Joanne Dehnbostel, Janice Tufte, Eric Harvey, Cauê Monaco | 2023-12-15 comment: (Ethnography could include grounded theory, so these are not a great pair.) 2024-01-05 comment: I think there are some major problems with the definition and comment for application: - First, the definition is almost the same as the term, using basically the same words (so, it adds no new knowledge to the reader). - Second, the comment for application provides a more detailed approach to a definition than the definition itself ("The qualitative approach used in a study should be appropriate for the research question and problem"). - Third, among the common qualitative research approaches included, there are some in which the term "bias" may not be so applicable. Hierarchically, this term comes from "bias", which is defined as "A systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation])". So, we need an assumption of "actuality" for bias to exist. Some of the qualitative approaches described do not have that assumption. For example, phenomenology focuses on "subjective experiences", and grounded theory "generates a theory", which is not an "actuality" by itself. I think this comment is applicable to all of the terms related to qualitative bias. | ||||||
3 | SEVCO:00357 | bias in qualitative data collection methods | A qualitative research bias in which the data sources, the methods of data collection, and the forms of data are not adequate or appropriate to address the research question. | Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. The data sources (e.g., archives, documents), the methods of data collection (e.g., in depth interviews, group interviews, and/or observations), and the forms of the data (e.g., tape recording, video material, diary, photo, and/or field notes) should be adequate and appropriate to address the research question. The term 'bias in qualitative data collection methods' may be supplemental to other terms for types of detection bias or types of selection bias. | Brian S. Alper, Joanne Dehnbostel, Caue Monaco, Li Wang | 2024-01-19 vote 5-0 by Harold Lehmann, Brian S. Alper, Homa Keshavarz, Javier Bracchiglione, Eric Harvey | 2024-01-05 vote 3-1 by Javier Bracchiglione, Janice Tufte, Eric Harvey, Cauê Monaco | 2024-01-05 comment: see 'bias in qualitative research design' | ||||||
3 | SEVCO:00358 | bias in qualitative analysis | A qualitative research bias in which the analysis approach is not appropriate for the research question and qualitative approach. | Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). In qualitative research, the actuality may include multiple meanings that individuals or groups assign to concepts, and there is no quantitative estimand. The analysis approach should be appropriate for the research question and qualitative approach (design). The term 'bias in qualitative analysis' may be supplemental to other terms for types of analysis bias. When interpretation is an integral part of qualitative analysis, bias in the interpretive analysis should use the term 'bias in qualitative analysis' rather than 'cognitive interpretive bias in reporting'. | Brian S. Alper, Li Wang, Caue Monaco, Joanne Dehnbostel | 2024-01-19 vote 5-0 by Harold Lehmann, Brian S. Alper, Homa Keshavarz, Javier Bracchiglione, Eric Harvey | 2024-01-05 vote 3-1 by Javier Bracchiglione, Janice Tufte, Eric Harvey, Cauê Monaco | 2023-12-15 comments: I suppose a "guard rail" would be, "Were the methods described adequately, that they could be reproduced", which is different from the bias itself. I am not sure why sometimes terms include the word "inadequate" and sometimes "inappropriate". 2024-01-05 comment: see 'bias in qualitative research design' | ||||||
3 | SEVCO:00360 | incoherence among qualitative data, analysis, and interpretation | A qualitative research bias in which there is any mismatch among hypothesis, data collected, data analysis, and results interpretation. | The term mismatch applies to an inappropriate, wrong, or inadequate relationship. | Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Khalid Shahin, Xing Song | 2024-03-08 vote 5-0 Homa Keshavarz, Eric Harvey, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan | 2024-01-19 vote 3-1 by Harold Lehmann, Homa Keshavarz, Javier Bracchiglione, Eric Harvey | 2024-01-19 comment: I think the definition should not start with "There are...". Maybe it should be more straightforward: "Mismatch among (...)" 2024-02-02 comment: Does the phrase "in the study report" add value to the definition or add an unnecessary clause? 2024-02-16 comment: The definition sounds coherent, but, as it is, I would argue that other previous concepts (e.g. "study eligibility criteria not appropriate for review question") could also be interpreted by this term. | ||||||
2 | SEVCO:00029 | mixed methods research bias | A bias specific to the alignment of design, conduct, analysis or reporting of qualitative research and quantitative research within the same research project. | Mixed methods research is a research approach that combines both qualitative and quantitative research methods within a single study or research project. This methodology aims to provide a more comprehensive understanding of a research problem by integrating the strengths of both qualitative and quantitative research. Examples of mixed methods research include combining surveys with in-depth interviews, using quantitative data to identify patterns and trends followed by qualitative data to explore the underlying reasons and meanings, or incorporating qualitative findings to help interpret and validate quantitative results. Overall, mixed methods research provides a more holistic understanding of a research question by acknowledging and leveraging the strengths of both qualitative and quantitative approaches. | Brian S. Alper, Harold Lehmann | 2024-02-23 vote 5-0 by Homa Keshavarz, Harold Lehmann, Javier Bracchiglione, Lenny Vasanthan, Eric Harvey | 2024-01-19 vote 3-1 by Harold Lehmann, Homa Keshavarz, Javier Bracchiglione, Eric Harvey | 2024-01-19 comment: I would say a bias is not applied to the "coordination" (this sounds more like an administrative issue) | MMAT: Mixed methods (MM) research involves combining qualitative (QUAL) and quantitative (QUAN) methods. In this tool, to be considered MM, studies have to meet the following criteria (Creswell and Plano Clark, 2017): (a) at least one QUAL method and one QUAN method are combined; (b) each method is used rigorously in accordance to the generally accepted criteria in the area (or tradition) of research invoked; and (c) the combination of the methods is carried out at the minimum through a MM design (defined a priori, or emerging) and the integration of the QUAL and QUAN phases, results, and data | |||||
3 | SEVCO:00361 | bias in mixed methods research design | A mixed methods research bias in which the mixed methods approach used in a study is not appropriate for the research question and problem. | The corresponding signaling question in the Mixed Methods Appraisal Tool (MMAT) is 5.1: Is there an adequate rationale for using a mixed methods design to address the research question? Common mixed methods designs include: Convergent design The QUAL and QUAN components are usually (but not necessarily) concomitant. The purpose is to examine the same phenomenon by interpreting QUAL and QUAN results (bringing data analysis together at the interpretation stage), or by integrating QUAL and QUAN datasets (e.g., data on same cases), or by transforming data (e.g., quantization of qualitative data). Sequential explanatory design Results of the phase 1 - QUAN component inform the phase 2 - QUAL component. The purpose is to explain QUAN results using QUAL findings. E.g., the QUAN results guide the selection of QUAL data sources and data collection, and the QUAL findings contribute to the interpretation of QUAN results. Sequential exploratory design Results of the phase 1 - QUAL component inform the phase 2 - QUAN component. The purpose is to explore, develop and test an instrument (or taxonomy), or a conceptual framework (or theoretical model). E.g., the QUAL findings inform the QUAN data collection, and the QUAN results allow a statistical generalization of the QUAL findings. Key references: Creswell et al. (2011); Creswell and Plano Clark (2017); O'Cathain (2010) Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2024-02-23 vote 5-0 by Homa Keshavarz, Harold Lehmann, Brian S. Alper, Lenny Vasanthan, Eric Harvey | ||||||||
3 | SEVCO:00362 | ineffective integration of qualitative and quantitative study components | A mixed methods research bias in which the qualitative research and quantitative research components are not adequately combined. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 2024-02-23 vote 5-0 by Homa Keshavarz, Harold Lehmann, Brian S. Alper, Lenny Vasanthan, Eric Harvey | |||||||||
3 | SEVCO:00363 | inappropriate interpretation of integration of qualitative and quantitative findings | A mixed methods research bias in which the process of combining the results of the constituent analyses is flawed. | This criterion is related to meta-inference, which is defined as the overall interpretations derived from integrating qualitative and quantitative findings (Teddlie and Tashakkori, 2009). Meta-inference occurs during the interpretation of the findings from the integration of the qualitative and quantitative components, and shows the added value of conducting a mixed methods study rather than having two separate studies. (Pluye et al 2018) | Harold Lehmann, Joanne Dehnbostel, Caue Monaco, Muhammad Afzal, Homa Keshavarz, Brian S. Alper, Kenneth Wilkins | 2024-02-23 vote 5-0 by Homa Keshavarz, Harold Lehmann, Brian S. Alper, Lenny Vasanthan, Eric Harvey | 2024-01-19 vote 3-0 by Harold Lehmann, Eric Harvey, Homa Keshavarz BUT then the definition was changed in conference | Pluye, P., Bengoechea, E.G., Granikov, V., Kaur, N., & Tang, D.L. (2018). A World of Possibilities in Mixed Methods: Review of the Combinations of Strategies Used to Integrate Qualitative and Quantitative Phases, Results and Data. INTERNATIONAL JOURNAL OF MULTIPLE RESEARCH APPROACHES. https://www.semanticscholar.org/paper/A-World-of-Possibilities-in-Mixed-Methods%3A-Review-Pluye-Bengoechea/21f292d0cb5cc07a982b240b17ff077fe4646632 Teddlie, C. and Tashakkori, A. (2009) Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. Sage, London. | ||||||
3 | SEVCO:00364 | inadequate handling of inconsistency between qualitative and quantitative findings | A mixed methods research bias in which discrepancies in the results from the qualitative and quantitative components are not adequately addressed. | When integrating the findings from the qualitative and quantitative components, divergences and inconsistencies (also called conflicts, contradictions, discordances, discrepancies, and dissonances) can be found. It is not sufficient to only report the divergences; they need to be explained. Different strategies to address the divergences have been suggested such as reconciliation, initiation, bracketing and exclusion (Pluye et al., 2009b). (Reference - Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, Gagnon M-P, Griffiths F, Nicolau B, O’Cathain A, Rousseau M-C, Vedel I. Mixed Methods Appraisal Tool (MMAT), version 2018. Registration of Copyright (#1148552), Canadian Intellectual Property Office, Industry Canada. Accessed at: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf) | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2024-02-23 vote 5-0 by Homa Keshavarz, Harold Lehmann, Xing Song, Lenny Vasanthan, Eric Harvey | ||||||||
2 | SEVCO:00030 | bias in validation assessment | A bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. | Bias in validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies. A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2023-10-27 vote 5-0 by Brian S. Alper, Eric Harvey, Yasser Sami Amer, Janice Tufte, Harold Lehmann | PROBAST = ROB, which was defined to occur when shortcomings in study design, conduct, or analysis lead to systematically distorted estimates of model predictive performance. PROBAST enables a focused and transparent approach to assessing the ROB and applicability of studies that develop, validate, or update prediction models for individualized predictions. Prediction models are sometimes described as risk prediction models, predictive models, prediction indices or rules, or risk scores. | |||||||
3 | SEVCO:00368 | bias in external validation assessment | A bias in validation assessment using a sample source that differs from those used in the derivation of the procedure. | Validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies. A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure. Bias in validation assessment is defined as a bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. Bias in external validation assessment may be used for absence of any external validation assessment or inadequacy in external validation assessment. | Brian S. Alper, Harold Lehmann, Muhammad Afzal | 2023-11-26 vote 5-0 by Harold Lehmann, Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey | ||||||||
3 | SEVCO:00367 | bias in internal validation assessment | A bias in validation assessment specific to a validation assessment that uses the same sample source that was used in the derivation of the procedure. | Validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies. A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. Model derivation is often based on a portion of data available from a sample source, and internal validation is performed using the same sample data but a different set of data. Whereas external validation is tested in populations that differ from the source used for derivation of the procedure, internal validation is tested in the same population. Bias in validation assessment is defined as a bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose. Bias in internal validation assessment may be used for absence of any internal validation assessment or inadequacy in internal validation assessment. A common cause of bias in internal validation assessment is validation using the same data that was used for derivation. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2023-12-01 vote 5-0 by Xing Song, Javier Bracchiglione, Harold Lehmann, Eric Harvey, Caue Monaco | ||||||||
2 | SEVCO:00018 | DEFERRED: choice-of-question bias | https://www.ijcmr.com/uploads/7/7/4/6/77464738/_ijcmr_628_may_23.pdf "What biases can occur during the planning phase of an RCT? I. Biases that can arise, even before the trial is conducted 1. Choice-of-Question Bias It is one of the most unrecognized types of bias that occur in RCTs. This bias is concealed within the question that the study intends to answer. This bias may not have a stronger impact on the strength of the study but it may affect the generalizability of the study outcomes.6 This bias can take many forms: i. Hidden agenda bias: It occurs when a trial is mounted, not in order to answer a question, but rather to demonstrate a pre-required answer. ii. Cost and convenience bias: It occurs when a study is done on a basis of what we can afford to study, or what is convenient to study, rather than what we really want to study. It can seriously compromise what we choose to study. iii. Funding availability Bias: It occurs where studies tend to concentrate on questions that are more readily fundable, often for a vested or commercial interest.7 Choice-of-question bias Perhaps one of the least recognized forms of bias in an RCT is hidden in the choice of the question that the trial intends to answer. This would not necessarily affect the internal validity of a trial, but may have profound effects on its external validity, or generalizability. This bias can take many forms. Hidden agenda bias occurs when a trial is mounted, not in order to answer a question, but in order to demonstrate a pre-required answer. The unspoken converse may be ‘Don’t do a trial if it won’t show you what you want to find’. This could be called the vested interest bias. 14 Closely related to this is the self fulfiling prophecy bias in which the very carrying out of a trial ensures the desired result. The cost and convenience bias can seriously compromise what we choose to study. When we study what we can afford to study, or what is convenient to study, rather than what we really want to study, or should study, we take resources away from what we know is important. Closely related to this is the funding availability bias where studies tend to concentrate on questions that are more readily fundable, often for a vested or commercial interest. We should always look for the secondary gains search bias which can influence the choice of study, the methodology used, and the ascertainment and dissemination of the results." Chapter 3, p.36 of Wiley text Chapter Title: Bias in randomized controlled trials Jadad AR, Enkin MW. Randomized Controlled Trials Questions, Answers, and Musings Second edition. Published by Blackwell Publishing 2007. Print ISBN:9781405132664. Online ISBN:9780470691922. doi: 10.1002/9780470691922. | 2023-11-03 | ||||||||||
3 | SEVCO:00251 | predetermined result bias | Self-fulfilling prophecy bias, Shape the result bias | |||||||||||
3 | SEVCO:00256 | wrong design bias | ||||||||||||
3 | SEVCO:00257 | population choice bias | ||||||||||||
3 | SEVCO:00258 | intervention choice bias | ||||||||||||
3 | SEVCO:00259 | comparator choice bias | ||||||||||||
3 | SEVCO:00260 | outcome choice bias | ||||||||||||
3 | SEVCO:00391 | predictor choice bias | from PROBAST 2.3 Are all predictors available at the time the model is intended to be used? (for the explanatory variable) | |||||||||||
2 | SEVCO:00370 | early study termination bias | A bias due to the decision to end the study earlier than planned. | Child terms (types of Early Study Termination Bias) may be used to report the reasons for bias in the decision to end the study earlier than planned. Bias resulting from the early study termination may be described with other terms in the code system. | Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin, Harold Lehmann, Joanne Dehnbostel | 2022-04-08 vote 6-0 by nelle.stocquart, nisha mathew, Mario Tristan, Robin Ann Yurk, Harold Lehmann, Joanne Dehnbostel | 2022-04-01 vote 4-1 by Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Robin Ann Yurk, Mario Tristan | 2022-04-01 comment: Term Definition: Simplify so it reads: A bias in the reported results due to early termination of a study resulting in incomplete data collection. | ||||||
3 | SEVCO:00371 | early study termination bias due to competing interests | An early study termination bias due to the decision to end the study being influenced by financial, commercial, legal, political, social, professional, or intellectual interests. | Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin | 2022-04-01 vote 6-0 by Brian S. Alper, Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan | |||||||||
3 | SEVCO:00372 | early study termination bias due to unplanned use of interim analysis | An early study termination bias due to awareness of study results without following a preplanned protocol for how interim results will influence the decision to terminate the study. | Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel | 2022-04-01 vote 5-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan | |||||||||
3 | SEVCO:00373 | early study termination bias due to inappropriate statistical stopping rule | An early study termination bias due to use of an inappropriate model or threshold in the analysis used for determination to end the study. | An example of an inappropriate statistical stopping rule is one that does not account for multiple analyses (i.e. does not use a lower p value threshold) for a conclusion of benefit warranting early termination of the study. | Brian S. Alper, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel | 2022-04-01 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan, Robin Ann Yurk | ||||||||
3 | SEVCO:00374 | early study termination bias due to external factors | An early study termination bias due to a decision to end the study based on factors other than the results of interim analysis. | Examples of external factors may include cessation of funding, and safety or efficacy results reported by other studies. | Brian S. Alper, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel | 2022-04-01 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan, Robin Ann Yurk | ||||||||
2 | SEVCO:00398 | cognitive bias influencing study design | A bias in the study design due to the subjective nature of human decision-making. | [Study design](https://fevir.net/resources/CodeSystem/27270#SEVCO:01000) is defined as "a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis." This plan specification is the result of human decision-making. Any human decision-making may be influenced by cognitive bias, recognized or not. | Brian S. Alper, Harold Lehmann, Airton Stein, Joanne Dehnbostel, Homa Keshavarz, Khalid Shahin, Cauê Monaco | 2024-12-06 vote 7-1 by Homa Keshavarz, Bhagvan Kommadi, Sheyu Li, Cauê Monaco, Javier Bracchiglione, Lara Kahaleh, Saphia Mokrane, Airton Tetelbom Stein 2024-12-13 vote 4-1 by Saphia Mokrane, Lara Kahaleh, Bhagvan Kommadi, Paul Whaley, Airton Tetelbom Stein | 2024-12-06 comment re: "cognitive bias influencing study design" = "A bias in the study design due to the subjective nature of human decision-making."1No) The comments are very confusing. 2024-12-13 comment re: "cognitive bias influencing study design" = "A bias in the study design due to the subjective nature of human decision-making."1No) Not intuitive or self explanatory | This term was added after evaluation of [A comprehensive item bank of internal validity issues of relevance to in vitro toxicology studies](https://www.tandfonline.com/doi/full/10.1080/2833373X.2024.2418045) which included a concept of 'Metabias' defined as 'A general distorting influence on investigator decision-making that may bias the results or findings of a study' | DEFERRED | |||||
2 | SEVCO:00399 | late study termination bias | A bias due to the decision to end the study later than planned. | When the study is allowed to continue longer than a pre-planned stopping point, then there is a risk of bias in that the results from the study may differ from the results that would occur according to the study plan. When the study termination is pre-planned to vary by events or results of interim analyses (e.g., in an adaptive design), this is not a 'late study termination bias'. | Brian S. Alper, Harold Lehmann, Airton Stein, Joanne Dehnbostel, Homa Keshavarz, Khalid Shahin, Cauê Monaco | 2024-12-13 vote 5-0 | 2024-12-06 vote 7-1 by Homa Keshavarz, Sheyu Li, Cauê Monaco, Javier Bracchiglione, Lara Kahaleh, Bhagvan Kommadi, Saphia Mokrane, Airton Tetelbom Stein | 2024-12-06 comment re: "late study termination bias" = "A bias due to the decision to end the study later than planned."1No) This bias highly relies on the reason of the late termination, which warrants discussion. It is also helpful to differentiate two situations, i.e., post-trial follow up (analysis based on the follow up data after the termination of the trial) and delayed termination based on the adaptive design (it is not a real delay). | This term was added after evaluation of [A comprehensive item bank of internal validity issues of relevance to in vitro toxicology studies](https://www.tandfonline.com/doi/full/10.1080/2833373X.2024.2418045) which included a concept of 'Timing of study termination bias' defined as a bias due to the decision to end the study earlier or later than planned, and modified from the original SEVCO term “[early study termination bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00370),” defined as “a bias due to the decision to end the study earlier than planned” | |||||
1 | SEVCO:00027 | conflict of interest | A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have motivations that could compromise their impartiality. | Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest". | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco | 2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey | 2023-11-10 vote 4-1 by Brian S. Alper, Harold Lehmann, Janice Tufte, Eric Harvey, Javier Bracchiglione | 2023-11-10 comment: I do not think the term should be limited to goals and motivations; this seems judgmental and manipulative. COI can be based on intellectual property and/or current research work along the same subject, where a researcher or partner is too involved with a project or paper on the same subject. | MASTER-28. Conflict of interests were declared and absent | |||||
2 | SEVCO:00355 | financial conflict of interest | A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have financial motivations that could compromise their impartiality. | Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. The financial motivations may be direct (e.g. salary or consulting fees) or indirect (e.g. stock interests or spousal financial interests). Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest". | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco | 2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey | 2023-12-01 comment: I agree with the definition of the term, but I think it would be better to be more explicit about what "financial" means in the comments for application (e.g. salary, stocks, paid assistance to congress) | |||||||
2 | SEVCO:00252 | nonfinancial conflict of interest | A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have non-financial motivations that could compromise their impartiality. | Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. The non-financial motivations may be related to social, political, professional, ideological, or other factors. Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest". | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco | 2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey | 2023-12-01 comment: I agree with the definition of the term, but I think it would be better to be more explicit about what "non-financial" means in the comments for application (e.g. intellectual) | |||||||
1 | SEVCO:00007 | rating of bias risk | The result of a qualitative assessment of the likelihood and potential impact of systematic distortion in research results. | [Bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00001) is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]). All terms defined here are qualitative classifications of the risk of bias. Although a risk of bias can conceptually be quantified, there is no commonly agreed method on how to quantify a risk of bias. | Brian S. Alper, Joanne Dehnbostel, Carlos Alva-Diaz, Homa Keshavarz | 2024-09-20 vote 6-0 by Javier Bracchiglione, C P Ooi, Bhagvan Kommadi, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | 2024-08-30 vote 7-0 by Saphia Mokrane, Sheyu Li, Lenny Vasanthan, Eric Harvey, Janice Tufte, Homa Keshavarz, Airton Tetelbom Stein BUT THEN DEFINITION CHANGED to add "The result of " based on feedback on child terms. | |||||||
2 | SEVCO:00186 | low risk of bias | The result of a qualitative assessment that there is a low likelihood and potential impact of systematic distortion in research results. | A 'low risk of bias' rating denotes a judgment that there are no serious concerns for bias. The 'potential impact of systematic distortion in research results' may include the impact on the clinical interpretation or conclusion of the study findings. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz | 2024-09-20 vote 8-0 by Carlos Alva-Diaz, Saphia Mokrane, Javier Bracchiglione, C P Ooi, Bhagvan Kommadi, Lenny Vasanthan, Eric Harvey, Airton Tetelbom Stein | 2024-09-06 vote 5-1 by Carlos Alva-Diaz, Cauê Monaco, Sheyu Li, janice tufte, Eric Harvey, Javier Bracchiglione | 2024-09-06 comment re: "low risk of bias" = "A qualitative assessment that there is a low likelihood and potential impact of systematic distortion in research results."1N) A qualitative judgement that the systematic distortion is unlikely to impact the clinical interpretation or conclusion of the study findings. The rating of low, moderate, and high are subjective judgement or the result of an assessment based on a particular context. It is thus not an assessment per se. As a clinician, I would prefer the term ending up with a clinical interpretation or decision. But other conclusion can be possible even for clinical research. | In the Cochrane Handbook (https://training.cochrane.org/handbook/current/chapter-08#section-8-7): Once the signalling questions are answered, the next step is to reach a risk-of-bias judgement, and assign one of three levels to each domain: Low risk of bias; Some concerns; or High risk of bias. | |||||
2 | SEVCO:00187 | moderate risk of bias | The result of a qualitative assessment that there is some likelihood and potential impact of systematic distortion in research results, and this likelihood and potential impact is between low and high. | The 'potential impact of systematic distortion in research results' may include the impact on the clinical interpretation or conclusion of the study findings. The term 'unclear risk of bias' is not explicitly defined because it could be used to represent 'moderate risk of bias' or [undetermined risk of bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00192). | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Harold Lehmann, Carlos Alva-Diaz | 2024-09-20 vote 6-0 by Carlos Alva-Diaz, Saphia Mokrane, Javier Bracchiglione, C P Ooi, Bhagvan Kommadi, Lenny Vasanthan | 2024-09-06 vote 5-2 by Airton Tetelbom Stein, Carlos Alva-Diaz, Cauê Monaco, Sheyu Li, janice tufte, Eric Harvey, Javier Bracchiglione | 2024-09-06 comments re: "moderate risk of bias" = "A qualitative assessment that there is some likelihood and potential impact of systematic distortion in research results, but this likelihood and potential impact is neither low nor high."1N) I think that the term 'undetermined risk of bias' is opposed to any other positive assertions of concern (low, moderate or high risk of bias). 2N) Same as above. But it is a bit confusing to state it as 'neither low nor high', which leads people to have to refer to the definitions of low and high. | In the Cochrane Handbook (https://training.cochrane.org/handbook/current/chapter-08#section-8-7): Once the signalling questions are answered, the next step is to reach a risk-of-bias judgement, and assign one of three levels to each domain: Low risk of bias; Some concerns; or High risk of bias. | |||||
2 | SEVCO:00188 | high risk of bias | The result of a qualitative assessment that there is a high likelihood and potential impact of systematic distortion in research results. | A 'high risk of bias' rating denotes a judgment that there are serious concerns for bias. The 'potential impact of systematic distortion in research results' may include the impact on the clinical interpretation or conclusion of the study findings. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz | 2024-09-20 vote 6-0 by Carlos Alva-Diaz, Saphia Mokrane, Javier Bracchiglione, C P Ooi, Bhagvan Kommadi, Lenny Vasanthan | 2024-09-06 vote 6-1 by Airton Tetelbom Stein, Carlos Alva-Diaz, Cauê Monaco, Sheyu Li, janice tufte, Eric Harvey, Javier Bracchiglione | 2024-09-06 comment re: "high risk of bias" = "A qualitative assessment that there is a high likelihood and potential impact of systematic distortion in research results."1N) A qualitative judgement that the systematic distortion is unlikely to impact the clinical interpretation or conclusion of the study findings. The rating of low, moderate, and high are subjective judgement or the result of an assessment based on a particular context. It is thus not an assessment per se. As a clinician, I would prefer the term ending up with a clinical interpretation or decision. But other conclusion can be possible even for clinical research. | In the Cochrane Handbook (https://training.cochrane.org/handbook/current/chapter-08#section-8-7): Once the signalling questions are answered, the next step is to reach a risk-of-bias judgement, and assign one of three levels to each domain: Low risk of bias; Some concerns; or High risk of bias. | |||||
2 | SEVCO:00190 | critical risk of bias | The result of a qualitative assessment that there is such a high likelihood and potential impact of systematic distortion in research results that the findings are not valid. | The colloquial term 'fatal flaw' is sometimes used to report a critical risk of bias, signifying no further use of the study being assessed. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Airton Stein, Muhammad Afzal | 2024-09-20 vote 7-0 by Carlos Alva-Diaz, Saphia Mokrane, Javier Bracchiglione, C P Ooi, Bhagvan Kommadi, Lenny Vasanthan, Eric Harvey | ||||||||
2 | SEVCO:00192 | undetermined risk of bias | There is insufficient information to make a qualitative assessment regarding the likelihood and potential impact of systematic distortion in research results. | Unlike the terms [low risk of bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00186), [moderate risk of bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00187), and [high risk of bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00188), the term 'undetermined risk of bias' does not express a positive assertion of concern. The term 'unclear risk of bias' is not explicitly defined because it could be used to represent 'undetermined risk of bias' or [moderate risk of bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00187). Some guidance has suggested to use 'unclear risk of bias' to mean 'undetermined risk of bias' but others have interpreted it to mean [moderate risk of bias](https://fevir.net/resources/CodeSystem/27270#SEVCO:00187). The 'potential impact of systematic distortion in research results' may include the impact on the clinical interpretation or conclusion of the study findings. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Harold Lehmann, Carlos Alva-Diaz | 2024-09-20 vote 6-0 by Carlos Alva-Diaz, Saphia Mokrane, Javier Bracchiglione, C P Ooi, Bhagvan Kommadi, Lenny Vasanthan | 2024-09-06 vote 5-2 by Airton Tetelbom Stein, Carlos Alva-Diaz, Cauê Monaco, Sheyu Li, janice tufte, Eric Harvey, Javier Bracchiglione | 2024-09-06 comments re: "undetermined risk of bias" = "A qualitative assessment that there is insufficient information to make a qualitative assessment regarding the likelihood and potential impact of systematic distortion in research results."1N) The term 'undetermined risk of bias' is opposed to any other positive assertions of concern (low, moderate or high risk of bias). I think that a better explication is "As opposed to the term low, moderate or high risk of bias, the term 'undetermined risk of bias' does not express a positive assertion of concern." 2N) Same as above. 3Y) is this a cop out though possibly? 4Y) I would suggest "unclear risk of bias" as an alternative term. 2024-09-20 comment re: "undetermined risk of bias" = "There is insufficient information to make a qualitative assessment regarding the likelihood and potential impact of systematic distortion in research results."Yes As I understand, at least the Cochrane RoB1 tool guidance specified that "unclear risk of bias" referred to undetermined and not moderate. Its use, however, was often seen as moderate risk of bias, but this was not the guidance. I don't know if you are thinking of other tools that also specify "unclear risk of bias" as a category. | ||||||
1 | SEVCO:00193 | rating of factor presence | The result of a qualitative assessment of the likelihood of a situation or event. | For approaches to assessment of the risk of bias (which includes bias presence and potential impact) in which multiple factors are separately considered, the 'rating of factor presence' may be applied to each of the factors considered. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Carlos Alva-Diaz, Homa Keshavarz | 2024-10-04 vote 5-0 by Joanne Dehnbostel, Saphia Mokrane, Airton Tetelbom Stein, Homa Keshavarz, Eric Harvey | 2024-09-27 vote 5-1 by Joanne Dehnbostel, Brian S. Alper, Carlos Alva-Diaz, Homa Keshavarz, Arnav Agarwal, Airton Tetelbom Stein | 2024-09-27 comment re: "rating of presence of cause of bias" = "The result of a qualitative assessment of the likelihood of a situation leading to systematic distortion in research results."1Yes) Possible alternatives: "rating of presence of biasing factor" "judgment regarding presence of factor introducing possible bias" "presence or absence of possible biasing factor" 2 No) Rating of presence of cause of bias just doesn't read well. Rating of factor presence was better, rating of bias presence would work. Likelihood of bias presence would also work. | ||||||
2 | SEVCO:00194 | factor present | The result of a qualitative assessment that a situation exists or an event has occurred. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz | 2024-10-04 vote 5-0 by Joanne Dehnbostel, Saphia Mokrane, Airton Tetelbom Stein, Homa Keshavarz, Eric Harvey | |||||||||
2 | SEVCO:00195 | factor likely present | The result of a qualitative assessment that a situation probably exists or an event has probably occurred. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz | 2024-10-04 vote 5-0 by Joanne Dehnbostel, Saphia Mokrane, Airton Tetelbom Stein, Homa Keshavarz, Eric Harvey | |||||||||
2 | SEVCO:00196 | factor likely absent | The result of a qualitative assessment that a situation probably does not exist or an event has probably not occurred. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz | 2024-10-04 vote 5-0 by Joanne Dehnbostel, Saphia Mokrane, Airton Tetelbom Stein, Homa Keshavarz, Eric Harvey | |||||||||
2 | SEVCO:00197 | factor absent | The result of a qualitative assessment that a situation does not exist or an event has not occurred. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz | 2024-10-04 vote 5-0 by Joanne Dehnbostel, Saphia Mokrane, Airton Tetelbom Stein, Homa Keshavarz, Eric Harvey | |||||||||
2 | SEVCO:00199 | undetermined factor presence or absence | There is insufficient information to make a qualitative assessment regarding whether a situation exists or an event has occurred. | The presence or absence of a factor may be undetermined because a qualitative assessment was attempted and concluded that there was insufficient information to rate the factor presence, or because a qualitative assessment was not done. | Brian S. Alper, Joanne Dehnbostel, Homa Keshavarz, Harold Lehmann | 2024-10-18 vote 5-0 by Saphia Mokrane, Homa Keshavarz, Airton Tetelbom Stein, Lenny Vasanthan, Eric Harvey | 2024-10-11 vote 7-2 by Javier Bracchiglione, Saphia Mokrane, Harold Lehmann, Arnav Agarwal, Lenny Vasanthan, Joanne Dehnbostel, Airton Tetelbom Stein, Homa Keshavarz, Eric Harvey | 2024-10-11 comments re: "factor presence or absence unclear" = "There is insufficient information to make a qualitative assessment regarding whether a situation exists or an event has occurred."1No) To be consistent with the previous terms under "rating of factor presence", I would suggest rephrasing the term, as for example : "factor unclearly present of absent" Suggestion for the definition : "The result of a qualitative assessment of the presence or the absence of a situation or an event due to insufficient information to make the assessment." 2No) "Situation" is especially vague; "event" mildly suggestive. Sounds like we need a Comment for application---either here or in the parent term---that puts these terms into perspective. E.g., "Assessment of presence of a factor depends on the definition of the factor. Assessing such presence depends on whether a situation | ||||||
1 | SEVCO:00200 | rating of bias direction | The result of a qualitative assessment of the direction of influence on research results. | The meaning of direction of influence on research results changes with the type of estimation. For an estimation of comparative effect, the direction may be favoring a side being compared or towards no effect. For an estimation of prevalence, the direction may be overestimation or underestimation. Bias is defined as a systematic distortion of research results. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel | 2024-11-08 vote 6-0 by Saphia Mokrane, Homa Keshavarz, Lara Kahaleh, Bhagvan Kommadi, Airton Tetelbom Stein, Eric Harvey | https://training.cochrane.org/handbook/current/chapter-08 RoB 2 includes optional judgements of the direction of the bias for each domain and overall. For some domains, the bias is most easily thought of as being towards or away from the null. For example, high levels of switching of participants from their assigned intervention to the other intervention may have the effect of reducing the observed difference between the groups, leading to the estimated effect of adhering to intervention (see Section 8.2.2) being biased towards the null. For other domains, the bias is likely to favour one of the interventions being compared, implying an increase or decrease in the effect estimate depending on which intervention is favoured. Examples include manipulation of the randomization process, awareness of interventions received influencing the outcome assessment and selective reporting of results. If review authors do not have a clear rationale for judging the likely direction of the bias, they should not guess it and can leave this response blank. Cite this chapter as: Higgins JPT, Savović J, Page MJ, Elbers RG, Sterne JAC. Chapter 8: Assessing risk of bias in a randomized trial [last updated October 2019]. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.5. Cochrane, 2024. Available from www.training.cochrane.org/handbook. https://onlinelibrary.wiley.com/doi/10.1111/acem.12255 | |||||||
2 | SEVCO:00201 | risk of bias favoring experimental exposure | The result of a qualitative assessment that a bias, if present, would influence the effect estimate in a direction that suggests greater benefit or less harm from the experimental exposure compared to the comparator exposure. | The experimental exposure is the exposure of interest, whether the study is interventional or observational, whether the exposure of interest is an intervention or an exposure, and whether the outcome of interest is desirable or undesirable. The desirability of the outcome of interest determines whether an increase or decrease in the value of the effect estimate is considered beneficial or harmful. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Airton Stein | 2024-11-22 vote 6-0 by Saphia Mokrane, Sheyu Li, Homa Keshavarz, Eric Harvey, Lara Kahaleh, Airton Tetelbom Stein | ||||||||
2 | SEVCO:00202 | risk of bias favoring comparator exposure | The result of a qualitative assessment that a bias, if present, would influence the effect estimate in a direction that suggests greater benefit or less harm from the comparator exposure compared to the experimental exposure. | The experimental exposure is the exposure of interest, whether the study is interventional or observational, whether the exposure of interest is an intervention or an exposure, and whether the outcome of interest is desirable or undesirable. The desirability of the outcome of interest determines whether an increase or decrease in the value of the effect estimate is considered beneficial or harmful. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Airton Stein | 2024-11-22 vote 5-0 by Sheyu Li, Homa Keshavarz, Eric Harvey, Lara Kahaleh, Airton Tetelbom Stein | ||||||||
2 | SEVCO:00203 | risk of bias towards the null hypothesis | The result of a qualitative assessment that a bias, if present, would increase the likelihood of failing to reject the null hypothesis. | A risk of bias towards the null hypothesis increases the likelihood of a type 2 error (mistaken failure to reject the null hypothesis when the null hypothesis is false). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Airton Stein, Homa Keshavarz | 2024-11-22 vote 6-0 by Saphia Mokrane, Sheyu Li, Homa Keshavarz, Eric Harvey, Lara Kahaleh, Airton Tetelbom Stein | "there are many exceptions to the heuristic for which bias towards the null cannot be assumed." Yland JJ, Wesselink AK, Lash TL, Fox MP. Misconceptions About the Direction of Bias From Nondifferential Misclassification. Am J Epidemiol. 2022 Jul 23;191(8):1485-1495. doi: 10.1093/aje/kwac035. Erratum in: Am J Epidemiol. 2022 Nov 19;191(12):2123. doi: 10.1093/aje/kwac129. PMID: 35231925; PMCID: PMC9989338. https://pmc.ncbi.nlm.nih.gov/articles/PMC9989338/ | |||||||
2 | SEVCO:00204 | risk of bias away from the null hypothesis | The result of a qualitative assessment that a bias, if present, would increase the likelihood of rejecting the null hypothesis. | A risk of bias away from the null hypothesis increases the likelihood of a type 1 error (mistaken rejection of the null hypothesis when the null hypothesis is true). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Airton Stein, Homa Keshavarz | 2024-11-22 vote 6-0 by Saphia Mokrane, Sheyu Li, Homa Keshavarz, Eric Harvey, Lara Kahaleh, Airton Tetelbom Stein | ||||||||
2 | SEVCO:00205 | risk of bias direction unpredictable | The result of a qualitative assessment that a bias, if present, cannot be classified as to whether the bias would lead to an increase or a decrease in the estimate. | Brian S. Alper | 2024-11-22 vote 5-0 by Saphia Mokrane, Sheyu Li, Homa Keshavarz, Lara Kahaleh, Airton Tetelbom Stein | |||||||||
1 | SEVCO:00206 | rating of potential influence on research results | The result of a qualitative assessment of, if a cause of bias were present, the potential impact on the research results. | For approaches to assessment of the risk of bias (which includes bias presence and potential impact) in which multiple factors are separately considered, the 'rating of potential influence' may be applied to each of the factors considered. | Brian S. Alper, Airton Stein, Joanne Dehnbostel | 2024-10-25 vote 7-0 by Saphia Mokrane, Harold Lehmann, Lenny Vasanthan, Bhagvan Kommadi, Homa Keshavarz, Eric Harvey, Airton Tetelbom Stein | 2024-10-18 vote 5-0 by Saphia Mokrane, Homa Keshavarz, Airton Tetelbom Stein, Lenny Vasanthan, Eric Harvey BUT THEN THE TERM CHANGED | 2024-10-18 comment re: "rating of potential influence" = "The result of a qualitative assessment of, if a cause of bias were present, the potential impact on the research results."1Yes) the 'rating of potential influence'' => the 'rating of potential influence' | ||||||
2 | SEVCO:00207 | high potential to influence research results | High certainty that a cause of bias, if present, has an impact on the research results. | The rating of potential influence on research results is a qualitative assessment; there is no agreed-upon quantitative scale of certainty. The rating of 'high potential to influence research results' matches an answer of Yes (from an option list of Yes/Likely Yes/Likely No/No) in the ROB2 risk of bias assessment tool. | Brian S. Alper, Joanne Dehnbostel | 2024-11-08 vote 6-0 by Saphia Mokrane, Homa Keshavarz, Lara Kahaleh, Bhagvan Kommadi, Airton Tetelbom Stein, Eric Harvey | 2024-11-01 vote 5-0 by Harold Lehmann, Saphia Mokrane, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein BUT THEN COMMENT ADDED | 2024-11-01 comment: 1Yes) Consider a Comment for application that says something like, "There is no agreed-upon quantitative scale on which to base this qualitative assessment." | ||||||
2 | SEVCO:00208 | some potential to influence research results | Low to moderate certainty that a cause of bias, if present, has an impact on the research results. | The rating of potential influence on research results is a qualitative assessment; there is no agreed-upon quantitative scale of certainty. The rating of 'some potential to influence research results' matches an answer of Likely Yes (from an option list of Yes/Likely Yes/Likely No/No) in the ROB2 risk of bias assessment tool. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin | 2024-11-22 vote 6-0 by Saphia Mokrane, Sheyu Li, Homa Keshavarz, Eric Harvey, Airton Tetelbom Stein, Lara Kahaleh | 2024-11-01 vote 5-0 by Harold Lehmann, Saphia Mokrane, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein BUT THEN COMMENT ADDED 2024-11-08 vote 6-0 by Saphia Mokrane, Homa Keshavarz, Lara Kahaleh, Bhagvan Kommadi, Airton Tetelbom Stein, Eric Harvey BUT THEN TERM CHANGED DUE TO COMMENT 2024-11-15 vote 6-1 by Brian S. Alper, Cauê Monaco, Airton Tetelbom Stein, Homa Keshavarz, Lenny Vasanthan, Yaowaluk Ngoenwiwatkul, Eric Harvey | 2024-11-01 comment: 1Yes) Consider a Comment for application that says something like, "There is no agreed-upon quantitative scale on which to base this qualitative assessment." 2024-11-08 comment re: "some potential to influence research results" = "Low to moderate certainty that a cause of bias, if present, has an impact on the research results."1Yes) instead of some, it can be worded as significant potential 2024-11-15 comment re: "significant potential to influence research results" = "Low to moderate certainty that a cause of bias, if present, has an impact on the research results."1No) Significant can be interpreted as substantial or important -- that does not match the described use of 'Likely Yes' as distinct from 'Yes' in response to whether there is a potential to impact research results. | ||||||
2 | SEVCO:00209 | limited potential to influence research results | Low to moderate certainty that a cause of bias, if present, does NOT have an impact on the research results. | The rating of potential influence on research results is a qualitative assessment; there is no agreed-upon quantitative scale of certainty. The rating of 'limited potential to influence research results' matches an answer of Likely No (from an option list of Yes/Likely Yes/Likely No/No) in the ROB2 risk of bias assessment tool. | Brian S. Alper, Joanne Dehnbostel | 2024-11-08 vote 6-0 by Saphia Mokrane, Homa Keshavarz, Lara Kahaleh, Bhagvan Kommadi, Airton Tetelbom Stein, Eric Harvey | 2024-11-01 vote 4-1 by Harold Lehmann, Saphia Mokrane, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein | 2024-11-01 comment re: "Low to moderate certainty that a cause of bias, if present, has little to no impact on the research results.": 1No) Definition is too similar to 'some potential' | ||||||
2 | SEVCO:00210 | no potential to influence research results | High certainty that a cause of bias, if present, has NO impact on the research results. | The rating of potential influence on research results is a qualitative assessment; there is no agreed-upon quantitative scale of certainty. The rating of 'no potential to influence research results' matches an answer of No (from an option list of Yes/Likely Yes/Likely No/No) in the ROB2 risk of bias assessment tool. | Brian S. Alper, Joanne Dehnbostel | 2024-11-08 vote 6-0 by Saphia Mokrane, Homa Keshavarz, Lara Kahaleh, Bhagvan Kommadi, Airton Tetelbom Stein, Eric Harvey | 2024-11-01 vote 4-1 by Harold Lehmann, Saphia Mokrane, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein | 2024-11-01 comment re: "no potential to influence research results" = "High certainty that a cause of bias, if present, has no impact on the research results."1No) Weird that both "High potential" and "No potential" start with "High certainty..." The negation in the definition here is too buried | ||||||
1 | STATO:0000039 | statistic | An information content entity that is a formalization of relationships between variables and value specification. | The 'statistic' does not include the numerical value for which the statistic is used--that would be the statistic value, and the 'statistic' does not include the model characteristics. | Brian S. Alper, Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin | revision 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboaca; original approval 6/6 as of 9/27/2021: Harold Lehmann, Bhagvan Kommadi, Louis Leff, Janice Tufte, Joanne Dehnbostel, Mario Tristan | ||||||||
2 | STATO:0000668 | absolute value | A statistic that represents the distance of a value from zero. | The vertical bar symbol is used around the value to denote the absolute value, e.g. |x|, such that if x = -3, then |x| = 3. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann | 2024-04-15 vote 5-0 by Harold Lehmann, Janice Tufte, Eric Harvey, Homa Keshavarz, Lenny Vasanthan | ||||||||
2 | STATO:0000047 | count | A statistic that represents the number of instances or occurrences of something. | A count can only be denoted by non-negative integer values. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins | 6/6 as of 9/27/2021: Harold Lehmann, Bhagvan Kommadi, Louis Leff, Janice Tufte, Joanne Dehnbostel, Mario Tristan | ||||||||
2 | STATO:0000669 | sum | A statistic that represents the result of adding all the values in a collection of values. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin | 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboaca | |||||||||
2 | STATO:0000151 | maximum observed value | A statistic that represents the largest non-null value in a collection of values that can be ordered by magnitude. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin | 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboaca | |||||||||
2 | STATO:0000150 | minimum observed value | A statistic that represents the smallest non-null value in a collection of values that can be ordered by magnitude. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin | 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboaca | |||||||||
2 | STATO:0000666 | maximum possible value | A statistic that represents the largest value that could occur. | This term may be used to denote the upper limit of a scale or score. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin | 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboaca | ||||||||
2 | STATO:0000667 | minimum possible value | A statistic that represents the smallest value that could occur. | This term may be used to denote the lower limit of a scale or score. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin | 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboaca | ||||||||
2 | STATO:0000029 | measure of central tendency | A statistic that represents a central value for a set of data. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra | 6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen | |||||||||
3 | STATO:0000573 | mean | A measure of central tendency calculated as the sum of a set of values divided by the number of values in the set. | A=sum[Ai] / n where i ranges from 1 to n and Ai represents the value of individual observations. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra | 6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen | Measure of Central Tendency | keskiarvo | ||||||
4 | STATO:0000664 | mean of differences | A mean of values in which each value is the subtraction of one quantity from another. | The primary use of this term is in analyzing within-individual differences. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Janice Tufte, Muhammad Afzal, Kenneth Wilkins | 2021-12-15 vote 5-0 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Paola Rosati, Brian S. Alper | 2021-12-01 vote 6-1 by Louis Leff, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte, C P Ooi | 2021-12-01 comment: 'Difference in means' may be more appropriate. 'Mean value from one population subtract the mean value of another population' may be clearer reflecting the definition | http://purl.obolibrary.org/obo/STATO_0000664 | Measure of Central Tendency | ||||
4 | STATO:0000658 | mean time-to-event | A mean of values in which each value is the duration of time between the start of observation and the occurrence of an event. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao | 2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey | |||||||||
3 | STATO:0000396 | geometric mean | A measure of central tendency calculated as the nth root of the product of all of the observations in a data set (n being the number of all observations). | For n observations with values x1, x2, … xn, the product of all the values P = x1 * x2 … xn [also expressed as P = (x1)(x2)...(xn)]. The nth root of the product = (P)^(1/n). | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra | 6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen | Measure of Central Tendency | |||||||
3 | STATO:0000574 | median | A measure of central tendency equal to the middle value (or mean of the two middle values) of a set of ordered data. | The median value is equal to the middle value of a set of ordered data with an odd number of values. The median value is calculated as the mean of the two middle values of a set of ordered data with an even number of values. The median is sometimes called the second quartile or fiftieth percentile. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte | 2021-12-15 vote 6-0 by Robin Ann Yurk, Muhammad Afzal, Harold Lehmann, Janice Tufte, Paola Rosati, Khalid Shahin | 6-1 on 2021-11-01 by Louis Leff, Vignesh Subbian, Pentti Nieminen, Bhagvan Kommadi, Janice Tufte, Sorana D. Bolboaca, Robin Ann Yurk 2021-12-01 vote 5-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte | 2021-11-01 comment: the definition is appropriate. Suggest use Alternative terms: center value, statistical median or middle value. I don't recommend using fiftieth percentile or second quartile 2021-12-01 comment: I would change definition to: A measure of central tendency equal to the middle value of a set of ordered data with an odd number of values. It could be calculated also as the mean of the two middle values of a set of ordered data with an even number of values. ((Perhaps simpler as: A measure of central tendency equal to the middle value of a set of ordered data. In a set of ordered data with an even number of values, the middle value is calculated as the mean of the two middle values.)) | Measure of Central Tendency | |||||
4 | STATO:0000659 | median time-to-event | A median of values in which each value is the duration of time between the start of observation and the occurrence of an event. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao | 2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey | |||||||||
3 | STATO:0000033 | mode | A measure of central tendency that is the most frequently occurring value in a data set. If no value is repeated, there is no mode. If more than one value occurs with the same greatest frequency, each of these values is a mode. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte | 7/7 on 2021-11-01 by Louis Leff, Vignesh Subbian, Pentti Nieminen, Bhagvan Kommadi, Janice Tufte, Sorana D. Bolboaca, Robin Ann Yurk | Measure of Central Tendency | ||||||||
3 | STATO:0000397 | harmonic mean | A measure of central tendency calculated by dividing the total number of observations by the sum of the reciprocals of each observed value. | Harmonic Mean = N/(1/a1+1/a2+1/a3+1/a4+...+1/aN) where a(i)= Individual observed value and N = Sample size (Number of observations) | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal | 2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey, Yuan Gao | STATO: The harmonic mean is a kind of mean which is calculated by dividing the total number of observations by the reciprocal of each number in a series. Harmonic Mean = N/(1/a1+1/a2+1/a3+1/a4+.......+1/aN) where a(i)= Individual score and N = Sample size (Number of scores) | |||||||
2 | STATO:0000291 | quantile | A statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points. | Quantile is a type of statistic but not used without specification of the portion it represents. Typically, the specification of the portion it represents includes both the number of equal portions (e.g., percentile for 100 equal portions, or quartile for 4 equal portions) and selection of one of these portions (e.g., 25th percentile or first quartile). For common uses in communicating statistic values, more specific types of quantiles (such as percentile, decile, or quartile) would be used instead of the term *quantile*. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2024-04-29 vote 6-0 by Lenny Vasanthan, Brian S. Alper, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant | 2024-04-15 vote 2-1 by Lenny Vasanthan, Harold Lehmann, Eric Harvey 2024-04-22 vote 5-1 by Lenny Vasanthan, Homa Keshavarz, Eric Harvey, Harold Lehmann, Sheyu Li, Khalid Shahin | 2024-04-15 comment: Can we add information about quartiles also here. eg. Quartile is a type of quantile that divides the distribution into four equal halves (Q1, Q2, Q3, Q4) 2024-04-22 comment: It is confusing in the comments whether the quantile refers to 25% percentile, 50% percentile and 75% percentile. What is the difference between quantile and percentile? | STATO-a quantile is a data item which corresponds to specific elements x in the range of a variate X. the k-th n-tile P_k is that value of x, say x_k, which corresponds to a cumulative frequency of Nk/n (Kenney and Keeping 1962). If n=4, the quantity is called a quartile, and if n=100, it is called a percentile. | |||||
3 | STATO:0000293 | percentile | A quantile in which the specific portion of the number of data points is expressed as a percentage. | Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points. Percentile is a type of statistic but not used to define a statistic value without specification of the portion it represents. For example, one may report a fortieth percentile (40%ile) but one does not report a percentile without specification of which percentile. 40% of the data is at or below the 40%ile. Most statistic values can be reported in FHIR Evidence Resources with a statisticType element including the SEVCO term as a CodeableConcept. To report a specific percentile (such as the fortieth percentile), one may use the attributeEstimate element containing a type element with the SEVCO term for percentile as a CodeableConcept and a level element with the corresponding value (such as 40). | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2024-04-29 vote 6-0 by Lenny Vasanthan, Brian S. Alper, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant | 2024-04-22 vote 5-1 by Lenny Vasanthan, Homa Keshavarz, Eric Harvey, Harold Lehmann, Sheyu Li, Khalid Shahin | 2024-04-22 comment: It is unclear if it is OK to say 50% percentile, or it is recommended to say 50% percentile. | STATO-a percentile is a quantile which splits data into sections accrued of 1% of data, so the first percentile delineates 1% of the data, the second quartile delineates 2% of the data and the 99th percentile, 99 % of the data | |||||
3 | STATO:0000292 | decile | A quantile in which the specific portion of the number of data points is expressed as a number of tenths. | Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points. Decile is a type of statistic but not used to define a statistic value without specification of the portion it represents. For example, one may report a fourth decile but one does not report a decile without specification of which decile. 40% of the data is at or below the fourth decile. Most statistic values can be reported in FHIR Evidence Resources with a statisticType element including the SEVCO term as a CodeableConcept. To report a specific decile (such as the fourth decile), one may use the attributeEstimate element containing a type element with the SEVCO term for decile as a CodeableConcept and a level element with the corresponding value (such as 4). | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2024-04-22 vote 6-0 by Lenny Vasanthan, Homa Keshavarz, Eric Harvey, Harold Lehmann, Sheyu Li, Khalid Shahin 2024-04-29 vote 6-0 by Lenny Vasanthan, Brian S. Alper, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant | STATO-a decile is a quantile where n=10 and which splits data into sections accrued of 10% of data, so the first decile delineates 10% of the data, the second decile delineates 20% of the data and the nineth decile, 90 % of the data | |||||||
3 | STATO:0000152 | quartile | A quantile in which the specific portion of the number of data points is expressed as a number of fourths. | Quantile is defined as a statistic that represents the value for which the number of data points at or below it constitutes a specific portion of the total number of data points. Quartile is a type of statistic but not used to define a statistic value without specification of the portion it represents. For example, one may report a third quartile but one does not report a quartile without specification of which quartile. 75% of the data is at or below the third quartile. The second quartile is also called the median. To report the first quartile, use the SEVCO term for [first quartile](https://fevir.net/resources/CodeSystem/27270#TBD:first-quartile). To report the second quartile, use the SEVCO term for [median](https://fevir.net/resources/CodeSystem/27270#STATO:0000574). To report the third quartile, use the SEVCO term for [third quartile](https://fevir.net/resources/CodeSystem/27270#TBD:third-quartile). To report the fourth quartile, use the SEVCO term for maximum observed value. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2024-04-22 vote 6-0 by Lenny Vasanthan, Homa Keshavarz, Eric Harvey, Harold Lehmann, Sheyu Li, Khalid Shahin 2024-04-29 vote 6-0 by Lenny Vasanthan, Brian S. Alper, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant | STATO-a quartile is a quantile which splits data into sections accrued of 25% of data, so the first quartile delineates 25% of the data, the second quartile delineates 50% of the data and the third quartile, 75% of the data | |||||
4 | STATO:0000167 | first quartile | A quantile for which the number of data points at or below it constitutes 25% of the total number of data points. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2024-04-29 vote 6-0 by Lenny Vasanthan, Brian S. Alper, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant | |||||||||
4 | STATO:0000170 | third quartile | A quantile for which the number of data points at or below it constitutes 75% of the total number of data points. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal | 2024-04-29 vote 6-0 by Lenny Vasanthan, Brian S. Alper, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant | |||||||||
2 | STATO:0000613 | difference | A statistic that is a subtraction of one quantity from another. | Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte | 2021-12-01 vote 5-0 by Philippe Rocca-Serra, Paola Rosati, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte | |||||||||
3 | STATO:0000614 | absolute difference | A statistic that is a subtraction of one quantity from another, with no modification of the resulting value. | As a type of statistic, "Absolute Difference" is the actual difference between two quantities and can be positive or negative depending on the order of subtraction. The term "Absolute Difference" should not be confused with the mathematical term 'absolute value' which is a numerical value without a negative sign. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Janice Tufte | 2021-12-01 vote 5-0 by Philippe Rocca-Serra, Paola Rosati, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte | ||||||||
4 | STATO:0000616 | count difference | A statistic that is a subtraction of one count from another. | The term Count Difference is used to specify that the Absolute Difference is with respect to a count or number of items (such as number of events, platelet counts, sample size, e.g. number of people in the group) to distinguish from differences in other types of statistics (mean difference, median difference, risk difference, etc.) | Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte | 2021-12-15 vote 6-0 by Robin Ann Yurk, Janice Tufte, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2021-12-01 vote 5-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte 2021-12-08 vote 6-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann | 2021-12-01 comment: Suggest include as an Alternative term under difference and remove this term as unclear on distinction as a separate term. 2021-12-08 comment: Suggest removing this term and adding as an Alternative term to Difference ((alternative term and Comment for application added in response)) | ||||||
4 | STATO:0000457 | difference in means | A statistic that is a subtraction of one mean from another. | The primary use of this term is in analyzing between-group differences. | Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte | 2021-12-01 vote 6-0 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte 2021-12-01 Steering group added comment for application and decided not to send out for vote again. | ||||||||
4 | STATO:0000617 | difference in medians | A statistic that is a subtraction of one median from another. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Janice Tufte, Muhammad Afzal, Kenneth Wilkins | 2021-12-01 vote 6-0 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte | |||||||||
4 | STATO:0000424 | risk difference | A measure of association that is the subtraction of the risk of an event in one group from the risk of the same event in another group. | Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Harold Lehmann, Janice Tufte | 2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | STATO: risk difference = The risk difference is the difference between the observed risks (proportions of individuals with the outcome of interest) in the two groups. The risk difference is straightforward to interpret: it describes the actual difference in the observed risk of events between experimental and control interventions. | Measure of Association | |||||||
4 | STATO:0000665 | difference-in-differences | A statistic that is a subtraction of one difference from another. | The term 'Difference-in-differences' may be used to assess the incremental benefit or harm of an intervention or exposure, where the effect of the exposure is measured as a difference (for example, pre-post testing comparison of values before and after the exposure) in two groups being compared. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte | 2022-06-08 vote 6-0 by Robin Ann Yurk, Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | 2022-06-08 comment: do you want to add to comment for application, pre-post testing or as an Alternative term? | |||||||
3 | STATO:0000615 | relative difference | A statistic that is a difference between 1 and a ratio of the two quantities being compared. | Relative Difference = 1 - ( a / b ). Because 1 - ( a / b ) is not equal to 1 - ( b / a ), Relative Difference may be expressed as "Relative Difference with respect to b" when referring to 1 - ( a / b ). The relative difference can also be defined as a statistic that is a ratio of the absolute difference (of the two quantities being compared) to the reference value (one of the quantities being compared). Relative Difference = ( b - a ) / ( b ) where b is the reference value and this may also be called "Relative Difference with respect to b" | Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel | 2022-06-29 vote 5-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Eric Harvey | 2022-06-15 vote 2-2 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte 2022-06-22 vote 4-2 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte | 2022-06-15 comments: Suggest combining this set of terms(relative difference, relative mean difference and relative risk difference and summarizing the comment for applications so it is one term. To me this definition is unclear...sorry, what it means? Is it weird to a have a ratio of a difference to a reference value? Sorry, but I am unable to understand this definition. relative and absolute difference seems confusing to me | Example of a relative difference (relative to placebo) that is not a relative mean difference or a relative risk difference: Relative median difference (%) = [(active median - placebo median) / placebo median] x 100. This can be transformed to: Relative median difference = (active median / placebo median) - 1. | |||||
4 | STATO:0000625 | relative mean difference | A statistic that is a difference between 1 and a ratio of the two mean values being compared. | Relative Mean Difference = 1 - ( a / b ) where a and b are mean values. The relative mean difference can also be defined as a statistic that is a ratio of the difference in means to the reference mean value. Relative Mean Difference = ( b - a ) / ( b ) where b is the reference mean value and a is another mean value. | Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel | 2022-06-29 vote 6-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Eric Harvey | 2022-06-15 vote 3-1 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte 2022-06-22 vote 5-1 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte | 2022-06-15 comment: Suggest combining this set of terms(relative difference, relative mean difference and relative risk difference and summarizing the comment for applications so it is one term. 2022-06-29 comment: Relative Mean Difference is_a kind of Relative difference where the quantities being compared are two means, one of which is or acts as reference mean value (additional comment: define 'reference mean value' if it refers to something more specific | ||||||
4 | STATO:0000626 | relative risk difference | A statistic that is a difference between 1 and a ratio of the two risk values being compared. | Relative Risk Difference = 1 - ( a / b ) where a and b are risk values. The relative risk difference can also be defined as a statistic that is a ratio of the risk difference to the risk used as a reference. Relative Risk Difference = ( b - a ) / ( b ) where b is the reference risk value and a is another risk value. | Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel | 2022-06-22 vote 6-0 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte | 2022-06-15 vote 3-1 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte 2022-06-22 vote 5-1 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte | 2022-06-15 comment: Suggest combining this set of terms(relative difference, relative mean difference and relative risk difference and summarizing the comment for applications so it is one term. | ||||||
3 | STATO:0000100 | standardized mean difference | A statistic that is a difference between two means, divided by a statistical measure of dispersion. | In English, "standardized" is often used to express relative comparison to any reference value. However, in SEVCO, "standardized" is used to express relative comparison to a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred. For example, in Cohen's d statistic, the statistical measure of dispersion is specified as the square root of an average of the variances of the two groups being compared. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin | 2022-07-20 vote 6-0 by Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | 2022-07-06 vote 3-1 by Paola Rosati, Harold Lehmann, Robin Ann Yurk, Eric Harvey | 2022-07-06 comment: Consider listing Cohen's D statistic as an Alternative term or selecting one of the two terms to be the term to evaluate as the definition are similar. | STATO: standardized mean difference (Cohen's d statistic, SMD) = standardized mean difference is data item computed by forming the difference between two means, divided by an estimate of the within-group standard deviation. It is used to provide an estimatation of the effect size between two treatments when the predictor (independent variable) is categorical and the response(dependent) variable is continuous | |||||
4 | STATO:0000618 | Cohen’s d statistic | A standardized mean difference which is calculated as a difference between two means, divided by a square root of an average of the variances of the two groups. | A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred. In Cohen's d statistic, the statistical measure of dispersion is specified as the square root of an average of the variances of the two groups being compared. The variances of the two groups are based on within-group standard deviations. For sample sizes < 50, a correction factor is used. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel | 2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | 2022-07-06 vote 3-1 by Paola Rosati, Harold Lehmann, Robin Ann Yurk, Eric Harvey | 2022-07-06 comment: Consider listing SMD statistic as an Alternative term or selecting one of the two terms to be the term to evaluate as the definition are similar and the other term to be the Alternative term. | STATO: standardized mean difference (Cohen's d statistic, SMD) = standardized mean difference is data item computed by forming the difference between two means, divided by an estimate of the within-group standard deviation. It is used to provide an estimatation of the effect size between two treatments when the predictor (independent variable) is categorical and the response(dependent) variable is continuous | |||||
4 | STATO:0000135 | strictly standardized mean difference | A standardized mean difference which is calculated as a difference between two means, divided by the standard error of the difference between the two means. | A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred. In Strictly standardized mean difference, the statistical measure of dispersion is specified as the standard error of the difference between means [SEVCO TBD:0000063]. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2022-07-20 vote 6-0 by Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | STATO: strictly standardized mean difference (SSMS) is a standardized mean difference which corresponds to the ratio of mean to the standard deviation of the difference between two groups. SSMD directly measures the magnitude of difference between two groups. SSMD is widely used in High Content Screen for hit selection and quality control. When the data is preprocessed using log-transformation as normally done in HTS experiments, SSMD is the mean of log fold change divided by the standard deviation of log fold change with respect to a negative reference. In other words, SSMD is the average fold change (on the log scale) penalized by the variability of fold change (on the log scale). For quality control, one index for the quality of an HTS assay is the magnitude of difference between a positive control and a negative reference in an assay plate. For hit selection, the size of effects of a compound (i.e., a small molecule or an siRNA) is represented by the magnitude of difference between the compound and a negative reference. SSMD directly measures the magnitude of difference between two groups. Therefore, SSMD can be used for both quality control and hit selection in HTS experiments. | |||||||
4 | STATO:0000319 | Hedges’s g | A standardized mean difference which is calculated as a difference between two means, divided by the pooled standard deviation. | A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred. In Hedges’s g, the statistical measure of dispersion is specified as the pooled standard deviation. There is a correction factor for small sample sizes. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | STATO: Hedges's g = Hedges's g is an estimator of effect size, which is similar to Cohen's d and is a measure based on a standardized difference. However, the denominator, corresponding to a pooled standard deviation, is computed differently from Cohen's d coefficient, by applying a correction factor (which involves a Gamma function). | |||||||
4 | STATO:0000320 | Glass’s delta | A standardized mean difference which is calculated as a difference between two means (of an experimental group and a control group), divided by the standard deviation of the control group. | A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred. In Glass's delta, the statistical measure of dispersion is specified as the standard deviation of the control group. There is a correction factor for small sample sizes. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel | 2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | STATO: Glass's delta is an estimator of effect size which is similar to Cohen's d but where the denominator corresponds only to the standard deviation of the control group (or second group). It is considered less bias than the Cohen's d for estimating effect sizes based on means and distances between means. | |||||||
2 | STATO:0000634 | reciprocal of difference | A statistic that is a quotient of one and a difference. | A difference is a statistic that is a subtraction of one quantity from another. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel | 2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | ||||||||
3 | STATO:0000635 | number needed to treat | A statistic that represents the number of units that needs to be treated to prevent one additional undesired outcome. The Number Needed to Treat is calculated as the reciprocal of a treatment effect estimate, where the effect estimate is expressed as a risk difference. | The Number Needed to Treat (NNT) value is often rounded up to the next highest whole integer. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-07-20 vote 7-0 by Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | Centre for Evidence-Based Medicine Number Needed to Treat (NNT) The Number Needed to Treat (NNT) is the number of patients you need to treat to prevent one additional bad outcome (death, stroke, etc.). https://www.cebm.ox.ac.uk/resources/ebm-tools/number-needed-to-treat-nnt | |||||||
3 | STATO:0000637 | number needed to screen to detect | A statistic that represents the number of units that needs to be tested to identify one additional case. The Number Needed to Screen to Detect is calculated as the reciprocal of a difference in rate of detected cases with and without screening. | The Number Needed to Screen (NNS) value is often rounded up to the next highest whole integer. The Number Needed to Screen to Detect is distinct from the Number Needed to Screen to Prevent as the formulas to calculate are different, even though both may be abbreviated as NNS. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-08-03 vote 5-0 by Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Philippe Rocca-Serra, Harold Lehmann | ||||||||
3 | STATO:0000636 | number needed to screen to prevent | A statistic that represents the number of units that needs to be tested to prevent one additional adverse outcome, assuming that positive testing will lead to preventive intervention. The Number Needed to Screen to Prevent is calculated as the Number Needed to Treat divided by the prevalence. | The Number Needed to Screen (NNS) value is often rounded up to the next highest whole integer. The Number Needed to Screen to Detect is distinct from the Number Needed to Screen to Prevent as the formulas to calculate are different, even though both may be abbreviated as NNS. The formula may be adjusted for test performance characteristics (e.g. dividing by the sensitivity) or assumptions regarding acceptance or adherence of interventions. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-08-03 vote 5-0 by Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Philippe Rocca-Serra, Harold Lehmann | BMJ 1998;317: 307 Number needed to screen: development of a statistic for disease screening Number needed to screen is defined as the number of people that need to be screened for a given duration to prevent one death or adverse event. Number needed to screen was then calculated by dividing the number needed to treat for treating risk factors by the prevalence of disease that was unrecognised or untreated. https://www.bmj.com/content/317/7154/307.long | |||||||
3 | STATO:0000638 | number needed to harm | A statistic that represents the number of units that, if treated or exposed to the intervention, would lead to one additional undesired outcome. The Number Needed to Harm is calculated as the reciprocal of a treatment effect estimate, where the effect estimate is expressed as a risk difference. | The Number Needed to Harm (NNH) value is often rounded down to the next lowest whole integer. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte | 2022-07-20 vote 7-0 by Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey | Centre for Evidence-Based Medicine Number needed to treat (NNT): The number of patients who need to be treated to prevent one bad outcome. It is the inverse of the ARR: NNT=1/ARR. Numbers needed to harm (NNH)-the number of patients who, if they received the experimental treatment, would lead to one additional person being harmed compared with patients who receive the control treatment; calculated as 1/ARI. https://www.cebm.ox.ac.uk/resources/ebm-tools/glossary | |||||||
2 | STATO:0000184 | ratio | A statistic that is a quotient of two quantities. | Although some definitions for Ratio include "with the same units of measurement" and some definitions for Ratio include "a dimensionless quotient", not all definitions have these concepts, and there are ratios with units of measurement that are different for numerator and denominator such as event rate, body mass index, and cost-effectiveness ratio. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal | 2022-01-05 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde | 2021-12-22 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Jesus Lopez-Alcalde 2021-12-29 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, C P Ooi | 2021-12-22 comment: I suggest adding "with the same measurement units"2021-12-29 comment: I agree with the term definition. However, the comment could be improved and I would not include BMI as an example as an index may not necessarily be a ratio but a more complex statistic or calculation 2022-01-05 comment: Comment, I would remove body mass index from the comment section as an example as an index is a unique statistical defnition. | STATO: A ratio is a data item which is formed with two numbers r and s is written r/s, where r is the numerator and s is the denominator. The ratio of r to s is equivalent to the quotient r/s. NCIt: The quotient of one quantity divided by another, with the same units of measurement. UMLS: Quotient of quantities of the same kind for different components within the same system. OECD: A ratio is a number that expresses the relative size of two other numbers. OCRe: A ratio is a quotient of quantities of the same kind for different components within the same system. SCO: A ratio is a relationship between two numbers of the same kind expressed arithmetically as a dimensionless quotient of the two which explicitly indicates how many times the first number contains the second. Quotient of quantities of the same kind for different components within the same system. | |||||
3 | STATO:0000639 | percentage | A ratio that is multiplied by 100, and has the same units of measurement in the numerator and the denominator. | When a percentage is a fraction of hundred or proportion per hundred, then the percentage is the proportion multiplied by 100. However, a percentage can be greater than 100% so the definition is a ratio that is multiplied by 100. Proportion is SEVCO code of TBD:0000018, Ratio is SEVCO code of STATO:0000184 | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Khalid Shahin | 2022-01-07 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde, Mario Tristan | 2022-01-05 vote 5-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde | 2022-01-05 comment: Instead of "A ratio" I would propose "A proportion that is multiplied by 100, [...]" | NCIt-A fraction or ratio with 100 understood as the denominator. Alt definition One hundred times the quotient of one quantity divided by another, with the same units of measurement. OECD-A percentage is a special type of proportion where the ratio is multiplied by a constant, 100, so that the ratio is expressed per 100. SCO-A fraction or ratio with 100 understood as the denominator. UMLS-A unit for expressing a number as a fraction of hundred (on the basis of a rate or proportion per hundred)-NCI | |||||
4 | STATO:0000705 | measurement accuracy | A percentage in which the numerator represents the absolute value of one minus the difference between the true value and the observed value, and the denominator represents the true value. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel | 2022-08-24 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey | from https://www.sciencedirect.com/topics/engineering/measurement-accuracy Measurement Accuracy Measurement accuracy is defined as the closeness of agreement between a measured quantity value and a true quantity value of a measurand (i.e., the quantity intended to be measured) (ISO-JCGM 200, 2008), and is often limited by calibration errors. | ||||||||
3 | STATO:0000607 | proportion | A ratio in which the numerator represents a part, fraction or share of the amount represented by the denominator. | The value of a proportion must be between 0 and 1 (inclusive). Proportions may represent the frequency of some phenomenon of interest within a population, or may represent a subset of a whole. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Khalid Shahin | 2022-01-07 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde, Mario Tristan | 2021-12-29 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, C P Ooi | 2021-12-29 comment: I agree with the term definition. However, for the comment, I would edit to include the OCRe definition: A proportion is a measure of the frequency of some phenomenon of interest within a population. | STATO: observed risk [as a data item STATO_0000423] = the proportion of individuals in a population with the outcome of interest NCIt: A part, fraction, share, or number considered in relation to the whole amount or number. OECD Definition: A proportion is a special type of ratio in which the denominator includes the numerator. An example is the proportion of deaths that occurred to males which would be deaths to males divided by deaths to males plus deaths to females (i.e. the total population). OCRe: A proportion is a measure of the frequency of some phenomenon of interest within an average population | |||||
4 | STATO:0000413 | incidence | A proportion in which the numerator represents new events. | Outside of the Scientific Evidence Code System (SEVCO), there is substantial inconsistency in the terms and definitions used for incidence and related concepts. Within SEVCO, Incidence is a proportion in which the numerator represents new events. The denominator may represent the entire population or may represent that population at risk (i.e., those without prior events). Disease incidence is the ratio of the number of new cases of a disease divided by the number of persons at risk for the disease. Incidence is a proportion because the persons in the numerator, those who develop disease, are all included in the denominator (the entire population). When a time period or a duration of time is used to define the period of time in which the incidence is measured, the statistic type is Incidence. Examples include 1-year incidence, in-hospital incidence, and cumulative incidence. When time is considered as a variable in the formalization of the statistic, such as incidence per unit of time, then the statistic type is Incidence Rate (SEVCO code of TBD:0000024) | Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Janice Tufte | 2022-01-12 vote 6-0 by Harold Lehman, Mario Tristan, janice tufte, Andrew Beck, Robin Ann Yurk, Paul Harris | 2022-01-05 vote 2-2 by Robin Ann Yurk, janice tufte, Jesus Lopez-Alcalde, Harold Lehmann | 2022-01-05 comments: I propose "The number of new occurrences of an event (for example, infection) in a population at risk over a particular period of time. Doesn't denominator need to include a time component? Even if not, the time component should be referenced in the Comment. Also, do note that, in public health, the numerator *attempts* to be a subset of the denominator, but that relationship cannot be assured. (E.g., fertility incidence may be number of births (vital statistics) with denominator of number of women of child bearing age (census). 2022-06-15 Expert Working Group/Steering Committee removed 'Risk' as Alternative term as we created a separate term for 'Risk' (TBD:0000185) | STATO: Incidence is the ratio of the number of new cases of a disease divided by the number of persons at risk for the disease. NCIt The relative frequency of occurrence of something. OBCS A data item that refers to the number of new events that have occurred in a specific time interval divided by the population at risk at the beginning of the time interval. The result gives the likelihood of developing an event in that time interval. UMLS The number of new cases of a given disease during a given period in a specified population. It also is used for the rate at which new events occur in a defined population. It is differentiated from PREVALENCE, which refers to all cases in the population at a given time. (MSH) The relative frequency of occurrence of something. (NCI) The number of new cases of a disease diagnosed each year. (NCI) CDC: https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html Incidence refers to the occurrence of new cases of disease or injury in a population over a specified period of time. Although some epidemiologists use incidence to mean the number of new cases in a community, others use incidence to mean the number of new cases per unit of population. Two types of incidence are commonly used — incidence proportion and incidence rate. Incidence proportion or risk Synonyms for incidence proportion Attack rate Risk Probability of developing disease Cumulative incidence Definition of incidence proportion Incidence proportion is the proportion of an initially disease-free population that develops disease, becomes injured, or dies during a specified (usually limited) period of time. Synonyms include attack rate, risk, probability of getting disease, and cumulative incidence. Incidence proportion is a proportion because the persons in the numerator, those who develop disease, are all included in the denominator (the entire population). | |||||
4 | STATO:0000412 | prevalence | A proportion in which the numerator represents all events of interest (for example, both new and preexisting cases of a disease) in the population, which is represented by the denominator. | Prevalence is the proportion of persons in a population who have a particular disease or attribute at a specified point in time or over a specified period of time. Prevalence differs from incidence in that prevalence includes all cases, both new and preexisting, in the population at the specified time, whereas incidence is limited to new cases only. In Bayesian calculations, the prevalence value is often used as the pre-test probability or prior probability value, but these probability values are not always based on or derived from the prevalence value. | Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Janice Tufte, Kenneth Wilkins, Harold Lehmann | 2022-02-02 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | 2022-01-12 vote 6-1 by Robin Ann Yurk, janice tufte, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Andrew Beck, Paul Harris 2022-01-19 vote 2-1 by Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya 2022-01-26 vote 7-1 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Robin Ann Yurk, Brian S. Alper, Jesus Lopez-Alcalde | 2022-01-12 comments: I propose "A proportion in which the numerator represents all events (new and preexisting)." I think it is very important to detail "new and preexisting" 2022-01-03 comment: The comment here is better than for incidence (why not copy this comment into "Incidence," and edit?). But it still feels like the denominator should be called out in the definition. 2022-01-19 comment: I would edit the term definition to include ...as part of a denominator of a broader population. 2022-01-26 comments: (1) suggestion: alter the definition to: A proportion in which the numerator represents all events of interest (e.g. both new and preexisting cases of a disease) in the population, which is represented by the denominator. (2) I would delete this sentence from the comment for application. " Prevalence is a proportion because the persons in the numerator, those who develop or have disease, are all included in the denominator (the entire population)" (3) Probability should be a type of Proportion but distinct from Prevalence. Probability relates to the likelihood of something, but in that sense incidence and prevalence are both probabilities. If Prevalence and Probability were considered synonyms then one would still not call it the same as "Pre-test" or "Prior" probability. The term pre-test probability could be a type of (child of) probability. 2022-02-02 comment: I would remove the statement ..In Bayesian calculations, as the pre-test probability is a formula with new variables. | STATO: prevalence is a ratio formed by the number of subjects diagnosed with a disease divided by the total population size. Period prevalence: The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population. a prevalence rate that occurs at a specific period of time Point prevalence: NCIt The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population. OBCS a prevalence rate that occurs at a specific point of time UMLS: The total number of cases of a given disease in a specified population at a designated time. It is differentiated from INCIDENCE, which refers to the number of new cases in the population at a given time. (MSH) The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population. (NCI) Proportion of the people having a certain disease or condition in a given population (CHV) CDC https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html Point prevalence = Number of current cases (new and preexisting) at a specified point in time / Population at the same specified point in time Period prevalence = Number of current cases (new and preexisting) over a specified period of time / Average or mid-interval population Definition of prevalence Prevalence, sometimes referred to as prevalence rate, is the proportion of persons in a population who have a particular disease or attribute at a specified point in time or over a specified period of time. Prevalence differs from incidence in that prevalence includes all cases, both new and preexisting, in the population at the specified time, whereas incidence is limited to new cases only. Point prevalence refers to the prevalence measured at a particular point in time. It is the proportion of persons with a particular disease or attribute on a particular date. Period prevalence refers to prevalence measured over an interval of time. It is the proportion of persons with a particular disease or attribute at any time during the interval. | |||||
4 | STATO:0000233 | sensitivity | A proportion in which the numerator represents the detected items within the denominator that represents all items with the targeted attribute. | In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the sensitivity is the proportion of true positives (all people with the disease who test positive) within all people with the disease (true positives plus false negatives). Sn = TP / (TP + FN). In information retrieval, recall is the proportion of items correctly retrieved within all relevant items. True positive rate (TPR) is listed as an Alternative term because of common usage, but TPR is not a Rate as defined in SEVCO. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2022-01-26 vote 10-0 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya, Janice Tufte, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde | 2022-01-19 vote 3-1 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya | 2022-01-19 comment: I would remove recall from Alternative terms and comment for applications, as it is a specialized informatics measures and list it as a separate term. (EWG discussion: This comment is not persuasive. If the same statistic type (formula) has different names in different contexts we still want one common code for the concept. This consolidation of terms is the purpose of a standardized terminology or controlled vocabulary where we are controlling the code for the concept, not the name for common use.) | STATO: true positive rate (recall, sensitivity) = sensitivity is a measurement datum qualifying a binary classification test and is computed by substracting the false negative rate to the integral numeral 1 NCIt diagnostic sensitivity The probability that a test will produce a true positive result when used on effected subjects as compared to a reference or "gold standard". The sensitivity of a test can be determined by calculating: number of true positive results divided by the sum of true positive results plus number of false negative results. OBCS- a data item that measures the proportion of actual positives which are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the condition). OCRe An index of performance of a discriminant test calculated as the percentage of correct positives in all true positives STATO sensitivity is a measurement datum qualifying a binary classification test and is computed by subtracting the false negative rate to the integral numeral 1 NICE glossary-Sensitivity of a test-How well a test detects what it is testing for. It is the proportion of people with the disease or condition that are correctly identified by the study test. For example, a test with a sensitivity of 96% will, on average, correctly identify 96 people in every 100 who truly have the condition, but incorrectly identify as not having the condition 4 people in every 100 who truly have it. It is different from positive predictive value. MeSH scope note-sensitivity and specificity-Scope Note Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed) | |||||
4 | STATO:0000134 | specificity | A proportion in which the numerator represents the non-detected items within the denominator that represents all items without the targeted attribute. | In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the specificity is the proportion of true negatives (all people without the disease who test negative) within all people without the disease (true negatives plus false positives). Sp = TN / (TN + FP). True Negative Rate (TNR) is listed as an Alternative term because of common usage, but TNR is not a Rate as defined in SEVCO. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2022-01-19 vote 5-0 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya, Janice Tufte | STATO: true negative rate (specificity) = specificity is a measurement datum qualifying a binary classification test and is computed by substracting the false positive rate to the integral numeral 1 NCIt The probability that a test will produce a true negative result when used on non-effected subjects as compared to a reference or "gold standard". The specificity of a test can be determined by calculating: number of true negative results divided by the sum of true negative results plus number of false positive results. OBCS a data item that refers to the proportion of negatives in a binary classification test which are correctly identified OCRe An index of performance of a discriminant test calculated as the percentage of negatives in all true negatives NICE glossary-Specificity (of a test) How well a test correctly identifies people who do not have what it is testing for. It is the proportion of people without the disease or condition that are correctly identified by the study test. For example, a test with a specificity of 96% will, on average, correctly identify 96 people in every 100 who truly do not have the condition, but incorrectly identify as having the condition 4 people in every 100 who truly do not have it. It is different from negative predictive value. MeSH scope note-sensitivity and specificity-Scope Note Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed) | |||||||
4 | STATO:0000416 | positive predictive value | A proportion in which the numerator represents the correctly detected items within the denominator that represents all items detected. | In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the positive predictive value is the proportion of true positives (all people with the disease who test positive) within all the people with a positive test (true positives plus false positives). PPV = TP / (TP + FP). In information retrieval, 'precision' is the proportion of items correctly retrieved within all retrieved items. In Bayesian calculations, the 'Positive Predictive Value' is equivalent to the 'post-test probability' or 'posterior probability' following a positive test. | Harold Lehmann, Kenneth Wilkins, Phillippe Rocca-Serra, Joanne Dehnbostel | 2022-02-02 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper | 2022-01-26 vote 7-1 by Robin Ann Yurk, Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde | 2022-01-26 comments: (1) I would remove precision from Alternative terms (2) minor change=quote terms of interest: In information retrieval, `precision` is the proportion of items correctly retrieved within all retrieved items. The terms `post-test probability` and `posterior probability` are used in Bayesian calculations. (3) Post-test probability is not fully synonymous with positive predictive value. A negative predictive value is also the "post-test" probability of a true negative if the test has a negative result. And a test with a continuous rather than binary result could have a post-test probability that is neither positive nor negative predictive value. Post-test probability (and posterior probability) should become a child of probability. 2022-02-02 comment: I would remove the alternate term Precision and the comment for application for precision. | NCIt The probability that an individual is affected with the condition when a positive test result is observed. Predictive values should only be calculated from cohort studies or studies that legitimately reflect the number of people in the population who have the condition of interest at that time since predictive values are inherently dependent upon the prevalence. PPVDT can be determined by calculating: number of true positive results divided by the sum of true positive results plus number of false positive results. | |||||
4 | STATO:0000619 | negative predictive value | A proportion in which the numerator represents the correctly non-detected items within the denominator that represents all items not detected. | In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the negative predictive value is the proportion of true negatives (all people without the disease who test negative) within all the people with a negative test (true negatives plus false negatives). NPV = TN / (TN + FN). | Harold Lehmann, Ken Wilkins, Phillippe Rocca-Serra, Joanne Dehnbostel | 2022-01-26 vote 8-0 by Robin Ann Yurk, Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde | NCIt The probability that an individual is not affected with the condition when a negative test result is observed. This measure of accuracy should only be used if the data on the prevalence of condition of interest in given population is available. NPVDT can be determined by calculating: number of true negative results divided by the sum of true negative results plus number of false negative results. | |||||||
4 | STATO:0000621 | diagnostic yield | A proportion in which the numerator represents the correctly detected items within the denominator that represents all items tested. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins | 2022-08-10 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati | 2022-08-10 comment: I would simply suggest to simplify the definition to: A proportion obtained by dividing the number of correctly detected items (numerator) by the number of all items tested (denominator) | "Diagnostic yield was defined as the number of participants with positive findings for advanced neoplasia relative to all participants" in https://pubs.rsna.org/doi/10.1148/radiol.12112486 Other 'definitions' found include synonymous use with sensitivity, and 'diagnostic yield' describing the statistic array of TP, FP, TN, and FN data. https://medical-dictionary.thefreedictionary.com/diagnostic+yield Diagnostic yield The likelihood that a test or procedure will provide the information needed to establish a diagnosis. | |||||||
4 | STATO:0000620 | risk | A proportion in which the numerator represents the cases in which an event or characteristic occurs and the denominator represents all possible cases. | In the English language, 'risk' may be used synonymously with 'hazard', 'chance', 'likelihood', 'relative likelihood', 'probability' and many other terms. In SEVCO the term 'risk' is explicitly defined for how it is used in other terms such as 'Risk Ratio' and 'Relative Risk Difference'. The statistical definition of 'risk' does not have a negative or undesirable connotation. Risk may be conditioned on many factors. In such cases the statistic type is Risk and the statistic may be reported as a conditional risk (for example, predicted risk). When a time period or a duration of time is used to define the period of time in which the risk is measured, the statistic type is Risk. Examples include 1-year risk, in-hospital risk, and cumulative risk. In frequentist statistics, the risk is a ratio of the number of events to the number of possible cases. In subjective Bayesian statistics, the risk is a proportion as a whole that represents degree of belief, where 0 represents certainty that an event will not occur and 1 represents certainty that the event will occur. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte | 2022-06-29 vote 6-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Eric Harvey | 2022-06-22 vote 4-1 by Eric Harvey, Janice Tufte, Mario Tristan, Muhammad Afza, Eric M Harvey | 2022-06-22 comment: It is confusing to consider "conditional probability" as an Alternative term for "risk." Generally; "Conditional probability" refers to a probability whose value is dependent upon the occurrence of some process/event. In contrast, "risk" refers to the probability that an event will occur. Mathematically; "Conditional probability" is a measure of the probability of an event occurring, given that another event has already occurred. Let us have two events, A and B, and we want to know P(A) given P(B); notationally, P(A|B). Here the word 'given' defines a subset of the population of events because it applies condition on B. For example, if we care about the incidence of COVID-19 in men only, we might want to know P(COVID-19 | male). This means that first, pick out all the males, and second, figure out the probability they will get COVID-19. More formally, what P(A|B) says is: pick out the events to which both P(A) and P(B) apply and consider them as part of the subset of events to which only P(B) applies: hence P(A/B) = P(A and B)/P(B). In simple words, what we are doing with P(A|B) = P(A and B) | P(B) is selecting out the same subset of the event population in both the numerator and the denominator: in this case, only men. While "risk" by definition involves no condition. Taking the same example, we can say, "what is the risk of COVID-19?" here, we refer to the whole population; however, we can make a condition over it like "what is the risk of COVID-19 in males?" This risk may be taken as "conditional risk," and it could be taken as an Alternative term to conditional probability. Conclusion: Let us define two terms, "risk" and "conditional risk," as a subset of "risk." Then "conditional probability" shall be taken as an Alternative term to "conditional risk." One more important point about the current definition of "Risk," i.e., Risk = A proportion in which the numerator represents the probability that an event or characteristic occurs and the denominator represents the probability that the event or characteristic occurs or does not occur. If we write symbolically, it will look like this; P(A)/P(A or B), where A indicates "positive," which is the occurrence of something, and B indicates "negative," which is the non-occurrence of the same. We can write it formally as P(A) / P(AUB). In set theory, when there is "OR," in other words, "Union" infer the True value when either of them is True. It means A is true, or B is true, or both are true; we will get the true result. Interestingly, occurrence and non-occurrence are mutually exclusive, so two situations arise. I) when the event occurs: P(A) / P(AUB) --> P(A)/P(A) = 1 II) when the event does not occur: P(A) / P(AUB) --> P(A)/P(B) = Odds Therefore the definition needs to be revised for the correct meaning of the denominator. I believe the denominator refers to the whole population where some people are at risk and some are not, while the numerator refers to only those at risk. | ||||||
3 | STATO:0000627 | odds | A ratio in which the numerator represents the probability that an event will occur and the denominator represents the probability that an event will not occur. | 'Odds' and 'Odds ratio' are different terms. 'Odds' is a ratio of probabilities. 'Odds ratio' is a ratio of two different odds. Odds are calculated as p / (1-p) where p is the probability of event occurrence. When p = 0, the odds = 0. When p = 1, the odds may be expressed as not calculable or as "odds against = 0". Odds may be expressed as p:(1-p). Odds may be expressed as p:q where q = 1-p. Odds may be expressed as a:b where a and b are multiples of p and (1-p). Examples of different expressions of the same odds include 3:2, 3/2, 0.6:0.4, 0.6/0.4, and 1.5. Odds may be expressed as "odds for" or "odds in favor" (e.g. 1:5 for a "3" on a 6-sided die) or "odds against" (e.g. 5:1 against a "3" on a 6-sided die). The term "betting odds" used in gambling that involves financial amounts in the formulation is not an "Odds" in the definition of the Scientific Evidence Code System. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Kenneth Wilkins, Muhammad Afzal | 2022-03-22 vote 5-0 by Muhammad Afzal, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk | 2022-02-16 vote 4-2 by Janice Tufte, Paola Rosati, Eric Moyer, Harold Lehmann, Robin Ann Yurk, Jesus Lopez-Alcalde 2022-02-23 vote 5-2 by nisha mathew, Harold Lehman, Paola Rosati, Sunu Alice Cherian, Robin Ann Yurk, Joanne Dehnbostel, Sumalatha A 2022-03-09 vote 3-1 by Robin Ann Yurk, Janice Tufte, Eric Moyer, nisha mathew 2022-03-16 vote 8-1 by Robin Ann Yurk, Janice Tufte, Eric Moyer, nisha mathew, Harold Lehmann, Philippe Rocca-Serra, Louis Leff, Paola Rosati, Mario Tristan | 2022-02-16 comments: The term definition and comment for application are clear and well written. It would help to have a discussion on the Parent and Child relationships for this term as right now you only have Statistic, Ratio, Odds. In statistics for the scientific code system is Statistic, Ratio, Odds Ratio a better sequence and put the Odds under comment for application. This term needs two Alternative terms: "Odds For" and "Odds in Favor." The definition needs to deal with the cases p=1 and p=0. (I can think of 3 questions regarding these cases. (1) Are they defined? (2) Is p=1 the same as 8. (3) Does 3:0=1:0?) We should mention that this term does not include gambling odds. (As I understand it, gambling odds are the ratio of stake to winnings with several representations and frequently have a "rounding" factor to ensure a profit for the bookmaker). Another issue is whether to represent "Odds Against" in the vocabulary. It could come up when annotating an immutable pre-existing source that gives odds as odds against; for example, an NLP system that scans published works to output labels for sections of the text. A term related to "Odds" missing from the parent branch, "Ratio," is "Log Odds." (Not unique to this term, but I noticed it here) The children of "Statistic" should inherit the application comment from "Statistic" about distinguishing between the statistic and statistic value. That way, a reader will not need to read the whole tree to know that 1.5 is not "Odds"; it is "Odds statistic value." (However, I do not see a place for "Odds statistic value" in the tree.) Finally, the repetition of "Odds may be expressed as" is awkward. 2022-02-23 comments: "Odds may be expressed as p:(1-p). Odds may be expressed as p:q where q = 1-p." Sounds redundant. Alternative terms: Probability, likelihood, chance ---{{Group meeting decided that 'probability' and 'likelihood' are terms we may consider adding to the SEVCO but they are not Alternative terms for odds, 'chance' is considered a lay term and not a specific statistical term for the code system}} Odds is a computational function such as addition, subtraction, multiplication. Odds Ratio may be better term for the term definition. This comment is based on your term definition and comment for application. 2022-03-09 comment: Edit the term definition: A ratio of probabilities in which the numerator represents the probability of the number of times an event will occur and the denominator represents the probability of the number of times an event will not occur. (Steering group 2022-03-09 considers the suggested change does not add clarification or improved understanding.) 2022-03-16 comment: I would delete likelihood from the term definition as in statistics it introduces a different formula such as likelihood ratio. My suggestion is to simplify to a ratio in which the numerator represents the number of times an event will occur and the denominator represents the number of times an event will not occur. (Steering group 2022-03-16 again considers the suggested change to include "number of times" not persuasive, but changed "relative likelihood" to "probability" in the definition to avoid the potential confusion with likelihood ratio.) | OCRe: Odds is a quotient in which the relative likelihood that an event will occur is divided by the relative likelihood that it won't. In probability theory and statistics, where the variable "p" is the probability in favor of the event, and the probability against the event is 1-p, "the odds" of the event are the quotient of the two, or p / (1-p) | |||||
3 | STATO:0000645 | rate | A ratio in which the numerator represents any quantity and the denominator represents an interval of time. | When the numerator represents a count, the rate is an Event Rate. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte | 2022-03-30 vote 4-2 by Cauê Monaco, Muhammad Afzal, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk 2022-04-06 vote 3-2 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, nisha mathew | 2022-03-30 comments: A rate does not necessarily represent time. "In math, a rate is a special ratio in which the two terms are in different units" Edit term definition: A proportion represented by a rate of an event count for another quantified measure. My comments are more focused on in the term definition and comment you have incomplete definitions or comments as you only describe in the term definition the denominator and in the comment you only describe the numerator. Improvement Suggestion: By definition, a rate would have both a numerator and denominator so it is important for you to include in a definition both numerator and denominator. The term definition should read: A ratio in which the numerator represents an event count and the denominator represents the total sum of the events considered as a count and non count. The underlying concept for Rate is that the Denominator is a measure of time. So we need a definition where the numerator is X and the denominator is a measure of time. Our approach to definitions has been: Ratio = A statistic that is a quotient of two quantities. [[By definition any statistic that is a ratio has a numerator and a denominator. Any statistic that has a numerator and a denominator is a Ratio, and may be given a more specific term when it is a type of Ratio.]] The Ratio definition inherits the Statistic definition so we do not re-define statistic. Rate = A ratio in which the denominator represents a duration of time. This means that when we constrain the definition of ratio to limit to statistics where the denominator represents a duration of time, then the type of Ratio is a Rate. There is a logic to this approach to setting a definition, but your comment shows that it feels lacking because it does not mention the numerator. There is no constraint or modification being applied to the numerator. Perhaps we can try “Rate = A ratio in which the numerator represents any quantity and the denominator represents a duration of time.” Would that help clarify this item? 2022-04-06 comments: I would insert in the term definition, the numerator represents a quantity defined as a unit which is a smaller part of the denominator divided by the total sum of units in the denominator. the concepts, "frequency of events" and "over a specified period of time" are not reflected in this definition 2022-04-27 comment: Edit term definition: A proportion represented by a rate of an event count or another quantified measure divided by the total sum of units. {{Discussion by Expert Working Group: The proposed definition describes a Proportion, but a Rate is NOT a Proportion.}} | NCIt Rate = A measurement of degree, speed, or frequency relative to time. OBCS rate= A quality of a single process inhering in a bearer by virtue of the bearer's occurrence per unit time. OCRe Rate = A rate is a quantity per unit of time. | |||||
4 | STATO:0000670 | incidence rate | A rate in which the number of new events per total at risk is divided by an interval of time. | Incidence is defined as a proportion in which the numerator represents new events and the denominator represents the total at risk for events. Rate is defined as a ratio in which the numerator represents any quantity and the denominator represents an interval of time. The interval of time used for the denominator may be data-dependent when the duration of observation varies across the observations. In the method for calculating incidence rate (described at https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html), the numerator is the "Number of new cases of disease or injury during the specified period" and the denominator is the "Time each person was observed, totaled for all persons" | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2022-05-25 vote 6-0 by Jesus Lopez-Alcalde, Brian S. Alper, Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Harold Lehmann | 2022-05-11 vote 7-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Janice Tufte, Harold Lehmann, Paola Rosati, Robin Ann Yurk | 2022-05-11 comment: Suggest improving current term definition with the definition in comment for application. The Alternative terms I am not sure fit here--you may want to add more detail for the alternate terms to the comment for application. 2022-05-25 comment: The definition defines the ideal ("at risk"); very often, however, incidence rates are calculated more grossly. While they are semantically wrong, they are quantitatively correct. Classic: birth incidence. The proper denominator would be fertile women, but *could * be calculated "per woman" or even "per capita". | NCIt Incidence Rate = The frequency of new occurrences of an event during a specified time period. CDC: https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html Incidence refers to the occurrence of new cases of disease or injury in a population over a specified period of time. Although some epidemiologists use incidence to mean the number of new cases in a community, others use incidence to mean the number of new cases per unit of population. Two types of incidence are commonly used — incidence proportion and incidence rate. Synonyms for incidence rate Person-time rate Definition of incidence rate Incidence rate or person-time rate is a measure of incidence that incorporates time directly into the denominator. A person-time rate is generally calculated from a long-term cohort follow-up study, wherein enrollees are followed over time and the occurrence of new cases of disease is documented. Typically, each person is observed from an established starting time until one of four “end points” is reached: onset of disease, death, migration out of the study (“lost to follow-up”), or the end of the study. Similar to the incidence proportion, the numerator of the incidence rate is the number of new cases identified during the period of observation. However, the denominator differs. The denominator is the sum of the time each person was observed, totaled for all persons. This denominator represents the total time the population was at risk of and being watched for disease. Thus, the incidence rate is the ratio of the number of cases to the total time the population is at risk of disease. Alternative terms for incidence rate (incidence density, average hazard) noted at https://www.sjsu.edu/faculty/gerstman/eks/formula_sheet.pdf | |||||
4 | STATO:0000671 | hazard rate | A conditional instantaneous rate in which the numerator represents an incidence conditioned on survival to a specified time, and the denominator represents a time interval with a duration approaching zero. | In the definition of Hazard Rate, the term "survival" is not literally about life and death but is used to represent existence without experiencing the event. "Hazard" as a statistical term is not specific to "bad" or "dangerous" events. A hazard rate is expressed as a unitless numerator per unit of time, occurring at a specified time, and conditioned on survival to that time. A hazard rate is mathematically the negative derivative of the log of the survival function. The survival function is the probability of surviving past a specified point in time, expressed as Pr{ T >= t }. A hazard rate is also mathematically defined as lim(dt -> 0) [ Pr{ ( t <= T < t + dt ) | ( T >= t ) } / dt ]. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper | 2022-05-12 vote 8-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew | 2022-04-06 vote 4-3 by Mario Tristan, Robin Ann Yurk, Cauê Monaco, Harold Lehmann, Paola Rosati, Jesus Lopez-Alcalde, nisha mathew | 2022-04-06 comments: An instantaneous rate in which the numerator represents an incidence and the denominator represents a time interval conditioned on survival to a specified time with a duration approaching zero A hazard is any danger or peril. It does not necessarily represent a survival/death relationship. I would add a vote choice: No Comment-Specialized Term or Not Applicable or some other choice as this is specialized formula. | A Dictionary of Epidemiology (5 ed.) by Miquel Porta Hazard rate = A theoretical measure of the probability of occurrence of an event per unit time at risk; e.g., death or new disease, at a point in time, t, defined mathematically as the limit, as Δt approaches zero, of the probability that an individual well at time t will experience the event by t + Δt, divided by Δt. formula expressed at https://data.princeton.edu/wws509/notes/c7s1 | |||||
4 | STATO:0000672 | event rate | The number of occurrences per unit of time. | An event rate is a ratio in which the numerator represents a count and the denominator represents an interval of time. When the numerator represents a count: --If the denominator includes an interval of time, the type of ratio is an Event Rate. --If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency. --If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate. --If the denominator includes an interval of space, the type of ratio is a Number Density | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2022-05-12 vote 8-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew | 2022-04-27 comment: The term definition and comment for application are comprehensive. However, for your comment for application I would only use the following... An event rate is a ratio in which the numerator represents a count and the denominator represents an interval of time. When the numerator represents a count: --If the denominator includes an interval of time, the type of ratio is an Event Rate. {{Expert Working Group discussion: the comment providing instructions for choosing among 4 related and confusing terms is considered useful for guidance, and purposefully mentions other terms that may be more appropriate.}} | |||||||
4 | STATO:0000673 | event frequency rate | A ratio in which the numerator represents an event frequency and the denominator represents an interval of time. | When the numerator represents a count: --If the denominator includes an interval of time, the type of ratio is an Event Rate. --If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency. --If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate. --If the denominator includes an interval of space, the type of ratio is a Number Density | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte | ||||||||
3 | STATO:0000674 | event frequency | A ratio in which the numerator represents a count and the denominator represents a count (without involving an interval of time). | When the numerator represents a count: --If the denominator includes an interval of time, the type of ratio is an Event Rate. --If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency. --If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate. --If the denominator includes an interval of space, the type of ratio is a Number Density | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte | ||||||||
3 | STATO:0000675 | density | A ratio in which the numerator represents any quantity and the denominator represents an interval of space (distance, area, or volume). | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2022-05-08 vote 7-0 by Mario Tristan, Janice Tufte, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Eric M Harvey | 2022-05-08 comment: Examples would be nice, since "linear density" is not a traditional measure | ||||||||
4 | STATO:0000676 | number density | A ratio in which the numerator represents a count and the denominator represents an interval of space (distance, area, or volume). | When the numerator represents a count: --If the denominator includes an interval of time, the type of ratio is an Event Rate. --If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency. --If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate. --If the denominator includes an interval of space, the type of ratio is a Number Density | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2022-05-27 vote 10-0 by Khalid Shahin, Joanne Dehnbostel, Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Muhammad Afzal, nisha mathew, Janice Tufte | 2022-05-12 vote 8-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte | 2022-05-12 comment: I wonder if we could define better "Number Density" as Density ratio {{2022-05-18 discussion found 2 instances of the term 'Number density' matching our definition, and the term 'density ratio' defines a density divided by a density which does not match this concept.}} | Wikipedia https://en.wikipedia.org/wiki/Number_density The number density (symbol: n or ρN) is an intensive quantity used to describe the degree of concentration of countable objects (particles, molecules, phonons, cells, galaxies, etc.) in physical space: three-dimensional volumetric number density, two-dimensional areal number density, or one-dimensional linear number density. Population density is an example of areal number density. IUPAC Gold Book https://goldbook.iupac.org/terms/view/N04262 number density, n Number of particles divided by the volume they occupy. | |||||
3 | STATO:0000704 | concentration | A ratio in which the numerator is a measure of the solute and the denominator is a measure of the solvent. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann | 2022-05-08 vote 6-0 by Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Eric M Harvey | |||||||||
2 | STATO:0000610 | measure of association | A statistic that quantitatively represents a relationship between two or more variables. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Neeraj Ojha | 2022-03-16 vote 7-0 by Mario Tristan, Paola Rosati, Louis Leff, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte | 2022-02-24 comment: There are measures of association between more than two variables, for example, an estimator of interaction information. So, this should be "two or more variables" (or just "variables"). Also, I don't like the term "represents", I'd prefer to say "A statistic that quantifies a relationship between variables." | ||||||||
3 | STATO:0000622 | ratio-based measure of association | A measure of association expressed as a ratio. | This categorical (parent) term can be used for a statistic that is a ratio, quantifies a relationship between two variables, and is not found in the child terms. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal | 2022-12-28 vote 7-0 by Janice Tufte, Mario Tristan, Joanne Dehnbostel, Harold Lehman, Yuan Gao, Jesus Lopez-Alcalde, Eric Harvey | 2022-02-24 comment: I think we should replace "represents" with "quantifies" and remove the restriction to two variables. "A statistic that is a ratio and quantifies a relationship between variables." Second, I think you want a more restrictive definition than a statistic that is a ratio. For example, the uncertainty coefficient, I(X;Y)/H(Y), is a ratio and a measure of association, but I don't think you'd consider it a ratio-based measure of association (maybe you would, in which case this is OK). You should also consider whether monotonic transformations of ratios count as ratio-based measures. It is common for people to take logarithms of ratios. I'm not sure what the utility is of this category. When does someone need it? Could we just put all its children Measure of Association? | |||||||
4 | STATO:0000677 | hazard ratio | A measure of association that is the ratio of the hazard rate of an event in one group to the hazard rate of the same event in another group. | Hazard rate (SEVCO TBD:0000025) is defined as: A conditional instantaneous rate in which the numerator represents an incidence conditioned on survival to a specified time, and the denominator represents a time interval with a duration approaching zero. The groups being compared are often the exposed group versus the unexposed group, but hazard ratio can also be applied to comparisons of one exposure relative to another exposure. A hazard ratio of one means there is no difference between two groups in terms of their hazard rates, based on whether or not they were exposed to a certain substance or factor, or how they responded to two interventions being compared. A hazard ratio of greater than one implies an association of greater risk, and a hazard ratio of less than one implies an association of lower risk. The hazard ratio can be calculated from studies in which the proportion of exposed participants who had the event is known, the proportion of unexposed participants who had the event is known, and the timing of events for each participant is known or estimable, such as a cohort study or clinical trial. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | NCIt: Hazard ratio = A measure of how often a particular event happens in one group compared to how often it happens in another group, over time. In cancer research, hazard ratios are often used in clinical trials to measure survival at any point in time in a group of patients who have been given a specific treatment compared to a control group given another treatment or a placebo. A hazard ratio of one means that there is no difference in survival between the two groups. A hazard ratio of greater than one or less than one means that survival was better in one of the groups. https://www.statisticshowto.com/hazard-ratio/ The hazard ratio is a comparison between the probability of events in a treatment group, compared to the probability of events in a control group. Hazard Ratio in Clinical Trials (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC478551/) The hazard ratio is an estimate of the ratio of the hazard rate in the treated versus the control group. The hazard rate is the probability that if the event in question has not already occurred, it will occur in the next time interval, divided by the length of that interval. The time interval is made very short, so that in effect the hazard rate represents an instantaneous rate. The Hazards of Hazard Ratios (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653612/) The hazard ratio (HR) is the main, and often the only, effect measure reported in many epidemiologic studies. For dichotomous, non–time-varying exposures, the HR is defined as the hazard in the exposed groups divided by the hazard in the unexposed groups. For all practical purposes, hazards can be thought of as incidence rates and thus the HR can be roughly interpreted as the incidence rate ratio. The HR is commonly and conveniently estimated via a Cox proportional hazards model, which can include potential confounders as covariates. | Measure of Association | ||||||
4 | STATO:0000680 | incidence rate ratio | A measure of association that is the ratio of two incidence rates. | Incidence Rate (SEVCO TBD:0000024) is defined as: A rate in which the number of new events per total at risk is divided by an interval of time. The incidence rates may refer to the same event comparing two different groups, or the same group comparing two different events. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | Measure of Association | |||||||
5 | STATO:0000681 | standardized incidence ratio | An incidence rate ratio in which the numerator is the incidence rate in a group and the denominator is the incidence rate for a reference population. | The incidence rate used for the denominator may be an expected incidence rate for a reference population. The reference population may refer to a general population of the geographic area from which the cohort was selected. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | 2022-06-08 comment: ... and the denominator is the incidence rate or expected incidence rate for a reference population. Comment for application: The reference population may refer to a general population of the geographic area from which the cohort was selected. | Measure of Association | ||||||
4 | STATO:0000182 | odds ratio | A measure of association that is the ratio of two odds. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte | 2022-03-16 vote 8-0 by Mario Tristan, Paola Rosati, Louis Leff, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte, Eric Moyer | STATO: odds ratio (OR) = Odds ratio is a ratio that measures effect size, that is the strength of association between 2 dichotomous variables, one describing an exposure and one describing an outcome. It represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure ( the probability of the event occuring divided by the probability of an event not occurring). The odds ratio is a ratio of describing the strength of association or non-independence between two binary data values by forming the ratio of the odds for the first group and the odds for the second group. Odds ratio are used when one wants to compare the odds of something occurring to two different groups. UMLS: The ratio of two odds. The exposure-odds ratio for case control data is the ratio of the odds in favor of exposure among cases to the odds in favor of exposure among noncases. The disease-odds ratio for a cohort or cross section is the ratio of the odds in favor of disease among the exposed to the odds in favor of disease among the unexposed. The prevalence-odds ratio refers to an odds ratio derived cross-sectionally from studies of prevalent cases. (MSH) A measure of the odds of an event happening in one group compared to the odds of the same event happening in another group. In cancer research, odds ratios are most often used in case-control (backward looking) studies to find out if being exposed to a certain substance or other factor increases the risk of cancer. For example, researchers may study a group of individuals with cancer (cases) and another group without cancer (controls) to see how many people in each group were exposed to a certain substance or factor. They calculate the odds of exposure in both groups and then compare the odds. An odds ratio of one means that both groups had the same odds of exposure and, therefore, the exposure probably does not increase the risk of cancer. An odds ratio of greater than one means that the exposure may increase the risk of cancer, and an odds ratio of less than one means that the exposure may reduce the risk of cancer. (NCI) The ratio of the odds of an event occurring in one group to the odds of it occurring in another group, or to a sample-based estimate of that ratio. (NCI) NICE: Compares the odds (probability) of something happening in 1 group with the odds of it happening in another. An odds ratio of 1 shows that the odds of the event happening (for example, a person developing a disease or a treatment working) is the same for both groups. An odds ratio of greater than 1 means that the event is more likely in the first group than the second. An odds ratio of less than 1 means that the event is less likely in the first group than in the second group. | Measure of Association | |||||||
4 | STATO:0000678 | prevalence ratio | A measure of association that is the ratio of two prevalences. | Prevalence (SEVCO STATO:0000412) is defined as: A proportion in which the numerator represents all events of interest (for example, both new and preexisting cases of a disease) in the population, which is represented by the denominator. The Prevalence Ratio indicates the magnitude of the prevalence of an event/outcome in one group of subjects/individuals (with characteristics/attribute) relative to another group (with different characteristics/attributes), such as the prevalence of the disease among the exposed persons to the prevalence of the disease among the unexposed persons. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Janice Tufte | 2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | 2022-06-08 comment: Comment for application: The prevalence Ratio indicates how large is the prevalence of an event/outcome in one group of subjects/individuals (with characteristics/attribute) relative to another group (without the characteristics/attributes), such as the prevalence of the disease among the exposed persons to the prevalence of the disease among the unexposed persons. | https://www.ctspedia.org/do/view/CTSpedia/PrevalenceRatio#:~:text=Reference-,Definition%20of%20Prevalence%20Ratio,the%20proportion%20with%20the%20exposure. The ratio of the proportion of the persons with disease over the proportion with the exposure. Calculation is described here: https://sphweb.bumc.bu.edu/otlt/MPH-Modules/PH717-QuantCore/PH717-Module3-Frequency-Association/PH717-Module3-Frequency-Association12.html | Measure of Association | |||||
4 | STATO:0000245 | risk ratio | A measure of association that is the ratio of the risk of an event in one group to the risk of the same event in another group. | The groups being compared are often the exposed group versus the unexposed group, but risk ratio can also be applied to comparisons of one exposure relative to another exposure. A risk ratio of one means there is no difference between two groups in terms of their risk, based on whether or not they were exposed to a certain substance or factor, or how they responded to two interventions being compared. A risk ratio of greater than one implies an association of greater risk, and a risk ratio of less than one implies an association of lower risk. The risk ratio can be calculated from studies in which the proportion of exposed participants who had the event is known and the proportion of unexposed participants who had the event is known, such as a cohort study or clinical trial. | Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2022-06-08 vote 6-0 by Robin Ann Yurk, Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati | STATO: relative risk (risk ratio) = Relative risk is a measurement datum which denotes the risk of an 'event' relative to an 'exposure'. Relative risk is calculated by forming the ratio of the probability of the event occurring in the exposed group versus the probability of this event occurring in the non-exposed group. NCIt Relative Risk A measure of the risk of a certain event happening in one group compared to the risk of the same event happening in another group. In cancer research, risk ratios are used in prospective (forward looking) studies, such as cohort studies and clinical trials. A risk ratio of one means there is no difference between two groups in terms of their risk of cancer, based on whether or not they were exposed to a certain substance or factor, or how they responded to two treatments being compared. A risk ratio of greater than one or of less than one usually means that being exposed to a certain substance or factor either increases (risk ratio greater than one) or decreases (risk ratio less than one) the risk of cancer, or that the treatments being compared do not have the same effects OBCS relative risk A data item that equals the incidence in exposed individuals divided by the incidence in unexposed individuals. The relative risk can be calculated from studies in which the proportion of patients exposed and unexposed to a risk is known, such as a cohort study. CDC https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section5.html: A risk ratio (RR), also called relative risk, compares the risk of a health event (disease, injury, risk factor, or death) among one group with the risk among another group. It does so by dividing the risk (incidence proportion, attack rate) in group 1 by the risk (incidence proportion, attack rate) in group 2. The two groups are typically differentiated by such demographic factors as sex (e.g., males versus females) or by exposure to a suspected risk factor (e.g., did or did not eat potato salad). Often, the group of primary interest is labeled the exposed group, and the comparison group is labeled the unexposed group. | Measure of Association | ||||||
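As a non-normative illustration of how the ratio measures above (odds ratio and risk ratio) are derived from a 2×2 exposure-by-outcome table, the following Python sketch uses hypothetical cell counts `a`, `b`, `c`, `d`; it is not part of the code system.

```python
# Illustrative sketch: odds ratio and risk ratio from a 2x2 exposure-by-outcome table.
# Hypothetical cell counts:
#                 event   no event
#   exposed         a         b
#   unexposed       c         d
a, b = 20, 80      # exposed group
c, d = 10, 90      # unexposed group

risk_exposed = a / (a + b)          # incidence proportion in the exposed group
risk_unexposed = c / (c + d)        # incidence proportion in the unexposed group
risk_ratio = risk_exposed / risk_unexposed

odds_exposed = a / b                # odds of the event in the exposed group
odds_unexposed = c / d              # odds of the event in the unexposed group
odds_ratio = odds_exposed / odds_unexposed

print(f"risk ratio = {risk_ratio:.2f}")   # 2.00
print(f"odds ratio = {odds_ratio:.2f}")   # 2.25
```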
4 | STATO:0000411 | likelihood ratio positive | A measure of association that is the ratio of the probability of the test giving a positive result when testing an affected subject and the probability of the test giving a positive result when a subject is not affected. | The probability of the test giving a positive result when testing an affected subject is also called the sensitivity [SEVCO term STATO:0000233] or true positive rate. The probability of the test giving a positive result when a subject is not affected is called the false positive rate and is calculated as 1 minus the specificity [SEVCO term STATO:0000134]. The Likelihood Ratio Positive (LR+) is calculated as Sensitivity / (1 - Specificity). The Likelihood Ratio Positive may also be calculated as the posterior probability (positive predictive value) divided by the prior probability (prevalence). When the test result is a specific value on a continuous scale, the Likelihood Ratio Positive is the ratio of the likelihood of the test giving the specific value when testing an affected subject and the likelihood of the test giving the specific value when a subject is not affected. In the context of a probability distribution function, e.g. normal distribution, the x axis is the value and y axis is the likelihood. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal | 2022-08-10 vote 5-0 by Robin Ann Yurk, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati | STATO: positive likelihood ratio (likelihood ratio for positive results) = the likelihood ratio of positive results is a ratio which is form by dividing the sensitivity value of a test by the difference between 1 and specificity of the test. This can be expressed also as dividing the probability of the test giving a positive result when testing an affected subject versus the probability of the test giving a positive result when a subject is not affected. AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes: The positive and negative likelihood ratios (LR+ and LR-, respectively) quantify the change in the certainty of the “diagnosis” conferred by test results. More specifically, the likelihood ratios transform the pretest odds to the posttest odds of a given (positive or negative) diagnosis: posttest odds = pretest odds x LR For a positive result with the medical test, the positive likelihood ratio would be used in the above relationship; for a negative result with the medical test portable monitor, the negative likelihood ratio would be used. If a given medical test has very good ability to predict the “true disease status,” its positive likelihood ratio will be high (i.e., will greatly increase the odds of a positive diagnosis) and its negative likelihood ratio will be low (i.e., will diminish substantially the likelihood of the positive diagnosis). A completely non-informative portable monitor would have likelihood ratios equal to 1 (i.e., does not transform the pre-test odds substantially in the equation above). Typically, a positive likelihood ratio of 10 or more and a negative likelihood ratio of 0.1 or less are considered to represent informative tests.3 We note that other, more lenient boundaries for LR+ and LR- can be used3 and that the choice of the boundaries is a subjective decision. It is interesting to note that studies with high LR+ and low LR- can be readily identified in the square sensitivity/100 percent-specificity plot, as shown in the Appendix Figure above. | Measure of Association | ||||||
4 | STATO:0000410 | likelihood ratio negative | A measure of association that is the ratio of the probability of the test giving a negative result when testing an affected subject and the probability of the test giving a negative result when a subject is not affected. | The probability of the test giving a negative result when testing an affected subject is also called the false negative rate and is calculated as 1 minus the sensitivity [SEVCO term STATO:0000233]. The probability of the test giving a negative result when a subject is not affected is called the specificity [SEVCO term STATO:0000134] or true negative rate. The Likelihood Ratio Negative (LR-) is calculated as (1 - Sensitivity ) / Specificity. The Likelihood Ratio Negative may also be calculated as the posterior probability (1 - negative predictive value) divided by the prior probability (prevalence). | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal | 2022-08-10 vote 5-0 by Robin Ann Yurk, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati | STATO: negative likelihood ratio (likelihood ratio for negative results) = the likelihood ratio of negative results is a ratio which is formed by dividing the difference between 1 and sensitivity of the test by the specificity value of a test.. This can be expressed also as dividing the probability of a person who has the disease testing negative by the probability of a person who does not have the disease testing negative. AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes: The positive and negative likelihood ratios (LR+ and LR-, respectively) quantify the change in the certainty of the “diagnosis” conferred by test results. More specifically, the likelihood ratios transform the pretest odds to the posttest odds of a given (positive or negative) diagnosis: posttest odds = pretest odds x LR For a positive result with the medical test, the positive likelihood ratio would be used in the above relationship; for a negative result with the medical test portable monitor, the negative likelihood ratio would be used. If a given medical test has very good ability to predict the “true disease status,” its positive likelihood ratio will be high (i.e., will greatly increase the odds of a positive diagnosis) and its negative likelihood ratio will be low (i.e., will diminish substantially the likelihood of the positive diagnosis). A completely non-informative portable monitor would have likelihood ratios equal to 1 (i.e., does not transform the pre-test odds substantially in the equation above). Typically, a positive likelihood ratio of 10 or more and a negative likelihood ratio of 0.1 or less are considered to represent informative tests.3 We note that other, more lenient boundaries for LR+ and LR- can be used3 and that the choice of the boundaries is a subjective decision. It is interesting to note that studies with high LR+ and low LR- can be readily identified in the square sensitivity/100 percent-specificity plot, as shown in the Appendix Figure above. | Measure of Association | ||||||
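A minimal, non-normative sketch of the likelihood ratio calculations described in the two rows above, assuming hypothetical sensitivity and specificity values:

```python
# Illustrative sketch: likelihood ratios from sensitivity and specificity.
sensitivity = 0.90   # true positive rate (hypothetical value)
specificity = 0.80   # true negative rate (hypothetical value)

lr_positive = sensitivity / (1 - specificity)   # LR+ = Sensitivity / (1 - Specificity)
lr_negative = (1 - sensitivity) / specificity   # LR- = (1 - Sensitivity) / Specificity

print(f"LR+ = {lr_positive:.2f}")   # 4.50
print(f"LR- = {lr_negative:.3f}")   # 0.125
```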
4 | TBD:0000029 | positive clinical utility index | DEFERRED | Mitchell AJ 2011 https://www.psycho-oncology.info/686.pdf https://link.springer.com/article/10.1007/s10654-011-9561-x positive clinical utility index = sensitivity x PPV Asberg 2019 A new index of clinical utility for diagnostic tests at https://www.tandfonline.com/doi/full/10.1080/00365513.2019.1677938 We propose a new clinical utility index (CUI), which is the expected gain in utility (EGU) of the test divided by the EGU of an ideal test, both adjusted for EGU of the optimal clinical action without testing. The index expresses the relative benefit of using the test compared to using an optimal test when making a clinical decision. Expected gain in utility (EGU) of a clinical option, at a certain probability of disease (p), is the difference between its expected utility and the expected utility of another option, for instance doing nothing [4]. The EGU of the option W at probability p is EGUp(W) = p×BW – (1 - p)×CW ......CUI is then a complicated equation. | 2022-08-10 discussion: Considering 2 source definitions that are incompatible and limited usage overall, decision made to defer this term to future consideration for SEVCO. | Measure of Association | ||||||||
4 | TBD:0000030 | negative clinical utility index | DEFERRED | see Positive Clinical Utility Index | 2022-08-10 discussion: Considering 2 source definitions that are incompatible and limited usage overall, decision made to defer this term to future consideration for SEVCO. | Measure of Association | ||||||||
4 | STATO:0000415 | diagnostic accuracy | A measure of association that is the ratio of the number of correct results to the total number tested. | Where results are reported as positive or negative, correct results are reported as true, and incorrect results are reported as false, the diagnostic accuracy is calculated as ( True Positives + True Negatives ) / ( True Positives + True Negatives + False Positives + False Negatives ). For continuous values, Measurement Accuracy (SEVCO term: TBD:MeasAccu) would be used instead of Diagnostic Accuracy. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel | 2022-08-24 vote 6-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey, Janice Tufte | STATO: "accuracy (Rand accuracy, Rand index) = in the context of binary classification, accuracy is defined as the proportion of true results (both true positives and true negatives) to the total number of cases examined (the sum of true positive, true negative, false positive and false negative). It can be understood as a measure of the proximity of measurement results to the true value." | Measure of Association | ||||||
4 | STATO:0000679 | diagnostic odds ratio | A measure of association that is the ratio of the odds of a positive test in those with disease relative to the odds of a positive test in those without disease. | The Diagnostic Odds Ratio may be calculated as the Likelihood Ratio Positive divided by the Likelihood Ratio Negative. The Diagnostic Odds Ratio is an overall measure of the discriminatory power of a test and does not distinguish between the power to detect (rule in) or exclude (rule out). | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins | 2022-08-31 vote 6-0 by Janice Tufte, nisha mathew, Muhammad Afza, Harold Lehmann, Philippe Rocca-Serra, Eric Harvey | AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes: The diagnostic odds ratio (DOR) describes the odds of a positive test in those with disease relative to the odds of a positive test in those without disease.4 It can be computed in terms of sensitivity and specificity as well as in terms of positive and negative likelihood ratios (DOR = LR+/LR-). Thus this single measure includes information about both sensitivity and specificity and tends to be reasonably constant despite diagnostic threshold. However, it is impossible to use diagnostic odds ratios to weigh sensitivity and specificity separately, and to distinguish between tests with high sensitivity and low specificity and tests with low sensitivity and high specificity. Another disadvantage is that it is difficult for clinicians to understand and apply, limiting its clinical value. This is partly because they are not often exposed to diagnostic odds ratios. A diagnostic odds ratio is similar to an odds ratio that measures strength of association in an observational study or effect size in a trial. However, contrary to the typical effect size magnitudes of such odds ratios (often between 0.5 and 2), diagnostic odds ratios can attain much larger values (often greater than 100). | Measure of Association | ||||||
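The diagnostic accuracy and diagnostic odds ratio definitions above can be illustrated from the four cells of a test-versus-reference-standard 2×2 table; the counts below are hypothetical and the sketch is not part of the code system.

```python
# Illustrative sketch: diagnostic accuracy and diagnostic odds ratio from 2x2 counts.
tp, fp, fn, tn = 90, 20, 10, 80   # hypothetical true/false positives and negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)   # (TP + TN) / total tested

# Diagnostic odds ratio: odds of a positive test with disease relative to without disease
dor = (tp * tn) / (fp * fn)

# Equivalent calculation as LR+ divided by LR-
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
dor_from_lr = (sensitivity / (1 - specificity)) / ((1 - sensitivity) / specificity)

print(f"diagnostic accuracy = {accuracy:.2f}")      # 0.85
print(f"diagnostic odds ratio = {dor:.1f}")         # 36.0
print(f"DOR via LR+/LR-      = {dor_from_lr:.1f}")  # 36.0 (same value)
```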
4 | STATO:0000524 | phi coefficient | A measure of association, ranging from -1 to 1, that measures the strength and direction of the linear relationship between two binary variables. | For a 2×2 contingency table with columns Variable 1 Value 1 and Variable 1 Value 2, and rows Variable 2 Value 1 (cells A and B) and Variable 2 Value 2 (cells C and D), where A, B, C, and D represent the observation frequencies (the cell counts), the formula for the phi coefficient ($\Phi$) is: $$\Phi = \frac{AD - BC}{\sqrt{(A+B)(C+D)(A+C)(B+D)}}$$ | Brian S. Alper, Harold Lehmann, Muhammad Afzal | 2023-01-25 vote 6-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Eric Harvey | STATO: Matthews correlation coefficient (MCC) = Matthews Correlation Coefficient (or MCC) is a correlation coefficient which is a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975. | Measure of Correlation | ||||||
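A brief, non-normative sketch of the phi coefficient formula above, using hypothetical cell counts:

```python
# Illustrative sketch: phi coefficient for a 2x2 contingency table.
from math import sqrt

a, b, c, d = 30, 10, 20, 40   # hypothetical cell counts A, B, C, D

phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.3f}")
```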
3 | STATO:0000623 | measure of agreement | A measure of association of two variables representing measurements of the same attribute of an entity. | The term 'Measure of Agreement' is primarily used as a class for types of measure of agreement listed in the hierarchy but may be used as the code for a measure of agreement that is not listed. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel | 2022-12-21 vote 5-0 by Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Janice Tuft, Harold Lehmann | ||||||||
4 | STATO:0000682 | kappa | A measure of agreement among categorical assessments, corrected for chance agreement. | In the literature, the same eponymic term (e.g., 'Cohen's kappa') is used with different formulas. In SEVCO, we define each term with a single formula, and recommend annotators to choose the SEVCO term based on the formula. This is a widely used term to measure inter-rater reliability. Refer to measures of association to see other terms: for example, intra-class correlation coefficient (ICC). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2022-09-14 (After deleting one "yes" vote at the request of the voter) vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin | 2022-09-14 Comment "I recommend adding ....is a measure of interrater reliability or is this an Interrater reliability testing an alternate term." | OBCS kappa statistic = a generic term for several similar measures of agreement used with categorical data; typically used in assessing the degree to which two or more raters, examining the same data, agree on assigning data to categories | Measure of Agreement | |||||
5 | STATO:0000683 | simple chance-corrected agreement coefficient | A Kappa statistic in which the expected agreement by chance is based on an assumption that all possible categories for assignment are equally likely. | A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement. In the simple chance-corrected agreement coefficient, the expected chance agreement is modeled as the inverse of the number of categories (1/q) where q is the number of possible categories for assignment. The simple chance-corrected agreement coefficient is calculated as ( p[a] - 1/q ) / ( 1 - 1/q ) where p[a] is the observed percent agreement and q is the number of possible categories for assignment. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2022-09-14 vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5965565 Gwet KL. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance. Educ Psychol Meas. 2016 Aug;76(4):609-637. doi: 10.1177/0013164415596420. Epub 2015 Jul 28. PMID: 29795880; PMCID: PMC5965565. Brennan and Prediger (1981) proposed a simple chance-corrected agreement coefficient, which generalizes to multiple raters and multiple categories, the G-index previously proposed by Holley and Guilford (1964) for two raters and two categories. What is known as the Holley–Guilford G-index was previously proposed independently by various authors under different names. Among them are Guttman (1945), Bennett, Alpert, and Goldstein (1954), and Maxwell (1977). For an interrater reliability experiment involving r raters who classify n subjects into one of q possible categories, the Brennan-Prediger coefficient is given by κ[BP] = ( p[a] - 1/q ) / ( 1 - 1/q ), where the percent agreement p[a] is defined by Equation (3 -- see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5965565/#disp-formula3-0013164415596420), and the percent chance agreement is a constant representing the inverse of the number of categories. | Measure of Agreement | ||||||
5 | STATO:0000630 | Cohen’s kappa | A Kappa statistic in which the expected agreement by chance is based on an assumption that the likelihood of each category for assignment is based on the proportion observed, and the number of raters is 2. | A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement. In Cohen's kappa, the expected chance agreement is modeled as the summation of the differences, between the square of the expected probability of the category and the quotient of its variance divided by 2 (the number of raters), for each category. Cohen's kappa is calculated as ( p[a] - p[e] ) / ( 1 - p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2022-09-14 vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin | OBCS Cohen's kappa measurement (inter-rater agreement, inter-annotator agreement; inter-rater agreement, inter-annotator agreement) = a statistical measure of agreement for categorical data; a measure of inter-rater agreement or inter-annotator agreement | Measure of Agreement | ||||||
5 | STATO:0000631 | modified Cohen’s kappa for more than 2 raters | A Kappa statistic in which the expected agreement by chance is based on an assumption that the likelihood of each category for assignment is based on the proportion observed, and the number of raters is more than 2. | A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement. In the modified Cohen's kappa for more than 2 raters, the expected chance agreement is modeled as the summation of the differences, between the square of the expected probability of the category and the quotient of its variance divided by the number of raters, for each category. The modified Cohen's kappa for more than 2 raters is calculated as ( p[a] - p[e] ) / ( 1 - p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2022-09-14 vote 5-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde | OBCS Cohen's kappa measurement (inter-rater agreement, inter-annotator agreement; inter-rater agreement, inter-annotator agreement) = a statistical measure of agreement for categorical data; a measure of inter-rater agreement or inter-annotator agreement | Measure of Agreement | ||||||
5 | STATO:0000629 | Scott’s pi | A Kappa statistic where the expected agreement between two raters is expressed in terms of the square of arithmetic means of marginal proportions of each assessment category. | Scott's pi is a kappa statistic for two raters that assumes the likelihood of each category for assignment is based on the same distribution of rater responses, leading to the use of squared arithmetic means of the marginal proportion of each assessment category as its estimate of "chance agreement." Pr(expected) is calculated using squared "joint proportions" which are squared arithmetic means of the marginal proportions of each assessment category, in contrast to Cohen's Kappa which uses squared geometric means. Scott's pi = ( p[a] - p[e] ) / ( 1 - p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement expressed as the squared joint proportions of the marginal sums. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann | 2022-09-21 comment: I think there should be a formula included in the comment for application as with all the other Kappa terms 2022-09-28 adjustment: Steering Group changed the first sentence of Comment for application to better represent the assumption. | Measure of Agreement | ||||||
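The kappa-family terms above (simple chance-corrected agreement coefficient, Cohen's kappa, and Scott's pi) share the form ( p[a] - p[e] ) / ( 1 - p[e] ) and differ only in how the expected chance agreement p[e] is modeled. The following non-normative sketch computes all three for two raters from the same hypothetical ratings:

```python
# Illustrative sketch: chance-corrected agreement coefficients for two raters.
from collections import Counter

rater1 = ["yes", "yes", "no", "no", "yes", "maybe", "no", "yes"]   # hypothetical ratings
rater2 = ["yes", "no",  "no", "no", "yes", "yes",   "yes", "yes"]
categories = sorted(set(rater1) | set(rater2))
n, q = len(rater1), len(categories)

# Observed percent agreement p[a]
p_a = sum(r1 == r2 for r1, r2 in zip(rater1, rater2)) / n

m1 = Counter(rater1)   # marginal counts for rater 1
m2 = Counter(rater2)   # marginal counts for rater 2

# Expected chance agreement p[e] under each model
p_e_simple = 1 / q                                                     # equally likely categories
p_e_cohen = sum((m1[k] / n) * (m2[k] / n) for k in categories)         # product of rater marginals
p_e_scott = sum(((m1[k] + m2[k]) / (2 * n)) ** 2 for k in categories)  # squared mean marginals

for name, p_e in [("simple chance-corrected coefficient", p_e_simple),
                  ("Cohen's kappa", p_e_cohen),
                  ("Scott's pi", p_e_scott)]:
    print(f"{name}: {(p_a - p_e) / (1 - p_e):.3f}")
```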
4 | STATO:0000632 | misclassification rate | A ratio of the number of incorrect results to the total number tested. | Where results are reported as positive or negative, incorrect results are reported as false, and correct results are reported as true, the misclassification rate is calculated as ( False Positives + False Negatives ) / ( True Positives + True Negatives + False Positives + False Negatives ). | Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel | 2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey | ||||||||
4 | STATO:0000628 | F1-score | A ratio representing the harmonic mean of recall and precision. | The F1-score is used as a measure of quality for classification algorithms and information retrieval strategies, where 1 represents the best precision and recall and 0 represents the worst precision and recall. A harmonic mean of a set of quantities is the reciprocal of the arithmetic mean of the reciprocals of each quantity. The F1-score is thus calculated as 1 / (the arithmetic mean of the reciprocals), that is, F1 = 1 / (((1/recall) + (1/precision)) / 2), which is equivalent to F1 = 2 × (precision × recall) / (precision + recall). Recall is sensitivity (STATO:0000233). Precision (PPV) is SEVCO TBD:0000022. [F-beta will be defined elsewhere in the code system.] | Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Brian S. Alper | 2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann | OCRe F measure [not used due to inaccuracy in the definition] | Measure of Agreement | ||||||
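A short, non-normative sketch of the misclassification rate and F1-score calculations above, using hypothetical counts:

```python
# Illustrative sketch: misclassification rate and F1-score from 2x2 counts.
tp, fp, fn, tn = 90, 20, 10, 80   # hypothetical counts

misclassification_rate = (fp + fn) / (tp + tn + fp + fn)

recall = tp / (tp + fn)            # sensitivity
precision = tp / (tp + fp)         # positive predictive value
f1_harmonic = 1 / (((1 / recall) + (1 / precision)) / 2)
f1_direct = 2 * (precision * recall) / (precision + recall)   # algebraically identical

print(f"misclassification rate = {misclassification_rate:.2f}")   # 0.15
print(f"F1-score = {f1_harmonic:.3f} (same as {f1_direct:.3f})")
```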
3 | STATO:0000611 | measure of correlation | A measure of association between ordinal or continuous variables. | A value of 0 means no association. A positive value means a positive association (as one variable increases, the other variable increases). A negative value means a negative association (as one variable increases, the other variable decreases). For correlation coefficients, the possible values range from +1 (perfect positive association) to -1 (perfect negative association). | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao, Kenneth Wilkins, Harold Lehmann | 2022-11-16 vote 5-0 by Brian S. Alper, Philippe Rocca-Serra, Harold Lehman, Jesus Lopez-Alcalde, Eric Harvey | 2022-10-26 vote 6-1 by Yuan Gao, Philippe Rocca-Serra, Eric Harvey, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte | 2022-10-26 comments: there are measures of correlation which characterise non-linear relation between 2 variables . so I was wondering if there was a need to specify "measure of linear correlation" , where a subclass would be 'correlation coefficient). The type 'measure of correlation' becoming a parent class for the 'measure of non-linear correlation' Should we say, "A value of 0 means no linear association, a value of +1 mean perfect positive linear (a positive slope) association, and a value of -1 means perfect negative association (a negative slope)." | ||||||
4 | STATO:0000301 | covariance | A measure of correlation that is not normalized by the variances of the variables. | A measure of correlation is a measure of association between ordinal or continuous variables. Covariance is used in the calculation of other measures of correlation. Covariance can only be calculated for interval or continuous variables. Because the covariance is not normalized by the variances of the variables, the magnitude of the covariance is not informative without consideration of the magnitude of the respective variances. Covariance is informative regarding whether both variables vary in the same direction (positive covariance) or in the opposite direction (negative covariance). Covariance for a sample is calculated as the mean of the products of deviations from the sample mean for the variables: $$Cov(X,Y) = \frac{\sum_{i=1}^{n} (x_i - \overline{x})(y_i - \overline{y})}{n - 1}$$ where $x_i$ is one of the observed values of X, $\overline{x}$ is the sample mean of X, $y_i$ is one of the observed values of Y, and $\overline{y}$ is the sample mean of Y. Covariance as the population-level quantity is given by the expected value of the product of deviations from the mean for the variables: Cov(X, Y) = E[(X - µ)(Y - ν)] where µ = E(X) and ν = E(Y). Covariance is a continuous value with a range of negative infinity to positive infinity. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Yuan Gao, Khalid Shahin, Muhammad Afzal | 2022-11-23 vote 5-0 by Mario Tristan, Yuan Gao, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann | STATO: "covariance = The covariance is a measurement data item about the strength of correlation between a set (2 or more) of random variables. The covariance is obtained by forming: cov(X,Y)=E([X-E(X)][Y-E(Y)] where E(X), E(Y) is the expected value (mean) of variable X and Y respectively. covariance is symmetric so cov(X,Y)=cov(Y,X). The covariance is usefull when looking at the variance of the sum of the 2 random variables since: var(X+Y) = var(X) +var(Y) +2cov(X,Y) The covariance cov(x,y) is used to obtain the coefficient of correlation cor(x,y) by normalizing (dividing) cov(x,y) but the product of the standard deviations of x and y." | Measure of Correlation | ||||||
4 | STATO:0000280 | Pearson correlation coefficient | A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the linear relationship between values of two continuous variables. | A measure of correlation is a measure of association between ordinal or continuous variables. Pearson correlation coefficient is designed to be used between continuous variables. Pearson correlation coefficient for a sample ($r$) is calculated as $r = \dfrac{\widehat{cov}(x,y)}{s_x*s_y}$ where $ \widehat{cov}(x,y)$ is the estimated covariance, and $s_x$ and $s_y$ are the sample standard deviations. Pearson correlation coefficient for a population ($\rho$) is defined as $\rho= \dfrac{cov(X,Y)}{\sigma_X*\sigma_Y}$ where cov(X,Y) is covariance of X and Y and $\sigma_X$ and $\sigma_Y$ are the population standard deviations. Assumptions for computing Pearson's correlation coefficient include a linear relationship between 2 continuous variables and each of the variables approximates a normal distribution. Covariance is [defined in SEVCO](https://fevir.net/resources/CodeSystem/27270#STATO:0000301). | Kenneth Wilkins, Muhammad Afzal, Yuan Gao, Khalid Shahin, Joanne Dehnbostel, Brian S. Alper, Harold Lehmann | 2022-12-07 vote 5-0 by Muhammad Afzal, Mario Tristan, Eric Harvey, Yuan Gao, Mahnoor Ahmed | STATO: "Pearson's correlation coefficient (Pearson product-moment correlation coefficient; Pearson's r; r statistics) = The Pearson's correlation coefficient is a correlation coefficient which evaluates two continuous variables for association strength in a data sample. It assumes that both variables are normally distributed and linearity exists. The coefficient is calculated by dividing their covariance with the product of their individual standard deviations. It is a normalized measurement of how the two are linearly related." | Measure of Correlation | ||||||
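A non-normative sketch of the sample covariance and Pearson correlation coefficient formulas above, using hypothetical paired data:

```python
# Illustrative sketch: sample covariance and Pearson correlation coefficient.
from math import sqrt

x = [1.0, 2.0, 3.0, 4.0, 5.0]          # hypothetical paired observations
y = [2.1, 2.9, 3.6, 4.4, 5.2]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Sample covariance: sum of products of deviations, divided by n - 1
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)

# Sample standard deviations
s_x = sqrt(sum((xi - mean_x) ** 2 for xi in x) / (n - 1))
s_y = sqrt(sum((yi - mean_y) ** 2 for yi in y) / (n - 1))

pearson_r = cov_xy / (s_x * s_y)       # r = cov(x, y) / (s_x * s_y)
print(f"cov = {cov_xy:.3f}, r = {pearson_r:.3f}")
```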
4 | STATO:0000201 | Spearman rank-order correlation coefficient | A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated as the Pearson correlation coefficient between the rank values. | A measure of correlation is a measure of association between ordinal or continuous variables. Spearman rank-order correlation coefficient is designed to be used between ordinal and/or continuous variables. The Spearman rank-order correlation coefficient can identify monotonic (i.e. consistently non-increasing or consistently non-decreasing) relationships, whether the relationships are linear or non-linear. The Spearman rank-order correlation coefficient between two variables is equal to the [Pearson correlation coefficient](https://fevir.net/resources/CodeSystem/27270#STATO:0000280) between the rank values of those two variables. The Spearman rank-order correlation coefficient is the nonparametric counterpart to the Pearson correlation coefficient and may be used when the assumptions for computing Pearson's correlation coefficient (include a linear relationship between 2 continuous variables and each of the variables approximates a normal distribution) are not met. The Spearman rank-order correlation coefficient is appropriate when either variable has outliers, is ordinal, or is not normally distributed; when the variances of the two variables are unequal; or when the apparent relationship between the variables is non-linear. The assumptions for computing Spearman rank-order correlation coefficient include a monotonic relationship between 2 continuous or ordinal variables. | Kenneth Wilkins, Muhammad Afzal, Yuan Gao, Joanne Dehnbostel, Brian S. Alper, Harold Lehmann, Noor Ahmed | 2022-12-14 vote 5-0 by Jesus Lopez-Alcalde, Yuan Gao, Mario Tristan, Eric Harvey, Harold Lehmann | 2022-12-07 comment: The fundamental difference between the two correlation coefficients is that the Pearson coefficient works with a linear relationship between the two variables whereas the Spearman Coefficient works with monotonic relationships as well. | STATO: "Spearman's rank correlation coefficient (Spearman's rho) = Spearman's rank correlation coefficient is a correlation coefficient which is a nonparametric measure of statistical dependence between two ranked variables. It assesses how well the relationship between two variables can be described using a monotonic function. If there are no repeated data values, a perfect Spearman correlation of +1 or -1 occurs when each of the variables is a perfect monotone function of the other. Spearman's coefficient may be used when the conditions for computing Pearson's correlation are not met (e.g linearity, normality of the 2 continuous variables) but may require a ranking transformation of the variables" | Measure of Correlation | |||||
4 | STATO:0000240 | Kendall correlation coefficient | A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated based on the difference in the number of concordant and discordant pairs of rankings divided by the number of all possible pairs of rankings. | A measure of correlation is a measure of association between ordinal or continuous variables. Kendall's correlation coefficient is designed to be used between ordinal variables (or continuous variables converted to ordinal variables). The Kendall's correlation coefficient can identify monotonic (i.e. consistently non-increasing or consistently non-decreasing) relationships, whether the relationships are linear or non-linear. The Kendall's correlation coefficient between two variables is calculated by determining the concordance or discordance of each pair of ranked values (whether or not two raters are concordant in one value being ranked equal or higher to the other value), and then dividing the difference between the number of concordant values ($n_c$) and the number of discordant values ($n_d$) by the number of pairs of ranked values ($\frac{1}{2}n(n-1)$). $$\tau = \dfrac{n_c - n_d}{\frac{1}{2}n(n-1)}$$ The Kendall's correlation coefficient is a nonparametric statistic and may be used when the assumptions for computing Pearson's correlation coefficient (include a linear relationship between 2 continuous variables and each of the variables approximates a normal distribution) are not met. The Kendall's correlation coefficient is appropriate when either variable has outliers, is ordinal, or is not normally distributed; when the variances of the two variables are unequal; or when the apparent relationship between the variables is non-linear. The assumptions for computing Kendall's correlation coefficient include a monotonic relationship between 2 ordinal variables. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel | 2023-01-25 vote 6-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Eric Harvey | 2022-12-21 vote 5-0 by Joanne Dehnbostel, Mario Trista, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann 2023-01-04 definition change by Steering Committee | STATO: Kendall's correlation coefficient (Kendall's tau (t) coefficient; Kendall rank correlation coefficient) = Kendall's correlation coefficient is a correlation coefficient between 2 ordinal variables (natively or following a ranking procedure) and may be used when the conditions for computing Pearson's correlation are not met (e.g linearity, normality of the 2 continuous variables) | Measure of Correlation | |||||
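A non-normative sketch of the Kendall correlation coefficient as defined above, counting concordant and discordant pairs for hypothetical rankings (ties are not handled):

```python
# Illustrative sketch: Kendall's tau from concordant and discordant pairs (no ties).
from itertools import combinations

x = [1, 2, 3, 4, 5]          # hypothetical ranks by variable 1
y = [2, 1, 4, 3, 5]          # hypothetical ranks by variable 2
n = len(x)

concordant = discordant = 0
for i, j in combinations(range(n), 2):
    # A pair is concordant when both variables order the two observations the same way
    if (x[i] - x[j]) * (y[i] - y[j]) > 0:
        concordant += 1
    else:
        discordant += 1

tau = (concordant - discordant) / (n * (n - 1) / 2)
print(f"tau = {tau:.2f}")    # 0.60 for this hypothetical data
```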
4 | STATO:0000612 | Goodman and Kruskal’s gamma | A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated based on the difference in the number of concordant and discordant pairs of rankings divided by the total number of pairs of rankings, where ties are not counted among the pairs of rankings. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel | 2023-01-25 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Eric Harvey | https://stats.stackexchange.com/questions/18112/how-do-the-Goodman-Kruskal-gamma-and-the-Kendall-tau-or-Spearman-rho-correlation | Measure of Correlation | |||||||
3 | STATO:0000565 | regression coefficient | A measure of association that is used as the coefficient of an independent variable in a regression model, of the dependent variable, which is linear in its parameters. | A value of zero means no association. The sign (positive or negative) reflects the direction of association. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal | 2023-02-07 vote 5-0 by Cauê Monaco, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey | STATO: regression coefficient = a regression coefficient is a data item generated by a type of data transformation called a regression, which aims to model a response variable by expression the predictor variables as part of a function where variable terms are modified by a number. A regression coefficient is one such number. | Measure of Association | ||||||
3 | STATO:0000685 | measure of calibration | A measure of association between a variable representing known or true values and a variable representing measured or predicted values. | Calibration is often used for measurement devices. The known or true values may be called the reference standard. Calibration is also used for predictive models and other contexts comparing computed or expected probabilities with empirical frequencies. A measure of calibration indicates the degree to which the computed values are underestimated or overestimated. | Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Kenneth Wilkins | 2023-12-18 vote 6-0 by Janice Tufte, Eric Harvey, Caue Monaco, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde | ||||||||
4 | STATO:0000686 | mean calibration | A measure of calibration that is the average of a function of the difference between the expected values and the observed values. | For predictive modeling of non-continuous variables, the mean calibration is a measure of calibration that is the average of a function of the difference between the expected probabilities and the observed frequencies. The expected values may be computed (as in predictive models) or may be derived from reference data (as typical for a measurement device). When the function is the square of the difference and the variables are binary (0 or 1), the measure is called the Brier score. | Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Brian S. Alper | 2024-01-22 vote 5-0 by Harold Lehmann, Homa Keshavarz, Eric Harvey, Janice Tufte, Cauê Monaco | Measure of Calibration | |||||||
4 | STATO:0000688 | calibration intercept | A measure of calibration that is the difference between the mean expected value and the mean observed value. | For calibration of binary outcome variables (0 or 1), the calibration intercept is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies. For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration intercept may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. There are other types of outcome variables for which the calibration-in-the-large measure may be obtained. The notion of calibration in the large is that the intercept is a gross assessment of whether the average prediction matches the average outcome, however, this interpretation is exquisitely sensitive to the choice of referent factors within the prediction model. | Harold Lehmann, Ken Wilkins, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin | 2024-02-12 vote 5-0 by Lenny Vasanthan, Xing Song, Eric Harvey, Harold Lehmann, Homa Keshavarz | 2024-02-05 vote 4-0 by Xing Song, Eric Harvey, Harold Lehmann, Homa Keshavarz | 2024-02-05 comment: The definition looks good to me. Only comment is that the first comment about "calibration of binary variable" made it is sounds like this concept is only for binary outcomes, while I think calibration is a generic measure for all generalized linear models. | Measure of Calibration | |||||
4 | STATO:0000687 | calibration slope | A measure of calibration that is the rate of change in the appropriately transformed value per unit change of the correspondingly transformed predicted value. | For calibration of binary outcome variables (0 or 1), the calibration slope is computed from a statistical model where the log odds of the predicted probabilities is a linear function of the empirical frequencies (logistic regression). The transformation is log odds (logit, the link function for a generalized linear model for the expected value of the outcome). For calibration of count outcome variables (0, 1, 2, 3, ...), the calibration slope may be computed from a statistical model where the log of the predicted mean counts is a linear function of the empirical frequencies determined by unique combinations of the covariates. The transformation is log, the link function, of the counts. There are other types of outcome variables for which the calibration slope may be obtained. Slopes further away from 1.0 indicate, at upper and lower values, over- or under-confidence in the prediction. | Harold Lehmann, Ken Wilkins, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin | 2024-03-11 vote 6-0 by Eric Harvey, Xing Song, Lenny Vasanthan, Elma OMERAGIC, Homa Keshavarz, Harold Lehmann | 2024-02-12 vote 5-0 by Lenny Vasanthan, Xing Song, Eric Harvey, Harold Lehmann, Homa Keshavarz BUT THEN term definition changed in response to the comment | 2024-02-12 comment: Again, the first comment about "calibration of binary variable" made it is sounds like this concept is only for binary outcomes. Logistic regression could be used as an example for this concept. | Measure of Correlation | |||||
2 | STATO:0000028 | measure of dispersion | A statistic that represents the variation or spread among data values in a dataset or data distribution. | This categorical (parent) term can be used for a statistic that is a measure of dispersion and is not found in the child terms. | Brian S. Alper, Kenneth Wilkins, Yuan Gao, Joanne Dehnbostel | 2023-02-07 vote 5-0 by Cauê Monaco, Harold Lehmann, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey | STATO: measure of variation (measure of dispersion) = measure of variation or statistical dispersion is a data item which describes how much a theoretical distribution or dataset is spread. NCIt: "Statistical dispersion- The variation between data values in a sample." UMLS: "Dispersion (C0332624) Definition: The variation between data values in a sample. Semantic Types: Spatial Concept" | |||||||
3 | STATO:0000035 | range | A measure of dispersion calculated as the difference between the maximum observed value and the minimum observed value. | A measure of dispersion is a statistic that represents the variation or spread among data values in a dataset or data distribution. The maximum observed value is a statistic that represents the largest non-null value in a collection of values that can be ordered by magnitude. The minimum observed value is a statistic that represents the smallest non-null value in a collection of values that can be ordered by magnitude. A range (as a statistic) is represented as a single value (the difference between maximum and minimum observed values) while, in common language, the term range is often expressed with two values (from the minimum to maximum values, or from the lower limit to the higher limit). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2023-02-20 vote 5-0 by Philippe Rocca-Serra, Janice Tufte, Harold Lehmann, Mario Tristan, Eric Harvey | STATO: range = the range is a measure of variation which describes the difference between the lowest score and the highest score in a set of numbers (a data set) | Measure of Dispersion | ||||||
3 | STATO:00000164 | interquartile range | A measure of dispersion calculated as the difference between the 75th percentile and the 25th percentile. | A measure of dispersion is a statistic that represents the variation or spread among data values in a dataset or data distribution. The 75th percentile is the median of the portion of the dataset or distribution with values greater than the median value. The 25th percentile is the median of the portion of the dataset or distribution with values lesser than the median value. An interquartile range (as a statistic) is represented as a single value (the difference between 75th and 25th percentiles) while, in common language, the term interquartile range is often expressed with two values (the 25th percentile and the 75th percentile). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2023-02-20 vote 5-0 by Philippe Rocca-Serra, Janice Tufte, Harold Lehmann, Mario Tristan, Eric Harvey | STATO: "inter quartile range = The interquartile range is a data item which corresponds to the difference between the upper quartile (3rd quartile) and lower quartile (1st quartile). The interquartile range contains the second quartile or median. The interquartile range is a data item providing a measure of data dispersion" | Measure of Dispersion | ||||||
3 | STATO:0000237 | standard deviation | A measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset. | Standard deviation for sample is a standard deviation in which the dataset is a sample. Standard deviation for population, when used as a statistical model parameter, is not a standard deviation as a type of statistic. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Kenneth Wilkins | 2023-05-15 vote 6-0 by Muhammad Afzal, Brian S. Alper, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Harold Lehmann | STATO: standard deviation (s) = The standard deviation of a random variable, statistical population, data set, or probability distribution is a measure of variation which correspond to the average distance from the mean of the data set to any given point of that dataset. It also corresponds to the square root of its variance. | Measure of Dispersion | ||||||
4 | STATO:0000684 | standard deviation for sample | A standard deviation that is the square root of the quotient of the summation across data points of the square of the distance from each data point to the sample mean, and the degrees of freedom (where the degrees of freedom is sample size minus one). | Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset. Standard deviation for sample is a standard deviation in which the dataset is a sample. The formula for the standard deviation for sample ($s$) is: $$s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{n - 1}}$$ where $n$ is the sample size (the number of independent observations, indexed by $i$), $x_i$ is an observed value, and $\overline{x}$ is the sample mean. The formula to calculate degrees of freedom depends on the model. For the degrees of freedom for a sample standard deviation, given the sample mean, it is n-1, because the nth observation is no longer independent, given the n-1 other observations and the sample mean. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Kenneth Wilkins | 2023-05-15 vote 5-0 by Muhammad Afzal, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann | Measure of Dispersion | |||||||
3 | STATO:0000113 | variance | A measure of dispersion that represents the square of the standard deviation. | Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset. Variance for sample is a variance in which the dataset is a sample. Variance for population, when used as a probability distribution parameter, is not a variance as a type of statistic. | Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel | 2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey | STATO: variance (s2) = variance is a data item about a random variable or probability distribution. it is equivalent to the square of the standard deviation. It is one of several descriptors of a probability distribution, describing how far the numbers lie from the mean (expected value).The variance is the second moment of a distribution. | Measure of Dispersion | ||||||
4 | STATO:0000643 | variance for sample | A variance that is the quotient of the summation across data points of the square of the distance from each data point to the sample mean, and the degrees of freedom (where the degrees of freedom is sample size minus one). | Variance is defined as a measure of dispersion that represents the square of the standard deviation. Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset. Variance for sample is a variance in which the dataset is a sample. The formula for the variance for sample ($s^2$) is: $$s^2 = \frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{n - 1}$$ where $n$ is the sample size (the number of independent observations, indexed by $i$), $x_i$ is an observed value, and $\overline{x}$ is the sample mean. The formula to calculate degrees of freedom depends on the model. For the degrees of freedom for a sample variance, given the sample mean, it is n-1, because the nth observation is no longer independent, given the n-1 other observations and the sample mean. | Kenneth Wilkins, Brian S. Alper, Muhammad Afzal | 2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey | Measure of Dispersion | |||||||
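A non-normative sketch of the sample variance and sample standard deviation formulas above, using a hypothetical sample:

```python
# Illustrative sketch: sample variance and sample standard deviation (n - 1 denominator).
from math import sqrt

data = [4.0, 7.0, 6.0, 5.0, 8.0]                       # hypothetical sample
n = len(data)
mean = sum(data) / n

variance_sample = sum((x - mean) ** 2 for x in data) / (n - 1)
sd_sample = sqrt(variance_sample)

print(f"sample variance = {variance_sample:.2f}")      # 2.50
print(f"sample standard deviation = {sd_sample:.3f}")  # 1.581
```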
3 | STATO:0000624 | Gini index | A measure of dispersion that is half the relative mean absolute difference between all pairs of observed values. | The Gini index is typically used as a measure of inequality for income, wealth, or resource distribution. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Kenneth Wilkins | 2023-12-04 vote 5-0 by Yasser Sami Amer, Xing Song, Eric Harvey, Harold Lehmann, Brian S. Alper | 2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey BUT comment of "between all pairs of observed values?" led to recognition of incorrect definition | Measure of Dispersion | ||||||
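A non-normative sketch of the Gini index as half the relative mean absolute difference, using hypothetical values:

```python
# Illustrative sketch: Gini index as half the relative mean absolute difference.
values = [1.0, 2.0, 3.0, 4.0, 10.0]   # hypothetical incomes
n = len(values)
mean = sum(values) / n

# Mean absolute difference over all ordered pairs of observations
mean_abs_diff = sum(abs(a - b) for a in values for b in values) / (n ** 2)
gini = (mean_abs_diff / mean) / 2
print(f"Gini index = {gini:.3f}")     # 0.400 for this hypothetical data
```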
3 | STATO:0000562 | standard error | A measure of dispersion applied to estimates across hypothetical repeated random samples. | A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Xing Song, Joanne Dehnbostel, Muhammad Afzal | 2023-12-18 vote 7-0 by Xing Song, Muhammad Afzal, Yasser Sami Amer, Eric Harvey, Harold Lehmann, Janice Tufte, Caue Monaco | STATO: It is a measure of how precise an estimate of the statistical parameter is. Standard error is the estimated standard deviation of an estimate. It measures the uncertainty associated with the estimate. Compared with the standard deviations of the underlying distribution, which are usually unknown, standard errors can be calculated from observed data. | |||||||
4 | STATO:0000037 | standard error of the mean | A measure of dispersion applied to means across hypothetical repeated random samples. | A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error. The standard error of the mean is calculated by dividing the sample standard deviation (STATO:0000237) by the square root of n, the size (number of observations) of the sample. | Brian S. Alper, Harold Lehmann, Muhammad Afzal, Xing Song, Kenneth Wilkins, Joanne Dehnbostel | 2024-01-22 vote 5-0 by Homa Keshavarz, Eric Harvey, Cauê Monaco, Harold Lehmann, Yasser Sami Amer | STATO: The standard error of the mean (SEM) is data item denoting the standard deviation of the sample-mean's estimate of a population mean. It is calculated by dividing the sample standard deviation (i.e., the sample-based estimate of the standard deviation of the population) by the square root of n , the size (number of observations) of the sample. | |||||||
4 | STATO:0000647 | standard error of the proportion | A measure of dispersion applied to proportions across hypothetical repeated random samples. | A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error. The formula for the standard error of the sample proportion ($SE(\hat{p})$) is: $$SE(\hat{p}) = \sqrt \frac{\hat{p}(1-\hat{p})} {n}$$ where $\hat{p}$ is the sample proportion and $n$ is the size (number of observations) of the sample. | Brian S. Alper, Kenneth Wilkins, Xing Song, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2024-01-22 vote 5-0 by Homa Keshavarz, Eric Harvey, Brian S. Alper, Harold Lehmann, Yasser Sami Amer | ||||||||
4 | STATO:0000648 | standard error of the difference between independent means | A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples. | A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error. In cases where the samples are assumed to have unequal population variances for X, the formula for the standard error of the sample difference between means $(SE_{unequal}(\overline{x}_{1} - \overline{x}_{2}))$ is: $$SE_{unequal}(\overline{x}_{1} - \overline{x}_{2}) = \sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}$$ where $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means, $s^2_1$ and $s^2_2$ are the sample standard deviations, and $n_1$ and $n_2$ are the sizes (number of observations) of the samples. In cases where the samples are assumed to have the same (equal) population variance for X, the formula for the standard error of the sample difference between means $(SE_{equal}(\overline{x}_{1} - \overline{x}_{2}))$ is: $$SE_{equal}(\overline{x}_{1} - \overline{x}_{2}) = \sqrt{\frac{n_1 s^2_1 + n_2 s^2_2}{n_1 + n_2 - 2}}$$ where $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means, $s^2_1$ and $s^2_2$ are the sample standard deviations, and $n_1$ and $n_2$ are the sizes (number of observations) of the samples. In cases where the samples are assumed to have the same (equal) population variance for X, the standard error of the sample difference between means is also called the pooled standard deviation. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper | 2024-02-12 vote 5-0 by Lenny Vasanthan, Xing Song, Eric Harvey, Harold Lehmann, Homa Keshavarz | 2024-02-12 comment: We may consider including the term of "pooled standard deviation" to describe the SE_equal(x1-x2), SE when assuming equal variance. | |||||||
4 | STATO:0000649 | standard error of the difference between independent proportions | A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples. | A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error. The formula for the standard error of the sample difference between proportions $(SE(\hat{p}_1 - \hat{p}_2))$ is: $$SE(\hat{p}_1 - \hat{p}_2) = \sqrt {\frac{\hat{p}_1(1-\hat{p}_1)} {n_1} + \frac{\hat{p}_2(1-\hat{p}_2)} {n_2}}$$ where $\hat{p}_1$ and $\hat{p}_2$ are the sample proportions and $n_1$ and $n_2$ are the sizes (number of observations) of the samples. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper | 2024-02-12 vote 5-0 by Lenny Vasanthan, Xing Song, Eric Harvey, Harold Lehmann, Homa Keshavarz | 2024-02-12 comment: In hypothesis testing for p1 = p2, I think the typical way of calculating standard error of proportion difference, is using the formula with pooled proportion, should we also include that in the comment? | |||||||
3 | STATO:0000455 | credible interval | The range in which the value of the parameter of interest is likely to reside, typically within a posterior probability distribution. | The credible interval is used in Bayesian analysis and plays an analogous role to the confidence interval in frequentist statistics. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal | 2023-11-27 vote 5-0 by Xing Song, Yasser Sami Amer, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | STATO: In Bayesian statistics context, a credible interval is an interval of a posterior distribution which is such that the density at any point inside the interval is greater than the density at any point outside and that the area under the curve for that interval is equal to a prespecified probability level. For any probability level there is generally only one such interval, which is also often known as the highest posterior density region. Unlike the usual confidence interval associated with frequentist inference, here the intervals specify the range within which parameters lie with a certain probability. The Bayesian counterparts of the confidence interval used in Frequentists Statistics. UMLS: "Interval (C1272706) Definition: The period of time or the distance separating two instances, events, or occurrences. Semantic Types: Temporal Concept" OBCS: A quantitative confidence value that is used in Bayesian analysis to describe the range in which a posterior probability estimate is likely to reside. OECD: calculated interval-The interval containing possible values for a suppressed cell in a table, given the table structure and the values published. SCO: interval-An interval is a set of real numbers that includes all numbers between any two numbers in the set. | |||||||
3 | STATO:0000196 | confidence interval | The estimated range of values that encompasses the point estimate and quantifies the uncertainty about that estimate in terms of a prespecified level of coverage, expected to include the true value between upper and lower bounds, across hypothetically repeated random samples, with all assumptions regarding the sampling distribution across random samples having been fully met. | The prespecified level of coverage is commonly 0.95 or 95%. Confidence cannot be directly interpreted as a probability. This is in contrast to credibility for credible intervals. Confidence only conveys uncertainty indirectly by reflecting a long term relative frequency across hypothetically repeated sample estimates. Width of a confidence interval can convey precision. This precision can be increased by increasing the sample size in most cases assuming variability in sample is only due to random sample-to-sample variation. | Brian S. Alper, Harold Lehmann, Ken Wilkins, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte | 2023-11-27 vote 5-0 by Xing Song, Yasser Sami Amer, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey | STATO: A confidence interval is a data item which defines an range of values in which a measurement or trial falls corresponding to a given probability. also confidence interval calculation is a data transformation which determines a confidence interval for a given statistical parameter NCIt: A range of values for a parameter that may contain the parameter and the degree of confidence that it is in fact there. A measure of the precision of an estimated value. The interval represents the range of values, consistent with the data, that is believed to encompass the "true" value with high probability (usually 95%). The confidence interval is expressed in the same units as the estimate. Wider intervals indicate lower precision; narrow intervals, greater precision. [CONSORT Statement] OBCS: A quantitative confidence value that refers to an interval give values within which there is a high probability (95 percent by convention) that the true population value can be found. The calculation of a confidence interval considers the standard deviation of the data and the number of observations. Thus, a confidence interval narrows as the number of observations increases, or its variance (dispersion) decreases. CDISC Glossary: A measure of the precision of an estimated value. The interval represents the range of values, consistent with the data, that is believed to encompass the "true" value with high probability (usually 95%). The confidence interval is expressed in the same units as the estimate. Wider intervals indicate lower precision; narrow intervals, greater precision. [CONSORT Statement] NICE: "Confidence interval A way of expressing how certain we are about the findings from a study, using statistics. It gives a range of results that is likely to include the 'true' value for the population. A wide confidence interval (CI) indicates a lack of certainty about the true effect of the test or treatment - often because a small group of patients has been studied. A narrow CI indicates a more precise estimate (for example, if a large number of patients have been studied). The CI is usually stated as '95% CI', which means that the range of values has a 95 in a 100 chance of including the 'true' value. For example, a study may state that 'based on our sample findings, we are 95% certain that the 'true' population blood pressure is not higher than 150 and not lower than 110'. 
In such a case the 95% CI would be 110 to 150." OECD: A confidence interval is an interval which has a known and controlled probability (generally 95% or 99%) to contain the true value. "Rothman textbook: confidence interval, which provides a range of values for the association, under the hypothesis that only random variation has created discrepancies between the true value of the association under study and the value observed in the data (Altman et al., 2000; see Chapters 13 through 16) Altman DG, Machin D, Bryant TN, Gardner MJ, eds. Statistics with confidence, 2nd ed. London: BMJ Books, 2000 " | |||||||
3 | STATO:0000418 | measure of heterogeneity | A statistic that represents the variation or spread among values in the set of estimates across studies. | There are several types of heterogeneity (or diversity) which are important factors which determine whether or not evidence should be pooled. Qualitative descriptors of explainable sources of heterogeneity include clinical heterogeneity and methodological heterogeneity. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. Statistical heterogeneity, which is a quantitative measure of heterogeneity, whether explained or not, is described here. A measure of dispersion is defined as a statistic that represents the variation or spread among data values in a dataset or data distribution. In the context of a meta-analysis, a measure of heterogeneity is a measure of dispersion in which the dataset is the set of estimates across studies. | Brian Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin, Kenneth Wilkins | 2024-03-11 vote 5-0 by Eric Harvey, Xing Song, Lenny Vasanthan, Harold Lehmann, Homa Keshavarz 2024-05-06 vote 5-0 by Harold Lehmann, Cauê Monaco, Homa Keshavarz, Eric Harvey, Lenny Vasanthan | 2024-04-29 Comment for application revised during the Statistic Terminology Working Group meeting | STATO: a measure of heterogeneity in meta-analysis is a data item which aims to describe the variation in study outcomes between studies. Cochrane Handbook 10.10.4 (August 2023) Analysing data and undertaking meta-analyses Variability in the intervention effects being evaluated in the different studies is known as statistical heterogeneity, and is a consequence of clinical or methodological diversity, or both, among the studies. Statistical heterogeneity manifests itself in the observed intervention effects being more different from each other than one would expect due to random error (chance) alone. We will follow convention and refer to statistical heterogeneity simply as heterogeneity. https://training.cochrane.org/handbook/current/chapter-10#section-10-10-4 There are types of heterogeneity (or diversity) other than statistical heterogeneity which are important factors in whether or not evidence can be pooled. Clinical heterogeneity may refer to variations in the population, intervention, comparator, or outcome. Methodological heterogeneity may refer to variations in study design. | ||||||
4 | STATO:0000419 | Cochran's Q | A measure of heterogeneity, based on the chi-square statistic, for reporting an analytic finding regarding whether two or more multinomial distributions are equal, accounting for chance variability. | A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies. Chi square for homogeneity assesses whether observed differences in results are compatible with chance alone. A chi square for homogeneity is a hypothesis testing measure that is testing the hypothesis of heterogeneity. A chi square for homogeneity is distinct from a chi square for independence (also called Pearson's chi square). A chi square test for homogeneity is based on testing whether the distributions across two or more populations are the same. | Kenneth Wilkins, Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel | 2024-05-28 vote 7-0 by Carlos Alva-Diaz, Homa Keshavarz, Sheyu Li, Harold Lehmann, Saphia Mokrane, Eric Harvey, Lenny Vasanthan | 2024-05-13 vote 3-1 by Eric Harvey, Harold Lehmann, Saphia Mokrane, Sheyu Li 2024-05-06 vote 5-2 by Cauê Monaco, Homa Keshavarz, Eric Harvey, Lenny Vasanthan, Harold Lehmann, Sean Grant, Sheyu Li | 2024-05-13 comments Cochran's Q may be the most accepted name. I would thus suggest changing the term. Actually, Cochran's Q is Chi square contributed and theoretically there can be other Chi square distributed in testing heterogeneity or homogeneity (although current Cochran's Q may be the only one). 2024-04-29 comments: 1) Add to "comment for application" the following from the Cochrane Handbook: "Chi square for homogeneity assesses whether observed differences in results are compatible with chance alone." 2) Is it chi square or Chi square? Is it Chi square for homogeneity or Chin square for heterogeneity? X2 test is a very common statistic. To avoid confusion, should we add some detailed statistical explanation regarding the difference between this x2 test and other x2 tests? | STATO: Cochran's Q test is a statistical test used for unreplicated randomized block design experiments with a binary response variable and paired data. In the analysis of two-way randomized block designs where the response variable can take only two possible outcomes (coded as 0 and 1), Cochran's Q test is a non-parametric statistical test to verify whether k treatments have identical effects. from Cochrane Handbook https://training.cochrane.org/handbook/current/chapter-10 More formally, a statistical test for heterogeneity is available. This Chi2 (χ2, or chi-squared) test is included in the forest plots in Cochrane Reviews. It assesses whether observed differences in results are compatible with chance alone. A low P value (or a large Chi2 statistic relative to its degree of freedom) provides evidence of heterogeneity of intervention effects (variation in effect estimates beyond chance). Care must be taken in the interpretation of the Chi2 test, since it has low power in the (common) situation of a meta-analysis when studies have small sample size or are few in number. This means that while a statistically significant result may indicate a problem with heterogeneity, a non-significant result must not be taken as evidence of no heterogeneity. This is also why a P value of 0.10, rather than the conventional level of 0.05, is sometimes used to determine statistical significance. 
A further problem with the test, which seldom occurs in Cochrane Reviews, is that when there are many studies in a meta-analysis, the test has high power to detect a small amount of heterogeneity that may be clinically unimportant. Hoaglin DC. Misunderstandings about Q and 'Cochran's Q test' in meta-analysis. Stat Med. 2016 Feb 20;35(4):485-95. doi: 10.1002/sim.6632. Epub 2015 Aug 24. PMID: 263037 and discussions | |||||
4 | STATO:0000420 | I-squared | A measure of heterogeneity that estimates the proportion of variability across studies that is in excess of the expected variability due to chance. | A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies. $I^2$ = 100%×(Q - df)/Q, where Q is Cochran's heterogeneity statistic and df is the degrees of freedom. $I^2$ is strictly non-negative and lies between 0 and 100%. (If Q is less than df and the calculation would result in a negative value, then $I^2$ is defined as zero.) There are competing methods of calculating the confidence interval such as Hedges and Piggott (2001) and Higgins and Thompson (2002). | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin | 2024-05-28 vote 7-0 by Lenny Vasanthan, Carlos Alva-Diaz, Saphia Mokrane, Homa Keshavarz, Sheyu Li, Brian S. Alper, Eric Harvey | 2024-05-20 comment: "df, the degrees of freedom." Note punctuation needs. "(If calculation results in a negative value, Q is less than df, 𝐼2 is defined as zero.)" Is that supposed or be "if Q is less than df" or, "and Q is less than df" or "because"? 2024-05-28 comment: Is the most commonly used name I square or I squared? [[[so added as an alternative term]]] | STATO: I-squared = The quantity called I2, describes the percentage of total variation across studies that is due to heterogeneity rather than chance. I2 can be readily calculated from basic results obtained from a typical meta-analysis as I2 = 100%×(Q - df)/Q, where Q is Cochran's heterogeneity statistic and df the degrees of freedom. Negative values of I2 are put equal to zero so that I2 lies between 0% and 100%. A value of 0% indicates no observed heterogeneity, and larger values show increasing heterogeneity. Unlike Cochran's Q, it does not inherently depend upon the number of studies considered. A confidence interval for I² is constructed using either i) the iterative non-central chi-squared distribution method of Hedges and Piggott (2001); or ii) the test-based method of Higgins and Thompson (2002). The non-central chi-square method is currently the method of choice (Higgins, personal communication, 2006) – it is computed if the 'exact' option is selected. (STATO:0000420) Hedges, L. V., & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological methods, 6(3), 203–217.https://pubmed.ncbi.nlm.nih.gov/11570228/ Higgins, J. P., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in medicine, 21(11), 1539–1558. https://doi.org/10.1002/sim.1186 https://pubmed.ncbi.nlm.nih.gov/12111919/ | ||||||
4 | STATO:0000421 | tau squared | A measure of heterogeneity that estimates the variance of the distribution of true effect sizes. | A measure of heterogeneity is defined as a statistic that represents the variation or spread among values in the set of estimates across studies. The tau squared estimates the between-study variance in a random-effects meta-analysis or hierarchical multilevel model meta-analysis. | Kenneth Wilkins, Brian S. Alper | 2024-05-28 vote 6-0 by Lenny Vasanthan, Carlos Alva-Diaz, Saphia Mokrane, Homa Keshavarz, Sheyu Li, Eric Harvey | STATO: Tau-squared is an estimate of the between-study variance in a random-effects meta-analysis. The square root of this number (i.e. tau) is the estimated standard deviation of underlying effects across studies. (STATO:0000421) | |||||||
2 | STATO:0000209 | area under the curve | A statistic that summarizes the variation of a quantity of interest across a domain interval of interest. | As examples, in classification tasks, the quantity of interest is the true positive rate and the domain interval of interest is the false positive rate; in pharmacodynamic studies, the quantity of interest is the concentration of a drug and the domain interval of interest is time; and in assessment of lung barotrauma in intensive care, the quantity of interest is pressure and the domain interval of interest is time. In general, the average quantity is calculated as the area under the curve divided by the range of the domain interval of interest. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin | 2024-03-11 vote 5-0 by Eric Harvey, Xing Song, Lenny Vasanthan, Harold Lehmann, Homa Keshavarz | STATO: area under curve is a measurement datum which corresponds to the surface define by the x-axis and bound by the line graph represented in a 2 dimensional plot resulting from an integration or integrative calculus. The interpretation of this measurement datum depends on the variables plotted in the graph | |||||||
3 | STATO:0000608 | area under the ROC curve | An area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. | ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity. The c-statistic is the area under the ROC curve calculated with the full range of possible values for true positive rate and false positive rate. Another interpretation of the c-statistic is similar without explicitly referencing the ROC curve: "The C statistic is the probability that, given 2 individuals (one who experiences the outcome of interest and the other who does not or who experiences it later), the model will yield a higher risk for the first patient than for the second. It is a measure of concordance (hence, the name “C statistic”) between model-based risk estimates and observed events. C statistics measure the ability of a model to rank patients from high to low risk but do not assess the ability of a model to assign accurate probabilities of an event occurring (that is measured by the model’s calibration). C statistics generally range from 0.5 (random concordance) to 1 (perfect concordance)." (JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082) | Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin | 2024-04-22 vote 6-0 by Cauê Monaco, Harold Lehmann, Eric Harvey, Homa Keshavarz, Lenny Vasanthan, Sheyu Li | area under the ROC curve was approved 2024-03-18 vote 6-0 by Cauê Monaco, Homa Keshavarz, Elma OMERAGIC, Xing Song, Lenny Vasanthan, Eric Harvey BUT then ... c-statistic had 2024-04-01 vote 4-2 by Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Lenny Vasanthan, Eric Harvey, Homa Keshavarz | 2024-03-18 comment: Can we consider having the statement on "ROC stands for Receiver Operating Characteristic" in the first line followed by the area under the ROC curve? 2024-04-01 comments for c-statistic: 1) isn't this a synonym for AUROC ? (with the assumption that AUROC refers to the totality of the space located below the curve. 2) Is it c-statistic or C statistic (capitaled)? Quote: The C statistic is the probability that, given 2 individuals (one who experiences the outcome of interest and the other who does not or who experiences it later), the model will yield a higher risk for the first patient than for the second. It is a measure of concordance (hence, the name “C statistic”) between model-based risk estimates and observed events. C statistics measure the ability of a model to rank patients from high to low risk but do not assess the ability of a model to assign accurate probabilities of an event occurring (that is measured by the model’s calibration). C statistics generally range from 0.5 (random concordance) to 1 (perfect concordance).I would use discrimination rather than distinguish for the purpose of C-statistic. (JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082) I am also worried about the definition is highly relied on the definition of ROC. A link of ROC may be better for users. Discrimination rather than distinguish could be a better word to describe C statistic. C statistic also refers to concordance statistic. 
| One of the earliest descriptions of this concept is found in "The area above the ordinal dominance graph and the area below the receiver operating characteristic graph" (https://doi.org/10.1016/0022-2496(75)90001-2) | |||||
3 | STATO:0000689 | partial area under the ROC curve | An area under the curve where the curve is the true positive rate and the range of interest is a specified portion of the range of possible values for the false positive rate and/or range of possible values for the true positive rate. | Area under the ROC curve is defined as an area under the curve where the curve is the true positive rate and the range of interest is the false positive rate. ROC stands for Receiver Operating Characteristic. The area under the ROC curve is used to assess the performance of a classifier used to distinguish between two or more groups. Another term for true positive rate is sensitivity and another term for false positive rate is 1-specificity. | Kenneth Wilkins, Brian S. Alper, Muhammad Afzal, Harold Lehmann | 2024-04-22 vote 5-0 by Lenny Vasanthan, Homa Keshavarz, Eric Harvey, Harold Lehmann, Sheyu Li | 2024-04-01 vote 4-2 by Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Lenny Vasanthan, Eric Harvey, Homa Keshavarz 2024-04-15 vote 3-1 by Harold Lehmann, Eric Harvey, Lenny Vasanthan, Homa Keshavarz | 2024-04-01 comments: 1) i assume 'area under the curve' to equate the entire area located under the curve, so 'a part' can not be a subtype of the whole. 2) Similar concerns with the definition of C statistic. 2024-04-15 comment: I think we have to add, to the definition, "and/or range of possible values for the true positive rate" as well. My apologies, but I didn't catch this during our dicussions. | ||||||
3 | STATO:0000691 | area under the precision-recall curve | An area under the curve where the curve is the precision and the domain of interest is the recall. | In information retrieval, recall is a synonym for sensitivity and precision is a synonym for positive predictive value. | Kenneth Wilkins, Harold Lehmann, Brian S. Alper | 2024-04-01 vote 6-0 by Homa Keshavarz, Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Lenny Vasanthan, Eric Harvey | ||||||||
3 | STATO:0000690 | area under the value-time curve | An area under the curve where the curve is the repeated measures of a variable over time and the domain of interest is time. | The area under the value-time curve is used for pharmacokinetics, pharmacodynamics, and physiological monitoring. | Kenneth Wilkins, Harold Lehmann | 2024-04-29 vote 7-0 by Lenny Vasanthan, Harold Lehmann, Eric Harvey, Homa Keshavarz, Sean Grant, Philippe Rocca-Serra, Sheyu Li | 2024-04-01 vote 5-1 by Homa Keshavarz, Philippe Rocca-Serra, Sheyu Li, Harold Lehmann, Lenny Vasanthan, Eric Harvey | 2024-04-01 comment: Is it value-time curve? | ||||||
2 | STATO:0000633 | threshold | A statistic that represents the boundary at which something changes. | The thing that changes at the threshold value may be relevant for function, application, classification, or detection. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Kenneth Wilkins, Khalid Shahin | 5/5 as of 10/11/2021: Janice Tufte, Joanne Dehnbostel, Louis Leff, Vignesh Subbian, Robin Ann Yurk | ||||||||
2 | STATO:0000069 | degrees of freedom | A statistic that represents the number of independent values used to calculate a statistical estimate. The number of degrees of freedom ν is equal to the number of independent units of information given the model. | The formula to calculate degrees of freedom will depend on the model. For example, the degrees of freedom for a sample standard deviation, given the sample mean, is N-1, because the Nth observation is no longer independent, given the N-1 other observations and the sample mean. | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Kenneth Wilkins, Khalid Shahin, Harold Lehmann | 6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen | Include * in P = x1 * x2...to clarify this is a product. | |||||||
2 | STATO:0000609 | hypothesis testing measure | A statistic that represents the relative support for competing hypotheses, based on the observed data under an assumed modeling framework. | A hypothesis testing measure may be used within the frequentist framework (Neyman-Pearson framework) or the Bayesian framework. Within the frequentist framework, the criterion for rejecting the null hypothesis is typically expressed as a [p-value](https://fevir.net/resources/CodeSystem/27270#TBD:0000076) that is less than an [alpha setting](https://fevir.net/resources/CodeSystem/27270#TBD:0000081). Within the Bayesian framework, the approach for rejecting the null hypothesis is typically based on a Bayes factor. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2024-07-08 vote 6-0 by Saphia Mokrane, C P Ooi, Harold Lehmann, Eric Harvey, Lenny Vasanthan, Homa Keshavarz | 2024-06-10 vote 4-1 by Harold Lehmann, Lenny Vasanthan, Sean Grant, Eric Harvey, Sheyu Li 2024-06-17 vote 7-0 by Harold Lehmann, Sean Grant, Carlos Alva-Diaz, Lenny Vasanthan, Eric Harvey, Yaowaluk Ngoenwiwatkul, Homa Keshavarz BUT ALTERNATIVE TERMS AND COMMENT MODIFIED 2024-06-24 vote 7-1 by Sean Grant, Philippe Rocca-Serra, Sheyu Li, Saphia Mokrane, Cauê Monaco, Eric Harvey, Lenny Vasanthan, Homa Keshavarz | 2024-06-10 comments re: "hypothesis testing measure" = "A statistic that represents the result of evaluating the congruence between a hypothesis and the statistics derived from the observed data."1) I guess we'll add "hypothesis" to our emerging glossary. 2N) Would Bayesians say that they are "testing" a hypothesis? 3) The definition looks fine but the comment seems confusing and unnecessary. 2024-06-17 comment re: "hypothesis testing measure" = "A statistic that represents the result of evaluating the congruence between a hypothesis and the statistics derived from the observed data.""Hypothesis testing statistic" as an alternative term? 2024-06-24 comment re: "hypothesis testing measure" = "A statistic that represents the result of evaluating the congruence between a hypothesis and the statistics derived from the observed data."The definition seems wordy. Why not: A statistic that represents the strength (or value?) of a statistical test (or: between the hypothesis and the observation). | ||||||
3 | STATO:0000700 | p-value | A hypothesis testing measure that represents the probability of obtaining a result at least as far from the value actually obtained as the value expected, assuming the null hypothesis is true. | [Hypothesis testing measure](https://fevir.net/resources/CodeSystem/27270#TBD:0000073) is defined as a statistic that represents the result of evaluating the congruence between a hypothesis and the statistics derived from the observed data. Within the frequentist framework, a p-value is typically a [p-value for two-sided test](https://fevir.net/resources/CodeSystem/27270#TBD:p-value-two-sided) and the criterion for rejecting the null hypothesis is typically expressed as a p-value that is less than an [alpha setting](https://fevir.net/resources/CodeSystem/27270#TBD:0000081). In some cases, the p-value is a [p-value for one-sided test](https://fevir.net/resources/CodeSystem/27270#TBD:p-value-one-sided), and the criterion for rejecting the null hypothesis is typically expressed as a p-value that is less than an [alpha setting](https://fevir.net/resources/CodeSystem/27270#TBD:0000081). A p-value is preferably coded more precisely as a [p-value for two-sided test](https://fevir.net/resources/CodeSystem/27270#TBD:p-value-two-sided) or a [p-value for one-sided test](https://fevir.net/resources/CodeSystem/27270#TBD:p-value-one-sided) rather than using the code for p-value without specification. The code for p-value (without specification) may be used in contexts where the p-value reported is ambiguous. Within the Bayesian framework, use the [posterior predictive p-value](https://fevir.net/resources/CodeSystem/27270#TBD:Bayesianp). | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2024-07-22 vote 7-0 by Saphia Mokrane, Carlos Alva-Diaz, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Harold Lehmann, Eric Harvey | 2024-06-17 vote 7-1 by Harold Lehmann, Sean Grant, Khalid Shahin, Carlos Alva-Diaz, Lenny Vasanthan, Eric Harvey, Yaowaluk Ngoenwiwatkul, Homa Keshavarz 2024-07-01 vote 8-1 by Harold Lehmann, Sean Grant, Philippe Rocca-Serra, Sheyu Li, Saphia Mokrane, Cauê Monaco, Eric Harvey, Lenny Vasanthan, Homa Keshavarz 2024-07-08 vote 3-0 by Saphia Mokrane, Eric Harvey, C P Ooi BUT THEN THE DEFINITION CHANGED 2024-07-15 vote 4-2 by Homa Keshavarz, Cauê Monaco, Sheyu Li, Eric Harvey, Philippe Rocca-Serra, Lenny Vasanthan | 2024-06-17 comments re: "p value" = "A hypothesis testing measure that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming that the actual value was the result of chance alone."1) "Within the frequentist framework..."2N) The threshold is the alpha-level, not the p-value 3) I personally prefer p-value, but I'm fine with "p value" 2024-07-01 comments re: "p-value" = "A hypothesis testing measure that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming the null hypothesis is true."1) suggestion: add a sentence to also cover Bayesian framework 2N) The hypothesis can be null or not null (non-inferior test for example). 2024-07-15 comments re: "p-value" = "A hypothesis testing measure that represents the probability of obtaining a result at least as far from the value actually obtained as the value expected, assuming the null hypothesis is true."1N) with the distinction with posterior predictive p-value, one-sided pvalue and two-sided pvalue, is pvalue still needed? when should the term be used? 
if used to annotate data, is it imprecise as one would not know if both hypotheses have been tested. The introduction of more specific terms may render this type moot. Also in the 'comment for application', if pvalue is parent term, the statement 'typically expressed as a p-value lesss than an alpha setting ' would not be true for 'two-sided pvalue'. 2N) I am wondering if it might be helpful to add a statement on how 'p' value can help to identify if the desired effect is due to 'chance' or if there is an actual difference | STATO: A quantitative confidence value that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming that the actual value was the result of chance alone. (OBI:0000175) "You may summarize this comparison using a Bayesian p-value (Gelman et al., 1996, 2004), the predictive probability that a statistic is equal to or more extreme than that observed under the assumptions of the model." -- https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidance-use-bayesian-statistics-medical-device-clinical-trials | |||||
4 | STATO:0000661 | p-value for one-sided test | A p-value which represents the probability of obtaining a result at least as far, in one direction, from the value actually obtained as the value expected, assuming the null hypothesis is true. A p-value for one-sided test interprets 'at least as far from' with only one of two directions, either the direction of 'greater than' or the direction of 'less than'. | [P-value](https://fevir.net/resources/CodeSystem/27270#TBD:0000076) is defined as a hypothesis testing measure that represents the probability of obtaining a result at least as far from the value actually obtained as the value expected, assuming the null hypothesis is true. For p-value for one-sided test, the criterion for rejecting the null hypothesis is typically expressed as a p-value that is less than an [alpha setting](https://fevir.net/resources/CodeSystem/27270#TBD:0000081). | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Harold Lehmann, Kenneth Wilkins | 2024-07-22 vote 6-0 by Carlos Alva-Diaz, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Harold Lehmann, Eric Harvey | 2024-07-08 vote 8-1 by C P Ooi, Harold Lehmann, Sean Grant, Philippe Rocca-Serra, Sheyu Li, Saphia Mokrane, Eric Harvey, Lenny Vasanthan, Homa Keshavarz 2024-07-15 vote 5-1 by Homa Keshavarz, Cauê Monaco, Sheyu Li, Eric Harvey, Philippe Rocca-Serra, Lenny Vasanthan | 2024-07-08 comment re: "p-value for one-sided test" = "A p-value which represents the probability of obtaining a result at least as extreme, in one direction, as that actually obtained."1N) The word 'one direction' did not explain 'one sided'. 2024-07-15 comment re: "p-value for one-sided test" = "A p-value which represents the probability of obtaining a result at least as far, in one direction, from the value actually obtained as the value expected, assuming the null hypothesis is true."1N) from an end user perspective, i feel that the definition should included the last statement found in the 'comment for application' = " A p-value for two-sided test interprets 'at least as far from' with both the direction of 'greater than' and the direction of 'less than'. For hypothesis test interpretation, the one-tailed p-value is compared to the alpha setting" | ||||||
4 | STATO:0000662 | p-value for two-sided test | A p-value which represents the probability of obtaining a result at least as far, in either direction, from the value actually obtained as the value expected, assuming the null hypothesis is true. A p-value for two-sided test interprets 'at least as far from' with both the direction of 'greater than' and the direction of 'less than'. | [P-value](https://fevir.net/resources/CodeSystem/27270#TBD:0000076) is defined as a hypothesis testing measure that represents the probability of obtaining a result at least as far from the value actually obtained as the value expected, assuming the null hypothesis is true. For hypothesis test interpretation, the two-tailed p-value is compared to the [alpha setting](https://fevir.net/resources/CodeSystem/27270#TBD:0000081). | Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Harold Lehmann, Kenneth Wilkins | 2024-07-22 vote 6-0 by Carlos Alva-Diaz, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Harold Lehmann, Eric Harvey | 2024-07-15 vote 5-1 by Homa Keshavarz, Cauê Monaco, Sheyu Li, Eric Harvey, Philippe Rocca-Serra, Lenny Vasanthan | 2024-07-15 comment re: "p value for two-sided test" = "A p-value which represents the probability of obtaining a result at least as far, in either direction, from the value actually obtained as the value expected, assuming the null hypothesis is true."1N) from an end user perspective, i feel that the definition should included the last statement found in the 'comment for application' = " A p-value for two-sided test interprets 'at least as far from' with both the direction of 'greater than' and the direction of 'less than'. For hypothesis test interpretation, the two-tailed p-value is compared to the alpha setting divided by 2" | ||||||
3 | STATO:0000030 | chi-square statistic | A hypothesis testing measure that is assumed to follow a chi-square distribution. | A hypothesis testing measure is defined as a statistic that represents the relative support for competing hypotheses, based on the observed data under an assumed modeling framework. | Kenneth Wilkins, Harold Lehmann, Brian S. Alper, Joanne Dehnbostel | 2024-10-28 vote 6-0 by Saphia Mokrane, Eric Harvey, Harold Lehmann, Homa Keshavarz, Airton Tetelbom Stein, Lenny Vasanthan | STATO: Chi-squared statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Chi-Squared distribution. (STATO:0000030) | |||||||
4 | STATO:0000081 | chi-square statistic for independence | A chi-square statistic used to test whether two categorical or nominal variables are associated. | Types of chi-square statistic for independence include Pearson's chi-square statistic for independence and the Yates-corrected chi-square statistic. The chi-square statistic for independence is a chi-square statistic used for testing for an association. | Kenneth Wilkins, Harold Lehmann, Brian S. Alper, Joanne Dehnbostel | 2024-11-04 vote 6-0 by Lara Kahaleh, Bhagvan Kommadi, Harold Lehmann, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein | 2024-10-28 vote 6-0 by Saphia Mokrane, Eric Harvey, Harold Lehmann, Homa Keshavarz, Airton Tetelbom Stein, Lenny Vasanthan BUT THEN COMMENT CHANGED DEFINITION AND COMMENT FOR APPLICATION | 2024-10-28 comment re: "chi-square statistic for independence" = "A chi-square statistic used to determine whether two categorical or nominal variables are likely to be related."1Yes) Can we consider adding in "chi-squared statistic for association" along with this. | STATO: Chi-squared statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Chi-Squared distribution. (STATO:0000030) | |||||
4 | STATO:0000148 | Cochran-Armitage chi-square statistic for trend | A chi-square statistic used to test whether a dichotomous variable and an ordinal variable are related. | There are types of chi-square statistic for trend other than the Cochran-Armitage statistic. | Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel | 2024-11-04 vote 6-0 by Lara Kahaleh, Bhagvan Kommadi, Harold Lehmann, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein | ||||||||
4 | STATO:0000698 | chi-square statistic for homogeneity | A chi-square statistic used to test whether observed data from two or more groups follow the same distribution. | In meta-analysis, a chi-square statistic for homogeneity is often used to determine the appropriateness of combining the statistics to represent a common population. | Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Harold Lehmann | 2024-11-11 vote 7-0 by Lenny Vasanthan, Yaowaluk Ngoenwiwatkul, Cauê Monaco, Saphia Mokrane, Eric Harvey, Airton Tetelbom Stein, Homa Keshavarz | 2024-11-04 vote 5-1 by Lara Kahaleh, Bhagvan Kommadi, Harold Lehmann, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein | 2024-11-04 comment re: "chi-square statistic for homogeneity" = "A chi-square statistic used to test whether observed data from two or more groups follow the same distribution."1No) I recall that there used to be considerable controversy about the value of statistical tests applied to baseline characteristics, the recommendation being to avoid such tests. If this remains the case, shouldn't we note that here? | ||||||
4 | STATO:0000309 | chi-square statistic for goodness of fit | A chi-square statistic used to test whether observed data follows a specified distribution. | There are types of chi-square statistic for goodness of fit with named tests based on the specified distribution, such as the Hosmer–Lemeshow test for a given logistic regression model and the Shapiro–Wilk test for a normal (Gaussian) distribution. | Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel | 2024-11-04 vote 6-0 by Lara Kahaleh, Bhagvan Kommadi, Harold Lehmann, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein | ||||||||
3 | STATO:0000376 | z-statistic | A hypothesis testing measure that is a z-score where the specified mean is based on the null hypothesis and the standard deviation is based on the observed data. | A [z-score](#STATO:0000104) is defined as a statistic that represents a measure of the divergence of an individual experimental result from a specified mean, expressed in terms of the number of standard deviations from the mean value. A z-statistic is a z-score of a sample in the conduct of a Z-test. | Kenneth Wilkins, Joanne Dehnbostel, Harold Lehmann, Brian S. Alper | 2024-08-12 vote 7-0 by Harold Lehmann, Homa Keshavarz, Eric Harvey, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Sean Grant | STATO: A z-score (also known as z-value, standard score, or normal score) is a measure of the divergence of an individual experimental result from the most probable result, the mean. Z is expressed in terms of the number of standard deviations from the mean value. (STATO:0000104) Z-statistic is a statistic computed from observations and used to produce a p-value when compared to a Standard Normal Distribution in a statistical test called the Z-test. (STATO:0000376) | |||||||
3 | STATO:0000176 | t-statistic | A hypothesis testing measure that is a t-score where the specified mean is based on the null hypothesis, the standard error is based on the observed data, and the degrees of freedom is based on the sample size. | A t-score is defined as a statistic that represents a measure of the divergence of an individual experimental result from a specified mean, taking sample size into account and expressed in terms of the number of standard errors from the mean value. A t-statistic is a t-score of a sample in the conduct of a t-test. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2024-10-14 vote 5-0 by Lenny Vasanthan, Harold Lehmann, Eric Harvey, Airton Tetelbom Stein, Homa Keshavarz | STATO: t-statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Student's t distribution. (STATO:0000176) | |||||||
3 | STATO:0000266 | Bayes factor | A hypothesis testing measure that is a ratio of the probability of the observed data under one hypothesis divided by the probability of the same observed data under a different hypothesis. | The Bayes factor numerator and denominator take into account the prior probabilities of each hypothesis. The Bayes factor represents the relative plausibility between competing hypotheses, taking the prior probabilities into account. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel | 2024-11-11 vote 7-0 by Lenny Vasanthan, Yaowaluk Ngoenwiwatkul, Cauê Monaco, Saphia Mokrane, Eric Harvey, Airton Tetelbom Stein, Homa Keshavarz | STATO: Bayes factor is a ratio between 2 probabilities of observing data according 2 distinct models. It is used in Bayes model selection to evaluate which model best explains the data. if K<0, the model used in the denominator term is supported, if K>1, the model used in the numerator term is supported. The Bayes factor is about the plausibility of 2 different models NCI Code CDISC Submission Value CDISC Synonym NCI Preferred Term CDISC Definition C142403 Bayesian approaches _ Bayesian Approach Approaches to data analysis that provide a posterior probability distribution for some parameter (e.g., treatment effect), derived from the observed data and a prior probability distribution for the parameter. The posterior distribution is then used as the basis for statistical inference. [ICH E9 Glossary] C142404 Bayesian statistics _ Bayesian Statistics Statistical approach named for Thomas Bayes (1701-1761) that has among its features giving a subjective interpretation to probability, accepting the idea that it is possible to talk about the probability of hypotheses being true and of parameters having particular values. | |||||||
3 | STATO:0000660 | posterior predictive p-value | A hypothesis testing measure that is the probability that a statistic value derived from a posterior predictive distribution under the assumptions of the model is equal to or more extreme than the observed posterior value of the statistic. | [Hypothesis testing measure](https://fevir.net/resources/CodeSystem/27270#TBD:0000073) is defined as a statistic that represents the result of evaluating the congruence between a hypothesis and the statistics derived from the observed data. A posterior predictive p-value includes assumptions of prior probability of the hypothesis or parameters in the model. The term 'posterior predictive p-value' is used within the Bayesian framework. Within the frequentist framework, use [p-value for two-sided test](https://fevir.net/resources/CodeSystem/27270#TBD:p-value-two-sided) or [p-value for one-sided test](https://fevir.net/resources/CodeSystem/27270#TBD:p-value-one-sided). Note that in the Bayesian case, one generally focuses on the posterior probability distribution of the parameter of interest, not on this posterior probability of a statistic. | Kenneth Wilkins, Harold Lehmann, Philippe Rocca-Sera, Brian S. Alper | 2024-07-29 vote 5-0 by Lenny Vasanthan, Airton Tetelbom Stein, Eric Harvey, Harold Lehmann, Homa Keshavarz | 2024-07-08 vote 3-1 by C P Ooi, Sheyu Li, Eric Harvey, Harold Lehmann 2024-07-15 vote 5-1 by Homa Keshavarz, Cauê Monaco, Sheyu Li, Eric Harvey, Philippe Rocca-Serra, Lenny Vasanthan 2024-07-22 vote 6-0 by Carlos Alva-Diaz, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Harold Lehmann, Eric Harvey (Comment led us to reopen this term for voting) | 2024-07-08 comment re: "posterior predictive p-value" = "A hypothesis testing measure that is the predictive probability that a statistic is equal to or more extreme than that observed under the assumptions of the model."1N) I think "Bayesian" should be the primary name, and something about priors and data should be included in the comment for application, since this is an atypical measure. (Sorry not having pointed this out during the session.) 2024-07-15 comment re: "posterior predictive p-value" = "A hypothesis testing measure that is the predictive probability that a statistic is equal to or more extreme than that observed under the assumptions of the model."1N) add "Within the frequentist framework, use 'one-sided or two sided pvalue" (or pvalue if kept) to the 'comment for application' for consistency and reciprocity 2024-07-22 comment re: "posterior predictive p-value" = "A hypothesis testing measure that is the probability that a statistic value derived from a posterior predictive distribution under the assumptions of the model is equal to or more extreme than the observed posterior value of the statistic."1Y) Because most "posteriors" in Bayesian statistics are about the parameter we really care about, let me suggest saying something like: "Note that in the Bayesian case, one generally focuses on the posterior probability distribution of the parameter of interest, not on this posterior probability of a statistic." And we will need a SEVCO term for "posterior probability," to go under Bayes factor (not a child). | "You may summarize this comparison using a Bayesian p-value (Gelman et al., 1996, 2004), the predictive probability that a statistic is equal to or more extreme than that observed under the assumptions of the model." 
-- https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidance-use-bayesian-statistics-medical-device-clinical-trials | |||||
2 | TBD:0000065 | DEPRECATED: measure of discrimination | A statistic that quantifies the degree to which a classifier can distinguish among two or more groups. | A classifier is a rule, formula, algorithm, or procedure used to label an instance based on its attributes. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin | 2024-02-05 in response to comment questioning placing other terms as types of measure of discrimination, the Working Group decided to remove this term from the hierarchy and simply make 'area under the curve' a type of statistic without an additional hierarchical layer | DEPRECATED 2024-02-05 | |||||||
2 | STATO:0000104 | z-score | A statistic that represents a measure of the divergence of an individual experimental result from a specified mean, expressed in terms of the number of standard deviations from the mean value. | A z-score can be calculated for an individual observation, using a reference mean and standard deviation, such as in an IQ score or a weight-for-age-and-sex value in a pediatric growth chart. A z-score can also be calculated for a statistic for a sample, such as a [z-statistic](https://fevir.net/resources/CodeSystem/27270#STATO:0000376), which is a hypothesis testing measure. The z-score for the sample is based on the null hypothesis mean, and the standard deviation is based on the observed data. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal | 2024-08-26 vote 7-0 by Carlos Alva-Diaz, Elma OMERAGIC, Lenny Vasanthan, Eric Harvey, Homa Keshavarz, Airton Tetelbom Stein, Harold Lehmann | 2024-08-12 vote 7-1 by Harold Lehmann, Homa Keshavarz, Eric Harvey, Lenny Vasanthan, Sheyu Li, Elma OMERAGIC, Airton Tetelbom Stein, Sean Grant 2024-08-19 vote 6-1 by Philippe Rocca-Serra, Sean Grant, Airton Tetelbom Stein, Homa Keshavarz, Cauê Monaco, Eric Harvey, Brian S. Alper | 2024-08-12 Comment for application maybe should include: "A z-score can be calculated for a statistic for a sample z-statistic or may be calculated for an individual, using a reference mean and standard deviation, as in an IQ score." 2024-08-19 comment re: "z-score" = "A statistic that represents a measure of the divergence of an individual experimental result from a specified mean, expressed in terms of the number of standard deviations from the mean value."1N) Does 'calculated for a statistic for a sample z-statistic' mean that you calculate the z-score from the z-statistic and they are 2 different values, or does it mean that z-statistic is the term for z-score when it is calculated for a sample? This ambiguity can be avoided by making the comment for application 2 separate sentences, one clearly describing "for a sample" and the other clearly describing "for an individual observation" -- note individual needs to be an adjective and not a noun in the second sentence. An individual person can be the population for a sample of observations about the individual. re: "z-score" = "A statistic that represents a measure of the divergence of an individual experimental result from a specified mean, expressed in terms of the number of standard deviations from the mean value." 2024-08-26 comment re: "z-score" = "A statistic that represents a measure of the divergence of an individual experimental result from a specified mean, expressed in terms of the number of standard deviations from the mean value."1Y) Only a petit change in comment for application..."A z-score can also be calculated for a statistic for a sample, such as a z-statistic, which is a hypothesis testing measure. The z-score for the sample is based on the null hypothesis mean, and the standard deviation is based on the observed data." | STATO: A z-score (also known as z-value, standard score, or normal score) is a measure of the divergence of an individual experimental result from the most probable result, the mean. Z is expressed in terms of the number of standard deviations from the mean value. (STATO:0000104) | |||||
2 | STATO:0000699 | t-score | A statistic that represents a measure of the divergence of an individual experimental result from a specified mean, taking sample size into account and expressed in terms of the number of standard errors from the mean value. | A t-score can be calculated for an individual observation, using a reference mean and standard error, such as in a bone mineral density value with a reference mean and standard error for young adult men. A t-score can also be calculated for a statistic for a sample, such as a t-statistic, which is a hypothesis testing measure. The t-score for the sample is based on the null hypothesis mean, and the standard error is based on the observed data. A t-score is used preferentially to a z-score when the sample size is low (e.g. < 30 for a unimodal symmetric distribution), the population standard deviation is unknown, or when the sample distribution cannot be assumed to follow a normal distribution. | Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins | 2024-10-14 vote 5-0 by Lenny Vasanthan, Harold Lehmann, Eric Harvey, Airton Tetelbom Stein, Homa Keshavarz | ||||||||
2 | STATO:0000702 | prior probability | A statistic that represents the likelihood of a parameter of interest, before accounting for the observed data from the study. | This term is core to Bayesian statistics, where the focus is on estimation of the parameter of interest (unlike frequentist statistics, where the focus is on the likelihood of producing an estimate given a specific value of the parameter). The prior probability may be constructed in several ways: from prior research, from raw data in a database, from expert opinion, or specified as "non-informative" (meaning pure ignorance). The likelihood of the parameter is usually represented as a distribution over all possible values of the parameter. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal | 2024-11-11 vote 7-0 by Lenny Vasanthan, Yaowaluk Ngoenwiwatkul, Cauê Monaco, Saphia Mokrane, Eric Harvey, Airton Tetelbom Stein, Homa Keshavarz | 2024-08-12 vote 6-1 by Harold Lehmann, Homa Keshavarz, Eric Harvey, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Sean Grant 2024-08-19 vote 6-1 by Philippe Rocca-Serra, Sean Grant, Airton Tetelbom Stein, Homa Keshavarz, Cauê Monaco, Eric Harvey, Brian S. Alper | 2024-08-12 comments: 1N) I would suggest including Bayersian analysis as a context in the definition but not only in the comments. 2Y) Just a comment for us: Note that this may be the first time that "statistic" is used NOT to mean a function of the observed data, but is still covered by our SEVCO definition for "statistic." 2024-08-19 comment re: "prior probability" = "A statistic that represents the uncertainty of a parameter of interest, before accounting for the observed data from the study."1N) Uncertainty and likelihood are not synonymous. The word 'uncertainty' does in the proposed definition of prior probability does not match the parallel definition of posterior probability (using 'likelihood') and does not match the comment in posterior probability which states 'The prior probability is the likelihood of a parameter of interest' | NCI Code CDISC Submission Value CDISC Synonym NCI Preferred Term CDISC Definition C142403 Bayesian approaches _ Bayesian Approach Approaches to data analysis that provide a posterior probability distribution for some parameter (e.g., treatment effect), derived from the observed data and a prior probability distribution for the parameter. The posterior distribution is then used as the basis for statistical inference. [ICH E9 Glossary] C142404 Bayesian statistics _ Bayesian Statistics Statistical approach named for Thomas Bayes (1701-1761) that has among its features giving a subjective interpretation to probability, accepting the idea that it is possible to talk about the probability of hypotheses being true and of parameters having particular values. | |||||
2 | STATO:0000703 | posterior probability | A statistic that represents the likelihood of a parameter of interest, after updating its prior probability with the observed data from the study. | This term is core to Bayesian statistics, where the focus is on estimation of the parameter of interest (unlike frequentist statistics, where the focus is on the likelihood of producing an estimate given a specific value of the parameter). The [prior probability](#TBD:priorprobability) is the likelihood of a parameter of interest from before the study. For calculating the posterior probability of a parameter, the [prior probability](#TBD:priorprobability) is updated using Bayes' Theorem' and the likelihood function. | Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal | 2024-08-19 vote 7-0 by Philippe Rocca-Serra, Brian S. Alper, Sean Grant, Airton Tetelbom Stein, Homa Keshavarz, Cauê Monaco, Eric Harvey | 2024-08-12 vote 6-1 by Harold Lehmann, Homa Keshavarz, Eric Harvey, Lenny Vasanthan, Sheyu Li, Airton Tetelbom Stein, Sean Grant | 2024-08-12 1N) I would suggest including Bayersian analysis as a context in the definition but not only in the comments. 2Y) I would amend the last sentence of the Comment for application as, "For calculating the posterior probability of a parameter, the prior probabilty is updated using Bayes' Theorem' " | ||||||
1 | TBD:0000080 | hypothesis test attribute | An aspect, characteristic, or feature of a statistical hypothesis test. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Philippe Rocca-Serra | 2024-07-08 vote 5-0 by Joanne Dehnbostel, C P Ooi, Sheyu Li, Eric Harvey, Harold Lehmann | |||||||||
2 | TBD:beta | beta | ||||||||||||
2 | STATO:0000200 | power | from STATO: the statistical test power is a data item which is about a statistical test and is obtained by subtracting the false negative rate (type II error rate) from 1. The power of a statistical test is the probability that it will correctly lead to the rejection of a false null hypothesis (Greene 2000). The statistical power is the ability of a test to detect an effect, if the effect actually exists (High 2000). | |||||||||||
2 | TBD:0000081 | alpha setting | ||||||||||||
3 | TBD:0000084 | alpha setting with subtype unspecified | ||||||||||||
3 | TBD:0000085 | individual test alpha without multiple testing adjustment | ||||||||||||
3 | TBD:0000086 | overall alpha with multiple testing | ||||||||||||
3 | TBD:0000087 | individual test alpha with multiple testing adjustment | ||||||||||||
2 | STATO:0000286 | one-tailed test | STATO: one tailed test (one sided test) = a one-tailed test is a statistical test which, assuming an unskewed probability distribution, allocates all of the significance level to evaluate only one hypothesis to explain a difference. The one-tailed test provides more power to detect an effect in one direction by not testing the effect in the other direction. A one-tailed test should be preceded by a two-tailed test in order to avoid missing an alternate effect that explains an observed difference. | |||||||||||
2 | STATO:0000287 | two-tailed test | STATO: two tailed test (two sided test) = a two tailed test is a statistical test which assesses the null hypothesis of absence of difference assuming a symmetric (not skewed) underlying probability distribution by allocating half of the significance level selected to each of the directions of change that could explain a difference (for example, a difference can be an excess or a loss). | |||||||||||
2 | TBD:checkIfInSTATOtesting-margin | hypothesis testing margin | ||||||||||||
2 | STATO:0000057 | null hypothesis | from STATO: A null hypothesis is a statistical hypothesis that is tested for possible rejection under the assumption that it is true (usually that observations are the result of chance). The concept was introduced by R. A. Fisher. The hypothesis contrary to the null hypothesis, usually that the observations are the result of a real effect, is known as the alternative hypothesis.[wolfram alpha -- from http://mathworld.wolfram.com/NullHypothesis.html] | |||||||||||
2 | STATO:0000208 | alternative hypothesis | from STATO: An alternative hypothesis is an hypothesis defined in a statistical test that is the opposite of the null hypothesis. from Wolfram Alpha (https://mathworld.wolfram.com/AlternativeHypothesis.html): The alternative hypothesis is the hypothesis used in hypothesis testing that is contrary to the null hypothesis. It is usually taken to be that the observations are the result of a real effect (with some amount of chance variation superposed). | |||||||||||
1 | STATO:0000107 | statistical model | A set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data. | A statistical model describes how one or more random variables are related to one or more other variables. A statistical model often relates to the generation of sample data from a larger population. "Generative model" is a term used by the machine learning community. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Harold Lehmann | 2023-06-12 vote 5-0 by Brian S. Alper, Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey | 2023-05-22 vote 3-1 by Jesus Lopez-Alcalde, Sunu Alice Cherian, Janice Tufte, Harold Lehmann 2023-06-05 vote 5-1 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann | 2023-05-22 comments: Definition: A mathematical model that reflects a set of statistical assumptions with regards to the process governing the generation of sample data from a larger population. Since we now have Statistical Model Characteristics as a separate hierarchy, might we want to refer to that hierarchy in the Comment for Application. ("There are many potential components to a statistical model. Those components are represented by the SEVCO hierarchy beginning with...") 2023-06-05 comment: The comment for application needs to be improved - it is difficult to read and the sentences are not grammatically correct. | ||||||
2 | TBD:0000090 | fixed-effect model | ||||||||||||
2 | TBD:0000091 | random-effects model | ||||||||||||
2 | STATO:0000464 | generalized linear mixed model | STATO: linear mixed model (LMM) = "A linear mixed model is a mixed model containing both fixed effects and random effects and in which factors and covariates are assumed to have a linear relationship to the dependent variable. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (longitudinal study), or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures ANOVA. Fixed-effects factors are generally considered to be the variables whose values of interest are all represented in the data file. Random-effects factors are variables whose values correspond to unwanted variation. They are useful when trying to understand variability in the dependent variable which was not anticipated and exceeds what was expected. Linear mixed models also allow specification of specific interactions between factors, and allow the evaluation of the various linear effects that a particular combination of factor levels may have on a response variable. Finally, linear mixed models allow specification of variance components in order to describe the relation between various random effects levels." | |||||||||||
3 | TBD:0000093 | GLMM with probit link | ||||||||||||
3 | TBD:0000094 | GLMM with logit link | ||||||||||||
3 | TBD:0000095 | GLMM with identity link | ||||||||||||
3 | TBD:0000096 | GLMM with log link | ||||||||||||
3 | TBD:0000097 | GLMM with generalized logit link | ||||||||||||
3 | TBD:0000098 | GLMM with subtype unspecified | ||||||||||||
2 | TBD:0000099 | GLM | ||||||||||||
3 | TBD:0000100 | GLM with probit link | ||||||||||||
3 | TBD:0000101 | GLM with logit link | TBD:0000099 and TBD:0000106 | |||||||||||
3 | TBD:0000102 | GLM with identity link | TBD:0000099 and TBD:0000106 | |||||||||||
3 | TBD:0000103 | GLM with log link | ||||||||||||
3 | TBD:0000104 | GLM with generalized logit link | ||||||||||||
3 | TBD:0000105 | GLM with subtype unspecified | ||||||||||||
1 | TBD:0000121 | data transformation | ||||||||||||
2 | TBD:0000122 | data imputation | ||||||||||||
3 | TBD:0000125 | zero-cell adjustment with constant | ||||||||||||
3 | TBD:0000126 | zero-cell adjustment with continuity correction | ||||||||||||
2 | TBD:0000123 | meta-analysis | ||||||||||||
3 | TBD:0000127 | meta-analysis with fixed-effect model | STATO: STATO_0000082: fixed effect model = a fixed effect model is a statistical model which represents the observed quantities in terms of explanatory variables that are treated as if the quantities were non-random. | |||||||||||
4 | TBD:0000129 | meta-analysis using inverse variance method | ||||||||||||
4 | TBD:0000130 | meta-analysis using Mantel-Haenszel method | ||||||||||||
4 | TBD:0000131 | meta-analysis using Peto method | ||||||||||||
3 | TBD:0000128 | meta-analysis with random-effects model | STATO: STATO_0000099: random effect model (variance components model) = a random effect(s) model, also called a variance components model, is a kind of hierarchical linear model. It assumes that the dataset being analysed consists of a hierarchy of different populations whose differences relate to that hierarchy. | |||||||||||
4 | TBD:0000132 | meta-analysis using DerSimonian-Laird method | STATO: STATO_0000429: DerSimonian-Laird estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis. The estimator is used in simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies | |||||||||||
4 | TBD:0000133 | meta-analysis using Paule-Mandel method | ||||||||||||
4 | TBD:0000134 | meta-analysis using restricted maximum likelihood method | STATO: STATO_0000427: restricted maximum likelihood estimation (REML) = restricted maximum likelihood estimation is a kind of maximum likelihood estimation data transformation which estimates the variance components of random-effects in univariate and multivariate meta-analysis. in contrast to 'maximum likelihood estimation', reml can produce unbiased estimates of variance and covariance parameters. | |||||||||||
4 | TBD:0000135 | meta-analysis using maximum likelihood method | STATO: STATO_0000428: maximum likelihood estimation = "maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model, given observations. MLE attempts to find the parameter values that maximize the likelihood function, given the observations. The method of maximum likelihood is based on the likelihood function, L(θ; x). We are given a statistical model, i.e. a family of distributions {f(·; θ) : θ ∈ Θ}, where θ denotes the (possibly multi-dimensional) parameter for the model. The method of maximum likelihood finds the values of the model parameter, θ, that maximize the likelihood function, L(θ; x)." | |||||||||||
4 | TBD:0000136 | meta-analysis using empirical Bayes method | NCI Code C142403, CDISC Submission Value: Bayesian approaches, CDISC Synonym: _, NCI Preferred Term: Bayesian Approach, CDISC Definition: Approaches to data analysis that provide a posterior probability distribution for some parameter (e.g., treatment effect), derived from the observed data and a prior probability distribution for the parameter. The posterior distribution is then used as the basis for statistical inference. [ICH E9 Glossary] NCI Code C142404, CDISC Submission Value: Bayesian statistics, CDISC Synonym: _, NCI Preferred Term: Bayesian Statistics, CDISC Definition: Statistical approach named for Thomas Bayes (1701-1761) that has among its features giving a subjective interpretation to probability, accepting the idea that it is possible to talk about the probability of hypotheses being true and of parameters having particular values. | |||||||||||
4 | TBD:0000137 | meta-analysis using Hunter-Schmidt method | STATO: STATO_0000426: Hunter-Schmidt estimator = Hunter-Schmidt estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis. | |||||||||||
4 | STATO:0000430 | meta-analysis using Hartung-Knapp-Sidik-Jonkman method | STATO: a random effect meta analysis procedure defined by Hartung and Knapp and by Sidik and Jonkman which performs better than DerSimonian and Laird approach, especially when there is heterogeneity and the number of studies in the meta-analysis is small. also STATO_0000425 Sidik-Jonkman estimator = Sidik-Jonkman estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis. | |||||||||||
4 | TBD:0000139 | meta-analysis using modified Knapp-Hartung method | ||||||||||||
4 | TBD:0000140 | meta-analysis using Hedges method | ||||||||||||
2 | TBD:0000124 | statistical hypothesis test | ||||||||||||
3 | TBD:0000141 | between group comparison statistical test | ||||||||||||
4 | TBD:0000146 | ANOVA | STATO: uses OBI_0200201: ANOVA or analysis of variance is a data transformation in which a statistical test of whether the means of several groups are all equal is performed. | |||||||||||
5 | TBD:0000150 | multivariate ANOVA | ||||||||||||
5 | STATO:0000048 | multiway ANOVA | child term ?? 3-way ANOVA | STATO: Multi-way ANOVA is an analysis of variance where the different groups being compared are associated with the factor levels of more than 2 independent variables. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data. | ||||||||||
5 | STATO:0000044 | one-way ANOVA | STATO: one-way ANOVA (one factor ANOVA) = one-way ANOVA is an analysis of variance where the different groups being compared are associated with the factor levels of only one independent variable. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data. | |||||||||||
5 | TBD:0000153 | repeated measure ANOVA | ||||||||||||
5 | STATO:0000045 | two-way ANOVA | child terms ?? 2-way ANOVA without replication ?? 2-way ANOVA with replication | STATO: two-way ANOVA (two factor ANOVA) = two-way ANOVA is an analysis of variance where the different groups being compared are associated with the factor levels of exactly 2 independent variables. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data. | ||||||||||
4 | TBD:0000147 | non-parametric test | ||||||||||||
5 | STATO:0000094 | Kruskal-Wallis test | STATO: Kruskal Wallis test (rank-sum test for the comparison of multiple (more than 2) samples.; H test) = "The Kruskal–Wallis test is a null hypothesis statistical testing objective which allows multiple (n>=2) groups (or conditions or treatments) to be compared, without making the assumption that values are normally distributed. The Kruskal–Wallis test is the non-parametric equivalent of the independent samples ANOVA. The Kruskal–Wallis test is most commonly used when there is one nominal variable and one measurement variable, and the measurement variable does not meet the normality assumption of an ANOVA." | |||||||||||
5 | TBD:0000156 | log rank test | ||||||||||||
5 | STATO:0000076 | Mann-Whitney U-test | STATO: "The Mann-Whitney U-test is a null hypothesis statistical testing procedure which allows two groups (or conditions or treatments) to be compared without making the assumption that values are normally distributed. The Mann-Whitney test is the non-parametric equivalent of the t-test for independent samples" | |||||||||||
5 | STATO:0000433 | McNemar test | STATO: McNemar test (McNemar's Chi-squared Test for Count Data; test of the marginal homogeneity of a contingency table; within-subjects chi-squared test) = "McNemar's test is a statistical test used on paired nominal data. It is applied to 2 × 2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is 'marginal homogeneity'). It is named after Quinn McNemar, who introduced it in 1947. An application of the test in genetics is the transmission disequilibrium test for detecting linkage disequilibrium" | |||||||||||
5 | TBD:0000159 | sign test | ||||||||||||
5 | TBD:0000160 | Friedman test | ||||||||||||
4 | TBD:0000148 | two sample t-test | ||||||||||||
5 | STATO:0000303 | two sample t-test with equal variance | STATO: two sample t-test with equal variance (t-test for independent means assuming equal variance; two sample t-test) = two sample t-test is a null hypothesis statistical test which is used to reject or accept the hypothesis of absence of difference between the means over 2 randomly sampled populations. It uses a t-distribution for the test and assumes that the variables in the population are normally distributed and with equal variances. | |||||||||||
5 | STATO:0000304 | two sample t-test with unequal variance | STATO: two sample t-test with unequal variance (t-test for independent means assuming unequal variance; Welch t-test) = Welch t-test is a two sample t-test used when the variances of the 2 populations/samples are thought to be unequal (homoskedasticity hypothesis not verified). In this version of the two-sample t-test, the denominator used to form the t-statistic does not rely on a 'pooled variance' estimate. | |||||||||||
4 | STATO:0000052 | z test for between group comparison | STATO: Z-test is a statistical test which evaluates the null hypothesis that the means of 2 populations are equal and returns a p-value. | |||||||||||
4 | TBD:ANCOVA | ANCOVA | analysis of covariance (ANCOVA) | |||||||||||
3 | TBD:0000142 | chi square test | STATO: from OBI_0200200: The chi-square test is a data transformation with the objective of statistical hypothesis testing, in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true, or any in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough. | |||||||||||
4 | TBD:0000163 | chi square test for homogeneity | ||||||||||||
4 | STATO:0000074 | Mantel-Haenszel method | STATO: Cochran-Mantel-Haenszel test for repeated tests of independence (CMH test; Mantel–Haenszel test) = "Cochran-Mantel-Haenszel test for repeated tests of independence is a statistical test which allows the comparison of two groups on a dichotomous/categorical response. It is used when the effect of the explanatory variable on the response variable is influenced by covariates that can be controlled. It is often used in observational studies where random assignment of subjects to different treatments cannot be controlled, but influencing covariates can. The null hypothesis is that the two nominal variables that are tested within each repetition are independent of each other. So there are 3 variables to consider: two categorical variables to be tested for independence of each other, and the third variable identifies the repeats." | |||||||||||
4 | TBD:0000165 | Pearson’s chi square test of goodness of fit | ||||||||||||
4 | TBD:0000166 | Pearson’s chi square test of independence between categorical variables | ||||||||||||
5 | TBD:0000167 | Yates’ corrected chi-squared test | ||||||||||||
3 | TBD:0000143 | single-sample reference comparison statistical test | ||||||||||||
4 | STATO:0000302 | one sample t-test | STATO: "one sample t-test is a kind of Student's t-test which evaluates if a given sample can be reasonably assumed to be taken from the population. The test compares the sample statistic (m) to the population parameter (M). The one sample t-test is the small sample analog of the z test, which is suitable for large samples." | |||||||||||
4 | TBD:0000169 | z test for single-sample | ||||||||||||
3 | TBD:0000144 | test of association between categorical variables | ||||||||||||
4 | STATO:0000148 | Cochran-Armitage test for trend | STATO: "The Cochran-Armitage test (CATT) is a statistical test used in categorical data analysis when the aim is to assess for the presence of an association between a dichotomous variable (variable with two categories) and a polychotomous variable (a variable with k categories). The two-level variable represents the response, and the other represents an explanatory variable with ordered levels. The null hypothesis is the hypothesis of no trend, which means that the binomial proportion is the same for all levels of the explanatory variable. For example, doses of a treatment can be ordered as 'low', 'medium', and 'high', and we may suspect that the treatment benefit cannot become smaller as the dose increases. The trend test is often used as a genotype-based test for case-control genetic association studies." | |||||||||||
4 | STATO:0000073 | Fisher’s exact test | STATO: Fisher's exact test is a statistical test used to determine if there are nonrandom associations between two categorical variables. | |||||||||||
3 | TBD:0000145 | within subject comparison statistical test | ||||||||||||
4 | STATO:0000095 | paired t-test | STATO: paired t-test (t-test for dependent means) = paired t-test is a statistical test which is specifically designed to analyze differences between paired observations in the case of studies using a repeated measures design with only 2 repeated measurements per subject (before and after treatment, for example). | |||||||||||
4 | STATO:0000092 | Wilcoxon signed rank test | STATO: "The Wilcoxon signed rank test is a statistical test which tests the null hypothesis that the median difference between pairs of observations is zero. This is the non-parametric analogue to the paired t-test, and should be used if the distribution of differences between pairs may be non-normally distributed. The procedure involves a ranking, hence the name. The absolute value of the differences between observations are ranked from smallest to largest, with the smallest difference getting a rank of 1, then next larger difference getting a rank of 2, etc. Ties are given average ranks. The ranks of all differences in one direction are summed, and the ranks of all differences in the other direction are summed. The smaller of these two sums is the test statistic, W (sometimes symbolized Ts). Unlike most test statistics, smaller values of W are less likely under the null hypothesis." | |||||||||||
3 | TBD:permutation-test | permutation test | ||||||||||||
4 | TBD:prospective-sample-permutation-testing | prospective sample permutation testing | ||||||||||||
4 | TBD:retrospective-sample-permutation-testing | retrospective sample permutation testing | ||||||||||||
2 | TBD:0000Log | logarithm | ||||||||||||
1 | TBD:model-characteristics | statistical model characteristic | An aspect, attribute, or feature of a statistical model. | A statistical model is defined as a set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data. | Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal | 2023-06-05 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann | ||||||||
2 | TBD:0000118 | statistical model goal | ||||||||||||
3 | TBD:0000119 | adjustment for clustering | ||||||||||||
3 | TBD:0000120 | adjustment for covariates | ||||||||||||
3 | TBD:participant-inclusion-criteria-for-analysis | participant inclusion criteria for analysis | ||||||||||||
4 | TBD:ITTA | intention-to-treat analysis | ||||||||||||
4 | TBD:PPA | per-protocol analysis | ||||||||||||
4 | TBD:participant-inclusion-criteria-for-secondary-analysis | participant inclusion criteria for secondary analysis | ||||||||||||
3 | TBD:data-inclusion-criteria-for-analysis | data inclusion criteria for analysis | ||||||||||||
4 | TBD:data-inclusion-criteria-for-secondary-analysis | data inclusion criteria for secondary analysis | ||||||||||||
3 | TBD:handling-of-missing-endpoint-data | handling of missing endpoint data | ||||||||||||
4 | TBD:single-imputation-by-LOCF | single imputation by last-observation-carried-forward (LOCF) | ||||||||||||
3 | TBD:sample-size | sample size estimation | The term 'sample size estimation' may be applied to hypothesis testing-based sample size calculation. | |||||||||||
4 | TBD:sample-size-per-group | sample size per group | ||||||||||||
4 | TBD:number-of-permutations-sampled | number of permutations sampled | ||||||||||||
3 | TBD:net-effect-analysis | net effect analysis | ||||||||||||
4 | TBD:OutcomeSetNetEffect | set of outcomes (for a net effect analysis) | ||||||||||||
3 | TBD:primary-analytic-method | primary analytic method | ||||||||||||
3 | TBD:identify-source-of-interaction | identify source(s) of significant interaction | ||||||||||||
3 | TBD:rank-based-analytic-method | rank-based analytic method | ||||||||||||
3 | TBD:net-effect-contribution-analysis | net effect contribution analysis | ||||||||||||
2 | TBD:statistical-model-assumption | statistical model assumption | ||||||||||||
3 | TBD:assumption001 | data distribution assumption of normal distribution | Assumption that the observed data in each comparison group follows a normal distribution. | |||||||||||
3 | TBD:assumption002 | data distribution assumption of equal standard deviations | Assumption that the observed data across comparison groups have the same standard deviation. | |||||||||||
3 | TBD:assumption003 | data distribution assumption of asymptotic approximation | Assumption that there is sufficient data across the distribution to permit using an approximation that is [asymptotic]. | |||||||||||
2 | TBD:statistical-model-assumption-assessment | statistical model assumption assessment | ||||||||||||
3 | TBD:assumption-assessment-001 | all the expected counts in the cells of the contingency table meet or exceed a threshold | ||||||||||||
2 | TBD:statistical-software-package | statistical software package | ||||||||||||
2 | TBD:predicted | predicted value | TBD after example modeling | |||||||||||
2 | STATO:0000299 | statistical inference paradigm | A set of fundamental assumptions underlying the quantitative process of using data analysis to deduce properties of a probability distribution. | In this definition, the phrase "fundamental assumptions" refers to the foundational framework underlying specific statistical analyses, such as the Bayesian approach or the frequentist approach. Other less commonly used statistical inference paradigms include resampling methods, fiducial inference, pragmatic inference, and likelihoodist paradigm. | Harold Lehmann, Brian S. Alper, Janice Tufte | 2024-09-30 vote 6-0 by Eric Harvey, Arnav Agarwal, Airton Tetelbom Stein, Homa Keshavarz, Carlos Alva-Diaz, Javier Bracchiglione | 2024-09-16 vote 9-1 by C P Ooi, Bhagvan Kommadi, Carlos Alva-Diaz, Cauê Monaco, Javier Bracchiglione, Sheyu Li, Airton Tetelbom Stein, Lenny Vasanthan, Eric Harvey, Janice Tufte | 2024-09-16 comment re: "statistical inference paradigm" = "A set of fundamental assumptions underlying the quantitative process of using data analysis to deduce properties of a probability distribution."1N) The definition confuses me. It is unclear why this paradigm is a fixed set or a contextualised set. A comment here may be necessary. | ||||||
3 | TBD:Bayesian | Bayesian inference | A statistical inference paradigm in which the assumptions are that parameters are random variables with probability distributions, that prior knowledge can be represented with such distributions, and that one learns from data by updating those prior distributions using Bayes' Rule. | Regarding random variables with probability distributions: These random variables are often used to represent population parameters, but there are other types of random variables, e.g., representing typical methodological concerns, where the typical notion of “population” does not apply. Regarding prior knowledge: The notion of representing prior knowledge as a probability distribution is the core of disagreement among statisticians around Bayesian inference. For those interpreting “probability” as “measure of belief”, prior probability distributions encode belief about the parameters (“subjectivist Bayes”), and ignorance is represented as a “non-informative” prior. For those with a more frequentist interpretation of probability, such priors summarize frequency knowledge seen in the past (“empirical Bayes”). Regarding Bayes’ Rule: Probability theory, with no assumptions beyond regular probability, says that P(A|B) = P(B|A) P(A) / P(B). Bayes’ Rule says that one can (should) use this formula to update knowledge from data: P(parameter value | data) = P(data | parameter value) P(parameter value) / P(data), with P(data | parameter value) being called the “likelihood function,” communicating how likely the data are, given parameter values (and what type of distribution the parameter(s) connote), and which could be used to reflect sampling, if relevant. | Harold Lehmann, Kenneth Wilkins | 2024-09-30 vote 6-0 by Carlos Alva-Diaz, Eric Harvey, Arnav Agarwal, Airton Tetelbom Stein, Homa Keshavarz, Javier Bracchiglione | NCI Code C142403, CDISC Submission Value: Bayesian approaches, CDISC Synonym: _, NCI Preferred Term: Bayesian Approach, CDISC Definition: Approaches to data analysis that provide a posterior probability distribution for some parameter (e.g., treatment effect), derived from the observed data and a prior probability distribution for the parameter. The posterior distribution is then used as the basis for statistical inference. [ICH E9 Glossary] NCI Code C142404, CDISC Submission Value: Bayesian statistics, CDISC Synonym: _, NCI Preferred Term: Bayesian Statistics, CDISC Definition: Statistical approach named for Thomas Bayes (1701-1761) that has among its features giving a subjective interpretation to probability, accepting the idea that it is possible to talk about the probability of hypotheses being true and of parameters having particular values. | |||||||
3 | TBD:frequentist | frequentist inference | A statistical inference paradigm in which the assumptions are that probabilities reflect relative frequencies of events and that the probability of observing data are governed by fixed yet unknown population parameters. | The frequentist inference is based on hypothetical repeated sampling and the resulting relative frequencies in the long run. Inferences are based on point and interval estimates. For hypothesis testing, the inference is based on the probability of obtaining values equal to or more extreme than the observed data. | Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel | 2024-09-30 vote 6-0 by Eric Harvey, Arnav Agarwal, Airton Tetelbom Stein, Homa Keshavarz, Carlos Alva-Diaz, Javier Bracchiglione | ||||||||
1 | TBD:model-component | statistical model component | A part of a statistical model. | A statistical model is defined as a set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data. Statistical model components include graphical structures (e.g. directed acyclic graph), equations (e.g. regression model form), components of equations (e.g. covariate term), and distributional assumptions (e.g. regression error distribution). | Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Harold Lehmann | 2023-06-05 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann | ||||||||
2 | TBD:0000088 | covariate term | ||||||||||||
2 | STATO:0000469 | interaction term | STATO: model interaction effect term = a model interaction effect term is a model term which accounts for variation explained by the combined effects of the factor levels of more than one (usually 2) independent variables. | |||||||||||
2 | TBD:likelihoodfunction | likelihood function | ||||||||||||
2 | TBD:0000106 | regression model form | ||||||||||||
3 | TBD:0000102dup | linear regression | see GLM with identity link (same term IS-A GLM) | STATO: STATO_0000108: linear regression for analysis of continuous dependent variable = "linear regression model is a model which attempts to explain the data distribution associated with the response/dependent variable in terms of values assumed by the independent variable, using a linear function or linear combination of the regression parameters and the predictor/independent variable(s). linear regression modeling makes a number of assumptions, which include homoskedasticity (constancy of variance)" | true | |||||||||
3 | TBD:0000101dup | logistic regression | see GLM with logit link (same term IS-A GLM) | true | ||||||||||
3 | TBD:0000107 | log linear regression | ||||||||||||
3 | TBD:0000108 | polynomial regression | ||||||||||||
3 | TBD:0000109 | Cox proportional hazards | ||||||||||||
1 | TBD:PDA | probability distribution attribute | An aspect, characteristic, or feature of a probability distribution. | A probability distribution is represented by a combination of probability distribution attributes. | Brian S. Alper, Harold Lehmann, Muhammad Afzal | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | ||||||||
2 | TBD:0000110 | probability distribution class | A probability distribution attribute that communicates how the likelihood of a specified outcome is calculated. | The probability distribution class defines the assumed model. Parametric probability distribution classes are determined by parameters. | Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | ||||||||
3 | TBD:0000111 | normal distribution | A probability distribution class in which instances are unimodal, symmetric, and defined by two parameters, mean and standard deviation. | Normal distribution is commonly used to approximate the sampling distribution of quantities estimated from samples. Variance is the square of standard deviation. Variance is sometimes used instead of standard deviation as a parameter for defining a normal distribution. Standard normal distribution is a special case of normal distribution with a mean = 0, variance = 1, and kurtosis = 3. All normal distributions have skewness = 0. | Philippe Rocca-Serra, Ken Wilkins, Joanne Dehnbostel, Khalid Shahin, Brian S. Alper, Harold Lehmann | 2023-08-07 vote 5-0 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann | Term IRI: http://purl.obolibrary.org/obo/STATO_0000227 Definition: A normal distribution is a continuous probability distribution described by a probability distribution function described here: http://mathworld.wolfram.com/NormalDistribution.html | |||||||
3 | STATO:0000438 | log normal distribution | A probability distribution class in which the logarithm transformed values of a variable follow a normal distribution. Instances of the log normal distribution class are unimodal and skewed. Variables can only be non-negative real values. | Log normal distribution is commonly used to approximate the distribution of times and costs. The mean of a log normal distribution is the geometric mean of the log transformed values. Log transformed means the natural log of values replace those values. Normal distribution is defined as a probability distribution class in which instances are unimodal, symmetric, and defined by two parameters, mean and standard deviation. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper, Khalid Shahin | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | 2023-08-07 vote 4-1 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann | 2023-08-07 comment: (tweak to the definition): A probability distribution class in which the logarithm transformed values of a variable follow a normal distribution. Instances of the log normal distribution class are unimodal and skewed. STATO_0000438 | ||||||
3 | STATO:0000160 | exponential distribution | A probability distribution class defined by a single parameter, rate. Instances of the exponential distribution class are unimodal and skewed. Variables can only be non-negative real values. | Exponential distribution is commonly used to represent the distribution of independent events occurring at the same rate over time. The mean and standard deviation of an exponential distribution are each the reciprocal of the rate. | Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper, Khalid Shahin | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | 2023-08-07 vote 4-1 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann | 2023-08-07 comment: (tweak to the definition): A probability distribution class defined by a single parameter, rate and commonly used to represent the distribution of independent events occurring at the same rate over time. Instances of the exponential distribution class are unimodal, skewed, STATO_0000160 | ||||||
3 | STATO:0000149 | binomial distribution | A probability distribution class defined by two parameters: the number of independent trials, n, and the probability of success, p. Variables can only be dichotomous values. | Binomial distribution is commonly used to approximate the probability of a dichotomous state (presence/absence, success/failure, true/false). The mean of a binomial distribution is the number of independent trials, n, multiplied by the probability of success, p. n * p The variance of a binomial distribution is the number of independent trials, n, multiplied by the probability of success, p, multiplied by the probability of failure, 1-p. n * p * q where q = 1 - p | Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Brian S. Alper | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | STATO: binomial logistic regression for analysis of dichotomous dependent variable = binomial logistic regression model is a model which attempts to explain data distribution associated with *dichotomous* response/dependent variable in terms of values assumed by the independent variable uses a function of predictor/independent variable(s): the function used in this instance of regression modeling is logistic function. also STATO_0000276: binomial distribution = The binomial distribution is a discrete probability distribution which describes the probability of k successes in n draws with replacement from a finite population of size N. The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. The binomial distribution gives the discrete probability distribution of obtaining exactly n successes out of N Bernoulli trials (where the result of each Bernoulli trial is true with probability p and false with probability q=1-p ) notation: B(n,p) The mean is N*p The variance is N*p*q | |||||||
3 | STATO:0000109 | multinomial distribution | A probability distribution class defined by multiple parameters: the number of independent trials, n, the number of categories, k, and k-1 probabilities of success. Variables can only be polychotomous values. | Multinomial distribution is commonly used to approximate the probability of a categorical outcome across a discrete number of mutually exclusive possible categories. A classic example is rolling a six-sided die. For *n* independent trials, the expected (mean) number of times category *i* will appear is *n* multiplied by the probability of success, *p<sub>i</sub>*. *n* * *p<sub>i</sub>* The variance of that expectation is *n* multiplied by *p<sub>i</sub>* multiplied by the probability of failure, 1-*p<sub>i</sub>* | Harold Lehmann, Joanne Dehnbostel, Brian S. Alper | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | STATO: multinomial logistic regression for analysis of dichotomous dependent variable = multinomial logistic regression model is a model which attempts to explain data distribution associated with *polychotomous* response/dependent variable in terms of values assumed by the independent variable uses a function of predictor/independent variable(s): the function used in this instance of regression modeling is logistic function. also multinomial distribution (STATO_0000103) = the multinomial distribution is a probability distribution which gives the probability of any particular combination of numbers of successes for various categories defined in the context of n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability. | |||||||
3 | STATO:0000051 | Poisson distribution | A probability distribution class defined by one parameter: a non-negative real number, λ. Random variables following a Poisson distribution can only have non-negative integer values. | Poisson distribution is commonly used to approximate the number (count) of events occurring within a given time interval or given spatial region. The expected value of a Poisson-distributed random variable is equal to λ and so is its variance. | Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Muhammad Afzal | 2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan BUT definition changed based on comment | 2023-10-02 comment: The other definitions include something about what is called the "support" (binary, polychotomous). Here, we should say, to be consistent, "Variables can take on only non-negative integral values." | STATO: STATO_0000051 is Poisson distribution = "Poisson distribution is a probability distribution used to model the number of events occurring within a given time interval. It is defined by a real number (λ) and an integer k representing the number of events and a function. The expected value of a Poisson-distributed random variable is equal to λ and so is its variance." | |||||
3 | STATO:0000283 | negative binomial distribution | A probability distribution class for discrete data of the number of successes in a sequence of Bernoulli trials before a specified number (denoted r) of failures occur. | The negative binomial distribution, also known as the Pascal distribution, gives the probability of r-1 successes and x failures in x+r-1 trials, and success on the (x+r)th trial. Pólya distribution is a variation of negative binomial distribution used for all real numbers, not just non-negative integers. | Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Brian S. Alper | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | STATO: STATO_0000283: negative binomial distribution (Pascal distribution; Pólya distribution) = negative binomial probability distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number of failures (denoted r) occur. The negative binomial distribution, also known as the Pascal distribution or Pólya distribution, gives the probability of r-1 successes and x failures in x+r-1 trials, and success on the (x+r)th trial. | |||||||
2 | TBD:mu | distribution mean | A probability distribution attribute that represents the expected value of a variable that has that distribution. | For a normal distribution, the distribution parameter mean (also called µ or mu) coincides with the mean of the distribution. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal | 2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan | 2023-06-12 vote 2-2 by Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey | 2023-06-12 comments: mu represents population mean. It is a measure of central tendency that represents the average value of a variable within an entire population. To avoid any ambiguity, rather than mu I would use the Alternative term µ. As reported by Wikipedia: In Ancient Greek, the name of the letter was written μῦ and pronounced [myː], but in Modern Greek, the letter is spelled μι and pronounced [mi]. In polytonic orthography, it is written with an acute accent: μί. | ||||||
2 | STATO:0000694 | distribution standard deviation | A probability distribution attribute that is the square root of the distribution variance. | A distribution variance is defined as a probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value. For a normal distribution, the distribution parameter standard deviation (also called σ or sigma) coincides with the standard deviation of the distribution. Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins | 2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey, Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde | 2023-06-12 vote 3-1 by Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey | 2023-06-12 comment: sigma represents population standard deviation. It is a measure of the dispersion or spread of data points within an entire population. | Measure of Dispersion | |||||
2 | TBD:model-parameter | probability distribution parameter | A member of a set of quantities that unambiguously defines a probability distribution function. | Parameters serve different roles in defining distributions. Location parameters define the position along the range of possible values. Shape and scale parameters define the dispersion around the expected value. When the probability distribution parameters have values, the set of values defines a particular probability distribution function. When a statistic applies to a specific set of data, the specific set of data is called a sample and the statistic is called the sample statistic. Likewise, when a probability distribution parameter applies to the group from which a sample may be derived, the group is called a population and the probability distribution parameter is called a population parameter. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel | 2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey | 2023-06-12 vote 5-0 by Brian S. Alper, Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey BUT the term then changed in committee to grapple with sub-terms | |||||||
3 | TBD:mean-normal | mean as normal-distribution parameter | A probability distribution parameter for a normal distribution that provides the location of the distribution. | This parameter is generally denoted as µ or mu. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins | 2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey, Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde | ||||||||
3 | TBD:variance-normal | variance as normal-distribution parameter | A probability distribution parameter for a normal distribution that provides the dispersion of the distribution. | This parameter is generally denoted as σ<sup>2</sup> or sigma-squared. | Harold Lehmann, Brian S. Alper, Kenneth Wilkins | 2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey, Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde | ||||||||
3 | TBD:population-parameter | DEPRECATED: population parameter | A statistical distribution parameter that is used to define a probability distribution function of the population. | A [statistical distribution parameter](https://fevir.net/resources/CodeSystem/27270#TBD:model-parameter) is defined as a member of a set of quantities that unambiguously defines a probability distribution function. When a statistic applies to a specific set of data, the specific set of data is called a sample and the statistic is called the sample statistic. Likewise, when a statistical distribution parameter applies to the group from which a sample may be derived, the group is called a population and the statistical distribution parameter is called a population parameter. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel | 2023-07-10 with reorganization of the SEVCO section | ||||||||
2 | STATO:0000693 | distribution variance | A probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value. | For a normal distribution, the distribution parameter variance (also called σ<sup>2</sup> or sigma-squared) coincides with the variance of the distribution. | Brian S. Alper, Harold Lehmann, Kenneth Wilkins | 2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey, Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde | ||||||||
3 | TBD:0000056 | variance of the sampling distribution | A distribution variance in which the distribution is a sampling distribution of a given statistic. | Distribution variance is defined as a probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value. A sampling distribution is a distribution of values for the given statistic derived from a set of random independent samples from the same population. The samples may be theoretical or actual. | Brian S. Alper, Kenneth Wilkins, Harold Lehmann | 2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey | Measure of Dispersion | |||||||
1 | TBD:PD | probability distribution |