Methods
Methods used in the Pharmacovigilance for COVID-19 vaccines Dashboard
The main source of evidence is the COVID-19 L·OVE platform, which is based on two interrelated components: a repository and a classification platform.
The methods reported in this section are those in use as of April 2020.
The latest version of the methods, the total number of sources screened, and a living flow diagram and report of the project are updated regularly at this address:
https://app.iloveevidence.com/covid19/methods
Search strategy
Electronic searches
The COVID-19 L·OVE repository was built, and is maintained, by systematic searches of 41 databases, trial registries and preprint servers without any restriction by study design, language or publication status, including PubMed, EMBASE, CINAHL (the Cumulative Index to Nursing and Allied Health Literature), PsycINFO, LILACS (Latin American & Caribbean Health Sciences Literature), Wanfang Database, CBM (Chinese Biomedical Literature Database), CNKI (Chinese National Knowledge Infrastructure), VIP (Chinese Scientific Journal Database), IRIS (WHO Institutional Repository for Information Sharing), IRIS PAHO (PAHO Institutional Repository for Information Sharing), IBECS (Spanish Bibliographic Index on Health Sciences), Microsoft Academic, ICTRP Search Portal, Clinicaltrials.gov, ISRCTN registry, Chinese Clinical Trial Registry, IRCT (Iranian Registry of Clinical Trials), EU Clinical Trials Register (clinical trials for COVID-19), Japan NIPH Clinical Trials Search, JPRN (Japan Primary Registries Network; includes JapicCTI, JMACCT CTR, jRCT and UMIN CTR), MedRxiv, BioRxiv, SSRN Preprints, Research Square, ChinaXiv and SciELO Preprints.
We have adapted the main COVID-19 strategy (described below) to the syntax of each source. We also retrieve all the records belonging to COVID-19 collections in the sources providing this alternative. The information is obtained from the sources using different technology solutions, such as querying publicly available APIs, subscribing to RSS feeds, parsing .csv files posted on websites, or running traditional manual searches.
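As a hypothetical illustration of the API-based retrieval described above, the sketch below builds a query URL for one such source (PubMed, via the NCBI E-utilities ESearch endpoint). The search term shown is illustrative only; the actual per-source strategies and schedules are listed at https://app.iloveevidence.com/covid19/methods.

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint (a real public API; the term used
# below is illustrative, not the project's actual search strategy).
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(term: str, retmax: int = 100) -> str:
    """Return an ESearch URL for the given search term (JSON output)."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    return f"{EUTILS_BASE}?{urlencode(params)}"

url = build_pubmed_query('"COVID-19"[MeSH] OR coronavirus[tiab]')
print(url)
```

The returned JSON lists matching PubMed IDs, which a downstream step would fetch and pass to the repository.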
Main COVID-19 strategy
The strategy, frequency and other details employed to retrieve articles from each source are described at https://app.iloveevidence.com/covid19/methods
The records are uploaded first to Epistemonikos Database [Rada G, 2013] where they are deduplicated and cleansed using proprietary software of Epistemonikos Foundation. Then, the articles are served to the COVID-19 L·OVE platform through an API.
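The deduplication step is performed by proprietary Epistemonikos Foundation software; the snippet below is only a minimal sketch of the general idea, assuming a hypothetical record format with a DOI (preferred as the key when present) and a fallback normalized title.

```python
import re

def dedup_key(record: dict) -> str:
    """Hypothetical dedup key: lowercase DOI if present, else normalized title."""
    doi = (record.get("doi") or "").lower().strip()
    if doi:
        return "doi:" + doi
    title = re.sub(r"[^a-z0-9]+", " ", record.get("title", "").lower()).strip()
    return "title:" + title

def deduplicate(records: list) -> list:
    """Keep the first record seen for each key, preserving input order."""
    seen, unique = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "A Trial of Vaccine X.", "doi": "10.1000/abc"},
    {"title": "A trial of vaccine X", "doi": "10.1000/ABC"},  # same DOI, different case
    {"title": "Another study", "doi": ""},
]
print(len(deduplicate(records)))  # 2
```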
Other search sources
In order to identify articles potentially missed by our electronic search we:
- Manually check all the systematic reviews identified by our search and add the studies included in those reviews to the repository.
- Manually check all the overviews of systematic reviews, scoping reviews and other broad syntheses reporting a list of included reviews or studies, and add the included articles to the repository.
- Evaluate potentially eligible articles sent by users through the contact email or other means (e.g. Twitter).
Considering the relevance of randomized trials for decision-making, we also:
- Run a daily search for randomised trials on Twitter using the following strategy: #COVID19 OR #COVID-19 OR #COVID_19 OR #COVID randomized OR randomised.
We also scan relevant scientific conferences, press release websites and the websites of the main trials or companies relevant to vaccination for COVID-19 (see Table).
Table. Relevant websites checked regularly
Selection of articles
At least two review authors independently screen the titles and abstracts yielded by the search against the population, intervention and study design inclusion criteria defined for each section of the dashboard.
We obtain the full text of the articles that appear to meet the inclusion criteria or require further analysis and at least two review authors decide upon their inclusion. The authors resolve discrepancies by discussion and, where necessary, with the intervention of a third reviewer.
Extraction and management of data
Using standardized forms, two reviewers independently extract data from each included article.
The data extracted vary depending on the section of the dashboard. For instance, to inform the clinical efficacy and safety section, the following data are extracted: study design; participant characteristics, including age, gender (proportion of women), disease severity and study eligibility criteria; details of the administered intervention and comparison, including dose and schedule; the outcomes assessed and the time points at which they were measured; the source of funding of the study; and the risk of bias assessment for each individual study.
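A structured extraction form along these lines could be modelled as a small data class; the field names below are illustrative of the items listed above, not the dashboard's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ExtractedStudy:
    """Hypothetical extraction form for the clinical efficacy/safety section."""
    study_design: str
    mean_age: Optional[float] = None
    proportion_women: Optional[float] = None
    disease_severity: Optional[str] = None
    eligibility_criteria: Optional[str] = None
    intervention: str = ""           # vaccine, dose and schedule
    comparison: str = ""             # e.g. placebo
    outcomes: List[str] = field(default_factory=list)
    outcome_timepoints: List[str] = field(default_factory=list)
    funding_source: Optional[str] = None
    risk_of_bias: Optional[str] = None  # per-outcome RoB judgement

study = ExtractedStudy(
    study_design="randomized trial",
    intervention="vaccine, 2 doses 21 days apart",
    comparison="placebo",
    outcomes=["symptomatic COVID-19"],
)
print(study.study_design)
```

Two reviewers each completing such a form independently makes discrepancies straightforward to detect field by field.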
Risk of bias assessment
The risk of bias of each randomized trial is assessed using the revised Cochrane risk-of-bias tool for randomized trials (RoB 2) [Sterne JAC, 2019], considering the following domains of bias for each result of all reported outcomes and time points: bias due to (1) the randomisation process, (2) deviations from intended interventions (effects of assignment to interventions at baseline), (3) missing outcome data, (4) measurement of the outcome, and (5) selection of the reported result. Discrepancies between review authors are resolved by discussion to reach consensus. If necessary, a third review author is consulted to achieve a decision.
Measures of treatment effect
For dichotomous outcomes, we express the estimate of the treatment effect of an intervention as a risk ratio (RR) along with its 95% confidence interval (CI).
For continuous outcomes, we use the mean difference and standard deviation to summarise the data, along with 95% CIs. For continuous outcomes reported using different scales, the treatment effect is expressed as a standardized mean difference (SMD) with 95% CI.
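As a minimal sketch of the dichotomous case, assuming the standard large-sample formulas (not the dashboard's actual implementation), the RR and its 95% CI can be computed on the log scale:

```python
import math

def risk_ratio(events_t: int, n_t: int, events_c: int, n_c: int):
    """Risk ratio with a 95% CI from the normal approximation on log(RR)."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR) for two independent proportions
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical arm data: 8 vs 40 events among 1000 participants each
rr, lo, hi = risk_ratio(8, 1000, 40, 1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```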
Strategy for data synthesis
For any outcomes where it is not possible to calculate an effect estimate, a narrative synthesis is presented, describing the studies in terms of the direction and size of effects and any available measure of precision. For any outcomes where data are available from more than one trial, we conduct a formal quantitative synthesis (meta-analysis) of clinically homogeneous studies using PyMeta [Hongyong D, 2021], applying the inverse variance method with a random-effects model. We assess inconsistency by visual inspection of the forest plots and using the I² index.
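The pooling step can be sketched as follows. This is a minimal DerSimonian-Laird inverse-variance random-effects implementation with the I² index, working on the log-RR scale; the dashboard itself uses PyMeta, and the trial data below are hypothetical.

```python
import math

def random_effects(log_effects, ses):
    """DerSimonian-Laird random-effects pooling of log effect sizes.

    Returns the pooled effect on the original (exponentiated) scale
    and the I² inconsistency index (%).
    """
    w = [1 / se**2 for se in ses]                     # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_effects)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_effects))  # Cochran's Q
    df = len(log_effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]       # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_effects)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2

# Three hypothetical trials: log risk ratios and their standard errors
rr, i2 = random_effects([-1.6, -1.2, -0.9], [0.35, 0.30, 0.40])
print(f"pooled RR {rr:.2f}, I2 = {i2:.0f}%")
```

When the between-study variance estimate is zero, the random-effects weights reduce to the fixed-effect (inverse-variance) weights.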
Assessment of certainty of evidence
The certainty of the evidence for all outcomes is judged using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group methodology [Guyatt GH, 2008], across the domains of risk of bias, consistency, directness, precision and reporting bias. For the main comparisons and outcomes, we prepare interactive Summary of Findings (SoF) tables [Guyatt GH, 2013], [Guyatt GH, 2013].
References
- Guyatt GH, Oxman AD, Santesso N, Helfand M, Vist G, Kunz R, Brozek J, Norris S, Meerpohl J, Djulbegovic B, Alonso-Coello P, Post PN, Busse JW, Glasziou P, Christensen R, Schünemann HJ. GRADE guidelines: 12. Preparing summary of findings tables-binary outcomes. Journal of clinical epidemiology. 2013;66(2):158-72.
- Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ, GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ (Clinical research ed.). 2008;336(7650):924-6.
- Guyatt GH, Thorlund K, Oxman AD, Walter SD, Patrick D, Furukawa TA, Johnston BC, Karanicolas P, Akl EA, Vist G, Kunz R, Brozek J, Kupper LL, Martin SL, Meerpohl JJ, Alonso-Coello P, Christensen R, Schunemann HJ. GRADE guidelines: 13. Preparing summary of findings tables and evidence profiles-continuous outcomes. Journal of clinical epidemiology. 2013;66(2):173-83.
- Hongyong D. PyMeta, Python module of Meta-analysis. Software - PyMeta. 2021.
- Rada G, Pérez D, Capurro D. Epistemonikos: a free, relational, collaborative, multilingual database of health evidence. Studies in health technology and informatics. 2013;192:486-90.
- Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, Cates CJ, Cheng HY, Corbett MS, Eldridge SM, Emberson JR, Hernán MA, Hopewell S, Hróbjartsson A, Junqueira DR, Jüni P, Kirkham JJ, Lasserson T, Li T, McAleenan A, Reeves BC, Shepperd S, Shrier I, Stewart LA, Tilling K, White IR, Whiting PF, Higgins JPT. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ (Clinical research ed.). 2019;366:l4898.