For systematic reviews to be rigorous, deliverable and useful, they need a well-defined review question. Scoping for a review also requires the specification of clear inclusion criteria and planned synthesis methods. Guidance is lacking on how to develop these, especially in the context of undertaking rapid and responsive systematic reviews to inform health services and health policy.
This report describes and discusses the experiences of review scoping of three commissioned research centres that conducted evidence syntheses to inform health and social care organisation, delivery and policy in the UK, between 2017 and 2020.
Sources included researcher recollection, project meeting minutes, e-mail correspondence with stakeholders, and scoping searches, spanning the period from allocation of a review topic through to agreement of the review protocol.
We produced eight descriptive case studies of selected reviews from the three teams. From these case studies, we identified key issues that shape the processes of scoping and question formulation for evidence synthesis; these issues were then discussed and lessons drawn.
Across the eight diverse case studies, we identified 14 recurrent issues that were important in shaping the scoping processes and formulating a review’s questions.

‘Consultative issues’ related to securing input from review commissioners, policy customers, experts, patients and other stakeholders. These included managing and deciding priorities, reconciling different priorities/perspectives, achieving buy-in and engagement, educating the end-user about synthesis processes and products, and managing stakeholder expectations.

‘Interface issues’ related to the interaction between the review team and potential review users. These included identifying the niche/gap and optimising value, assuring and balancing rigour/reliability/relevance, and assuring the transferability/applicability of study evidence to specific policy/service user contexts.

‘Technical issues’ were associated with the methods and conduct of the review. These were choosing the method(s) of synthesis, balancing fixed and fluid review questions/components/definitions, taking stock of what research already exists, mapping versus scoping versus reviewing, treating scoping/relevance as a continuous process rather than just an initial stage, and calibrating general versus specific and broad versus deep coverage of topics.
As a retrospective joint reflection by review teams on their experiences of scoping processes, this report is not based on prospectively collected research data. In addition, our evaluations were not externally validated by, for example, policy and service evidence users or patients and the public.
We have summarised our reflections on scoping from this programme of reviews as 14 common issues and 28 practical ‘lessons learned’. Effective scoping of rapid, responsive reviews extends beyond information exchange and the technical procedures for specifying a ‘gap’ in the evidence. These considerations work alongside social processes, in particular the building of relationships and shared understanding between reviewers, research commissioners and potential review users, which may reflect consultancy, negotiation and co-production models of research and information use.