Background: Codes of conduct mainly focus on research misconduct in the form of fabrication, falsification, and plagiarism. However, at the aggregate level, lesser forms of research misbehavior may be more important because of their much higher prevalence. Little is known about which research misbehaviors are most frequent and what their impact is when they occur. Methods: A survey was conducted among 1353 attendees of international research integrity conferences. They were asked to score 60 research misbehaviors on 5-point scales according to their perceptions of frequency of occurrence, preventability, impact on truth, and impact on trust between scientists. We expressed aggregate-level impact as the product of the frequency score and the truth, trust, and preventability scores, respectively, and ranked misbehaviors by mean score. Relevant demographic and professional background information was also collected from participants. Results: The response rate was 17% of those who were sent the invitational email and 33% of those who opened it. The rankings suggest that selective reporting, selective citing, and flaws in quality assurance and mentoring are viewed as the major problems of modern research. The “deadly sins” of fabrication and falsification ranked highest on impact on truth but low to moderate on aggregate-level impact on truth, due to their low estimated frequency. Plagiarism is thought to be common but to have little impact on truth, although it ranked high on aggregate-level impact on trust. Conclusions: We designed a comprehensive list of 60 major and minor research misbehaviors. Our respondents were much more concerned about sloppy science than about scientific fraud. In fostering the responsible conduct of research, we recommend developing interventions that actively discourage the high-ranking misbehaviors from our study.
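The aggregate-level impact measure described above can be illustrated with a minimal Python sketch: each respondent's frequency score is multiplied by the corresponding truth, trust, or preventability score, and misbehaviors are then ranked by mean. The toy data and column names below are assumptions for illustration, not the authors' analysis code.

```python
# Minimal sketch of the aggregate-level impact ranking (toy data; column names assumed).
import pandas as pd

# Hypothetical responses: one row per respondent-misbehavior pair, each item on a 5-point scale.
responses = pd.DataFrame({
    "misbehavior":    ["selective reporting", "selective reporting", "fabrication", "fabrication"],
    "frequency":      [4, 5, 1, 2],
    "truth":          [3, 4, 5, 5],
    "trust":          [3, 3, 5, 5],
    "preventability": [4, 4, 3, 3],
})

# Aggregate-level impact = frequency score multiplied by the respective impact/preventability score.
for dim in ["truth", "trust", "preventability"]:
    responses[f"aggregate_{dim}"] = responses["frequency"] * responses[dim]

# Rank misbehaviors by mean aggregate impact on truth, highest first.
ranking = (responses.groupby("misbehavior")["aggregate_truth"]
                    .mean()
                    .sort_values(ascending=False))
print(ranking)
```

In this toy example, the frequently perceived selective reporting outranks fabrication on aggregate impact on truth despite fabrication's higher per-incident truth score, mirroring the pattern reported above.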
The research climate plays a key role in fostering integrity in research. However, little is known about what constitutes a responsible research climate. We investigated academic researchers' perceptions of this through focus group interviews. We recruited researchers from the Vrije Universiteit Amsterdam and the Amsterdam University Medical Center to participate in focus group discussions consisting of researchers from similar academic ranks and disciplinary fields. We asked participants to reflect on the characteristics of a responsible research climate, the barriers they perceived, and which interventions they thought would be fruitful for improving the research climate. Discussions were recorded and transcribed verbatim, and we used inductive content analysis to analyse the transcripts. We conducted 12 focus groups with 61 researchers in total. Fair evaluation, openness, sufficient time, integrity, trust, and freedom were mentioned as important characteristics of a responsible research climate. The main perceived barriers were lack of support, unfair evaluation policies, normalization of overwork, and insufficient supervision of early career researchers. Possible interventions suggested by the participants centered on improving support, discussing expectations, and improving the quality of supervision. Some of the elements of a responsible research climate identified by participants, such as trust and openness, are reflected in national and international codes of conduct. Although it may seem hard to change the research climate, we believe that the realisation that the research climate is suboptimal should provide the impetus for change informed by researchers' experiences and opinions.
Background: Concerns about research misbehavior in academic science have sparked interest in the factors that may explain it. Three clusters of factors are often distinguished: individual factors, climate factors, and publication factors. Our research question was: to what extent can individual, climate, and publication factors explain the variance in frequently perceived research misbehaviors? Methods: From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments whose individual results we have reported previously; here we integrate these findings. Results: One thousand two hundred ninety-eight researchers completed the survey. Individual, climate, and publication factors combined explained 34% of the variance in the perceived frequency of research misbehavior. Individual factors explained 7%, climate factors 22%, and publication factors 16%. Conclusions: Our results suggest that perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.
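A minimal sketch of this kind of variance decomposition is to fit linear models on each cluster of predictors separately and combined, and compare the explained variance (R²). The simulated data, the number of variables per cluster, and their names are assumptions, not the study's data or code.

```python
# Sketch of a variance decomposition across predictor clusters (simulated data; not the study's).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1298  # matches the reported number of respondents; the data themselves are simulated
individual  = rng.normal(size=(n, 3))   # e.g., rank, years of experience (assumed variables)
climate     = rng.normal(size=(n, 5))   # e.g., perceived departmental norms (assumed)
publication = rng.normal(size=(n, 4))   # e.g., perceived publication pressure items (assumed)
misbehavior = (0.2 * individual[:, 0] + 0.6 * climate[:, 0]
               + 0.4 * publication[:, 0] + rng.normal(size=n))

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Proportion of variance in y explained by an ordinary least squares model on X."""
    return LinearRegression().fit(X, y).score(X, y)

print("individual factors :", r_squared(individual, misbehavior))
print("climate factors    :", r_squared(climate, misbehavior))
print("publication factors:", r_squared(publication, misbehavior))
print("all combined       :", r_squared(np.hstack([individual, climate, publication]), misbehavior))
```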
Background: The emphasis on impact factors and the quantity of publications intensifies competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but it also has unwanted side effects, such as publication pressure. The Publication Pressure Questionnaire (PPQ) was developed to measure the effect of publication pressure on researchers. Upon using the PPQ, some issues came to light that motivated a revision. Method: We constructed two new subscales based on work stress models using the facet method. We administered the revised PPQ (PPQr) to a convenience sample together with the Maslach Burnout Inventory (MBI) and the Work Design Questionnaire (WDQ). To assess which items best measured publication pressure, we carried out a principal component analysis. Reliability was considered sufficient when Cronbach's alpha exceeded 0.7. Finally, we administered the PPQr to a larger, independent sample of researchers to check the reliability of the revised version. Results: Three components were identified: ‘stress’, ‘attitude’, and ‘resources’. We selected 3 × 6 = 18 items with high loadings in the three-component solution. Based on the convenience sample, Cronbach's alphas were 0.83 for stress, 0.80 for attitude, and 0.76 for resources. We checked the validity of the PPQr by inspecting its correlations with the MBI and the WDQ: stress correlated 0.62 with the MBI's emotional exhaustion subscale, and resources correlated 0.50 with the relevant WDQ subscales. To assess the internal structure of the PPQr in the independent reliability sample, we again conducted a principal component analysis; the three-component solution explained 50% of the variance, and Cronbach's alphas were 0.80, 0.78, and 0.75 for stress, attitude, and resources, respectively. Conclusion: We conclude that the PPQr is a valid and reliable instrument for measuring publication pressure in academic researchers from all disciplinary fields. The PPQr relates strongly to burnout and could also be useful for policy makers and research institutions to assess the degree of publication pressure in their institutes.
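The two quantitative steps described here, a principal component analysis of the item scores and Cronbach's alpha per subscale, can be sketched as follows. The 18-item toy data matrix and the six-item subscale layout are assumptions, not the published PPQr data or analysis pipeline.

```python
# Sketch of the principal component analysis and Cronbach's alpha steps (toy data assumed).
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
# Hypothetical 18-item questionnaire data on 5-point scales; random here, so alpha will be
# low -- the point is the computation, not the value.
scores = rng.integers(1, 6, size=(200, 18)).astype(float)

pca = PCA(n_components=3).fit(scores)
print("variance explained by three components:", pca.explained_variance_ratio_.sum())

# Reliability per (assumed) six-item subscale: stress, attitude, resources.
for name, block in [("stress", scores[:, 0:6]), ("attitude", scores[:, 6:12]),
                    ("resources", scores[:, 12:18])]:
    print(f"Cronbach's alpha ({name}):", cronbach_alpha(block))
```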
Background: There is increasing evidence that research misbehaviour is common, especially in its minor forms. Previous studies on research misbehaviour have focused primarily on the biomedical and social sciences, and evidence from the natural sciences and humanities is scarce. We investigated what academic researchers in Amsterdam perceived to be detrimental research misbehaviours in their respective disciplinary fields. Methods: We used an explanatory sequential mixed methods design. First, survey participants from four disciplinary fields rated the perceived frequency and impact of research misbehaviours from a list of 60. We then combined these ratings into a top-five ranking of the most detrimental research misbehaviours at the aggregate level, stratified by disciplinary field (a sketch of this ranking step follows below). Second, in focus group interviews, participants from each academic rank and disciplinary field were asked to reflect on the research misbehaviours most relevant to their field. We used a participative ranking methodology to help participants reach consensus on which research misbehaviours are most detrimental. Results: In total, 1080 researchers completed the survey and 61 participated in the focus groups. Insufficient supervision consistently ranked highest in the survey regardless of disciplinary field, and the focus groups confirmed this. Important themes in the focus groups were insufficient supervision, sloppy science, and sloppy peer review. Biomedical and social science researchers were primarily concerned with sloppy science and insufficient supervision. Natural sciences and humanities researchers discussed sloppy reviewing and theft of ideas by reviewers, a form of plagiarism. Focus group participants further provided examples of particular research misbehaviours they had been confronted with and how these impacted their work as researchers. Conclusion: We found that insufficient supervision and various forms of sloppy science scored highly on aggregate detrimental impact across all disciplinary fields. Researchers from the natural sciences and humanities also perceived nepotism to have a major impact at the aggregate level, and natural sciences researchers regarded fabrication of data as having major impact as well. The focus group interviews helped us understand how researchers interpreted ‘insufficient supervision’ and provided additional insight into sloppy science in practice. Researchers from the natural sciences and humanities added new, field-specific research misbehaviours to the list, such as the stealing of ideas before publication. This improves our understanding of research misbehaviour beyond the social and biomedical fields.
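The stratified ranking step referenced above can be sketched in a few lines of Python. The toy data and column names are assumptions, and the aggregation (aggregate detrimental impact as the product of perceived frequency and perceived impact, averaged per field) is assumed by analogy with the conference survey described earlier, not taken from the paper's code.

```python
# Sketch of the stratified top-five ranking (toy data; column names and aggregation assumed).
import pandas as pd

survey = pd.DataFrame({
    "field":       ["biomedical", "biomedical", "humanities", "humanities"],
    "misbehavior": ["insufficient supervision", "selective reporting",
                    "insufficient supervision", "theft of ideas by reviewers"],
    "frequency":   [5, 4, 4, 3],   # perceived frequency, 5-point scale
    "impact":      [4, 4, 4, 5],   # perceived impact, 5-point scale
})

# Assumed aggregation: aggregate detrimental impact = frequency x impact, averaged per field.
agg = (survey.assign(aggregate=survey["frequency"] * survey["impact"])
             .groupby(["field", "misbehavior"], as_index=False)["aggregate"].mean())

# Top five misbehaviours per disciplinary field, highest aggregate impact first.
top5 = (agg.sort_values(["field", "aggregate"], ascending=[True, False])
           .groupby("field")
           .head(5))
print(top5)
```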
Research integrity (RI) is usually discussed in terms of responsibilities that individual researchers bear towards the scientific work they conduct, as well as responsibilities that institutions have to enable those individual researchers to do so. In addition to these two bearers of responsibility, a third category often surfaces, variably referred to as culture and practice. These notions merit further development beyond a residual category meant to contain everything not covered by attributions to individuals and institutions. This paper discusses how thinking in RI can benefit from more specific ideas about practice and culture. We start by articulating elements of practice and culture, and explore how values central to RI relate to these elements. These insights help identify additional points of intervention for fostering responsible conduct. This helps to build “cultures and practices of research integrity”, as it makes clear that specific times and places are connected to specific practices and cultures and should have a place in the debate on research integrity. With this conceptual framework, practitioners as well as theorists can avoid using these notions as residual categories that de facto amount to vague, additional burdens of responsibility for the individual.
Research integrity (RI) is a continuously developing concept, and increasing emphasis is being placed on creating RI promotion practices. This study aimed to map the existing RI guidance documents at research performing organisations (RPOs) and research funding organisations (RFOs). A search of bibliographic databases and grey literature sources was performed, and the retrieved documents were screened for eligibility. The search of bibliographic databases and the reference lists of selected articles identified a total of 92 documents, while the search of grey literature sources identified 118 documents for analysis. The retrieved documents were analysed by geographical origin, research field, organisational origin of the RI practices, types of guidance presented, and the target groups to which the RI practices are directed. Most of the identified practices were developed for research in general, and are applicable to all research fields and the medical sciences. They were mostly written in the form of guidelines and targeted researchers. A comprehensive search of the existing RI promotion practices showed that initiatives mostly come from RPOs, while only a few RI practices originate from RFOs. This study showed that more RI guidance documents are needed for the natural sciences, social sciences, and humanities, since only a small number of documents were developed specifically for these research fields. The documents explored and the gaps in knowledge identified in this study can be used for the further development of RI promotion practices in RPOs and RFOs.
To foster research integrity (RI), it is necessary to address the institutional and system-of-science factors that influence researchers' behavior. Consequently, research performing organizations (RPOs) and research funding organizations (RFOs) could develop comprehensive RI policies outlining the concrete steps they will take to foster RI. So far, there is no consensus on which topics are important to address in RI policies. We therefore conducted a three-round Delphi survey study to explore which RI topics to address in institutional RI policies, seeking consensus from research policy experts and institutional leaders. A total of 68 RPO and 52 RFO experts, representing different disciplines, countries, and genders, completed one, two, or all rounds of the study. There was consensus among the experts on the importance of 12 RI topics for RPOs and 11 for RFOs. The topics that ranked highest for RPOs concerned education and training, supervision and mentoring, dealing with RI breaches, and supporting a responsible research process. The highest-ranked RFO topics concerned dealing with breaches of RI, conflicts of interest, and setting expectations for RPOs. Together with the research policy experts and institutional leaders, we developed a comprehensive overview of topics important for inclusion in the RI policies of RPOs and RFOs. The topics reflect a preference for a preventative approach to RI, coupled with procedures for dealing with RI breaches. RPOs and RFOs should address each of these topics in order to support researchers in conducting responsible research.
Background: Research codes of conduct offer guidance to researchers with respect to which values should be realized in research practices, how these values are to be realized, and what the respective responsibilities of the individual and the institution are in this. However, the question of how the responsibilities are to be divided between the individual and the institution has hitherto received little attention. We therefore performed an analysis of research codes of conduct to investigate how responsibilities are positioned as individual or institutional, and how the boundary between the two is drawn. Method: We selected 12 institutional, national and international codes of conduct that apply to medical research in the Netherlands and subjected them to a close-reading content analysis. We first identified the dominant themes and then investigated how responsibility is attributed to individuals and institutions. Results: We observed that the attribution of responsibility to either the individual or the institution is often not entirely clear, and that the notion of culture emerges as a residual category for such attributions. We see this notion of responsible research cultures as important; it is something that mediates between the individual level and the institutional level. However, at the same time it largely lacks substantiation. Conclusions: While many attributions of individual and institutional responsibility are clear, the exact boundary between the two is often problematic. We suggest two possible avenues for improving codes of conduct: either to clearly attribute responsibilities to individuals or institutions and depend less on the notion of culture, or to make culture a more explicit concern and articulate what it is and how a good culture might be fostered.
Most studies are inclined to report positive rather than negative or inconclusive results, yet it is currently unknown how clinicians appraise the results of a randomized clinical trial (RCT). For example, how does the study funding source influence the appraisal of an RCT, and do positive findings influence perceived credibility and clinical relevance? This study investigates whether psychiatrists' appraisal of a scientific abstract is influenced by industry funding disclosures and by a positive outcome. Dutch psychiatrists were randomized to evaluate a scientific abstract describing a fictitious RCT of a novel antipsychotic drug. Four different abstracts were created, reporting either the absence or presence of an industry funding disclosure and either a positive or a negative outcome. Primary outcomes were the perceived credibility and clinical relevance of the study results; secondary outcomes were the assessment of methodological quality and interest in reading the full article. Three hundred ninety-five psychiatrists completed the survey. Industry funding disclosure was found to influence neither perceived credibility nor the interpretation of clinical relevance. A negative outcome was perceived as more credible than a positive outcome (0.43 to 1.18) but did not affect clinical relevance scores. In this study, industry funding disclosure was not associated with psychiatrists' perceived credibility or judgement of the clinical relevance of a fictional RCT. Positive study outcomes were found to be less credible than negative outcomes, but industry funding had no significant effects. Psychiatrists may underestimate the influence of funding sources on research results. The fact that physicians indicated negative outcomes to be more credible may point to greater awareness of existing publication bias in the scientific literature.
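One straightforward way to analyse a 2 × 2 vignette experiment of this kind is to regress the credibility ratings on the two randomized factors and their interaction. The sketch below uses simulated data and assumed variable names rather than the study's dataset; the simulated effect sizes merely echo the reported pattern (an outcome effect, no disclosure effect) and are not the published estimates.

```python
# Sketch of a 2x2 vignette-experiment analysis (simulated data; variable names assumed).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 395
df = pd.DataFrame({
    "disclosure": rng.integers(0, 2, n),   # 1 = industry funding disclosed, 0 = not disclosed
    "positive":   rng.integers(0, 2, n),   # 1 = positive trial outcome, 0 = negative outcome
})
# Simulated credibility rating: negative outcomes rated somewhat more credible,
# disclosure having no effect, matching the pattern reported above.
df["credibility"] = 5.0 - 0.8 * df["positive"] + rng.normal(scale=1.0, size=n)

# Regress credibility on both randomized factors and their interaction.
model = smf.ols("credibility ~ disclosure * positive", data=df).fit()
print(model.params)       # point estimates
print(model.conf_int())   # 95% confidence intervals
```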