A conversation with Karen Stroobants and Noemie Aubert-Bonn
TRANSCRIPT
Jo: So today we will talk about research assessments and what comes and doesn't come with it. I'm here today with Karen Stroobants and Noemie Aubert-Bonn, and, yeah, we'll basically bring a lot of expertise and experience to the topic. Welcome, Karen and Noemie.
Karen: Thank you.
Noemie: Thank you.
Jo: So, to get us started, could each of you briefly introduce yourself: how you got into the topic of conceptualizing research assessment, working on the projects you're currently working on, and why it's so important to you to engage in this? Who would like to start, maybe? Karen?
Karen: Yeah, I'm happy to start. So I'm Karen Stroobants and I'm currently doing a few things; I manage a bit of a portfolio. One of them is that I'm a policy advisor for the Royal Society of Chemistry, actually leading a small team on research and innovation policy. On the side of that, I'm also doing a bit of freelance consultancy, and that is really focused around research culture. And then, since December, I'm also the vice chair of the Coalition for Advancing Research Assessment. All those things relate to each other around the topic of research culture, which is also part of my job at the RSC. And I think the way I got into that topic is as a researcher, actually. So I started my career as a researcher, with a PhD in chemistry. And I got a bit disappointed initially with what the focus was for progressing in your career as a scientist. I was part of a PhD students' organization in Flanders at the time, I'm originally from Belgium, where we were trying to convince the Flemish government to change the way they were allocating budgets to the different Flemish universities, because they were relying on metrics and publication-based indicators. And that was really the start of my interest in research assessment, first at the institutional level. But then I did continue as a researcher for some time; I came to Cambridge in the UK, where I'm based now, to do a postdoc, and still saw a lot of the issues. After my postdoc I decided to change to a career in policy. My first experience there was at the Royal Society, where I was working on a project around research culture. And that really set the tone for the rest of my career, because it reinvigorated some of the ideas I had when I was in Belgium. It's a topic I've found interesting and really important as a researcher, but also now as a policy professional, because I feel we're not getting the best from scientists at the moment because of the way the system is set up and incentivized, because of systemic barriers, I think, also to things like inclusion, and the effects that has on integrity as well. But I also have an interest more generally in success and what success means, because at a much wider level, if we think about, for example, the use of GDP to measure the success of nations, it's almost similar to what we do with the impact factor. It's a metric that doesn't say much, that doesn't include everything, but that we've come to use to define success. So I'm also interested in that broader picture around how we measure success. And somewhere I'm also hoping that if we can solve or address that problem to some extent within academia, maybe that will also give clues for solutions at those broader levels. So that's maybe a few of the things around my motivation on the topic, but I'm also curious to hear Noemie's version.
Jo: Yeah, thank you so much. And I will happily take you up on the question of how we can measure success and what success means in a research context. That's probably also to be answered quite individually by the respective researchers at the project level. But before we come to that: Noemie, please.
Noemie: Thank you. So, a bit like Karen, I think I am in this domain because I got somewhat disappointed with how research works, how academia works. I am originally from Quebec, in Canada. I studied cognitive neuroscience; I did a bachelor's and a master's in that topic. And as I was going on with my studies, I realized that the main focus of my education was to publish papers. So I was asked a lot of things, and I built a lot of skills as a researcher, but the only thing that really mattered for my future was how many papers I had on my CV and how many of them I was the first author on, and that really disappointed me. So I decided to take a sidetrack. I thought it would be just for a few years, but it ended up lasting much longer than that, and I explored why the system is like this. So I first worked in publication ethics as a scientific editor for a few months, and then I did a master's of bioethics looking at research integrity. Research integrity is really interested in how researchers conduct research, also in research misconduct, in falsification of data and things like that. And that led me to the conclusion that, of course, there are some researchers who falsify data voluntarily; they want to reach the top, and they cheat the system a little bit. But most research integrity failures in science happen because researchers are not evaluated appropriately, because they are asked things that are not relevant to the quality and the integrity of science, and to survive in their career they have to do things that do not value good science. So this is what led me to then work on research assessments. I did my PhD on that topic, looking at the impact of research assessments on research integrity and on research practices, through qualitative research, through interviews, surveys, et cetera. After my PhD, I worked as a postdoc on a European Commission project called SOPs4RI, which aims to make a toolkit for research institutions to better promote research integrity. Part of that was about making better research environments, including better research assessment, so that's where I focused most of my efforts. I'm also a postdoc at Hasselt University, and since recently I'm also a policy advisor with Research England, part of UKRI, in England.
Jo: Right. And UKRI is also the UK Network or Agency for Research Integrity. Right?
Noemie: It's UK Research and Innovation. I think you're probably thinking of UKRIO.
Karen: There are a lot of acronyms that are very similar.
Jo: Karen clarified that, thank you. Okay, so when we think about research assessment, and in a way also revolutionizing it, or changing it more towards quality orientation, what many of us and many of the listeners have probably come across is DORA. And you both were also involved and engaged in COARA, which is an acronym for, please help me out, C-O-A-R-A.
Karen: Coalition for Advancing Research Assessment.
Jo: Thank you. And I've listened to you guys on the webinar with Ramo, the researchers,
where you presented basically the planned work for COARA, which builds upon DORA in a way. Or is it an independent and yet integrated follow-up to DORA, where DORA is basically calling for commitment to a change of research assessment more towards quality orientation, and COARA is now facilitating the actual implementation at institutional levels? Is that right, so far?
Karen: So I think DORA is actually, I think, celebrating its 10th anniversary, and they've done really a lot to create awareness around changes to research assessment. I think COARA is really building a bit on DORA, but also on the fact that there's been renewed momentum for this agenda, also in the context of people at a political level realizing that if we want to make progress on the agendas around open science, around inclusion and diversity, the assessment system is actually forming a barrier. So the coalition was really born from, I think, increased momentum, with funders in Europe and the political level in Europe saying, okay, we really want to do something to progress that agenda. And so the first step in that was a consultation by the European Commission of stakeholders across Europe. The second step was to actually formulate an agreement that is, I would say, building on DORA, but going further. So DORA was very focused initially around not using the impact factor; the agreement of the coalition goes a step further and also sets a time frame for action, and I think that's something that's really new. That's where Noemie and I were very much involved, in getting that agreement drafted, and also in the co-creation with all the different organizations who were interested in it. And then the third step, which is the kind of phase we're now in, was the launch in December of a coalition to actually implement that agreement that indeed builds on DORA, but does go a bit further. And that's where we're at now. So the coalition has been launched, I think we have around 400 members now, and the steering board now has the task to ensure that that momentum continues and that the initiative can really see the implementation of the agreement.
Jo: And how do you work with institutions, as far as you have engaged in those parts of what COARA is doing? Because I'm sure there are several levels to the implementation part. For the COARA members, are there webinars or meetings being facilitated where best practices are shared, trying to come up with standards for assessment?
Karen: So in the initial phase, we had these big assembly meetings. That was really for creating the agreement itself. There was also a survey, but in the meetings themselves there were also breakout groups to get the opinions of all the different stakeholders. And just to say that since the start, it has actually been something with the aspiration to be global, so it's not only organizations from Europe that were involved. Now that the coalition is set up, there's actually still a lot that needs to be worked out about how members will be able to engage. One central part will definitely be the option of members engaging via working groups, and the process for applying for those and how they will work is being developed at the moment. But we're also thinking about creating some kind of forum where people will be able to share resources and interact. I must say a lot of that is still really in a development phase, so yeah, we'll need to see in the next months how that develops.
Jo: And understandably so, when we think about the diversity of research topics and challenges and approaches within even one discipline and one field of study. There really are several levels at which assessment can happen.
Noemie, would you like to add anything? Otherwise, I have another question maybe to address to you.
Noemie: Yeah, well, go ahead.
Jo: Okay, this is my question then. So where have you seen opportunities to measure quality? What are other quantitative measures that we can take into account, other than obviously the impact factor and the number of articles, given the high diversity of research outputs and the questions and answers being generated by the various researchers and scientists?
Noemie: Yeah, well, I mean, that's a very good question, and one that is not really resolved at the moment, but where we are building consensus. We know the problem with the system right now: it's not necessarily the way things are measured, but the limited number of things that we look at, and how these things do not show the whole picture of what's happening. We look at the number of publications, we look at journal impact factors, and that gives us very little information about how the research was done. It's really dependent on luck. It doesn't really tell us anything about the project, the research, the researchers, the openness of the science, the integrity of the science. And this is where the problem starts. So we have very few indicators and they're very limited. And if you look at how they're calculated, for example the impact factor, there are problems as well in how it is calculated, in the validity of the impact factor for showing what we think it shows, which is impact. Impact as measured by the impact factor has a very different meaning from what we usually think of as impact. So that's the problem and where it starts, and most people now agree that we have a problem there. How do we then move beyond that, to measure something that shows us more information about the quality of the research? Well, one of the main realizations happening right now is that quality is not necessarily quantifiable. So we're moving towards more qualitative indicators, towards indicators that provide more variety, so that researchers have more room to say why their project matters, why their career matters, and it's more open to the different elements that make up their career paths. Not everyone has the same career path, and not everything that researchers do matters in the same way, and we leave more room for that. I could go on and on, but maybe Karen wants to chime in on that.
Karen: Yeah, I can add to that. Obviously I agree with everything Noemie just said. But I think one of the things we are also trying to come back from is whether we've in the past conflated certain metrics with quality, when that's not really what they tell us about. So a question I've often asked in conversations around the use of quantitative versus qualitative indicators is: are we measuring productivity or are we measuring quality? Because if you want to measure a quantity, it makes more sense to use a quantitative measure, whereas if it's quality you want to measure, it would make sense to use qualitative measures. It's in the terminology for, I think, a logical reason. So that really resonates with what Noemie just said, that there is now a tendency to move more towards thinking about what qualitative indicators we can use, if it is really quality that we want to measure. That doesn't mean there is never merit in measuring productivity or performance, but I think we need to be clear in those instances that that is what we're looking at, and not conflate the two, which I think we often do at the moment. And also agreeing with the fact that it's about broadening, about taking into account more different things. Part of my route into this, as I said, was working on the research culture programme at the Royal Society. And a big outcome from that programme was that if you look at what is central to a lot of the issues with research culture and all the topics that sit under it, it is the incentive system: the combination of very narrow definitions of success with a very competitive environment. And I think that relates to precarity and the way the funding system is set up. So the incentive system is one of the things to change if you want to improve the culture; that was one of the findings. One of the things I worked on when I was at the Royal Society was the development of the narrative CV, which later on was adopted by UKRI. And this is indeed a way to do a more qualitative assessment. So it's very much in line with that movement towards more qualitative forms of assessment that can include a lot more things than what we look at at the moment. So yeah, very much in line really with what Noemie just said.
Jo: Can you just say a few words on what the narrative CV entails? Beyond the template, what components did you add to make it more informative?
Karen: Yeah, so the narrative CV, and Noemie, you might want to add in, because I think she also knows a lot about narrative CVs. So the narrative CV is really about giving space to a narrative, giving space to a more qualitative formulation of what researchers think is valuable among the things they've done. With the initial one, I think we left lots of space for people to adapt it, tweak it, use the elements that they find useful. But the initial one, I have to think now, was structured around four big contributions that you can make. The first one was really contributions to knowledge generation, to science, so the very core of it. I think it did stipulate that where you talk about publications, you only mention the DOI, so no metrics, but it also gave a lot of examples of other things you could talk about. The second layer was, I think, contributions to the development of the people around you. That could be mentoring, it could be working with PhD students, but it could also be supporting people senior to you, so all directions, people directly around you. The third level was contributions you make to the scientific community more broadly. That could be things like peer review, organizing a conference, all those kinds of things, to make sure those are taken into account. And then the final level was your contributions to wider society. That could be where you have examples of impact on society, but it could also include outreach and engagement directly with, for example, patients when you're in medical research. Those were the four levels, and then there were lots of examples given to really inspire people about the kinds of things they could talk about, but also with a clear indication as part of the framework saying: no one is expecting you to do all the things listed here. I think we also need to move away a bit from this expectation that every academic does absolutely everything they could do, and there's a danger with metrics that people feel like they need to tick every box. So it's my hope, and we are already using and evaluating it, but there's more work to be done, that it will help ensure people feel they can focus on one aspect of that big area and not feel like they need to tick every single box.
Jo: Because the biggest objection I hear when it comes to purpose is: why are you doing this research, and what's the societal impact? All the researchers, including myself, who work in basic research just for acquiring knowledge's sake usually have difficulties answering those questions. I don't know where it might be applied eventually, but this is what I'm curious about for now, not because I see any bigger purpose for it. And it took me a while, actually, to discover that it's only throughout the PhD or later, the more you advance in your career, that you become more and more capable of seeing the bigger picture, and how this might eventually influence policy making, or have societal impact in whatever sense, or contribute to solving climate change. At the beginning of your research endeavor, no matter how old you are, when you're still early into the topic, that might not be foreseeable. And for that reason I agree with you: it's probably not necessary to tick all the boxes, but to have space to explain why I, as a researcher, find this worthwhile to pursue as a research question or topic, and why it matters.
Noemie: If I can add to that, one of the big aspects of the narrative CV, and one of the things that I think I like the most, is that it's really not focused on quantity. In other CVs you would have a list of publications, but most narrative CVs, not all of them, but most, limit how many publications you can put in, how many contributions you talk about. So it's really about you reflecting on the content of what you put in there. It's about a reflection on some of your contributions and an explanation of why they matter to you. So it gives a better idea of who you are as a scientist, what matters to you, what areas you find important. Like you mentioned about outreach and contributions to society: some people are really good at that and really want to focus on it, without necessarily being so involved in the first steps of the research. They are good at bringing research out into the open, bringing it into the field, testing, implementing, piloting it. Well, these people are able to write that in a narrative CV, even though there are fewer publications coming out of these types of research projects, where it's mostly about applications in the field, put into real life. So I think it just opens the door to more variety and less hyperproductivity without meaning.
Jo: And then additionally, or maybe even before journal publications: I deal with preprints quite a bit. And for me, a preprint is already something that should be equivalent to a journal publication, because the biggest difference that's being discussed is, oh, it hasn't undergone peer review, which I would object to, because normally it has undergone rigorous review in smaller circles. We don't need to go into the dos and don'ts and what preprints generally are. But what I want to address to you now, and of course you can voice an opinion on it as well, is the redesign of how we have scholarly CVs nowadays and going forward, also listing research output in different forms, as in not only the journal articles. And if those are mentioned, then I would argue, and some of my colleagues in the past have done so, not to mention the venue where it's been published, but just to give the DOI, the title and the authors, and that should be enough. Unfortunately, some of the DOIs also have the actual name of the publisher in them, which can then again lead to prestige building and judgments and assessments based on prestige qualifiers. But also listing conference contributions, not just the conferences attended but conference proceedings; datasets the research has contributed to that were published separately; null results that have been shared in a repository, also with their respective DOIs. Because the purpose of the CV is that a future employer or funder or whatever consortium looks at it can then directly verify the information given through the DOI, actually look at the data, and then have a coherent assessment of the person and their achievements beyond mere journal publishing. Am I going too far?
Karen: I can maybe start with that one. I mean first of all on peer review, I think this could be a whole new episode.
Jo: Peer review reports are also something that's emerging now and being pushed more.
Karen: Within the agreement we did put peer review as still one of the central ways of assessing, because it does allow for a more qualitative assessment, but equally recognizing that there are issues with peer review and that it itself needs to be looked at. That's maybe another piece that fits within the bigger assessment picture. In terms of what CVs should look like: yes, I do think it's indeed about being able to capture other outputs. But not only outputs, also more about the research process, about the competencies that people have. Because you ask, okay, what are people looking for in a CV? And I have had a very different experience outside academia versus inside academia. My academic CV has been absolutely useless for applying for jobs outside. People have told me, when I was using the academic CV: this says nothing about your skills. It's completely output oriented. It doesn't tell me anything about how you operate as a person, what your key skills are, what competencies you have. And there's a question around how useful it is even within academia. Does it really tell us how well placed a person is for a job, for recruitment? Does it tell us whether they will be able to manage a big research group, how well they will do in collaborations, which is essential to much of the research we do now? So I do have questions about it. There are people who are suggesting a kind of output- and metrics-based CV that includes a lot more things: a number for how many open science publications you have, a number for how many engagements you've had with the general public. My take on it is that I don't want to end up with a much longer list of numbers. I think that is really the opposite of the direction we should go in; I just don't think it will be helpful. There is a danger of going in that direction because it's what we're used to within the academic landscape. I've already seen proposals for an indicator for outreach and public engagement: start counting how many open access publications you have, start counting how many open data sets you have. I mean, if you really think about it and reflect on it, what does that really tell you? Not an awful lot.
Jo: It carries the same potential for misconduct, like salami slicing and whatnot.
Noemie: Yeah, exactly.
Karen: I do think that's a risk. So I'm really going more towards the quality side. Not everyone will agree with me; there are people who really think that is the direction, and I've heard them very much defending metrics on everything. I personally don't think that's the direction. But as Noemie said at the start of the conversation, these are all things we're working through at the moment as a community, and there's quite a lot of work ahead of us.
Noemie: And if I can add to that, one of the big things that holds us back a little bit, and that's very difficult to solve: if you have very different profiles of researchers and you allow for that, and you understand that they're very different, that they have different skills, different roles in their teams, different types of outputs, then it's very difficult to compare them. That's where we're still struggling, and we still have a lot of work to do on how to make decisions when we have a lot of profiles. Because we know that in science there are many scientists, many excellent scientists, and I don't like to use the word excellence, but there are many really highly skilled, highly passionate scientists, and not so much funding. So we have to make decisions, and this is why a lot of people push for metrics. But at the same time we know that metrics can be damaging, that they don't show the full picture and that they have issues. There are efforts to create new metrics, partly for these reasons, and there are efforts to embrace more the subjectivity of qualitative data, understanding that all assessments are subjective in a way. Even when you use metrics, you have numbers, you have an impression of objectivity, but it's all subjective in the end. So there are these two views that we still need to work out a little bit, and I think that's for the future, hopefully the near future.
Karen: Yeah. And I think one thing that's also sometimes forgotten: yes, if you go qualitative, it is more work. I think we need to be very honest about that. But there is a question about the frequency of assessment. I think researchers are probably among the most assessed professionals if you compare across all sectors and all the different professions that exist. So I do think there is a question about whether we are over-assessing in terms of frequency. Is there a balance to strike, where solving the issue of capacity is not about saying, okay, let's do something easy via metrics that can be done fast and doesn't need as much capacity from researchers, but rather saying, no, we do think qualitative is the way to go, but perhaps we don't need to evaluate as often as we do? So I think that's maybe something that is not so much part of the discussion yet, but maybe should be in the future.
Jo: There's also pressure because much research is publicly funded, which means taxpayers. So the government has accountability to the taxpayers, and researchers have accountability towards the policymakers, and that needs to balance. And then there's expensive, high throughput research, like in the STEM sciences: physics, astronomy, bioscience, these are all highly expensive and high throughput, and that's where the bulk of research being performed across the world, and most of the financial resources, go. So in that sense, to make sure that the money is well spent, or efficiently spent, which is highly questionable with the current assessment system, I agree, the argument for years has been that we need some sort of metrics. But I'd like to bring in the term slow science, which I came across only recently, I think about two years ago, and I haven't really dug into it. But from how I thought before I even started my studies, or maybe during my studies, towards master's level and then working on the PhD, I thought being a scientist is fun. And there are also things like academic freedom: you have liberty in designing your research. You have a timeline, but it's not as strict as we find in industry research, not as rigorous. So, yeah, I don't know, do you want to comment on that? It's not that I have a specific question, but the concept of slow science, I think, also allows for more quality orientation in the research design, and then also in the assessment, versus having to publish as soon as possible in as many papers as possible. So it's basically the opposite approach.
Noemie: First, I want to go back to one of the points you mentioned just before talking about slow science: that we have an accountability to taxpayers and that we probably need metrics to show that accountability. I hear that very often. But there are two points on this. First, we think that metrics are the way to show accountability, that they are the way to show progress. But maybe there are other ways, and we're discovering that there really are other ways. Right now, metrics might be useful at a very big aggregate level, at a country level, but it doesn't mean that you have to impose that on researchers. And we know that normally, if you assess a country based on publication numbers, then you assess institutions based on publication numbers, and then institutions will assess their researchers based on publication numbers. So there's a risk there. But I think we also have to realize at which level metrics are necessary, at which level they are useful, and whether they really are necessary in themselves, whether they really are enough, et cetera. So that's just one point, not on slow science.
Jo: Yeah, no, thanks for catching this, and I totally agree. It's not that I wanted to push that argument, just to mention that metrics are probably what many people feel is the only way we currently have to assess. But I agree it shouldn't be at the individual researcher's level, more at a kind of higher, overarching level.
Noemie: Yeah. And I think actually the NOR-CAM toolbox, which is one of the, I think, good practice examples we have for putting this more qualitative assessment into practice at the moment, has a very nice graph of where to use quantitative versus qualitative indicators, and it really changes with the aggregation level. If you're looking at an individual, it's almost fully qualitative, and when you look at the biggest aggregate, which is indeed countries, it's more on the metrics level, and it shifts across the levels in between. So I think that's quite a nice conceptual way to think about where what is actually useful. But I do think there are real challenges in actually implementing that, because of how one level influences the next. So it's an important question. I do also think, when we think about accountability, that we are very focused now on the research: what is the merit of the research? And I think sometimes we forget to ask: are we hiring the best researchers or are we not? Really thinking about whether this is the best person for the job, considering all the things they'll need to do in that role: not only the research they do and how good that is, but also how well they will mentor the next generation, how well they will manage a team if that's part of their role. I do sometimes feel like we are forgetting about that as a more holistic part of the assessment. And I do think, if we want to be accountable, we also need to show that if we acquire funding for a person to do a role, we consider all the aspects of that role and whether they'll be suitable for all of them. And I don't know whether we're always doing that at the moment; I would argue that probably we're not.
Jo: Yeah, great. Yeah. I mean, that's another thing: how well are researchers equipped to then lead research teams, starting sometimes with their postdoc? There are quite a few gaps to fill.
Karen: Yeah. And I think in a more holistic form of assessment, it will be easier to take that into account. Of course we'll still take on the people who do the best research, but sometimes that will mean someone who is on par with the other candidates but can also show that they have the competencies to lead a team, or the competencies to do outreach, when that's an essential part of that specific position. So I'm hoping it will go in that direction.
Noemie: Yeah. And also showing that we hire a diversity of people, because right now, to be a successful scientist, you really have to fit that PI profile: you are the leader of a research group, you supervise students, you produce a lot of papers of which you are the last author. But the people in middle positions, the team members and the people who help support the projects, they cannot really become independent researchers unless they follow this path. So a differentiation of skills and profiles is needed. And I think that relates back to: do we hire the best researchers? It's not just one type of researcher that we need, with everybody competing against each other. We need good teams, strong teams, and strong collaboration, and that requires different profiles.
Jo: And also: are the people who engage in research equipped with the skills they need to pursue a career within academia? Meaning, what stepping stones do we need to provide as institutions, or as a sector, academia?
Karen: And I would add to that: also outside academia, because we have to remember that for a PhD student, staying in academia is the exception today.
Jo: Yeah, most PhD students are not aware of that. I keep telling them: most of you will find a place outside academia, and you'll have to. But it's also an exciting opportunity. As for the expectation, myself included, I didn't even consider it; I had no awareness of what happens after. And the only thought I gave it was: I'll probably become a research group leader, and would I be fit for the job? That was my fear, rather than an expectation.
Karen: Yeah. And I do think there is still this stigma around failure if you don't end up on that one pathway. So there are a lot of links across, and I do think the incentive system plays a big role in it. My personal vision for how we can alleviate it is that we actually get much more diversity and much more mobility between sectors, so that within academia you can hear firsthand from a supervisor who has been in industry or in the public sector, that you get that mobility. But again, I think changes will be needed to the assessment system to make that possible, because at the moment, if you are in the public sector, in industry, elsewhere, where publishing is not a requirement for anything you do, it's almost impossible to have a CV that would get anyone in academia interested. Whereas hopefully, if we can move away from that to some extent, there will be merit in understanding the competencies of someone who's been elsewhere and what they can bring to academia. And those could be the kind of mentors who can talk to PhD students about all the other options they have in their career. But now I'm looking very long term; we're not really there yet, but I'm hoping things will move in that direction and that it will also alleviate some of the problems there are now with transitioning out of academia. I've done it and I found it quite difficult; it was quite a struggle to understand how other sectors work, what counts in recruitment. So I do think there are already smaller things we can do today to prepare PhDs and postdocs better for that transition. But I do really hope that in the long term there will just be much more interconnectedness between sectors, also in terms of the assessment system, so that those interventions become less needed.
Noemie: I couldn't agree more with that. I think that's a real priority for the future, and something where we're not there yet. It needs a big culture change, but it's really crucial.
Jo: Yeah, just coming back to the question, or the concept, of slow science. My common understanding of academia was that it's supposedly less stressful, less output oriented, which it very much has become over the past two decades. Really, I think it's become sprint-like compared to industry, which is clearly committed to developing products and therefore has to be output oriented, as in producing results which can then be turned into products or serve society.
I feel that the lines have blurred too much. And maybe it's an opportunity also to redefine what academia is supposed to be, what role academia is supposed to fulfill, where we have so much bioscience and STEM research which is very similar to industrial research, and also engages with industrial research when it comes to implementable or applied research. The lines are much more blurred than just two or three decades ago. Where is academic freedom in such systems? And can we claim academic freedom back, as in providing space within academia for conceptualization, for experimenting, more than there is in industry? What's really the selling point of academia?
Noemie: I just wanted to say I agree: academia is less product oriented, but by definition the product has become the scientific paper. So yes, we don't have something to manufacture and implement straight away, but we have become a paper producing machine, in a paper producing industry in a way, and this is the goal at the moment for most careers. There's more room to have results that are not as applicable, but we still have a problem.
Karen: Yeah, and just to add to that, I think that perception of academia and industry, I would say, is probably no longer true, and there is indeed the product of the publication that is strived for in academia. And in industry a lot has actually happened, probably more on the HR side, where I think a lot of progress has been made in terms of culture. Not in every company, obviously; as in academia, we're talking about a broad sector and we're generalizing. What I do think is a big difference, from my own experience outside, is that there is much more focus on how you contribute to a team effort, to a collective achieving of objectives, whereas academia is still very much focused on individuals and how you as an individual perform. So I think that's a big cultural difference, and it does make it difficult to focus on the team effort and on collaboration. And there are lots of conversations around collaboration versus competition, much more in academia, I would say, than in any other sector I've had experience in. So I also feel that to some extent academia is a bit behind in implementing some of the quite progressive initiatives that some companies have on inclusion and diversity, on good HR practice, on diversifying the types of profiles they need as part of a team. So I do think that is something where there are lessons for the academic sector to learn from outside. Equally, I do remember the things that I loved about academia. I did love the flexibility. That didn't mean less work pressure, because it often meant working all the time. But there was lots of flexibility about how you fill your portfolio, what conferences you choose to go to, much less need for sign-off from someone, much more freedom to follow your own interests. And I do think, with all the changes we're proposing, that is something important to preserve, because it's a very attractive element of being an academic that you can make some of those choices.
Noemie: So there's, I think, also a big challenge now around how you implement change, maybe learning from other sectors where they're ahead, but without losing some of those unique, really great things about academia. And that's not something we have an answer to yet, but hopefully we are working towards it.
Jo: Right. We said in the beginning that we would talk about how we can measure success, and what is success? I think we've touched on a few things here and there, but just in case the listener was waiting for us to answer that: personally, I think it's very much a personal measure, but maybe it can also be institutionalized. So maybe we can each share our personal measure of success. I had a similar question when it came to happiness: what makes you happy as a professional and as a human being? And I just want to mention that a couple of months ago I had a podcast with Chris Long, Simone Sacchi and Rebecca Kennison, where we were looking at values alignment in the academy. They had published an assessment from a program across twelve United States universities where they asked university staff, not the researchers, but mostly at the librarian level. And there was a big complaint that people had lost purpose in their work and had become very much oriented towards metrics and metrics-based approaches to assessment, and they were hungry to align their work again with their personal value systems. So in that regard, coming back to the question of how we can measure success: isn't it also very much purpose and values oriented, and shouldn't that also be a measure, or a component to address, at institutional and standards levels?
Noemie: I can start with this, just because part of my PhD was to look at success in science. I interviewed several stakeholders: researchers, research funders, policymakers, PhD students, research staff, different types of profiles within academia. And we spoke about success in science. One of the things I realized very quickly is that people talk about success in careers and then about success in science, and to them it was different. So there are things that you do to survive in your career, to be successful, to be promoted, and there are things that you do to make science advance. For example, you collaborate, you open up your science, you take the time to make your results reproducible. This is all advancing science, but it's not advancing your career. So I really found this situation…
Jo: It will be…
Noemie: Definitely. But I find it really interesting to talk about success, because there's really this dichotomy, and that's where you realize we have a problem in assessment. So that links us back to the assessment. And as researchers, our perceptions of success are very personal. In my field of research, I work on research assessment, research careers, changing academia, so my view of success is to actually change something, to make something that will have an impact on the system and make academia better. Other people have other goals: they want to advance a small field of knowledge, or they want to contribute, to create a network. So it's very individual.
Jo: Let me just add an anecdote to this, because I will never forget a colleague of mine, a fellow PhD student from when I was in Mexico a while ago. I think it was three or five years after we had each completed our PhDs, and he was on Facebook. Sorry for dropping a friend's name, but he literally wrote: it's not often that I share anything about my work here on Facebook, usually it's for personal stuff. He was a basic researcher like me, and he studied genes. And it turned out that other research groups had looked into a gene he had characterized with a genetics approach, which allowed teams who were looking into a childhood disease to characterize it further, identify the gene as a cause, and therefore now be able to develop a treatment for a rare disease in children. And when he found that article, with his work being cited there, it gave him such an unexpected and overwhelming feeling of accomplishment that he had actually contributed to eventually saving children, or helping to make children's lives better, which he couldn't foresee as a basic researcher. But isn't that why we do research? Not knowing what we might contribute to, for some of us, and for those in applied research, having a specific purpose and cause to pursue? And isn't that also something that we should measure, or rather, hopefully mostly qualitatively, support? What's the best approach to allow the researcher to achieve that goal for humankind or society, in a specific discipline or region of the world: to gain knowledge, to do better on this planet with the other species we share it with, but also amongst us people?
Karen: And that's a great example that would fit perfectly in a narrative CV. And the beauty of it is also that it doesn't refer just to his publication; it refers to others' work, it shows how collaborative it is and how we all can sometimes contribute a small piece to a much bigger goal. So indeed, those are the kinds of stories we'll hopefully be able to tell a lot more in the future. It's an interesting question, this success, and it is very individual. To come back to your point around values, I think it's worth mentioning the SCOPE framework developed by INORMS, which is basically a framework to help anyone who evaluates research to really reflect on why they do it, how they do it, and to evaluate how they do it. The S of SCOPE stands for "start with what you value". So think first about what it is that you value, then think about how you can evaluate it, probe it, and then evaluate the evaluation itself at the end. So there is definitely a very close connection with values, and I think sometimes we've forgotten to reflect on that when we set out how we do an evaluation, in research but elsewhere as well; there's a wider story around connecting values with assessments. And just personally, thinking about success, it comes down, I think, to what you say. It's about helping people, first of all, but in a context where it doesn't damage other beings on the planet, or the planet itself. For me that's definitely a driver in all the work I do. I do hope that it helps other people and society more generally, even if in very small steps, and I think everyone working in research culture knows about the setbacks that are part of it, the challenges, but also about accepting that it goes in slow steps. And I think what makes me happy in my job is that I very often get messages from people saying, I'm so happy you're working on this, or, it's so great to see some of the work you're doing. That really helps to keep me motivated when I also get the other messages; there are disagreements as well.
Jo: We've always done it this way, we cannot change, because that's how we measure.
Karen: Yeah, and there is lots of critique, and it's understandable because it's an important topic. But I also often say to people who are starting in this field: it's good to get critique as well, because if you don't, there is a question around how important what you're doing is. You're working on a topic where there are different opinions, and that's probably a good thing.
Jo: It helps to sharpen the narrative and to really question whether it's the best approach we can take to call for change, and what's the most efficient way to change without causing too much damage along the way.
Karen: I think one of the things we've been thinking about is unintended consequences. For example, young researchers will be caught up a bit in a transition where they will need to navigate: okay, what is it you're asking of me? What will be the requirements for career progression? And we need to be very careful with that. So I do think there are questions around evaluating what we're doing, making sure we do it together with those who will be assessed. So there's definitely some of that, but I'm very confident that change will be a positive thing in this context.
Noemie: Yeah, we know we have a problem, so if we keep the status quo, we will just keep that problem. So it's about trying things out and being open. There's also a big element of this reform of research assessment that's about revisiting what we've changed and how it changed the system. Has it improved things or not? That's also really important to consider. And just to come back to early career researchers and what we were discussing about success: I think there's broad agreement about success being something that helps society, that helps the world. But we also have to be careful, because for early career researchers this may come in 15 to 20 years. So we also have to have measures in place, ways of acknowledging future success that does not exist yet, so that we don't forget the early career researchers, those who are just starting and do not have anything to show, who didn't have any influence on policies or major documents yet because they just started.
Jo: Right, cool. Okay, how can we conclude this? Maybe on the most positive note possible. I think we're on a good path, because the change is here, it's happening, it's ongoing; we are just rearranging the puzzle to make every piece fit again in a new format. There are not really standards that can be established at a meta level, but there are incentives and assessment points, or items, I don't know what the taxonomy would be. But I think the awareness raising has happened, so thanks to DORA and similar initiatives, the need for change is obvious and most people in the field are willing. So maybe to conclude this episode: what are one, or if you want two or three, things that we all as researchers, in whatever capacity, can focus on now as a take-home message? What can we do, each of us? As with climate change: just shower less, or whatever. What are simple steps that we can all take to help the change happen and to also support ourselves and our own career prospects along the way? Maybe just one or two.
Karen: Yeah, I can start with one or two. I think one thing that everyone in the research system can definitely do is help to change the narrative. One example that I really like: a lot of the questions researchers ask each other are still very much driven by the incentive system. I remember conversations where, if someone said, "I published a paper, we're celebrating", the immediate response would be: "In which journal?" You can ask a different question. You can ask what the research is about. Who will it be important to? Are there other groups, you know, that will build on it; have you been in touch? There are just many other questions you can ask, and I think that will already really help change the narrative. So that's, I think, a small thing everyone can do. And then the second thing: we've talked about lots of examples of good practice and initiatives that are ongoing. Everyone can help raise awareness of those with colleagues within their institution. So I would say those are already two starting points that every researcher can contribute to.
Noemie: Yeah, I completely agree. I think these are the best examples that we can all relate to. So talking about it is really important, and also trying to think about the way you think about it. For example, one of the things I noticed through my work is that I'm someone who has studied and worked on success in science, issues of the scientific system, the wellbeing of researchers. And then I realized that in my own work habits, my work practice, I'm someone who overworks, who works evenings, who works weekends, and with all of these things I'm sort of promoting what I say should not happen in science. So question your own practice: sometimes put what you do on the table and ask yourself, does that fit with my values in science? What do I value in science? Does what I do actually fit with my rationale of what I want from science? It's sometimes good to have these personal reflections and to make changes in how you do research and how you are with yourself.
Jo: Yeah, I think that's another major misconception, because for me too, the understanding was that if you are a scientist and you want to be successful, you need to give it 150%, if not 200%, meaning working weekends. And there's an unwritten rule in many institutions: of course you come in on the weekend, even if it's just for two or three hours. I didn't question it; it didn't occur to me to question that. I just felt exhausted. Taking holidays also wasn't really a priority for me, just because the pressure was so real. And I think that's an issue with academia and the culture that we have, with expectations that are unwritten or sometimes spoken out loud: are you sure you can take a holiday at this stage in your research? Can you not do that later, once you've written that manuscript? And that might take a few months. We need to establish a healthy work-life balance, whatever that means for each of us. We are not machines, we're not robots, we're human beings. We are biological systems that need to recharge; we run on the energy in our bodies, and we need to eat, have conversations outside the research context, and nurture friendships and family relations. All of that is important in life and part of being human, and it keeps us happy. There's also a whole section in this podcast about wellbeing: it keeps us happy and also functioning in the research context, and therefore purpose driven and purpose functional in a way, as you mentioned.
Karen: It links all together.
Jo: Right, yeah.
Noemie: I think we have the advantage that these rules are not written, so we have an opportunity to rewrite them in a way that suits researchers.
Jo: Let's just build a different culture and narrative of how we want to work in academia as scientists, and the awareness is here. Open science also reminds us of the values that are important, which are fully aligned with research integrity and best research practices as they were postulated two or three decades ago in various institutions. So we're good at the research; it's just about doing the work and implementing. Okay, thank you.
Karen: We'll get there. Yeah, I think we will get there.
Jo: Sure. Yeah. And we are the change; each of us is a leader, not only the three of us. Okay. So thank you so much for joining me today, and speak soon.
Karen/Noemie: Thank you.