
Transcript - The good, the bad, and the unforeseen consequences with the release of ChatGPT

A conversation with Martin Delahunty about ChatGPT

Published on Feb 11, 2023

Transcript

Jo: Welcome to another episode of the conversations we're having here at Access 2 Perspectives. And welcome back, Martin. I'm so glad you agreed to join me again on record for yet another conversation from one podcaster to another. So, everyone, meet Martin Delahunty again. Welcome back.

Martin: Thank you, Jo, and nice to be back on your podcast. I've taken a bit of a sojourn from my Inspiring STEM podcast, but we've got a few episodes lined up now for this month, so I'll be back on the airwaves alongside your Access 2 Perspectives podcast, and we'll probably be covering similar topics, such as the one we're going to talk about today. So yes, looking forward to our conversation.

Jo: Yeah. And I saw the announcement you made for the next season, the next round of episodes coming from Inspiring STEM, and I find it genuinely inspiring. As you said, we have a lot of shared interests, and yet each of us has a different approach and a different background in what we've done with our careers so far. I think the common denominator is our passion for science and growth, for the opportunities, the technologies, the people and the stakeholder groups involved, and for making all of that accessible to a wider audience of listeners. In that sense, the conversations we're having in each of our podcasts might seem similar, but they're still quite different in nature and thus highly complementary. So, very inspiring. Thanks for doing this show.

Martin: Thank you. And yes, we have mutual objectives in advancing best practices in open science and open publishing, and in sharing insights, particularly from innovators who are working hard to do the right thing while doing very interesting things at the same time. So, as a world exclusive: the Inspiring STEM podcast episodes kicking off this month and this year will feature some very prominent Australian scientists and people involved in advancing science, communications and open access from Australia. Look out for that.

Jo: I'm already geared up for listening. So, today's topic is the famous and infamous ChatGPT, developed and presented to all of us by a company called OpenAI. AI is not new to academia and scholarship; it is mostly deployed by the corporate publishers to facilitate all kinds of things, like finding and recruiting reviewers for the quality assessment in journals. And you probably know more from the inside about the use of AI in scholarly publishing in particular. But now we have this chatbot. And is it a chatbot? What is it? What's the term that most easily defines what we're looking at and using?

Martin: It's a generative text tool. As none of us can avoid the noise, the interest and the trepidation around ChatGPT, it's important also to try to understand what exactly it is and what it isn't. And I, along with other people, am just beginning to really get under the surface, under the hood, of what it is. But as I think we'll discuss in this conversation, at the end of the day it's just another tool, one of many, many tools that might have machine learning or text mining capability, automated tools, but it still requires human intervention. It happens that ChatGPT has had an extraordinary global lens on it, partly, I would say, because they have spent time, money and effort in promoting it. And the financial support behind it, which is quite extraordinary at the moment, really emphasizes the need to push it out to as wide an audience as possible, so that all of us who are interested can try to break it. Because, like any new software tool or service, it's in the interest of the producer to make sure that it's tested to the nth degree. And that's what ChatGPT is experiencing right now. We are testing it to try to break it, to find flaws, to give feedback on what we like and what we don't like. And all of that will then feed into OpenAI's development of ChatGPT version 4. Right now we've got version 3.5, which makes you wonder: where was version 1? Because I certainly didn't pick up on version 1. But looking back now, you can see that version 1 was originally launched in 2018. So it's been up and running for four years, building up its capability, and now we've got this big burst of global interest across every sector, not just science or business, but the arts and humanities as well, which is really interesting. And Microsoft has invested $10 billion, $10 billion, in OpenAI and ChatGPT.

Jo: Say that again. Like, $10 billion? You already said it twice, but it's a number, an amount of money, that is really difficult to grasp. What other things get an equal amount of financial investment? It's like the state budget of a Western country.

Martin: Yeah, they are extraordinary amounts of money, and obviously you've got billions invested in Twitter, courtesy of Mr. Musk. Ten billion from Microsoft is a significant investment, but not an extraordinary one for them. It just shows that the interest in developing the capability, the motivation, is there for companies like Microsoft to make it work and make it work better. You'll find lots of people who are much more tech-savvy than me talking about the ins and outs of AI and the AI industries, but one of the postulations is that Microsoft is investing in ChatGPT to challenge Google. We know how extraordinarily prevalent Google is as a search and discovery tool. It had many competitors at the very beginning of its life, including Bing, which is Microsoft's and which is still going. So one assumption might be that, as part of a broader return on investment for Microsoft, ChatGPT might become the challenger and take over from Google. It's not billed at the moment as a search tool, but you can see the potential for it to become one.

Jo: Yeah, I think it takes Google, or the way we search the internet for information, to the next level, in that it takes a semantic approach and connects the dots, which it sometimes fails at. We should maybe also highlight that many people have asked it questions where it brought up references and put together information that seems to make sense, and in many instances those were just made up, fabricated. And that's highly, highly misleading.

Yeah. So in that sense, I think we're going to appreciate how it's not only listing sources of information but also contextualizing them, which comes closer to how the human brain works and likes to be informed, as we know from conversation. It's actually a conversational approach to information sharing. And I think it also requires the person who puts in the query to have a certain amount of contextual knowledge, to ask an informed question so that it can produce a meaningful response. That's interesting. But yeah, I think we all agree that the fabrication problem is still a huge flaw and should be treated with caution. I think good practice is to keep trying to find solutions to the gaps we find along the way. The way these flaws were discovered is that people were asking: oh, that sounds interesting, can you give me the source for that? And then the chatbot replied: oh, sorry, there's no source, I actually fabricated that. And it then actually apologized, which is interesting. So we should always add that request: give me information about this, and also cite the sources that you're using.

Martin: But as a generative tool, it's not parsing information that's already out there, not presenting a synopsis or a chunked-down version of what's out there. It's generating new text based on underlying data sets. A big part of its training data, which is now getting up to trillions of documents across the web, is freely accessible content. Wikipedia is one data set, so it draws upon Wikipedia, but it's teaching itself to present information that makes sense. And to make sense of maybe three or four sets of conclusions that ChatGPT might generate around a search or a question, it has learned through moderators. There are human moderators who tested versions 1, 2 and 3: they asked ChatGPT to produce three or four responses, or however many, and then it was human intervention saying, well, actually that makes sense, or that doesn't make sense, and tagging it. So there is a human tag behind all of this. It's not a sentient AI. And in terms of that moderation, there's the level of moderation that says a human being would or wouldn't make sense of this, and then there's the other moderation, to filter out whatever is potentially racist, sexist and so on in what's out there on the web. Again, they have human moderators to do that, as they do, or used to do, on Twitter. For example, they employed 200 people in Nairobi just to filter the data sets and remove anything unsavory. So there are still human beings involved behind this tool. And when we get to using it, we should always just see it as a tool. But clearly it has the potential for malevolent purposes, because it's such an easy tool to use, for example, to create what looks and feels like an original scientific research paper. This is where you're now getting the responses from Nature and Science magazine, who have already moved very quickly to putting out new guidelines for how to acknowledge use of ChatGPT. If it's used as a supporting tool, as you might use Google, or you might use..

Jo: Grammarly?

Martin: Or a literature tool, then it should just be listed as a tool.

Yeah. So Nature and Science have put together these ground rules for the use of ChatGPT, and they're very aware that everybody, including scientists, is diving into using ChatGPT and seeing how it can be used for good, but also for bad. Clearly there's a case here for more prevalent paper mills using this technology. At the moment there's good research around fraudulent paper mills that create papers plagiarized from other papers. There are software tools that just parse a full eight-page article and chunk it down to four pages, and then it slips through. It's the same paper, but it's been summarized, and the tools to do that are already prevalent out there. This is one tactic for these malevolent paper mills when submitting to journals, and the papers aren't picked up by iThenticate or other tools, so they go under the wire. And again, ChatGPT doesn't have the same exposure to plagiarism detection, because it's creating new text. It's not plagiarizing or chunking down or parsing content that's already out there.

Jo: But it might. And there's the issue you mentioned: there have been papers submitted that list ChatGPT as a co-author, and that's when the publishers were put in a corner, having to make a decision whether they wanted to or not. In the end it's just treated as what it is: a tool, nothing more, nothing less.

Martin: Yes

Jo: And I think, just as we now have a taxonomy for author contributions, with such versatile tools at hand we should probably have a similar taxonomy to specify to what degree a tool was deployed to create the paper.

Martin: Exactly. As far as I know, there are four research articles or academic journal articles that credit an AI tool as a co-author, and then, more recently, ChatGPT. There's a journal called Nurse Education in Practice. It wasn't an original research paper, it was an editorial, but it credited ChatGPT alongside the human author. Of course, that created a lot of noise. There's no such thing as bad promotion, so everybody's talking about Nurse Education in Practice now. It's a very good journal published by Elsevier. But obviously everybody's focused on that, saying, well, how can you have an AI tool as a co-author? And I think they let it run for a while, because why not? You're getting a lot of attention for that specialist journal. But I've got a quote here on my screen from looking at it recently: the journal's editor-in-chief actually said that the credit for ChatGPT slipped through in error and was an oversight on their part. Now, okay, I'm not too sure about that.

Jo: But I mean, the journals have their own editorial boards, so to some degree they also make their own decisions, or they might have varying degrees of editorial freedom within a given publisher's framework.

Martin: But what I like about ChatGPT, and the whole range of discussions around it, is that it's throwing a lens back on long-standing problems, long-standing problems around research integrity in publishing. In the last number of years there have been quite a number of papers on gift authorship. This is where an academic will do an experiment and write up a paper, and then may be asked by their head of department, or by some learned academic who has had no involvement in the research, to credit them on the paper. Gift authorship is still very prevalent, and it raises questions about the standards and accountability required to be an author on a scientific research paper. And I liked how Magdalena Skipper, editor-in-chief of Nature, responded to ChatGPT: she reiterated the standards for authorship and said, and I quote, that an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to learning machines, and that authors using these tools in any way while developing a paper should document their use in the methods or acknowledgements, as appropriate. It's good to throw the lens back on what it is to be an author and to be accountable for the work, because that has been a long, long-standing issue. I also work quite closely with medical publication professionals and medical writers, who have tried to move forward from the stigma of ghost authorship, where communications agencies and medical writers write on behalf of an academic or lead investigator on a paper and aren't acknowledged as authors. There's now a set of standards that has been in place for over ten years, the Good Publication Practice guidelines, which state very eloquently and in detail the accountability and requirements for an author. And that means that in the publication of clinical trial papers supported by medical writers, who do a fantastic job, where a medical writer has put significant effort into the writing of the paper, or into looking at the data from the research, distilling the data, drawing conclusions, working with the academic, they're credited as an author. So we've moved away from that stigma of ghost authorship and lack of transparency.

Jo: I'm new to these kinds of questionable authorship practices, but I'm aware of the issue of paper mills. It's a term that's highly judgmental, though maybe it also points to a service that's genuinely needed within academia, where there just aren't enough resources to cover the work. So of course things like this happen: services like writing up a paper, or contextualizing the information that's been gathered as research output, get delegated, and the researcher can still proofread. If such a service operated with integrity, it wouldn't be a paper mill; it would be a service entity, corporate or nonprofit, following some sort of revenue model, and it could be beneficial for the academic machinery at large. So I think these fraudulent practices, while they are to be seen as fraudulent, are often born out of necessity, out of the pressure points that lead to gaps within the system, gaps which keep growing. The number of papers being published in a day around the world is just insane, so we need more personnel, more expertise, more professionalism to tackle all these aspects of what it means to produce research output. My question was: what is ghost authorship? Would that refer to academics, sometimes within the same research group, who write the paper but don't get acknowledged because they're too junior, or too peripheral to the actual execution of the research project, to be regarded as an author? What scenarios have you seen?

Martin: Ghost authorship, again, is a practice that, within the clinical trials and medical publications worlds, we've moved away from almost entirely. It would be where an academic is billed as the author of a paper, but the paper is actually written by a professional medical writer or a medical communications agency. And because those writers aren't visible, they're ghosts: they're not accountable for the work, they're just getting paid. What we have now is medical writing and science communication teams that work with academics to process data, help interpret the results, and format the article into a good piece of science communication, and they're acknowledged, so it's a fully transparent process. The author who is a medical writer, whether independent or working with a company, will be accountable for that work. Particularly for clinical trial publications, or any clinical publication, you want all the authors to be accountable for something that can potentially indicate a new drug, an intervention, or the management of a disease. And if it's based on poor or fraudulent data, then the authors are accountable.

Jo: Right? 

Martin: Because the consequences of getting that wrong are that people will die, and there are plenty of examples of that. In subjects like mathematics and physics, hopefully nobody's going to die because of a poor paper. But authorship should really be fully transparent. The authors should be accountable, should be able to stand behind the data, be queried about the data in the paper, and take full credit. And to your point: if early career researchers are working with senior academics and doing the work but not being credited, well, that's not good. They should have full credit and accountability for their contribution. And again, I would point people to the Good Publication Practice guidelines, abbreviated to GPP. They were updated last year, so the current version is called GPP 2022. If you Google that, you'll find it. It's aimed at medical writers and those working on clinical publications, but it's a very detailed set of guidelines on attribution and accreditation for authors, on what it is to be an author. And I'd really like to see more of that across all the academic sciences, and the social sciences and humanities, wherever people are publishing, so that, again, authors are fully accountable.

Jo: We also have COPE, right, the Committee on Publication Ethics. Is that complementary, or is it more or less the same as what the GPP puts out?

Martin: That's complementary. So, again, that has for a long while given guidelines on what it is to be an author. And then we have, again, from a clinical world perspective, the International Committee of Medical Journal Editors have always had very clear and explicit guidelines on what it is to be an author and how authors should be accredited and attributed to papers. 

Jo: Which means, again, that AI cannot be an author because it cannot be held accountable. And even if it could be, it shouldn't be, because it's a machine that's human-made. If anyone, it's the people of the company who built it and contributed to it who should be held accountable. So we still need human beings; in the worst case, we can't sue an algorithm.

Okay, assuming the best intentions, and also seeing the benefits it brings to the table, alongside the potential downsides we already highlighted, like fabricated information: even with human-made claims, we should always treat information with caution. And we've seen that ChatGPT actually apologizes, in the second and third iteration, for having fabricated something and not telling us up front; but it's still a machine, an algorithm of some sort. So what are the use cases in academia that we see? It can produce contributions to papers, and that should be mentioned, maybe under the methods section, or in an extended methods section covering the actual paper-writing process, stating that not only the experiments but also the paper writing made use of ChatGPT, for a given purpose, and that the output was then vetted by the team who are actually listed as co-authors and core contributors.

Martin: Exactly. And I think where it can and will be a legitimate and very helpful tool is for authors of complex original research papers who are then asked to produce a scientific summary, a plain-language summary, to ensure that the objectives and the output of the work are communicated to a wide audience beyond just the scientific research community: to the public, to policymakers, to government. Again, coming from a clinical and medical point of view, over about the last ten years the stakeholders involved in medical communications have been looking at developing plain-language summaries, and at advancing them so that they're at the optimal level for communicating to a particular audience. If you define your audience as patients, or government, or policymakers, there will be different levels of vocabulary that you use to communicate optimally to those three. So ChatGPT would be a perfect tool to say: from this complex original research paper, and from your learning across trillions of data points, please produce a 250-word plain-language summary that will make sense to a patient, or another one that makes sense to a government policymaker. I think that will be hugely valuable, because it's still a bit of a challenge to get the tone and the language right.
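For illustration only, here is a minimal sketch of how such a request might look in code, assuming the OpenAI Python client; the model name, prompt wording and abstract placeholder are assumptions for illustration, not something specified in the conversation:

```python
# A minimal sketch, assuming the `openai` Python package (v1+) and an
# API key in the OPENAI_API_KEY environment variable. Model choice and
# prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

paper_abstract = "..."  # the complex original abstract would go here

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any capable chat model works
    messages=[
        {"role": "system",
         "content": "You write plain-language summaries of research "
                    "papers for patients with no scientific background."},
        {"role": "user",
         "content": "Produce a 250-word plain-language summary of this "
                    "abstract:\n\n" + paper_abstract},
    ],
)

print(response.choices[0].message.content)
```

Swapping the system message, for example to address policymakers instead of patients, is how the same paper could yield differently pitched summaries, as Martin describes.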

Jo: Yeah, it's an enormous challenge, and I think as scientists we have been untrained, or we have unlearned, how to communicate with common people. Oh, sorry, we are all common people in one way or another; I mean people who are specialists in other, non-academic things. It's getting worse the more I try to rectify my wording. What I'm talking about is what's commonly known as lay summaries. I think researchers have been conditioned to think that writing has to sound very technical to be seen as scientifically valuable information, so we've unlearned how to communicate with other societal stakeholders. That's what I was trying to say. And for that, I agree, it's brilliant, because we're currently still running a project with AfricArXiv where we translate English research articles first-authored by African researchers, across disciplines. We thought we would translate at least the abstracts, and maybe some parts of the introduction, into indigenous African languages. It turns out that's not possible, because the texts are too technical: too many acronyms, too many research-topic-specific terminologies. So science literacy organizations in Africa took on the effort of first producing summaries that are comprehensible to humans, to non-scientists. So I agree: that can be nicely done by ChatGPT, and the summaries can then be translated into whichever language, which is another passion topic for me, serving multilingualism better in academia. Have you seen any non-English content being produced by ChatGPT, or is that for a later stage or a later release?

Martin: I personally haven't, but I'm sure that's been tested, so, like you, I'd be interested to see how that works in practice. I know it's been tested, I just haven't seen any discussion around it.

Jo: But even if not, there are other AIs, natural language processing algorithms, that are quite powerful. Not for all the languages we have, but a good number, maybe 10 or 20 languages, can be translated back and forth relatively well for lay summaries, because we have enough digitized content thanks to Wikipedia and other standardized text corpora online. And another use case I'd like to see it embark on is producing reviews of research topics.

Martin: Yes

Jo: Because that was something I was very passionate about as a PhD student: digging into a research topic, learning the variety of thought processes that go into it, the complexity of viewpoints, the methodological approaches you can take, the branches it reaches out to, and then the conclusions researchers from different labs and groups draw about a certain topic with a specific approach. And for summarizing all that, I think ChatGPT is much better placed than an individual researcher, or even a group of researchers, because it can simply process more information. And then the group of researchers can go in and verify the information that's been produced by the algorithms. I'm using the plural, assuming it's a combination of algorithms in action; but again, I'm not a tech person, though I keep trying.

Martin: I agree. I think you'll see a range of text-enabled tools, not just ChatGPT, that will facilitate what you were just describing. Doing a state-of-the-art review is always a huge burden for an author, because it's a huge amount of effort, and you want to make sure that you're comprehensive in your review of the references you're searching.
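As a purely illustrative aside, here is a minimal sketch of how such tool support might look, again assuming the OpenAI Python client from the earlier example; the abstracts, model and prompt are hypothetical, and the output would still need the human verification Jo describes next:

```python
# A minimal sketch of drafting a short topic review from a set of
# abstracts. The numbered-source prompt is one way to keep the draft
# verifiable against the originals; all inputs here are placeholders.
from openai import OpenAI

client = OpenAI()

abstracts = ["...abstract 1...", "...abstract 2...", "...abstract 3..."]
numbered = "\n\n".join(f"[{i + 1}] {a}" for i, a in enumerate(abstracts))

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[{
        "role": "user",
        "content": "Draft a short state-of-the-art review of the topic "
                   "covered by the abstracts below. Cite each abstract by "
                   "its [number] so a human reviewer can check every "
                   "claim:\n\n" + numbered,
    }],
)

print(response.choices[0].message.content)
```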

Jo: What I also wanted to add is that the review team, the actual researchers, or the individual researcher who has ChatGPT summarize the topic, can then go in, verify the sources, draw their own conclusions from the information presented, and add the expertise that he, she or they bring to the topic. I think this can be a very nice approach, one that's verifiable, applicable by a wider community, and generally acceptable as a way forward. It will help us make sense of all the niche topics, the huge topics we now see in molecular medicine, and also the solution-oriented topics we need to make sense of, like all the research being produced in climate science, summarizing it into a meaningful analysis of what we actually know about the climate today and how we can intervene, for the planet and for ourselves. Basically, I think it can be really useful. So yes to ChatGPT; let's just be cautious and thoughtful as we use it.

Martin: I think, again, it's nice to see that OpenAI, the company producing ChatGPT, is reactive and responsive to what we're all talking about now, which is mostly the negatives. For example, they've introduced their antidote to malevolent use of ChatGPT, which they're calling the AI Text Classifier. This follows OpenAI having weeks-long discussions with schools and colleges in the United States over the fact that ChatGPT can now write about anything, and students doing essay-based work are clearly using ChatGPT to write those essays. Why wouldn't they? So OpenAI created the antidote to that, the AI Text Classifier, which is meant to verify whether a piece of text was produced via ChatGPT. They've already said that it cannot be foolproof, but they're willing to work to ensure the integrity of the use of the tool, and to create other tools and services that will support whatever sector is using ChatGPT and wants to use it for positive reasons. And they've made very clear that, at the moment, because it's not foolproof and it gets things wrong, it shouldn't be solely relied upon for making any kind of decision.
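OpenAI's classifier itself is a web tool, so as a loose, hypothetical illustration of the same idea, here is a sketch using a publicly available detector model from Hugging Face (trained on GPT-2 output, not ChatGPT, so only indicative):

```python
# A loose sketch of AI-text detection, assuming the Hugging Face
# `transformers` library and the community-hosted
# "openai-community/roberta-base-openai-detector" model, which was
# trained to flag GPT-2 output. As Martin notes, no such classifier
# is foolproof.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "The mitochondrion is widely described as the powerhouse of the cell."
result = detector(sample)[0]

# The model emits a label ("Real" or "Fake") with a confidence score;
# it should inform, never replace, a human judgment.
print(result["label"], round(result["score"], 3))
```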

Jo: Yeah, true. Where to go next? There are so many topics. Okay, so another AI I would like to bring into the discussion is scite, as in citation. The founders' idea is very useful as such: it weighs the citations in a paper and measures whether they're actually supportive of the research presented or contradict it, and the more people use it, the better it serves the idea of scientific discourse. That's something I think we've lost sight of by being paper producers first, having lost the patience to actually listen to each other, have discussions and discourse, and accept that other researchers, in other parts of the world or at the same institute, might draw another conclusion from what they observe in experimental outcomes. Then it's just a matter of having a conversation: how did you go about it, what did you see doing what, and what did we see doing what, and how can we all make sense of what we see as a result, instead of claiming that they are wrong and we are right. So scite is trying to make sense of things in that regard, and now they're also trying to help fill the gap that ChatGPT has been shown to have, in the sense of: is there an actual citation for this claim? I think they just announced on LinkedIn that they're going to release that soon, so they're on it as we speak, thankfully. And they're probably not the only ones trying to find meaningful ways to work with ChatGPT and to correct the mistakes that happen, because it's not a perfect tool. And I just love David Bowie's music, but also his attitude; as he said, if it works, it's out of date. So let's appreciate that aspect too. Because it's human-made, and humans are not perfect, and that's okay; anything human-made cannot be perfect by its very nature, and we are all biased, each of us. Anyway. So, have you seen other organizations jumping into action and responding, other than OpenAI themselves? Because in that way we're creating a nice ecosystem, collaborating, and making this a meaningful endeavor.

Martin: Yes. There's been so much talk about ChatGPT, and we're embedded in a science research and science communications ecosystem. But outside of that, other industries have been worrying: will we be replaced by ChatGPT? An example I quite like is from the musician Nick Cave, of Bad Seeds fame. He lived in Berlin for a number of years,

Jo: Awesome

Martin: He featured in one of my favorite films, Wings of Desire, if you've seen that one, by Wim Wenders. Oh, my goodness, it's a fantastic film set in Berlin. Anyway, that's another story. But Nick Cave did respond to the commentary. He runs a blog, and he's very active in responding to fans and anybody else who poses questions. So one fan used ChatGPT to generate a song in the style of Nick Cave.

Jo: I saw the headline.

Martin: Yes. So the concern among musicians will be: is our time up? Is ChatGPT just going to create music and write lyrics, and will songwriters become obsolete? Well, he was very polite in his response to the fan, but he called ChatGPT a travesty. He said ChatGPT might be able to write a speech or a sermon or an obituary, but it cannot create a genuine song. And what he says, and if any of you know Nick Cave you'll resonate with this, I'll just quote him from the screen here: "Songs arise out of suffering, by which I mean they are predicated upon the complex internal human struggle of creation. And, well, as far as I know, algorithms don't feel. Data doesn't suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing. It has not had the audacity to reach beyond its limitations, and hence it doesn't have the capacity for a shared transcendent experience, as it has no limitations from which to transcend." Amazing words from a poet and a lyricist and a novelist, words that in themselves could not be generated by ChatGPT. But he makes an important point there. Again, it is a tool. And when you get into the creative arts, whether it's writing lyrics, songs, novels or poems, it's not going to do a good job, because it doesn't reflect the inner turmoil or creative instinct of the person.

Jo: Yeah, the human part and the heart and soul that artists put into their works cannot be plagiarized or copied or recreated.

Martin: So again, lots and lots of really good discussion. I mean, you can hardly fail to get a bit tired of all the ChatGPT discussion, and of people trying it out and then publishing long, long trains of text from ChatGPT. But it does make us focus on things that matter and have mattered for a long time. It makes us refocus on those things, and maybe look at using ChatGPT and similar AI tools, text mining tools, image creation tools, as just tools, while making sure that they're credited; if it's a scientific or academic paper, credited in the methodology, making it very clear that we've used this tool, just as we'd note that we've used a reference manager tool, et cetera.

Jo: Yeah, and also to be specific about, what's the word, the brand behind the service, because that also matters for how transparent or untransparent the processes are. Not to judge that, but just to point out: we're using this tool by that manufacturer, end of story. Only then is it comparable, transparent or not. Because, as I remember from my work as a PhD student in a molecular biology lab, we would state in the methods section that we used a particular kit to extract DNA, and you have to be specific about the manufacturer, and also about the batch and the year it was produced, because that might influence the experiment; products are always being developed further. You can't compare an experiment conducted with solutions produced by different manufacturers or vendors, in different parts of the world, in different years of production. It's just not comparable, and it most probably influences the outcome of the experiment. Same with this now. Oh, so interesting. And as for using these tools: we had a few exchanges before this recording. I've registered, but I haven't actually used it; right now it's often not accessible, because so many people are trying it that ChatGPT's server capacity is overwhelmed. But let's just be mindful of what data we're sharing as we use it, and what devices we're accessing it from. Basically, a gentle reminder to check the terms of service; it is a tool, and in using it we agree to comply with the producers' terms of service. For OpenAI, they make it explicit that they do use the information that is being recorded, which they capture from your, or our, browser history, and other information that's accessible through the internet, for example if you're using it from your workplace. So it might have secondary consequences, as we happily jump at the opportunity and are curious to check it out and put it to the test. It's not a whole long list, but there are some other measures to be considered.

I'm a bit brain-fried now, because we've already discussed so many topics. Are there other aspects you would like to highlight today?

Martin: I can leave you with, I think, a very positive and quite smart approach from a college professor in the United States, at the Wharton School. It struck me that, rather than being in fear of ChatGPT and similar tools and trying to ban them, block them, not allow them, this professor has been quite smart in already establishing guidelines for his students, and a formal policy in the college handbook. What he says is, and I'll quote here: "I expect you to use AI, ChatGPT and image generation tools at a minimum, in this class. In fact, some assignments will require it. Learning to use AI is an emerging skill. I provide tutorials on Canvas about how to use them. I'm happy to meet and help with these tools during office hours or after class." So he's embracing these technologies, realizing that these genies are out of the bottle. There's really nothing you can do about them; they're out there. And rather than seeing them as a negative, he sees them as a positive and is encouraging; he actually says, I expect you to use AI in this class, and he will provide training and learning around the tools. And I think this is so important: to educate people about the use of these tools, about the circumstances in which they can be used, and to be aware of their limitations as well.

Jo: You just threw a huge ball into my corner, because, as I shared with you, I am building an online academy for researchers and research departments or groups to sign up to. And I hear the call to action: now that we've talked about the possible use cases within academia, scholarly writing and publishing, I'm taking this on, to put together some training tutorials on how to use ChatGPT in particular, or other AIs for that matter; what the pros, cons and opportunities are, and how to use them purposefully and cautiously, with as much caution as necessary and feasible. There's always a trade-off, but the aim is to be as informed about the potential pitfalls as is feasible for the time being. So yeah, thanks for that; that's another item on my ever-growing to-do list, and rightly so, I think.

Martin: Good. 

Jo: Okay, so then we'll meet again, in this show or in yours, Inspiring STEM. Please go over to the link in the show notes to explore what is already a long list of highly informative conversations in Martin's podcast, with representatives from across the academic spectrum, mostly concerned with open science as well. And one last thing, because you mentioned to me in the preparation of this episode that you had queried ChatGPT: what did you ask the machine to produce about open science?

Martin: I asked ChatGPT to tell me about the current and future trends in open science, and it produced a very nice two-page summary of everything that I would have picked out as a consultant. So I guess I'm slightly concerned that my job will become redundant. But again, it's about the interpretation of that data. It made a very good case for open science, talked about FAIR data, everything that you and I and colleagues advancing open science would cover. It did a good job, and I can see myself using it as a way of drafting a piece of a report or a study; but then I will need to interpret it and stand behind it if I'm authoring it, particularly if I'm producing it for a client as well.

Jo: That's the accountability part. That's fair. 

Martin: Yes. And there will be so many more conversations around just ChatGPT. But it would be quite nice if whoever is having these conversations broadened the discussion beyond just ChatGPT, because there are hundreds of similar tools out there that are already being used, and that can be used for positive benefit in all kinds of circumstances. And educating people about these tools, as that professor at the Wharton School is doing, actually helps people make the best use of them within a particular circumstance or scenario. You will see plenty of webinars and conference presentations over the next number of months; I've just been invited to talk about AI and ChatGPT at the Institute of Professional Editors of Australia and New Zealand in May.

Jo: Are you traveling down? 

Martin: It's online.

Jo: I would have envied you.

Martin: But no, it's online. 

Jo: Well, awkward presentation hours then. Good luck. 

Martin: But I'm going to Prague also in May to present in person at the European Medical Writers Association. And we'll be talking about AI technologies there as well. 

Jo: Oh, it's lovely. I was there not too long ago, a couple of years back; we just drove through at the time, but it is such a beautiful city.

You, our listeners, will hear more on the topic from both of us for sure, moving forward. So, yeah, welcome back, and I'm sure we'll discover other shared interests to talk about in future episodes. Thanks so much, Martin. It's always a pleasure.

Martin: Thank you.

Jo: So thank you so much for joining me again. I'm sure we'll have other shared topics; we do have a lot of shared interests, so there will be more episodes where we can join in on one or the other podcast show. Thanks so much, Martin, it's always a pleasure having you and discussing with you.

Martin: Thank you, Jo, and likewise, a pleasure for me. I look forward to the next opportunity to chat, and perhaps we'll have you on the Inspiring STEM podcast in a few weeks' time, so we can reciprocate and keep the conversation going, which is always nice.

Jo: Yeah, thanks for that and see you soon. 

Martin: Thanks


