Inclusion Criteria: a Clinical Research podcast

How to Verify if Someone is Really an AI Expert in Clinical Research

John Reites Episode 13

John Reites discusses the proliferation of self-proclaimed AI experts in the clinical research field and provides insights on how to identify genuine expertise. He emphasizes the importance of credentials, proven work experience, the ability to explain complex concepts simply, and a focus on responsible AI practices. Reites outlines four key criteria to evaluate AI experts, encouraging listeners to be cautious and discerning when seeking AI expertise.

Click here to message/text me your insights and ideas for future episodes

Thank you for joining Inclusion Criteria: a Clinical Research podcast hosted by me, John Reites. This is an inclusive, non-corporate podcast focused on the people and topics that matter to developing treatments for everyone. It’s my personal project intended to support you in your career, connect with industry experts and contribute to the ideas that advance clinical research.

Inclusion Criteria is the clinical research podcast exploring global clinical trials, drug development, and life‑science innovation. We cover all things clinical research to deepen your industry knowledge, further your career and help you stay current on the market responsible for the future of medicine.

Our episodes discuss current industry headlines, career tips, trending topics, lessons learned, and candid conversations with clinical research experts working to impact our industry every day.

Watch on YouTube and listen on your favorite podcast app. Thank you for supporting and sharing the show.

Please connect with me (John Reites) at www.linkedin.com/in/johnreites or www.johnreites.com.

The views and opinions expressed by John Reites and guests are provided for informational purposes only. Nothing discussed constitutes medical, legal, regulatory, or financial advice.

SPEAKER_00:

With AI everywhere, everyone seems to be calling themselves an AI expert, especially in our industry. And probably just like you, I've seen multiple people who didn't work in AI just a few months ago now promoting themselves as AI experts or consultants. So how do you know if someone is actually an AI expert, or if they're just aligning with the latest trend to stand out? And how do you verify if someone has the expertise you want to support your AI efforts in clinical research? Let's talk about it.

Now, before we unpack today's topic, I just wanted to spend a minute to say thank you. Thank you for spending your time watching and listening to Inclusion Criteria. If we haven't met before, I'm John Reites. I'm a corporate entrepreneur turned digital entrepreneur with my entire career focused on enabling clinical research for everyone, everywhere. I fund and created this non-corporate podcast to support your career, connect with both you and industry experts, and continue contributing to the actions and discussions that I believe advance clinical research. I'm just a few months into my second podcast, and I'm so grateful for your messages, your comments, and your ratings of the show. Because of you, Inclusion Criteria has been moving up the charts in major podcast apps, on YouTube, and on LinkedIn. So thank you all for your support. I really appreciate it.

Now, a few weeks ago, I took a very short call with an AI expert who's now focused on clinical trials. Their pitch was that they had solved a complex challenge in patient recruitment, and if they had solved it, it would be a significant step forward as an AI use case. So I took the call, and I was genuinely interested in learning about their solution. Unfortunately, it didn't appear to solve the right problem for me, but the real red flag was that the AI expert didn't use several key AI terms correctly. They used every AI buzzword in the book, from LLM to agentic, without any clear explanation of how the product worked. And when I asked what the differentiating or proprietary features were, they would say things like, "that's proprietary," and then shift and begin to sell me on another functionality of the solution.

These AI experts are popping up everywhere, and history does often repeat itself, because I saw the same trend with decentralized clinical trials: the approach I worked more than 10 years of my career to help develop, advocate for, and regulate magically generated so many DCT experts in just a couple of months at the height of the pandemic, when it was needed most. So I think we all realize that to be an expert in most things, it takes years of focus, years of dedication, and specific, actual delivery of work product in that area. I've been working with LLMs, using AI tools, vibe coding, and contributing technology features with AI in my company since early 2022. So I have some expertise in AI, but I don't consider myself an AI expert yet. And I'm sure, like you, I'm continuing to put in the time and actual work to get there, with a hyper focus on clinical trial use cases that matter. So in my opinion, it takes a wealth of trial, error, and what I would term successful mistakes to gain proven experience and to call yourself an expert. I think Werner Heisenberg, the Nobel Prize-winning physicist, may have said it best: an expert is someone who knows some of the worst mistakes that can be made in their subject and how to avoid them.
So as we watch this AI boom in motion, I'd strongly advocate for you and anyone in our industry to take a measured but rapid approach to adding some element of AI expertise to your skill set. And to do that, you're going to have to learn from others and partner with those who have expertise in specific AI areas and use cases. So I believe there are four criteria you can use to verify if someone is actually a clinical research-focused expert in AI. It's not an all-inclusive list, but I find these helpful in my personal assessments, and I hope they're helpful to you. You can also apply this verification process to yourself if you want to be an AI expert, and apply it to any educational training you may find on the internets. Just be careful of the YouTube AI experts.

Now, the first criterion is about credentials, and not all credentials are important or created equal. I'm sure there are now degrees forming in artificial intelligence, but those alone are not what we're looking for. Credentials come in a variety, so I would start by asking the expert questions like: What field is your undergraduate or advanced degree in? What degrees or certifications do you hold specifically in AI? Do you have any peer-reviewed, not sales-related, publications around your work in AI? Have you completed any industry validations with customers around your AI solution? Do you have any case studies or references from successfully completed AI projects with a biopharma, a CRO, a research site, or someone else in our industry? And if you're really bold and you feel like you can ask this on the call, you can also ask an expert if they've been cited by or partnered with other AI experts and leaders in the field.

Now, I'm typically looking for a degree or some kind of advanced degree in computer science, machine learning, data science, or similar if they're involved in the actual coding or development of the solution. So I would start here, but note that this alone is not how I would rule in or write off an AI expert. If they've worked at a larger AI company, such as OpenAI or Anthropic, or if they've worked on an AI project within a company that really had credibility and used the tools publicly, I do think this matters more than anything else you may be able to find about their credentials on the internet. Now, on the flip side, if an AI expert openly refers to themselves or uses terminology on their social platforms like AI visionary, AI luminary, AI futurist, mentor to the stars, or similar, whatever those terms are, they may be right, and this may be an accurate reflection of their expertise. But honestly, in my experience, it's typically the opposite. So this is something to be cautious with and look out for when you're trying to select an expert to partner with. If, after all these questions, they have little to none of these credentials, I would consider passing on working with or learning from this AI expert.

The second criterion, and what I would consider arguably the most important, is actual proven work experience. The ability to speak in depth on a specific use case that solved a specific challenge for a specific type of client is what makes me feel confident in an expert. Experience also has an interesting way of helping experts educate people on complex activities with the utmost simplicity. Not too long ago, I had an opportunity to tour a facility with a working nuclear reactor.
And as cool as it was to see the reactor, what actually really impressed me was the ability of the experts in the field to explain the complexity of nuclear power in a way that even I could understand. They didn't just manage the reactor. They knew every component of it so well that they could explain how each of the parts they managed worked, at mostly a high-school grade level. Unpacking someone's AI expertise, seeing an expert light up explaining how they're using AI, and going truly in depth on how the solution works and what they figured out is such a good way to learn about expertise.

So if this is what you want to learn about, I would start with asking questions like: Could you explain to a high schooler how an LLM, a large language model, works? How many specific projects have you supported with this AI solution? Could you show me the live platform and how the AI works for your clients? Could you provide a case study with specific data showing where the AI solution performed well? And on the flip side, could you please provide a case study with specific data showing where the same solution underperformed? Other questions include: How did you support the data requirements to enable your AI solution? How do you review, reference, and clean the data that is powering your technology? And for extra points, do you have a GitHub repo with any of your AI projects in it? If you get vague answers, blank stares, or if you feel confused by the AI expert's responses, this should be a key red flag for you.

One of my favorite words to use in business is measured, and it's the third criterion. The ability to be confident, direct, and maybe even a little cocky can always be balanced when an expert is measured. This ability to know a problem and its solution so deeply while also being clear about both the positives and negatives is so important. Real experts see positives and negatives as just data points and not a reflection on their personal ability or their personal performance. So if someone only talks about how amazing AI is without mentioning any problems or limitations, it's a huge red flag. And if you hear anything similar to "AI will solve all your problems" or "we'll just automate this system so it works automatically for you," or if they use the word automagically, I would consider ending the call early. AI experts understand the verification, the math, and the process behind their solution. They don't need to recite formulas for you, but they should grasp more than the fundamental AI concepts and be able to explain them in detail. A real AI expert will tell you about the successes and the challenges of their AI solution, and if they don't, this is also a potential red flag. I'd recommend asking them questions like: What gaps does AI have today, and how does it fail? What parts of AI are you not an expert in? What's the specific return on investment I should get with an AI solution before scaling it? Have you ever removed AI from a solution because it did not improve it? Or, what's the biggest limitation of current AI technology in your field? You know, real experts are excited to talk about problems, because that's where the interesting work happens.

The fourth and final criterion is about ethics. If an AI expert doesn't mention the terms ethics, bias, or safety, I'd flag that as a concern.
Now, this is a massive topic and not something I'm going to do justice to in this short reference, but at a high level, real AI experts are obsessed with responsible AI practices because they've personally seen how things can go wrong. They also understand that AI is a specialized field with many focus areas. Nobody is an expert in everything AI, and we should all be concerned with safe implementation of this tool in society. So if the expert doesn't bring this topic up, I would advise asking questions like: How are you managing the safety and privacy of your AI solutions? How do you manage AI hallucinations? What about AI and its growth is the most concerning to you today? What are the constraints we need to manage when we're implementing our own AI solution? How should we think about bias in our AI solutions?

These four criteria should be a helpful start in your journey to identify and learn from AI experts in clinical research. So please remember: real experts explain clearly. They discuss limitations. They've built actual systems, and they can get specific about their work. Don't be impressed by buzzwords, by titles, or by grand promises about what AI can deliver. Be impressed by clear explanations, honest discussions of challenges, and concrete examples of real work an expert can show you, not just tell you.

So have you encountered any AI experts that generated similar red flags? Do you have any critical questions to add to our list? Please comment or send them to me on LinkedIn. Thanks for listening, and I look forward to next time.