In systems biology, we often speak of emergence: how complex systems yield behaviors not apparent from their individual parts. The rise of generative AI is an emergence moment for science itself. Suddenly, tasks that once took days (literature search, model interpretation, regulatory formatting) can be scaffolded, tested, and even semi-automated by large language models (LLMs) in minutes. But scientific work isn’t just output. It’s judgment, context, and integrity.
In my work at Simulations Plus, where we use machine learning-enabled tools like GastroPlus® and ADMET Predictor®, this duality is clear: AI accelerates the mundane, but humanics is the catalyst.
This blog post is a field guide for scientists navigating this shift. It’s about working with GenAI: intelligently, responsibly, and curiously.
Mapping Our Relationship with GenAI
Before deciding how to engage with AI, it’s worth reflecting on your current mindset. Over the past year, I’ve noticed a spectrum in how scientists approach AI:
The GenAI-Hesitant: This group is skeptical of the technology, often uncertain about its accuracy or role in the research process. They ask valid questions: Will it compromise scientific rigor? Who is accountable for AI-generated conclusions? People in this category often wait for institutional policy before exploration.
The GenAI-Harmonist: Harmonists see GenAI as a collaborator. They use LLMs and other tools to enhance their productivity. Curious practitioners use GenAI for code generation, figure annotations, model documentation, or PK summary tables, without handing over scientific ownership and while keeping usage policies in mind.
The GenAI-Maximalist: These are the early adopters and technical tinkerers. They are building custom AI agents, deploying multimodal systems, and integrating GenAI into every phase of the scientific workflow – from hypothesis generation to regulatory documentation.
The goal is not to label yourself, but to understand your position and decide how to move forward responsibly. Personally, I oscillate between the second and third groups. For instance, I use GenAI to extract physicochemical features from literature but rely on ADMET Predictor® for robust machine learning-based predictions of properties like logD, solubility, and CYP inhibition, ensuring scientific grounding.
Expanding the Modern Scientific Skill Set
What has changed is what it takes to be an effective scientist. Domain expertise remains foundational, but the edge lies in interdisciplinary skills. We must broaden our skill set to include what I call a dual foundation of humanics and technical literacy.
| Core Human Skills (humanics) | AI-Augmented Capabilities |
| --- | --- |
| Critical Thinking and Scientific Rigor | Prompting and text summarization while benchmarking errors |
| Scientific Adaptability | Prompt Engineering Techniques and Large Language Model (LLM) Agent Orchestration |
| Communication and Empathy | Familiarity with the AI Tool Ecosystem; Visual, Audio, and Multimodal Synthesis |
| Curiosity and Inquiry | Data & Tech Literacy; AI-Assisted Code, Report Writing, and Summarization |
| Collaborative Leadership | |
| Communication and Story-telling | Presentation storyline or regulatory memo draft generation with GenAI |
These skills form a resilient foundation, enabling scientists to interact effectively with AI while retaining our role as decision-makers and creators of meaning.
From Prompting to Prompt Engineering as an AI-Augmented Scientist
At the heart of GenAI is the prompt. Whether you’re working with ChatGPT, Claude, Gemini, or another model, the quality of your prompt determines the utility of the output. In scientific research, vague or general prompts often yield irrelevant or inaccurate results.
Basic Prompt
What are the pharmacokinetics of ketoconazole?
Refined Prompt Using CARE Framework (Context, Action, Result, Elaboration)
Act as a regulatory scientist. Based on recent EMA and FDA guidance, summarize the pharmacokinetic parameters of ketoconazole relevant to its use as a CYP3A4 inhibitor in drug-drug interaction (DDI) studies. Include Cmax, Tmax, AUC, and pKa values, and cite references from PubMed.
Limit response to 250 words and output as a table followed by a brief narrative.
Writing effective prompts is not a trick; it’s a skill that must be developed and refined, akin to crafting a good research question or writing a precise methods section.
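To make that structure repeatable rather than retyped each time, the CARE components can be templated in code. The short Python sketch below is illustrative only: the build_care_prompt helper and its field layout are my own assumptions, not part of any vendor’s API, and the resulting string can be pasted into or sent to whichever LLM interface you use.

```python
# A minimal sketch of turning the CARE framework into a reusable prompt template.
# The helper name and field layout are illustrative, not a standard library API.

def build_care_prompt(context: str, action: str, result: str, elaboration: str) -> str:
    """Assemble a single prompt string from the four CARE components."""
    return "\n".join([
        f"Context: {context}",
        f"Action: {action}",
        f"Result: {result}",
        f"Elaboration: {elaboration}",
    ])

prompt = build_care_prompt(
    context="Act as a regulatory scientist working from recent EMA and FDA guidance.",
    action=("Summarize the pharmacokinetic parameters of ketoconazole relevant to its use "
            "as a CYP3A4 inhibitor in drug-drug interaction (DDI) studies, including "
            "Cmax, Tmax, AUC, and pKa, with references from PubMed."),
    result="A table of parameters followed by a brief narrative, limited to 250 words.",
    elaboration="Flag values that differ across sources instead of silently averaging them.",
)
print(prompt)  # Paste or send this string to whichever LLM interface you use.
```

Keeping the four fields explicit also lets you version prompts alongside analysis scripts, so a refined prompt can be reviewed and reused by colleagues just like any other method.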
Integrating GenAI Responsibly
Adoption without accountability is dangerous. To support ethical integration of GenAI into research, I advocate for the FASTER framework:
- Fair: Review and question the biases in training data and outputs. Avoid using AI-generated content for legally impactful decisions without human oversight.
- Accountable: Maintain transparency and ownership over what AI generates. Do not delegate expertise to the model.
- Secure: Understand the privacy implications. Opt out of model training where possible and avoid inputting sensitive data.
- Transparent: Always disclose when GenAI was used, how it was used, and to what extent outputs were reviewed or modified.
- Educated: Stay informed. Read terms of service, study prompting strategies, and engage with AI education resources.
- Relevant: Ask yourself—Is AI the right tool for this task? Avoid over-relying on GenAI for tasks better done with human judgment. Not every task needs GenAI. Sometimes, a pen is mightier than a sword.
This framework is particularly critical in biomedical and regulatory contexts, where accuracy, reproducibility, and ethical integrity are non-negotiable.
Applying GenAI in the Scientific Workflow
Using GenAI tools as a scientific Swiss Army knife is incredibly useful and rewarding, provided we remember that not every nail deserves a hammer. Here are practical ways I’ve seen GenAI used effectively in the life sciences and pharmaceutical research:
- Scientific Writing: Drafting study protocols, summarizing literature, or formatting regulatory reports. (e.g., Scispace, Scite, TLDR, jenniAI, NotebookLM, Answer this)
- Data Interpretation: Extracting pharmacokinetic parameters from study tables or automating dose-response curve fitting, as in the sketch following this list. (e.g., JuliusAI for R/Python tasks)
- Code Generation: Automating Excel macros, generating Python/R scripts, or converting data to tidy formats. (e.g., Cursor, GitHub Copilot, Mintify)
- Knowledge Discovery: Building knowledge graphs or semantic search agents that query PubMed, ChEMBL, or internal databases. (e.g., Amass, Alchemi)
- Communication: Translating complex data into plain language for non-specialist stakeholders or patient audiences.
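As one concrete instance of the Data Interpretation bullet above, here is a minimal Python sketch of automated dose-response curve fitting with SciPy. The four-parameter logistic model is standard, but the concentrations, responses, and fitted values are hypothetical placeholders, not data from any study or product mentioned in this post.

```python
# A minimal sketch of automating dose-response curve fitting with SciPy.
# The concentration/response values below are hypothetical placeholders.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill_slope):
    """Four-parameter logistic model for an inhibition curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill_slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # concentration, µM
resp = np.array([98.0, 95.0, 88.0, 70.0, 45.0, 22.0, 10.0, 5.0])  # response, % of control

# Initial guesses: bottom, top, IC50, Hill slope
p0 = [resp.min(), resp.max(), 1.0, 1.0]
params, cov = curve_fit(four_pl, conc, resp, p0=p0)
bottom, top, ic50, hill_slope = params
stderr = np.sqrt(np.diag(cov))

print(f"IC50 ≈ {ic50:.2f} µM (SE {stderr[2]:.2f}), Hill slope ≈ {hill_slope:.2f}")
```

GenAI coding assistants are good at drafting this kind of boilerplate; the choice of model, weighting, and acceptance criteria still belongs to the scientist.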
When thoughtfully deployed, GenAI is a powerful enabler for “human-in-the-loop” scientific reasoning.
Human-AI Collaboration is the Future
The future of biomedical discovery will not be AI-led or human-led. It will be human-guided AI — collaborative agents that analyze data at scale, identify novel patterns, and suggest new directions, while the human scientist exercises judgment, creativity, and ethical oversight.
A recent publication in Cell outlined the rise of “AI scientists”: collaborative agents that learn from data and human feedback to co-pilot discovery. These systems don’t diminish our role. They expand it.
Context is the Endgame
The role of a scientist is not shrinking; it is evolving. We are no longer just experimenters. We are also designers of intelligent systems, curators of trustworthy data, and stewards of responsible innovation.
If I had to summarize what matters most in the GenAI transition for scientists, it’s this:
- Contextual awareness: Know your data, your tools, your regulatory boundaries.
- Curiosity: Be willing to explore the edges of what’s possible. Build core skills in prompting, data analysis, and AI literacy. Stay grounded in scientific rigor and critical evaluation.
- Clarity: Design every prompt, every project, every publication with precision and purpose.
- Collaboration: This is not a solo game. The best outcomes will come from multi-disciplinary teams, hybrid thinkers, and inclusive practices.
GenAI is not replacing scientific insight. It’s creating more room for it. If we build our skills with intention, we’ll not just survive the AI wave; we’ll define its impact. The work ahead will be shaped not by the machines we build, but by how wisely we choose to work with them.
Curious about how AI is transforming biomedicine and how the FDA views its use? Learn more by exploring our recent webinars on the topic and discover how AI is being applied in the field, as well as the regulatory considerations shaping its future.