Interview: Paul Neyman, Co-Founder and Chief Revenue Officer of Areti Health
Most people who sign up for a clinical trial never hear back. They click an ad, fill out a form, and wait. Days pass. Then weeks. The phone call never comes. Somewhere on the other side of that silence, a research coordinator is buried in spreadsheets, chasing leads that go nowhere, falling behind on enrollment targets that were already unrealistic. And the trial, the one that might have produced a treatment someone desperately needed, quietly falls apart.
Paul Neyman noticed this before he built anything. A Berkeley-trained engineer who spent nearly two decades in enterprise technology, he had friends who waited months for a screening call that never arrived. When his longtime co-founder Ilya showed him a prototype using generative AI to automate the recruitment process, the problem and the solution clicked immediately.
That conversation over lunch became Areti Health. The platform now connects to over 250 million patient records, matches candidates to trial criteria in seconds, and handles outreach, prescreening, and scheduling without a human coordinator making a single call. In an Alzheimer’s study where manual recruitment was expected to take six months, Areti scheduled 305 patients in three weeks.
We interviewed Paul Neyman because clinical trial recruitment is one of those problems everyone in healthcare knows about and almost nobody has solved. He explains how AI is changing that, why the clinical research coordinators he works alongside have gone from skeptics to collaborators, and what it actually takes to build technology that earns the trust of patients at their most vulnerable.
Part 1: Foundation: Understanding Areti Health and why clinical trials keep failing before they finish
1. Paul, thank you for joining us. You started your career as a software engineer, then made a deliberate move into sales, and eventually co-founded a company in healthcare. Most people don’t take that path. Can you walk us through how that journey unfolded?
I started my career in software engineering, and while I valued the technical foundation it gave me, I found myself feeling unfulfilled over time. The day-to-day rhythm of sprints, tickets, and bug fixes didn’t quite align with what motivated me most – I wanted to be closer to real-world problems and the people experiencing them. I was less interested in just building features and more interested in understanding why those features mattered.
That realization led me to move into sales engineering, which became a turning point. It allowed me to stay technical while getting directly involved with customers – understanding their pain points, mapping those to solutions, and seeing the tangible impact of what we were delivering. Over time, I developed a strong appreciation for the ownership that comes with sales: you’re responsible for identifying opportunities, deeply understanding the customer’s needs, and shaping the right solution. I especially enjoyed being able to bridge both worlds – speaking the language of business stakeholders while also building credibility with technical teams through transparency and technical depth.
That experience ultimately pulled me further into sales leadership, where I progressed to VP of Global Sales. But the entrepreneurial pull was always there. Eventually, I reconnected with my longtime friend Ilya Gluhovsky, who was exploring a new idea – using generative AI to engage with and screen patients interested in clinical trials. It immediately resonated with me. It felt like a frontier application of AI, leveraging its strengths in conversation and reasoning to solve a real, high-friction problem in healthcare.
We both saw the potential right away and decided to go all in. It was the kind of opportunity where my background – technical, customer-facing, and commercial – came together naturally. That’s how the journey into co-founding a healthcare company really began.
2. Before we get into what you built, help readers understand the problem. What actually happens today when someone tries to join a clinical trial, and what are the consequences when that process breaks down?
Today, when someone tries to join a clinical trial, the process is far more fragmented and manual than most people realize. It often starts with a patient seeing an ad or being referred to a study, then filling out a basic form or leaving their contact information. From there, the responsibility shifts to the clinical site to follow up – but that’s where things begin to break down.
Sites are typically understaffed and juggling multiple studies at once, so outreach can be delayed or inconsistent. When contact does happen, it’s usually a phone call during business hours, which many patients miss. Even when they connect, the initial screening process can be lengthy and repetitive, requiring patients to recall detailed medical history without much guidance. If they don’t qualify, they’re often left without direction on alternative options.
On the other side, sites are working with incomplete information. They may spend significant time chasing down patients who ultimately aren’t eligible, while missing those who are a strong fit but slipped through the cracks due to timing or lack of follow-up. There’s no scalable, intelligent way to pre-screen and prioritize patients before that human interaction.
When this process breaks down – and it often does – the consequences are significant. Patients lose interest or fall through the cracks, sometimes missing out on potentially life-changing treatments. Sites struggle to meet enrollment targets, which delays trials. And at a broader level, sponsors face extended timelines and increased costs to bring new therapies to market.
It’s a system where inefficiency isn’t just an operational problem – it directly impacts patient access and the pace of medical innovation.
3. For someone who has never heard of Areti Health, describe what your platform actually does. If a patient is out there right now who might qualify for a clinical trial, what does your system do that would not happen without it?
At its core, Areti Health is a platform that identifies the right patients for clinical trials and then ensures they actually make it all the way through enrollment – something that rarely happens reliably today.
On the identification side, what makes us fundamentally different is how deeply we understand patient data. Instead of relying on surface-level filters or keyword matching, we analyze a patient’s complete medical record using advanced NLP – everything from lab results and prescriptions to imaging reports, physician notes, and attached documents. A huge portion of clinically relevant information lives in unstructured data, and that’s where traditional systems fall short.
We look at that data holistically. It’s not just about checking boxes for inclusion and exclusion criteria – it’s about detecting meaningful signals. That includes signals explicitly present in the record, as well as implicit signals, like what might be missing or what patterns suggest about where the patient’s condition is heading next. That allows us to match patients to trials with a level of precision that simply wouldn’t happen otherwise.
But identifying the right patient is only half the battle. The bigger challenge is actually engaging them and guiding them through the process. That’s where our platform becomes fully end-to-end.
We use an agentic engagement model that reaches out to patients via phone, text, and email – based on their preferences – and does so in a highly personalized way. Because we understand their medical context, every interaction is tailored. We can explain the study, answer detailed questions, and address concerns in real time. And unlike traditional site staff, we’re always on, 24/7, so patients don’t fall through the cracks due to bad timing or missed calls.
From there, we prescreen patients conversationally or via a mobile web experience, collecting structured and unstructured inputs to determine eligibility before a site ever has to get involved. And we don’t stop once a patient expresses interest – we actively nurture them through the entire process, reducing drop-off and no-shows by staying engaged up to the actual visit.
We can even handle consent digitally, including running a “teach-back” process – where the patient demonstrates understanding of the study in their own words. We validated this approach in a large-scale exercise with Duke University, showing that patients can be both informed and engaged without adding burden to the site.
So if a patient out there today might qualify for a clinical trial, what we do differently is this: we find them with far greater accuracy, we engage them in a way that feels personal and immediate, and we stay with them all the way through enrollment. Without a system like Areti Health, most of those patients would never be identified – or they’d simply drop off before ever participating.
4. Patient medical records can be extraordinarily complex – hundreds of pages of lab results, prescriptions, imaging, and clinical notes. How does your AI actually read and interpret that volume of data, and how fast does it work?
You’re absolutely right – patient medical records are incredibly complex, and reviewing them thoroughly is both time-consuming and cognitively demanding. On average, it takes a qualified medical professional about 2.5 to 3 hours to fully review a single patient record, especially when you’re trying to assess clinical trial eligibility against detailed inclusion and exclusion criteria.
When we pull records, we’re typically dealing with around 185 individual documents per patient. That includes everything from lab results and demographics to longitudinal patient history, primary care and specialist visits, imaging reports, and pharmacy prescriptions. In most cases, we’re able to retrieve this data within about five minutes of initiating a request. Occasionally, if additional systems or exchanges need to be queried, it can take up to 24 hours to get a complete dataset – but that’s the exception rather than the norm.
Once we have the data, our AI models take over. We ingest the full record and evaluate it directly against the study protocol, specifically mapping to inclusion and exclusion criteria. At the same time, we generate a structured AI summary that distills the most relevant clinical information – vitals, demographics, problem lists, treatments, prescriptions, and other key signals a medical professional would need to make an informed decision.
Importantly, we don’t just produce a binary answer. Our output is graded: eligible, likely, possibly, or ineligible. And transparency is critical here – we provide a clear, navigable view where the treating provider or study staff can see exactly why a patient was categorized a certain way, including direct references back to the underlying medical record that support that determination.
The entire analysis process takes, on average, about five minutes per record. So what traditionally requires hours of manual review can now be done in a fraction of the time – while still preserving the depth, traceability, and clinical rigor needed to make confident decisions.
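Areti hasn't published its internals, but the graded, traceable output Paul describes can be illustrated with a minimal sketch. Everything here – the class names, the criterion strings, the rule that collapses per-criterion findings into one of the four grades – is an assumption for illustration, not their actual schema or logic:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Grade(Enum):
    ELIGIBLE = "eligible"
    LIKELY = "likely"
    POSSIBLY = "possibly"
    INELIGIBLE = "ineligible"

@dataclass
class CriterionResult:
    criterion: str                      # protocol criterion text
    met: Optional[bool]                 # None = record is inconclusive
    evidence: List[str] = field(default_factory=list)  # refs into the record

def grade_patient(results: List[CriterionResult]) -> Grade:
    """Collapse per-criterion findings into one graded, traceable call."""
    if any(r.met is False for r in results):
        return Grade.INELIGIBLE          # one failed criterion rules the patient out
    unknown = sum(1 for r in results if r.met is None)
    if unknown == 0:
        return Grade.ELIGIBLE            # every criterion confirmed
    # Mostly confirmed -> "likely"; mostly inconclusive -> "possibly"
    return Grade.LIKELY if unknown <= len(results) // 2 else Grade.POSSIBLY
```

The point of the structure is the `evidence` field: each grade stays navigable back to the specific documents that support it, which is the transparency requirement Paul emphasizes.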
5. Once your system identifies a potentially eligible patient, what happens next? How does a patient actually experience the outreach?
Once our system identifies a potentially eligible patient, the next step is engagement – and that’s where Areti Health really transforms the experience. We reach out proactively and thoughtfully, using a combination of calls, texts, and emails. But it’s not random or one-size-fits-all; we establish a tailored cadence based on therapeutic area, timing, and patient preferences, reaching them at different times of day and different days of the week to maximize the chance of connection without being intrusive.
What we ask of the patient is also carefully calibrated. Because we’ve already analyzed their medical record, we know what information we’re missing and what’s already clear. This allows us to fill in just the gaps needed to make an accurate prescreening decision, avoiding unnecessary or repetitive questions that could frustrate the patient.
Throughout the process, patients are encouraged to ask any questions they have about the study, and our system responds in real time. We stay on script to ensure we capture the necessary prescreening information, but we also bring empathy and patience to every interaction. Many patients are hesitant, curious, or simply on the fence – our approach is designed to educate and guide them, addressing concerns and providing clarity so they feel confident in their decision to participate.
This combination of strategic outreach, targeted questioning, and empathetic communication allows us to move patients through prescreening efficiently while preserving a human-centered experience. We’re not just connecting patients to trials – we’re making sure they feel informed, respected, and supported every step of the way.
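The "fill in just the gaps" idea above reduces to a simple set difference: fields the protocol needs, minus fields the record already answered. A minimal sketch, with entirely hypothetical field names and questions (these are not Areti's actual prescreening items):

```python
# All names below are illustrative, not any study's actual schema.
REQUIRED_FIELDS = {"a1c_latest", "insulin_use", "smoking_status", "bmi"}

QUESTION_BANK = {
    "a1c_latest": "What was your most recent A1C result?",
    "insulin_use": "Are you currently taking insulin?",
    "smoking_status": "Do you currently smoke?",
    "bmi": "What are your current height and weight?",
}

def questions_to_ask(record_facts: dict) -> list:
    """Ask only about what the medical record did not already answer."""
    answered = {k for k, v in record_facts.items() if v is not None}
    gaps = REQUIRED_FIELDS - answered
    return [QUESTION_BANK[f] for f in sorted(gaps)]
```

A patient whose record already contains a recent A1C and a BMI would only be asked the two remaining questions – which is what keeps the conversation short and non-repetitive.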
Part 2: AI in Practice: How Areti’s platform performs in real studies, with real patients, under real regulatory constraints
6. Can you walk us through a specific study where Areti made a measurable difference? What did the numbers look like before your platform was involved, and what changed?
After running 200 studies across 30 therapeutic areas and engaging over 240,000 patients in two years, it’s hard to pick just one – but let me give you a few that illustrate the range.
Start with a recent dermatology study for a CRO that needed 32 randomized patients. We run a weekly sync with that team. At the end of week two, we were sitting at 31. To anyone outside clinical research, that might sound unremarkable. Inside this industry, where recruitment timelines are measured in months and slippage is treated as a given, it was almost difficult for people to process. That study essentially closed before most trials have finished onboarding their sites.
Then consider scale. We recently wrapped a large Covid study where we were engaging thousands of leads per day – across multiple customers, simultaneously – while running all of our other studies in parallel. That’s not a sprint; that’s what our infrastructure was built to sustain.
For a different kind of number, look at an eye pain study where we sifted through more than 5,000 patients to identify the right candidates and closed recruitment in eight weeks. Or an early Alzheimer’s study – one of the most notoriously difficult therapeutic areas to recruit in – where the sponsor came in expecting a screen failure rate of 98%. We brought that down to just over 55%, delivered more than 60% of the patients who were ultimately randomized, and boosted overall recruitment by 150%. Those aren’t incremental improvements. That’s a different category of performance.
We now launch an average of four studies a week across our customer base, and demand continues to grow. The numbers across any single study are compelling. The pattern across all of them is what we think tells the real story.
7. Beyond the numbers, have you seen patients respond to your AI in ways that surprised you personally?
Honestly, yes – and some of these moments have stayed with me.
One that stands out came during recruitment for a postpartum depression study. We expected women to answer screening questions and move on. What actually happened was something much more profound. Women began opening up about pregnancy complications, struggles with intimacy, the quiet chaos of new motherhood – details they might hesitate to share with a clinician on a tight schedule or a stranger on the phone who has twelve more calls to make. What struck us wasn’t just that they were sharing; it was why. They knew they were talking to an AI – we’re transparent about that from the very first interaction, without exception. And yet, perhaps because of that, they found something they weren’t expecting: a space where they wouldn’t be rushed, wouldn’t be judged, and could speak honestly. For many of them, the conversation itself seemed to matter as much as the enrollment.
We see a similar pattern with depression studies more broadly. People with depression often struggle with sleep, and our data reflects that – we have meaningful engagement happening at two, three, four in the morning. Someone who can’t sleep, sitting with their thoughts, finds our AI and starts talking. Not just about eligibility criteria – about their lives. What they do for work. Why they want to participate. Sometimes it’s about the extra income; sometimes it’s a genuine hope that a new treatment might help them. We’ve been told, more than once, that our AI feels more empathetic than a doctor who’s watching the clock. That’s a humbling thing to hear – and a reminder of how much unmet need exists simply for someone to listen.
And then there are the moments that keep you humble in a different way. We had one prospective participant who was so genuinely thrilled to be contacted – so excited about the study – that they expressed their enthusiasm in, let’s say, very colorful language. The kind that triggers content moderation. Which it did. Conversation closed. We had to laugh, because here was someone who wanted nothing more than to enroll, and our system flagged their excitement as a problem. It’s a good reminder that AI, for all its capabilities, still has some work to do when it comes to the full spectrum of human expression. Joy, apparently, can look a lot like a policy violation.
8. Clinical trial messaging is tightly regulated. Regulators require pre-approved, specific language for anything said to a patient during recruitment. How do you run a conversational AI inside those boundaries without it feeling like a scripted phone tree?
We built something genuinely different from a scripted system – and then we proved it could work within regulatory frameworks, rather than assuming it couldn’t.
Most “AI” in this space is essentially a decision tree with a chatbot skin. A patient says something unexpected, and the system either mishandles it or falls back to a canned response. That’s not us. Our system thinks and communicates like a medical professional who has deeply internalized every aspect of a given study – the protocol, the eligibility criteria, the patient population, the therapeutic area, and the broader clinical context around it. It doesn’t recite; it reasons. That’s what makes conversations feel natural rather than robotic.
Now, to your point about regulation – this is exactly where we invested heavily, because we knew that conversational fluency means nothing if it can’t survive regulatory scrutiny. We worked directly with the largest commercial IRB bodies in clinical research, as well as an academic IRB known for taking a particularly rigorous and conservative approach. All of them evaluated our conversational methodology and application – and all of them gave us a green light. More specifically, we received official generic IRB approval, which is significant: it means our conversational approach and its underlying ethics have been validated for use in recruitment by any clinical research organization, not just on a case-by-case basis.
That said, we still operate within the appropriate submission process for individual study materials – specific language used in confirmations to patients, for example, goes through the standard review. Generic approval isn’t a blanket exemption; it’s a recognition that the method itself is ethically sound. The result is a system that gives site teams and sponsors the compliance assurance they need, while giving patients a conversation that actually respects their intelligence.
9. Hallucination is a known risk with AI language models. In healthcare, a wrong answer to a patient is not just an inconvenience. What happens when your system reaches the edge of what it knows?
This is probably the question we take more seriously than any other, because the stakes are real. A confused patient who receives a wrong answer about a study doesn’t just have a bad experience – they may make a decision about their health based on it. That’s not acceptable, and we designed our system with that reality at the center.
The first layer of protection is what the system is built on. Unlike general-purpose models that are trained to range broadly and, frankly, incentivized to produce an answer even when certainty isn’t warranted, our knowledge is deliberately grounded – rooted in the informed consent documents and patient-facing materials specific to each trial. The system knows what it knows, and that boundary is intentional.
The second layer is behavior at the edge of that boundary. When our system encounters a question it cannot answer with confidence, it doesn’t speculate. It doesn’t reach. It flags. That’s a fundamental design choice, and it’s one we demonstrate explicitly as part of our IRB application – regulators can see exactly how the system behaves when it hits a knowledge limit, because that behavior is as important as anything else we submit.
What happens next is where the human element comes in. The conversation escalates – either through a warm transfer to a member of the recruitment team in real time, or through a callback so that no patient is simply left without an answer. A human takes over, handles the question with the appropriate context, and the interaction is resolved properly.
And then we close the loop. That knowledge gap doesn’t just get patched in the moment – it gets filled with verified, factual information so that the next time a patient asks something similar, the system can handle it correctly. The edge of what we know today becomes part of what we know tomorrow.
We think that’s the only honest model for AI in healthcare. Not a system that never admits uncertainty, but one that knows exactly what to do when it encounters it.
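The flag-escalate-learn loop Paul describes can be sketched in a few lines. This is a toy illustration of the control flow, not Areti's implementation: the keyword match stands in for whatever grounding check they actually use, and the function and field names are invented for the example:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Turn:
    reply: str
    escalate: bool   # True = warm transfer / callback to a human

def answer_patient(question: str, knowledge: Dict[str, str],
                   gap_log: List[str]) -> Turn:
    """Answer only from study-specific materials; never speculate."""
    for topic, grounded_answer in knowledge.items():
        if topic in question.lower():
            return Turn(grounded_answer, escalate=False)
    # Edge of knowledge: don't guess. Flag for a human handoff and log
    # the gap so it can later be filled with verified information.
    gap_log.append(question)
    return Turn("Let me connect you with a member of the study team "
                "who can answer that.", escalate=True)
```

The `gap_log` is the "close the loop" step: every escalated question becomes a candidate entry in tomorrow's grounded knowledge, once a human has supplied a verified answer.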
Part 3: Building the Business: How a young AI company earns trust in one of healthcare’s most conservative industries
10. Clinical research is a conservative, relationship-driven industry. You are a young company asking large pharmaceutical companies to change a process that has been manual for decades. How do you actually get in the door?
Carefully – one step at a time. We didn’t start by pitching large pharma. We started at the site level – the individual research centers where recruitment actually happens – and we stayed there until we were confident the tool was genuinely ready for a wider audience. That meant real studies, real patients, real feedback, and a lot of iteration. We weren’t going to walk into a conservative industry with a half-built product and hope the pitch carried us through.
What happened next was less about sales strategy and more about the nature of the industry itself. Clinical research is an extraordinarily tight-knit world – people move between sites, CROs, and sponsors; everyone knows everyone; and reputation travels fast. We started getting recommended. Sites that had worked with us pointed others in our direction, and we grew organically through exactly the kind of trust-based referrals that this industry runs on. From sites, we elevated to CROs, and from CROs to sponsors. Each step was earned rather than forced.
But the moment that truly changed our trajectory came when we weren’t chasing a deal at all – someone came to us. Duke University’s innovation center, i-Cubed, was conducting a competitive RFP among industry contenders. We were invited, we competed, and we won. What followed was a large-scale exercise across several studies – and we didn’t just perform, we delivered results that were measurable, validated, and confirmed by one of the most prestigious medical institutions in the world. That work culminated in a published press release and a presentation at SCOPE, one of the largest and most respected conferences in clinical research.
When conservative pharma asks why they should trust a young company, there’s no better answer than: because Duke asked the same question, ran a rigorous process to find out, and published what they found. That kind of third-party validation doesn’t just open doors – it makes the conversation entirely different when you walk through them.
11. Rather than competing with the large research organizations already operating in this space, Areti works alongside them. Why did you choose that approach, and what does that partnership model look like in practice?
Because the most powerful engine in the world still needs a chassis.
We think of Areti as a V8 – purpose-built, high-performance, ready to run. But we have no interest in also building the car. The established players in this space – patient recruitment companies, CROs, research sites – have spent years, sometimes decades, constructing something extraordinarily difficult to replicate: trusted sponsor relationships, executed MSAs, completed security audits, compliance frameworks, operational infrastructure. That’s not overhead. That’s a moat. And rather than spend years trying to build one of our own, we made a deliberate choice to plug directly into the one that already exists.
In practice, what that looks like is straightforward. Their salesforce walks into a sponsor conversation with something new to offer – a demonstrably better recruitment capability that compresses timelines and removes the operational bottlenecks that have plagued this industry for years. They win more studies, they perform better on the ones they have, and they deepen relationships with sponsors who are under enormous pressure to hit enrollment targets. We’re behind the scenes making that happen, which is exactly where we want to be.
The framing we always come back to is this: we are a disruptor of old processes, not a killer of the industry. The people who built this ecosystem didn’t do anything wrong – they built it with the tools that existed. We’re simply a better tool, and we work best in the hands of people who already know how to use the ones they have. That’s not a compromise. That’s a strategy.
12. There is a common fear in healthcare that AI will replace clinical staff. What is your honest answer to a Clinical Research Coordinator who is worried about their job?
Look around your office right now. Is it overstaffed? Because I’ve never walked into a research site that was.
The fear of AI displacement in clinical research assumes there’s a surplus of people doing this work. There isn’t. CRCs are stretched thin, coordinators are juggling more studies than is reasonable, and the operational load – the follow-up calls, the screening backlogs, the repetitive outreach to patients who may or may not qualify – never shrinks. It just accumulates. The industry isn’t struggling to find ways to reduce headcount. It’s struggling to find enough people to do the work that already exists.
What we do is fill gaps that are already there. The cold calling that nobody wants to do – the tenth attempt to reach a patient who filled out a form three weeks ago – that’s not a task anyone went to school for. That’s not why someone becomes a Clinical Research Coordinator. They came to this work to be close to science, to support patients through a meaningful process, to contribute to research that eventually changes how medicine is practiced. We handle the part that pulls them away from all of that.
Think of it less as replacement and more as finally getting the operational support that should have existed years ago. When our AI is handling the top of the funnel – engaging, screening, scheduling – the coordinator gets to do the job they actually trained for. The human relationship with the patient deepens rather than disappears, because the person representing the site now has the time and bandwidth to be present for it.
AI is a powerful tool. And in clinical research right now, the people doing this work don’t need fewer tools. They need better ones.
13. Running an AI company in healthcare means carrying responsibility to multiple groups at once: patients, customers, investors, and your own team. How do you manage that pressure, and what keeps you focused on what matters most?
The pressure is real, and I don’t think it should be managed away. The fact that a wrong decision can affect a patient’s experience in a clinical trial, or erode trust that a customer spent years building with a sponsor, or cost a team member something they believed in – that weight is appropriate. I’d be more worried about a founder who had figured out how to stop feeling it.
What actually keeps me grounded is the team, and I don’t say that as a platitude.
We made a deliberate choice early on to hire from within the industry – people who had lived these problems before we came along to solve them. I’m genuinely grateful, every day, that I had the opportunity to share our vision with them and that they believed in it enough to come on board. That kind of trust is not something you take lightly.
One of the people I’m most grateful for brings something that can’t be taught quickly: she has built patient recruitment operations from the ground up, done medical research, and worked alongside recruiters and PIs. Another is a former bench scientist who had already built a company focused on medical record review and wanted to build the next generation of it. When we’re navigating a complex IRB question or thinking through how a patient population will actually respond to a conversation, the experience of those people is the compass. They keep us attuned to the world we’re operating in – not the world we imagine from the outside.
And then there’s my co-founder, who I consider one of the most talented builders I’ve had the privilege of working with. From day one, he organized and built our engineering processes and shaped the first version of the product with clarity and discipline. That foundation is what the entire company stands on. And because he built it so well, I’ve been able to stay focused on what I do best – building relationships, developing the business, and telling this story to the people who need to hear it.
The way I think about managing multiple responsibilities is that you don’t balance them so much as you build a team where each person is deeply accountable to one of them – and you trust each other completely. Patients are protected because the people closest to our product care about patients. Customers are served because the people running those relationships came from this industry and understand what’s actually at stake. The business grows because the engineering is solid enough that I can go focus on growth without worrying about what’s behind me.
Part 4: AI Adoption and the Future: What it takes to make AI stick inside a company, and where clinical research goes from here
14. Every person at Areti uses AI daily across every department. But getting there was not automatic. What did it actually take to drive real adoption inside your own company?
We started by looking at the best companies coming out of Silicon Valley – studying how they run engineering processes, how they think about tooling, how they stay ahead rather than catching up – and making a genuine commitment to replicate what was working. That’s not a one-time exercise. It’s a posture. If you can’t move with the times in this industry, you’re off the boat. That sounds blunt, but it’s the reality, and we didn’t shy away from it when we had to make difficult calls about people who weren’t willing to adapt.
Adoption looked different across the company, which I think is worth being specific about.
In engineering, the challenge was never getting people started – it was pushing utilization high enough that AI stopped being a useful shortcut and became a genuine extension of how people think and build. That required active encouragement from leadership, not just permission. There’s a difference between a team that uses AI when it’s convenient and one that reaches for it instinctively, and closing that gap took deliberate effort.
For customer success, it came naturally. Drafting compliance responses, filling out complex forms, managing documentation – these are exactly the tasks AI was built for. There was no need to manufacture a use case. The use case was already sitting in the inbox every morning.
Business development took the most convincing, and I’ll admit that. Drafting emails is useful, but it wasn’t enough to change how the team actually operated. The shift came when we started using a desktop tool – Cowork – that genuinely changed the workflow rather than just augmenting it. And the moment that clicked for everyone was almost comically practical: someone realized they could point AI at a conference website, pull the attendee list, structure it into an importable file, push it into the CRM, enrich the contacts, and use that to book meetings – in a fraction of the time the manual process would have taken. Once people saw that, the conversation about adoption was over. Everyone became a believer, not because we told them to, but because the results were impossible to argue with.
That’s ultimately what drove real adoption: not mandates, but moments where the tool made something genuinely hard feel embarrassingly easy.
15. You have built an AI product, deployed it in a regulated industry, and navigated adoption inside your own team. When you think about someone whose job is being changed by AI right now, what is the one thing you wish more people understood about making that transition work?
I want to be careful here, because I think the people asking this question deserve honesty more than they deserve comfort.
We are in genuinely unprecedented territory. Automation has always displaced certain kinds of work – repetitive, physical, predictable. What’s different now is that it’s moving into areas we assumed were safe: creative work, judgment-based work, the kind of tasks that used to require years of accumulated human experience to do well. That assumption is being stress-tested in real time, and I don’t think anyone has a clean answer for what comes next.
And I’ll say something that might be uncomfortable: I don’t think creative people are safe. I don’t know anyone serious who does. Graphic designers, translators, writers, the film industry – we are watching AI-generated content flood every channel, and the public is consuming it. Eagerly, for now. The concern I have is a specific one: people will keep reaching for the cheaper, faster, AI-produced version until the day they’ve had enough – until the sameness becomes suffocating and the demand for genuine human creativity comes roaring back. But by then, the infrastructure that produced that creativity may be gone. The designers who didn’t get hired. The translators who left the field. The junior filmmakers who never got their first job. You can’t just turn human creative culture back on like a faucet once you’ve let it run dry.
What concerns me most isn’t today’s displacement – it’s the compounding effect. A lot of the people being impacted right now are earlier in their careers. Junior roles. Entry points. And I find myself asking: if those roles shrink, who becomes the senior person in ten years? How does expertise develop when the apprenticeship layer disappears? What does AI train on when humans are no longer doing the work that generates the signal? These aren’t rhetorical questions. I genuinely don’t know the answers.
So what’s the one thing I wish more people understood? Maybe this: the transition isn’t something you make once. It’s something you keep making. The people I’ve seen navigate it best aren’t the ones who found the right AI-proof skill and planted a flag. They’re the ones who stayed genuinely curious, stayed uncomfortable, and never stopped paying attention to what was changing around them. That’s not a formula. But in the absence of certainty, it might be the closest thing to one.
16. Every day a clinical trial runs behind on enrollment carries a real financial cost for the companies funding the research, but more importantly, it delays access to treatments for patients who are waiting. If Areti reaches the scale you are aiming for, what does a clinical trial timeline realistically look like in five years?
We don’t have to speculate entirely – because we’ve already seen a glimpse of it.
During our work with Duke’s i-Cubed, we ran an exercise that wasn’t just a recruitment test. It was a look at what an end-to-end agentic clinical trial could actually look like in practice. We started with a study synopsis. From there, AI generated the protocol and informed consent form. That fed into agentically generated IRB materials, followed by agentic IRB review. Then came patient record review conducted by our AI Coordinator, agentic patient engagement, consent, teach-back, and even an exercise in patient-reported outcomes – with all of the resulting data moved into the EDC with practically no human intervention at any stage.
What would normally take six to nine months took fourteen days total. Seven and a half of those days were pure agentic work. The majority of the remaining time was spent on something refreshingly human and unsolvable by automation alone: actually reaching people. Phones ring. Lives are busy. That part, for now, still takes time.
But everything around it – the documentation, the review, the engagement, the consent process, the data flow – moved at a speed the industry has never seen, validated by one of the most respected medical institutions in the world.
So what does a clinical trial timeline look like in five years, at scale? If the infrastructure catches up to what we’ve already demonstrated is possible, the question stops being how long a trial takes to recruit and starts being how quickly the science itself can move. The bottleneck shifts. Months of operational overhead compress into days. Patients waiting for a treatment that’s stuck in an enrollment backlog get access sooner – not as an aspiration, but as a structural outcome of the process working the way it should.
We’re not describing a future we’re hoping to build. We’re describing something we’ve already done once, at smaller scale, under real conditions. The five-year question is really about how broadly and how fast the industry is willing to let it happen.
17. AI is moving fast in every industry, but healthcare carries a different kind of stakes. Looking at where things are heading, what gives you the most hope, and what genuinely concerns you?
Two very different things, and I’ll take the hopeful one first.
What genuinely excites me is what’s happening at the intersection of AI and drug discovery. The pace of AI-driven compound identification, target validation, and molecular modeling is compressing timelines that used to be measured in decades. And beyond that, the progress on digital twins – computational models of individual patients that can simulate how a specific person might respond to a specific treatment – points toward a future where we stop treating populations and start treating people. That’s not a marketing phrase. That’s a fundamental reorientation of what medicine can be. If AI delivers even a fraction of what the science currently suggests is possible, we are looking at breakthroughs in the near future that would have seemed implausible ten years ago. That gives me real hope.
What concerns me is the architecture underneath all of it.
Almost everything being built in AI today – the models, the compute, the data infrastructure – runs through a very small number of hyperscalers and technology powerhouses. A handful of companies control the engines, and increasingly, the data those engines learn from. In most industries that’s a concentration-of-power problem. In healthcare, it’s something more serious. Medical data is among the most sensitive and consequential information that exists. The insights derived from it – about populations, about risk, about what treatments work and for whom – carry enormous commercial and geopolitical value. When that much power consolidates that quickly into that few hands, the questions about access, equity, and who ultimately benefits from these breakthroughs become very hard to answer optimistically.
I believe in what AI can do for patients. I’ve seen it. I’ve built part of it. But the most important work in this space over the next decade may not be scientific at all. It may be ensuring that the infrastructure carrying all of this promise doesn’t end up owned by too few people to serve the many.
Original Creator: Ekaterina Pisareva
Original Link: https://justainews.com/industries/healthcare-and-medical/interview-paul-neyman-co-founder-and-cro-areti-health/
Originally Posted: Fri, 20 Mar 2026 15:00:53 +0000