FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

NEW BOOK. Peter Lee. The AI Revolution in Medicine: GPT-4 and Beyond. Paperback, 22 May 2023.

INTERVIEW. DR ERIC TOPOL. Ground Truths. Peter Lee and the Impact of GPT-4 + Large Language AI Models in Medicine.

Well worth investigating. Dr. Topol includes the transcript.

Below is the table of contents with the author’s note, and a selected section related to the regulation of AI.


AI is about to transform medicine. Here’s what you need to know right now.

“The development of AI is as fundamental as the creation of the personal computer. It will change the way people work, learn, and communicate–and transform healthcare. But it must be managed carefully to ensure its benefits outweigh the risks. I’m encouraged to see this early exploration of the opportunities and responsibilities of AI in medicine.”

–Bill Gates

Just months ago, millions of people were stunned by ChatGPT’s amazing abilities — and its bizarre hallucinations. But that was 2022. GPT-4 is now here: smarter, more accurate, with deeper technical knowledge. GPT-4 and its competitors and followers are on the verge of transforming medicine. But with lives on the line, you need to understand these technologies — stat.

What can they do? What can’t they do — yet? What shouldn’t they ever do? To decide, experience the cutting edge for yourself. Join three insiders who’ve had months of early access to GPT-4 as they reveal its momentous potential — to improve diagnoses, summarize patient visits, streamline processes, accelerate research, and much more. You’ll see real GPT-4 dialogues — unrehearsed and unfiltered, brilliant and blundering alike — all annotated with invaluable context, candid commentary, real risk insights, and up-to-the-minute takeaways.

  • Preview a day in the life of a doctor with a true AI assistant.
  • See how AI can enhance doctor-patient encounters at the bedside and beyond.
  • Learn how modern AI works, why it can fail, and how it can be tested to earn trust.
  • Empower patients: improve access and equity, fill gaps in care, and support behavior change.
  • Ask better questions and get better answers with “prompt engineering” (see the sketch just after this list).
  • Leverage AI to cut waste, uncover fraud, streamline reimbursement, and lower costs.
  • Optimize clinical trials and accelerate cures with AI as a research collaborator.
  • Find the right guardrails and gain crucial insights for regulators and policymakers.
  • Sketch possible futures: What dreams may come next?
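
As a taste of the “prompt engineering” bullet above, here is a minimal, hypothetical sketch of the idea: the same clinical question asked bare versus with an explicit role, context, and output format. The prompts and the `ask_model` stand-in are invented for illustration; they are not from the book or from any real GPT-4 API.

```python
# A minimal sketch of "prompt engineering": structuring a question with a role,
# context, and output constraints instead of asking it bare. `ask_model` is a
# hypothetical stand-in for whatever LLM client you actually use.

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the prompt size."""
    return f"[model response to {len(prompt)} chars of prompt]"

naive_prompt = "What could this rash be?"

engineered_prompt = """You are an assistant to a clinician. Do not give a
diagnosis; list possibilities for the clinician to review.

Patient note: 34-year-old with an itchy, ring-shaped rash on the forearm,
spreading over two weeks. No fever.

Task: list the three most likely differentials, one per line, each with a
one-sentence rationale and one follow-up question to ask the patient."""

print(ask_model(naive_prompt))
print(ask_model(engineered_prompt))
```

The engineered version tends to produce a more useful, reviewable answer to the same underlying question, which is the whole point of the technique.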

There has never been technology like this. Whether you’re a physician, patient, healthcare leader, payer, policymaker, or investor, AI will profoundly impact you — and it might make the difference between life and death. Be informed, be ready, and take charge — with this book.

Link to the book: The AI Revolution in Medicine

Link to Dr. Topol’s review of the book

The GPT-x Revolution in Medicine (27 Mar)

“How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.” —Isaac Kohane, MD

The large language model GPT-4 (LLM, aka generative AI chatbot or foundation model) was just released 2 weeks ago (14 March), but there’s already been much written about its advance beyond ChatGPT, released 30 November 2…

Link to the Sparks of Artificial General Intelligence preprint we discussed

Link to Peter’s paper on GPT-4 in NEJM

Below is a selected section of the transcript related to the regulation of AI.


Peter Lee (22:43):

Right. You know, at least when it comes to medicine and healthcare, I personally can’t imagine that this should not be regulated. And it seems more approachable to think about regulation here, because the whole practice of medicine has grown up in this regulated space. If there’s any part of life and of our society that knows how to deal with regulation, and can actually make regulations work, it is medicine. Now, having said that, I do understand that coming from Microsoft, and even more so for Sam Altman coming from OpenAI, this can sometimes be interpreted as self-serving, as wanting to set up regulatory barriers against others. I would say in Sam Altman’s defense that back in 2019, just prior to the release of GPT-2, he made public calls for thinking about regulation, for the need for external audit, and, you know, for the world to prepare for the possibility of AI technologies approaching AGI.

(24:05):

And in fact, just a month before the release of GPT-4, he made a very public call at even greater length, asking the world to do the same things. So I think one thing that’s misunderstood about Sam is that he’s been saying the same thing for years; it isn’t new. That should give pause to people who are suspicious of Sam’s motives in calling for regulation, because he basically has not changed his tune, at least going back to 2019. But if we put that aside, what I hope for most of all is that the medical community, and I really look at leading thinkers like you, particularly in our best medical research institutions, would quickly move to take assertive ownership of the fundamental questions of whether, when, and how a technology like this should be used, and would engage in the research to create the foundations for sensible regulations, with an understanding that this isn’t about GPT-4; this is about the next three or four or five even more powerful models.

(25:31):

And so, ideally, I think it’s going to take some real research, some real inventiveness. What we explain in chapter nine of the book is that I don’t believe we have a workable regulatory framework right now, and that we need to develop one. But the foundations for that, I think, have to be a product of research, and ideally research from our best thinkers in the medical research field. The race that we have in front of us is that regulators will rightfully feel very bad if large numbers of people start to get injured, or worse, because of the lack of regulation, and you can’t blame them for wanting to intervene if that starts to happen. So we do have a kind of urgency here, whereas normally our medical research on, say, methods for clinical validation of large language models might take several years to really come to fruition. So there’s a problem there. But I think the medical field can very quickly come up with codes of conduct, guidelines, expectations, and the education so that people can start to understand the technology as well as possible.

Eric Topol (26:58):

Yeah. And I think the tricky part here is that, as you know, there are a lot of doomsayers and existential threats that have been laid out by people whom I respect, and I know you do as well, like Geoffrey Hinton, who is concerned. But let’s say you have a multimodal AI like GPT-4, and you want to put your skin rash or skin lesion into it. I mean, how can you regulate everything? And if you just go to Bing and switch to creative mode, you’re going to get all kinds of responses. So this is a new animal, this is a new alien, and the question is, as you say, that we don’t have a framework and we should move to get one. To me, the biggest question is the one you really got at in the book, and I know you continue to. Of course, within two days of your book’s publishing, the famous preprint came out, the Sparks preprint from your team at Microsoft Research, which is incredible.

(27:54):

A 169-page preprint, downloaded I don’t know how many millions of times already, but that is a rich preprint; we’ll put in the link, of course. But there, the question is: what are we seeing here? Is this really just a stochastic parrot, a JPEG with, you know, loose juxtapositions of words and linguistics, or is this a form of intelligence that we haven’t seen from machines ever before? And you get at that in so many ways, and you point out: does it matter? I wonder if you could expound on this, because to me, this really is the fundamental question.

Peter Lee (28:42):

Yeah. I get into that in the book in chapter three, and I think chapter three is my expression of frustration on this, because it’s just a machine, right? And in that sense, yes, it is just a stochastic parrot; it’s a big probabilistic machine that’s making guesses about the next word that it should spit out, or that you will spit out, and it’s making a projection for a whole conversation. The first example I use in chapter three is the analysis of a poem. The poem talks about being splashed with cold water and feeling fever, and the machine hasn’t felt any of those things. So when it’s opining about those lines in the poem, it can’t possibly be authentic, and so we can’t say it understands these things.
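
To make the “probabilistic machine” idea concrete, here is a toy, hypothetical sketch of next-word sampling. The lookup table stands in for a trained model; nothing in it reflects GPT-4’s actual vocabulary, probabilities, or architecture.

```python
# Toy illustration of the "big probabilistic machine" Lee describes: at each
# step the model assigns probabilities to possible next words and samples one.
# The table below is invented; GPT-4's real distributions come from a trained
# neural transformer, not a lookup table.
import random

NEXT_WORD_PROBS = {
    "The patient": {"reports": 0.5, "denies": 0.3, "is": 0.2},
    "The patient reports": {"fever": 0.6, "chills": 0.3, "nausea": 0.1},
}

def next_word(context: str) -> str:
    """Sample the next word from the (toy) conditional distribution."""
    dist = NEXT_WORD_PROBS.get(context, {"<end>": 1.0})  # stop on unseen context
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt: str) -> str:
    """Autoregressively extend the prompt one sampled word at a time."""
    text = prompt
    while True:
        word = next_word(text)
        if word == "<end>":
            return text
        text = f"{text} {word}"

print(generate("The patient"))  # e.g. "The patient reports fever"
```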

(29:39):

It hasn’t experienced these things. But the frustration I have as a scientist, and here is where I have to be very disciplined to be a scientist, is the inability to prove that. Now, there has been some very, very good research by researchers whom I really respect and admire. There was Josh Tenenbaum’s whole team and his colleagues at MIT, at Harvard, the University of Washington, and the Allen Institute, and many, many others, who have done some really remarkable research, research that’s directly relevant to this question of whether the large language model, quote unquote, understands what it’s hearing and what it’s saying. Oftentimes they provide tests that are grounded in foundational theories about why these things can’t possibly be understanding what they’re saying, and these tests are designed to expose those shortcomings in large language models. But what’s been frustrating, but also kind of amazing, is that GPT-4 tends to pass most, if not all, of these tests!
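
For a sense of what such a test battery can look like in practice, here is a minimal, hypothetical harness: a few probe items with expected answers, scored against whatever model you plug in. The probes and the stub model are invented for illustration; the studies Lee cites are far more rigorous.

```python
# Hypothetical sketch of an "understanding probe" harness of the kind Lee
# describes: a battery of items designed to expose a predicted limitation,
# scored against expected answers.
from typing import Callable

PROBES = [
    # (prompt, expected answer) pairs targeting, e.g., physical and social reasoning
    ("I put a coin in a cup, flip the cup onto a table, then lift the cup. "
     "Where is the coin? Answer in one word.", "table"),
    ("Sam believes the box holds chocolates, but it holds pencils. "
     "What does Sam expect to find inside? One word.", "chocolates"),
]

def run_probes(model: Callable[[str], str]) -> float:
    """Return the fraction of probes the model answers as expected."""
    hits = 0
    for prompt, expected in PROBES:
        answer = model(prompt).strip().lower()
        if expected in answer:
            hits += 1
    return hits / len(PROBES)

# Usage with a trivial stub; swap in a real LLM call to run an actual eval.
print(run_probes(lambda p: "on the table"))  # 0.5 with this stub
```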

(31:01):

And so, if we’re really honest as scientists, even if we know this thing is not sentient, it leaves us in a place where we’re without definitive proof of that. And the arguments from some of the naysayers, whom I also deeply respect, and I’ve read so much of their work, don’t strike me as convincing proof either. Because if you say, well, here’s a problem that I can use to cause GPT-4 to get tripped up, I have no shortage of problems. I think I could get you tripped up <laugh>, Eric, and yet that does not prove that you are not intelligent. So I think we’re left with a set of two mysteries. One is that we see GPT-4 doing things that we can’t explain given our current understanding of how a neural transformer operates.

(32:09):

And then, secondly, we’re lacking a test, derived from theory and reason, that consistently shows a limitation of GPT-4’s understanding abilities. In my heart, of course, I understand these things as machines, and I actively resist anthropomorphizing them. Maybe I’m fooling myself, but as a disciplined scientist I’m trying to stay grounded in proof and evidence, and right at the moment, I don’t believe the world has that. We’ll get there; we’re understanding more and more every day, but at the moment we don’t have it.

Eric Topol (32:55):

I hope everyone who’s listening is getting some experience now with these large language models and realizing how much fun it is, and how we’re in a new era in our lives. This is a turning point.

Peter Lee (33:13):

Yeah. That’s stage four: amazement and joy.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.