We were delighted to speak with Christian Blüthgen, a radiologist at the Institute for Diagnostic and Interventional Radiology at University Hospital Zurich. Blüthgen was previously a postdoctoral research fellow at the Center for Artificial Intelligence in Medicine and Imaging (AIMI) at Stanford University in California, USA, where he gained practical experience in the design and implementation of deep learning pipelines for medical imaging and text data.
What is your background/experience with artificial intelligence and what first attracted you to the topic?
In my first year of residency, I noticed that many of our daily tasks as radiologists could be automated – or at least handled in a more efficient way. Since then, I have applied machine learning (ML) to extract structured information from free-text radiology reports, repurposed computer vision systems for tasks like fracture and aneurysm detection, and extracted quantitative radiomics information from tumors. A research fellowship in the United States also provided me with hands-on experience in technically challenging, cutting-edge research in the field of generative vision-language models and large language models.
What are the biggest challenges to AI adoption in clinical practice?
Healthcare needs to provide patient-centered care, which, beyond diagnostic excellence, requires empathy and safeguards to protect patients. For AI to contribute significantly to these goals and achieve high levels of adoption, the added value of its undoubtedly powerful models must be rigorously evaluated in studies that extend beyond the predominant proofs of concept. These studies should be driven by clinical needs and human domain expertise. Successful development and translation into practice mandate effective communication among clinical domain experts guiding development and human evaluation, people with technical backgrounds developing AI models, and hospital IT staff responsible for robust implementation. A lack of communication among these groups can be a significant bottleneck.
Give us an example (or an educated guess) of what you think AI will be able to do in 3 years. What about in 10 years?
I believe that in three years, AI tools, such as those based on large language models, will increasingly be able to assist us as copilots in workflow-related tasks (e.g. note summarization, information retrieval, scheduling). They will facilitate or partially automate administrative tasks, thereby allowing us to perform at the peak of our competence.
In 10 years, I hope that multimodal models with a wide range of abilities and access to increasingly linked information from all available sources will allow us to significantly augment our capabilities. These models may grant us new scientific and clinical insights and could extend our expertise beyond what is currently possible by curating information that is already in our data but is hidden or prohibitively difficult to extract today.
As a radiologist, what is your advice for younger colleagues wanting to dive into the topic of Artificial Intelligence, Machine/Deep Learning, and/or Radiomics? What are your tricks for staying up to date in this fast-evolving field?
You do not need to be part of a big tech company or have access to expensive GPUs to make meaningful contributions to the field of radiology AI. There are high-quality, free educational resources available online. Many of these resources provide hands-on experience, e.g. with preprocessed datasets. A good way to learn is to pick a small project and complete all steps end-to-end. Once you develop an understanding of the basics, you can move on to more challenging projects and eventually follow current research in this fast-moving field.


