Today's News Podcast

Today's News Podcast

2025-04-22Technology
--:--
--:--
Ema
Welcome back to TechForward, everyone!
Ema
And a special welcome to our new listeners!
David
Today, we're diving headfirst into the fascinating and sometimes bewildering world of artificial intelligence.
Ema
That's right, David! We'll be exploring AI's impact across a range of industries and on society as a whole. Think self-driving cars, medical diagnoses, even how we interact with our phones – AI is everywhere!
David
Precisely. We'll be analyzing articles that cover the practical applications, but also the ethical and societal consequences of this rapidly evolving technology.
Ema
And it's not just about the tech itself. We'll also be looking at the business side of things – the investments, the market predictions, the strategies companies are using to leverage AI for growth. Think AI in finance, for example!
David
Yes, the financial implications and the crucial governance aspects of AI development are also key topics we'll be tackling today. There are significant challenges and regulations emerging around AI, and we'll be exploring those as well.
Ema
So, buckle up, because it's going to be an insightful and hopefully, easily understandable journey into the heart of the AI revolution!
David
Welcome back to TechForward, everyone! Today, we're diving headfirst into the whirlwind that is AI's impact on our world. Ema, you've been sifting through some fascinating articles – what's caught your eye?
Ema
Oh, David, so much! It's mind-blowing how quickly things are changing. We've got articles on AI diagnosing diseases with surprising accuracy – up to 88% in some cases! That's almost as good as some human doctors!
David
Impressive, but let's temper the enthusiasm with a dose of realism. Another article highlights the significant risk of inaccuracies in AI-driven medical advice. We're talking potentially harmful recommendations in up to 20% of cases. That's a serious concern.
Ema
Totally! It's like a double-edged sword. On one hand, AI could revolutionize healthcare – think faster diagnoses, personalized treatment plans, even early detection of diseases like type 2 diabetes, as one study showed with ECG analysis.
David
Yes, the potential for early detection and prevention is huge. But the articles emphasize the crucial need for human oversight. AI isn't replacing doctors; it's assisting them. And that assistance needs rigorous validation to ensure patient safety.
Ema
Absolutely. And it's not just healthcare. We've seen AI making strides in agriculture, with China leading the way in smart machinery and AI-driven innovations. Then there's the amazing work on AI-designed nanocages for enhanced gene therapy – that's cutting-edge stuff!
David
While the advancements are impressive, we also need to consider the ethical implications. One article highlighted the exclusion of blind individuals from many AI-driven technologies. It's a stark reminder that we need inclusive design practices to ensure equitable access to these advancements.
Ema
So true. And then there’s the cybersecurity aspect. Agentic AI, while incredibly powerful, introduces new vulnerabilities. We're talking about automated red teaming and continuous dynamic testing to even stay ahead of the threats. It's a whole new ball game in terms of security.
David
The economic implications are also significant. Southeast Asia is positioning itself as a major player in AI development, attracting huge investments. But even with this growth, we saw how AI agent vulnerabilities could expose millions in cryptocurrency – a clear demonstration of the risks.
Ema
And let's not forget the creative side! We have articles about AI-generated art, exploring the unsettling yet fascinating ways AI is changing how we see and interpret the human body. It raises questions about art, ethics, and the very definition of creativity.
David
Indeed. The sheer amount of investment pouring into AI, as seen with xAI's massive funding rounds, shows just how much faith – and perhaps hype – is surrounding this technology. It's a future brimming with potential, but we must proceed cautiously and thoughtfully.
Ema
Exactly! It's a thrilling, slightly terrifying, and ultimately very exciting time to be alive. Thanks for joining us on TechForward. Until next time!
David
Welcome back to TechForward, everyone! Today, we're diving headfirst into the fascinating, and sometimes frightening, world of AI development and governance. Ema, you've been poring over some interesting articles – what's caught your eye?
Ema
Oh, David, so much! We've got articles on everything from the potential of Artificial General Intelligence – AGI – to the frantic race to create tests that can actually keep up with these rapidly evolving AI models. It's a wild ride!
David
AGI, that's the holy grail, isn't it? The AI that can truly think like a human. One article highlighted the huge gap between where we are now and achieving true AGI. Experts seem to agree we're decades away, but the progress is undeniably rapid. It's a bit like watching a toddler learn to walk – unsteady, but the potential for future strides is immense.
Ema
Exactly! And that's where the governance piece comes in. Another article talked about the flurry of regulatory activity in 2024 – the EU's AI Act, the White House's executive order… it's a global scramble to get ahead of the curve. But there's so much disagreement on the best approach. Some want experimental regulations, others are worried about monopolies and leaving less developed countries behind.
David
It's a classic case of needing speed and caution simultaneously. Move too fast, and you risk stifling innovation. Move too slowly, and you risk unforeseen consequences. The Nobel Prize discussions highlighted those risks perfectly.
Ema
Absolutely. And then there's the issue of actually evaluating these AI models. Traditional tests are becoming obsolete. OpenAI's o3 model is a great example – it's achieving human-level performance on certain benchmarks, which is both exciting and terrifying. These new, tougher tests are crucial, but even creating them is a huge challenge.
David
The articles on jailbreaking AI models are particularly concerning. The fact that even sophisticated systems like GPT-4 and Claude can be tricked into generating dangerous content is a serious security flaw. It highlights the need for far more robust security measures.
Ema
And it's not just about security. We also saw how Google's contractors are being asked to evaluate Gemini's responses outside their area of expertise. That raises serious questions about the reliability of the evaluations and the potential for bias. It also highlights the ethical concerns surrounding the labor practices in the AI industry.
David
The environmental impact is another critical factor. The energy consumption of AI is exploding, and it's not just about the electricity. The carbon footprint of data centers is staggering. The move to cheaper, less sustainable energy sources in other countries is troubling.
Ema
But there's a silver lining. One article highlighted how AI's unpredictability, often seen as a negative, is actually driving scientific breakthroughs in fields like protein structure prediction and cancer research. Those 'hallucinations' can lead to unexpected discoveries!
David
It's a paradox, isn't it? The very unpredictability that makes AI so powerful also makes it so dangerous. And that's why the governance aspect is so critical. Even Sullivan County's local AI policy shows that the conversation is happening at all levels. This isn't just a tech problem; it's a societal one.
Ema
Exactly. It's a conversation we need to keep having. Thanks for joining us on TechForward. Until next time… stay curious!
Ema
Welcome back to Tech Titans, everyone! Today, we're diving headfirst into the fascinating and sometimes volatile world of AI in business and finance.
David
That's right, Ema. We've got a stack of articles here covering everything from specific AI stock picks to broader market predictions and the regulatory landscape.
Ema
Let's start with some juicy investment opportunities. One article suggests IREN and OPRA as undervalued AI stocks trading under $20. IREN is involved in Bitcoin mining, AI data centers, and renewable energy – quite the diverse portfolio!
David
And OPRA, an AI-driven content discovery company, is also touted as a buy, despite recent market volatility. It's fascinating how these smaller companies are carving out niches in the AI space.
Ema
Absolutely! Then we have the big players – Microsoft, NVIDIA, and Alphabet – with one article predicting they could each be worth $4 trillion by 2025! That's a bold prediction, but their investments in AI are undeniable.
David
But let's temper the excitement with a dose of realism. Another article highlights the potential for Nvidia's stock to fall significantly – up to 50% – due to slowing growth, increased competition, and potential shifts in investor sentiment.
Ema
So, it's not all sunshine and rainbows. High risk, high reward, right? This underscores the importance of thorough due diligence before investing in any AI stock.
David
Precisely. And speaking of predictions, another article highlights the remarkably poor track record of Wall Street's annual S&P 500 forecasts. Their average variance from actual performance is quite staggering.
Ema
It's almost comical how consistently wrong they are, yet they keep making these predictions! It shows that even with sophisticated models, predicting market movements is incredibly difficult.
David
We also saw that MUFG, a major financial group, is heavily investing in renewable energy projects in the US, leveraging AI to increase data center capacity. This shows how AI is impacting various sectors, not just tech.
Ema
And finally, the regulatory landscape is also shaping up to be interesting. Nvidia's acquisition of Run:ai has cleared a European hurdle but still faces scrutiny in the US. This highlights the increasing importance of regulatory oversight in the AI sector.
David
Indeed. It's clear that AI is rapidly transforming the financial world, presenting both enormous opportunities and significant risks. Investors need to be well-informed and prepared for volatility.
Ema
Exactly! And that's all the time we have for today. Join us next time on Tech Titans as we continue to explore the ever-evolving world of technology!
David
So, Ema, that was quite the journey through the world of AI, wasn't it?
Ema
Absolutely, David! From self-driving cars to algorithms predicting the stock market – it's amazing how far AI has come. And frankly, a little mind-blowing!
David
Indeed. We explored the multifaceted impact of AI across various sectors, highlighting both its transformative potential and the crucial need for responsible development and governance. The complexities of regulation and ethical considerations are paramount, aren't they?
Ema
Totally! We touched on the huge financial implications too – the investments pouring into AI, the potential for both incredible gains and devastating losses in the market. But remember, listeners, understanding the basics of AI is key to navigating this rapidly changing landscape.
David
Precisely. Understanding the technology, its capabilities, and its limitations is crucial for individuals, businesses, and policymakers alike. The future is being shaped by AI, and informed engagement is essential.
Ema
And that's a wrap for today's episode on the fascinating, sometimes daunting, world of Artificial Intelligence! Thanks for joining us, everyone. Don't forget to subscribe, rate, and review the podcast – your feedback helps us grow! Until next time, keep exploring the future!
David
Goodbye everyone.

A discussion of recent news and events.

Will 2025 Be a “Technology Wake-Up Call” for Clinicians?

Read original at Psychology Today

Source: DALL-E / OpenAIThe year 2025 may well mark a pivotal moment in the evolution of artificial intelligence (AI) in medicine. A new prepress study evaluating OpenAI’s GPT-4 and o1-preview model demonstrates that AI is not only achieving impressive feats in clinical reasoning but is doing so without supplemental training on domain-specific data.

This achievement represents a significant leap in what general-purpose large language models (LLMs) can accomplish, fueled by innovations in reasoning frameworks such as chain-of-thought (CoT) processing.The findings are both promising and provocative. On one hand, the o1-preview model excels in tasks requiring complex diagnostic and management reasoning, rivaling human clinicians.

On the other, it reveals critical gaps in probabilistic reasoning and triage diagnosis, areas where human expertise remains paramount. This duality raises important questions about how AI will integrate into medical workflows and redefine the role of clinicians.There's a lot to unpack here, and I suggest reading the study carefully as I'm only touching on some of the key points, particularly the results with the o1-preview model.

A Tale of Strengths and WeaknessesThe study evaluated the o1-preview model across five experiments, including differential diagnosis generation, diagnostic reasoning, triage differential diagnosis, probabilistic reasoning, and management reasoning. The results were adjudicated by physician experts using validated psychometrics, providing a benchmark for comparison against human controls.

StrengthsDifferential diagnosis generation: The o1-preview model achieved an 88 percent accuracy rate, far surpassing the 35 percent accuracy demonstrated by human clinicians in the same task. Its output was consistently rated as more comprehensive and precise, particularly in rare and complex diagnostic scenarios, where the model’s CoT reasoning allowed it to identify conditions often overlooked by clinicians.

Diagnostic and management reasoning: The o1-preview model displayed significant advancements in diagnostic and management tasks. In 84 percent of cases, the model’s reasoning was rated as on par with or exceeding that of human experts, who achieved comparable accuracy in only 64 percent of cases. Physicians praised the model’s structured and logical approach, which mirrored the stepwise critical thinking employed by clinicians and synthesized data from diverse clinical inputs to produce actionable recommendations.

LimitationsProbabilistic reasoning: The model struggled with tasks requiring nuanced probabilistic reasoning—a cornerstone of medical decision-making. While the o1-preview model’s performance was consistent with prior LLMs, human clinicians continued to excel in this area, demonstrating greater adaptability in assigning likelihoods to competing diagnoses and dynamically balancing risks in uncertain situations.

Triage differential diagnosis: No improvements were observed in triage tasks that require prioritizing cases by severity. While human clinicians achieved a 70 percent accuracy rate in these high-pressure, dynamic scenarios, the model’s logical but rigid outputs fell short, lacking the adaptive nuance required for real-time decision-making in emergency or critical care settings.

The Role of Chain-of-Thought ReasoningA standout feature of the o1-preview model is its reliance on chain-of-thought (CoT) reasoning, a framework that enables the AI to generate intermediate steps in its reasoning process before arriving at a final answer. This process allows the model to explain its thought process, making its outputs more transparent and easier for clinicians to interpret.

By breaking down complex problems into smaller steps, CoT reasoning reduces the risk of logical errors, particularly in tasks requiring critical thinking. Moreover, this approach mimics the way clinicians address diagnostic challenges—systematically considering symptoms, test results, and medical history to form conclusions.

The use of CoT reasoning may be an important factor in the model’s success with diagnostic and management reasoning, even as it struggles with the more dynamic aspects of clinical practice, such as triage.The Remarkable Absence of Supplemental Clinical TrainingAnother striking aspect of the o1-preview model is that it was not trained on supplemental clinical data.

Unlike earlier AI systems fine-tuned on medical data sets, o1-preview achieved its performance using general-purpose training. This accomplishment suggests that broad, general training data combined with advanced reasoning frameworks can rival domain-specific training, reducing the need for costly and time-intensive fine-tuning processes.

The absence of supplemental training also eliminates concerns about patient privacy, biased data sets, and overfitting to specific scenarios. However, it means the model’s performance is limited to patterns present in its general training data, leaving gaps in areas requiring contextual nuance. This highlights both the promise and the current limitations of generalist AI systems in specialized domains like healthcare.

The o1-preview model’s performance highlights both the promise and the limitations of LLMs in medicine. For clinicians, this study serves as a wake-up call: AI is no longer a futuristic concept—it’s here, and it’s redefining what is possible in patient care.AI as a partner: Models like o1-preview are not replacing clinicians but augmenting their capabilities.

They excel at tasks like differential diagnosis generation and management planning, freeing up clinicians to focus on patient interaction and decision-making.Closing the gaps: While o1-preview shines in structured reasoning tasks, its struggles with probabilistic reasoning and triage emphasize the irreplaceable value of human expertise.

These gaps point to opportunities for future AI development.The need for new benchmarks: Current evaluation methods, such as multiple-choice question benchmarks, fail to capture the complexity of real-world clinical scenarios. Robust, scalable benchmarks and clinical trials are essential to understand AI’s true potential in healthcare.

Digital Health and "Another" Inflection Point?The o1-preview model may represent a turning point in the integration of AI into medicine. And as we've heard this claim many times, its ability to perform superhuman reasoning tasks without supplemental clinical training is important—as an achievement and a challenge.

As AI continues to evolve, clinicians must adapt to this new reality, embracing AI as a cognitive partner while maintaining the human expertise that defines the art of medicine.2025 doesn't just represent a wake-up call; it may be the beginning of a new era. The question is no longer whether AI will transform medicine, but how clinicians and AI will work together to shape the future of healthcare.

Doctors Say AI Is Introducing Slop Into Patient Care

Read original at Gizmodo

Every so often these days, a study comes out proclaiming that AI is better at diagnosing health problems than a human doctor. These studies are enticing because the healthcare system in America is woefully broken and everyone is searching for solutions. AI presents a potential opportunity to make doctors more efficient by doing a lot of administrative busywork for them and by doing so, giving them time to see more patients and therefore drive down the ultimate cost of care.

There is also the possibility that real-time translation would help non-English speakers gain improved access. For tech companies, the opportunity to serve the healthcare industry could be quite lucrative.In practice, however, it seems that we are not close to replacing doctors with artificial intelligence, or even really augmenting them.

The Washington Post spoke with multiple experts including physicians to see how early tests of AI are going, and the results were not assuring.Here is one excerpt of a clinical professor, Christopher Sharp of Stanford Medical, using GPT-4o to draft a recommendation for a patient who contacted his office: Sharp picks a patient query at random.

It reads: “Ate a tomato and my lips are itchy. Any recommendations?” The AI, which uses a version of OpenAI’s GPT-4o, drafts a reply: “I’m sorry to hear about your itchy lips. Sounds like you might be having a mild allergic reaction to the tomato.” The AI recommends avoiding tomatoes, using an oral antihistamine — and using a steroid topical cream.

Sharp stares at his screen for a moment. “Clinically, I don’t agree with all the aspects of that answer,” he says. “Avoiding tomatoes, I would wholly agree with. On the other hand, topical creams like a mild hydrocortisone on the lips would not be something I would recommend,” Sharp says. “Lips are very thin tissue, so we are very careful about using steroid creams.

“I would just take that part away.” Here is another, from Stanford medical and data science professor Roxana Daneshjou: She opens her laptop to ChatGPT and types in a test patient question. “Dear doctor, I have been breastfeeding and I think I developed mastitis. My breast has been red and painful.

” ChatGPT responds: Use hot packs, perform massages and do extra nursing. But that’s wrong, says Daneshjou, who is also a dermatologist. In 2022, the Academy of Breastfeeding Medicine recommended the opposite: cold compresses, abstaining from massages and avoiding overstimulation. The problem with tech optimists pushing AI into fields like healthcare is that it is not the same as making consumer software.

We already know that Microsoft’s Copilot 365 assistant has bugs, but a small mistake in your PowerPoint presentation is not a big deal. Making mistakes in healthcare can kill people. Daneshjou told the Post she red-teamed ChatGPT with 80 others, including both computer scientists and physicians posing medical questions to ChatGPT, and found it offered dangerous responses twenty percent of the time.

“Twenty percent problematic responses is not, to me, good enough for actual daily use in the health care system,” she said. Of course, proponents will say that AI can augment a doctor’s work, not replace them, and they should always check the outputs. And it is true, the Post story interviewed a physician at Stanford who said two-thirds of doctors there with access to a platform record and transcribe patient meetings with AI so they can look them in the eyes during the visit and not be looking down, taking notes.

But even there, OpenAI’s Whisper technology seems to insert completely made-up information into some recordings. Sharp said Whisper erroneously inserted into a transcript that a patient attributed a cough to exposure to their child, which they never said. One incredible example of bias from training data Daneshjou found in testing was that an AI transcription tool assumed a Chinese patient was a computer programmer without the patient ever offering such information.

AI could potentially help the healthcare field, but its outputs have to be thoroughly checked, and then how much time are doctors actually saving? Furthermore, patients have to trust their doctor is actually checking what the AI is producing—hospital systems will have to put in checks to make sure this is happening, or else complacency might seep in.

Fundamentally, generative AI is just a word prediction machine, searching large amounts of data without really understanding the underlying concepts it is returning. It is not “intelligent” in the same sense as a real human, and it is especially not able to understand the circumstances unique to each specific individual; it is returning information it has generalized and seen before.

“I do think this is one of those promising technologies, but it’s just not there yet,” said Adam Rodman, an internal medicine doctor and AI researcher at Beth Israel Deaconess Medical Center. “I’m worried that we’re just going to further degrade what we do by putting hallucinated ‘AI slop’ into high-stakes patient care.

” Next time you visit your doctor, it might be worth asking if they are using AI in their workflow.

Hospitals trial AI to spot type 2 diabetes risk

Read original at BBC

Two NHS hospital trusts in London are using AI technology to see if they can spot type 2 diabetes in patients up to a decade in advance of the condition occuring. Imperial College and Chelsea and Westminster hospital NHS foundation trusts have started training the AI system - called Aire-DM - that checks patients' ECG heart traces for subtle early warning signs that are tricky for doctors to otherwise detect.

Clinical trials are planned for 2025 to see if it works as well as is hoped. Early work, external suggests the system can spot risk about 70% of the time. Giving the AI extra details about other background risk factors, such as the patient's age, sex and whether they already have high blood pressure and or are overweight, can improve the predictive power, says lead researcher Dr Fu Siong Ng.

He told BBC News: "It is already quite good just with the ECG data, but it is even better when you add in those."An ECG (electrocardiogram) records and can reveal problems with the electrical activity of the heart, including the rate and rhythm.Dr Ng says the ECG changes that the system detects are too varied and subtle for even highly skilled doctors to interpret with the naked eye.

"It's not as simple as saying it's this or that bit of the ECG. It's looking at a combination of subtle things."As part of the trial up to 1,000 patients at both hospitals will have ECG scans read by the AI system to see if it helps detect and predict disease. It's not something that will be offered to routinely yet, although the experts hope it could be rolled out more widely on the NHS.

That could take five years or more, says Dr Ng.The British Heart Foundation, which is funding the work, says detecting people at risk of diabetes could ultimately save lives. Having uncontrolled type 2 diabetes can lead to heart attacks and strokes, for example. Maintaing a healthy weight and eating a healthy diet and exercising can help protect against complications.

Professor Bryan Williams, Chief Scientific and Medical Officer at the British Heart Foundation, said: "This exciting research uses powerful artificial intelligence to analyse ECGs, revealing how AI can spot things that cannot usually be observed in routinely collected health data. This kind of insight could be a gamechanger in predicting future risk of developing type 2 diabetes, years before the condition begins."

Type 2 diabetes is a rapidly growing health challenge that increases the risk of developing heart disease, however with the right support it is possible for people to reduce their risk of developing the condition. We look forward to seeing how this technology could be incorporated into clinical practice."

Dr Faye Riley from Diabetes UK said: “Type 2 diabetes often goes undiagnosed, sometimes for many years. With 1.2 million people in England alone unaware they're living with the condition and millions more at high risk of developing it, identifying those at risk early on is crucial."AI-powered screening methods offer a promising new way to spot those likely to develop type 2 diabetes years in advance, allowing them to access the right support and prevent serious complications, such as heart failure and sight loss.

”Type 2 diabetes is a common condition where the level of sugar (glucose) in the blood becomes too high.It happens if the body cannot make enough of, or cannot correctly use, a hormone called insulin, which controls blood sugar.Some cases are linked to being overweight.That is because fat can build up in and around the pancreas - the organ that makes insulin.

Type 1 diabetes, meanwhile, is an autoimmune disease.

GLOBALink | China leads the world in agricultural technology: Pakistani expert

Read original at Xinhua

<DIV><div datatype="content" data="datasource:20250326b58c44761c29477db4dd41cddf5728a0" id="detail"><p>A Pakistani agricultural expert has praised China for leading the world in agricultural technology, particularly in the use of AI, smart machinery, and other innovative solutions. #GLOBALink</p></div></DIV>

AI-designed ‘nanocages’ mimic viral behavior for enhanced gene therapy

Read original at Phys.org

Cryo-EM analysis of designed de novo protein nanocages. Credit: POSTECH Researchers have developed an innovative therapeutic platform by mimicking the intricate structures of viruses using artificial intelligence (AI). Their pioneering research was published in Nature on December 18.Viruses are uniquely designed to encapsulate genetic material within spherical protein shells, enabling them to replicate and invade host cells, often causing disease.

Inspired by these complex structures, researchers have been exploring artificial proteins modeled after viruses.These "nanocages" mimic viral behavior, effectively delivering therapeutic genes to target cells. However, existing nanocages face significant challenges: their small size restricts the amount of genetic material they can carry, and their simple designs fall short of replicating the multifunctionality of natural viral proteins.

To address these limitations, the research team used AI-driven computational design. While most viruses display symmetrical structures, they also feature subtle asymmetries. Leveraging AI, the team recreated these nuanced characteristics and successfully designed nanocages in tetrahedral, octahedral, and icosahedral shapes for the first time.

The resulting nanostructures are composed of four types of artificial proteins, forming intricate architectures with six distinct protein-protein interfaces. Among these, the icosahedral structure, measuring up to 75 nanometers in diameter, stands out for its ability to hold three times more genetic material than conventional gene delivery vectors, such as adeno-associated viruses (AAV), marking a significant advancement in gene therapy.

Electron microscopy confirmed the AI-designed nanocages achieved precise symmetrical structures as intended. Functional experiments further demonstrated their ability to effectively deliver therapeutic payloads to target cells, paving the way for practical medical applications."Advancements in AI have opened the door to a new era where we can design and assemble artificial proteins to meet humanity's needs," said Professor Sangmin Lee.

"We hope this research not only accelerates the development of gene therapies but also drives breakthroughs in next-generation vaccines and other biomedical innovations."For this study, Professor Lee collaborated with 2024 Nobel Chemistry Laureate Professor David Baker from the University of Washington.

Professor Lee previously worked as a postdoctoral researcher in Professor Baker's laboratory for nearly three years, from February 2021 to late 2023, before joining POSTECH's Department of Chemical Engineering in January 2024.More information:Sangmin Lee et al, Four-component protein nanocages designed by programmed symmetry breaking, Nature (2024).

DOI: 10.1038/s41586-024-07814-1Journal information:Nature Citation:AI-designed 'nanocages' mimic viral behavior for enhanced gene therapy (2024, December 24)retrieved 24 December 2024from https://phys.org/news/2024-12-ai-nanocages-mimic-viral-behavior.htmlThis document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, nopart may be reproduced without the written permission.

The content is provided for information purposes only.

Related Podcasts