
The Dark Side of AI in Biomedical Research Exposed! Find out why Machine Learning Claims are Under Scrutiny – A Potential Threat to Lives?

Amidst the COVID-19 pandemic in late 2020, certain countries faced a shortage of testing kits, prompting interest in using widely available chest X-rays for diagnosis. A team in India reported success in employing artificial intelligence (AI) to distinguish infected individuals by analyzing X-ray images. However, a closer examination by computer scientists at Kansas State University revealed a serious flaw.


The Kansas State researchers trained an algorithm using only blank background sections of the images, devoid of any body parts, and still achieved accurate COVID-19 detection. This exposed a reliance on consistent background differences between the image sources rather than on clinically relevant features, rendering the AI medically useless.
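This failure mode is easy to reproduce in miniature. The sketch below is a hypothetical simulation (all numbers invented, not the study's data): the two classes are drawn from sources whose background brightness differs slightly, and a trivial threshold on a single background pixel then "detects" the disease without ever seeing anatomy.

```python
import random

random.seed(0)

# Hypothetical simulation: two image sources whose *backgrounds* differ.
# Positive images happen to come from a source with brighter backgrounds --
# an acquisition artifact, not a clinical signal.
def background_pixel(label):
    base = 0.6 if label == 1 else 0.4
    return base + random.gauss(0, 0.05)

labels = [random.randint(0, 1) for _ in range(1000)]
data = [(background_pixel(y), y) for y in labels]
train, test = data[:800], data[800:]

# A "classifier" that looks at one blank-background pixel only.
threshold = 0.5
acc = sum((x > threshold) == (y == 1) for x, y in test) / len(test)
print(f"accuracy from background alone: {acc:.2f}")
```

A real model trained on such data learns the acquisition artifact, so its apparent accuracy collapses as soon as images arrive from a different scanner or hospital.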

Similar issues have been discovered across AI image-classification tasks, from cell types to face recognition, where models performed well even on meaningless parts of the images. The Indian team's paper, despite attracting over 900 citations, came under scrutiny along with others, raising concerns about misleading claims in biomedical research. A review of 62 studies on COVID-19 diagnosis using machine learning found that none was clinically useful, owing to methodological flaws or biases in the image datasets.

Machine learning (ML) and AI are powerful tools, but their misinformed application has produced an influx of irreproducible or erroneous research claims. Data leakage, in which information from the test set contaminates training because the two datasets are not adequately separated, has been identified as a significant problem affecting reproducibility across many fields.

Efforts to correct imbalanced datasets, such as applying rebalancing algorithms, can introduce biases of their own and create overly optimistic performance estimates. The lack of standardized reporting and of openness in sharing methods and data exacerbates the issue. Researchers have proposed checklists and protocols for reporting AI-based science to enhance transparency and reproducibility.
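The rebalancing pitfall is mechanical: oversampling copies minority-class examples, and if the copying happens before the train/test split, clones of the same example end up on both sides. A toy sketch with synthetic data:

```python
import random

random.seed(2)

# Imbalanced synthetic dataset: 90 negatives, 10 positives.
data = [((random.random(),), 0) for _ in range(90)] + \
       [((random.random(),), 1) for _ in range(10)]

# WRONG order: oversample the minority class, *then* split.
positives = [d for d in data if d[1] == 1]
oversampled = data + positives * 8           # roughly balanced: 90 vs 90
random.shuffle(oversampled)
train, test = oversampled[:130], oversampled[130:]
overlap = len(set(train) & set(test))        # clones shared by both sides

# RIGHT order: split first, then oversample only the training portion.
random.shuffle(data)
train2, test2 = data[:80], data[80:]
train2 = train2 + [d for d in train2 if d[1] == 1] * 8
overlap2 = len(set(train2) & set(test2))

print(f"train/test overlap -- wrong order: {overlap}, right order: {overlap2}")
```

Every overlapping clone lets a model score a guaranteed hit at test time, which is exactly the source of the overly optimistic estimates described above.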

Challenges persist nonetheless, especially in computational fields like AI, where providing enough detail for full reproducibility is difficult. The problem is compounded in medical research by the scarcity of public datasets suitable for proper model evaluation. Additionally, generative AI systems, capable of creating new data, may introduce artifacts and pose integrity concerns in scientific research.

Addressing these problems requires a cultural shift in how data are presented and reported, with concerns about current publishing incentives and the pressure for attention-grabbing headlines. The reliability of AI-based findings is further compromised by issues like insufficiently documented methods, incomplete code sharing, and the lack of transparency in research practices. Achieving a balance between harnessing the power of AI and maintaining scientific integrity remains a complex challenge.
