Introduction
The integration of automation and machine learning (ML) has led to an unprecedented revolution in laboratory medicine.1 This change signals an evolution from conventional manual and semi-automated methods to a digital era characterized by increased consistency, precision, and efficiency. Improving quality assurance (QA) procedures is essential to this transformation since it guarantees accurate diagnosis and patient welfare.2 The infusion of ML into QA has introduced new capabilities, including advanced pattern detection, predictive analytics, and sophisticated data handling,3 effectively navigating the complexities of biomedical data through advanced algorithms.4 However, this swift embrace of cutting-edge technologies also brings to light various challenges. This review delves into these innovations in laboratory medicine, dissecting their impact, roles, and the diverse challenges they introduce, as well as offering strategic approaches to fully leverage their benefits. The exploration of the contemporary laboratory landscape aims to provide a critical analysis of ongoing trends and forecast future directions in the synergy between technology and healthcare QA.5
In the context of laboratory operations, automation is characterized as the application of technology to execute lab processes with minimal human input, aiming at augmenting productivity, minimizing errors, and enabling technicians to concentrate on complex tasks.6–8 This encompasses a spectrum of technologies, from basic automated pipettes to advanced analyzers and robotic handling systems. These systems perform routine and repetitive tasks with exceptional precision and speed, thereby boosting the operational effectiveness of the laboratory.9
The integration of automation into medical laboratory testing will enhance precision, reduce economic burdens, and provide platforms for multidisciplinary teams in the healthcare process. Automation will help to improve outcomes, safety, satisfaction, and the optimal use of healthcare resources.10 For the efficient management of serious clinical cases in the laboratory, new clinical regulatory frameworks and financing models focus on QA, risk management, technology assessment, patient satisfaction, and patient empowerment.11 The expansion of automation in sample collection and testing methods, smartphone health applications and software, reporting, and record-keeping systems has been suggested and implemented to manage chronic diseases.12 Interventions in laboratory medicine have contributed to improved disease control patterns.13 Patient empowerment is a second major trend relevant to this automation loop. Several research studies have indicated that reliable self-management and self-care depend on automated mobile healthcare interventions, which have proven effective in improving patient health outcomes associated with chronic diseases.14 Digital laboratories, smartphone applications, and advanced software have enabled better management and self-monitoring of diseases such as diabetes and cardiovascular disease through glycemic control and blood pressure and oxygen-level estimation at regular intervals. These tools can also digitally connect data from one setup, case, or facility to another using a shared data platform or cloud computing. These factors of automation and digitalization will affect test volumes and regulation and will permit efficient and advanced monitoring in rural settings.15 Artificial intelligence (AI) is an important factor that can influence laboratory medicine activity, for example in managing repeat testing. ML and data analysis integrated with advanced intelligent systems may prove an efficient tool for appropriate test prescription. The integration of digital pathways with primary and secondary healthcare sectors will facilitate efficient value-based healthcare systems.16 Safety and cost-effectiveness are crucial for the reliability and credibility of such digital laboratory environments. Transforming the healthcare system will foster novel human-machine interfaces, although implementation depends on reliable and clear interpretation.17
Conversely, within the sphere of laboratory medicine, ML algorithms are utilized to analyze intricate datasets, identify patterns, forecast outcomes, and aid in decision-making processes.18 This includes predicting sample stability, estimating workload for optimal resource management, or detecting subtle irregularities in test results that might elude human observation. In laboratory medicine, the introduction of automation and ML brings in a new era that redefines conventional lab procedures.19 This transformation allows advanced analytical capabilities, improves production capacity, and streamlines workflow procedures.20 The combination of these two technical spheres profoundly transforms patient care and diagnostics while also enhancing quality assurance procedures.21 However, the adoption of these technologies is not without its challenges. Prominent problems include initial financial investments, data security concerns, potential biases in algorithmic training, and the need for ongoing monitoring to ensure system effectiveness.22 Overcoming these challenges is crucial for the proper utilization of automation and ML in laboratory medicine. In laboratory medicine, where accurate diagnoses are necessary for efficient patient care and therapeutic decision-making, QA is crucial. To guarantee the accuracy and dependability of test results, laboratories implement standardized protocols known as QA.23 The repercussions of faulty results are severe; they may result in an incorrect diagnosis, ineffective treatments, and unfavorable health outcomes, including death. A strong way to support QA is through the incorporation of automation and ML into laboratory procedures. Automation standardizes processes, reducing the possibility of human error, while ML provides labs with cutting-edge instruments for thorough data analysis. This involves better prediction of possible inaccuracies and improved identification of anomalies. Westgard rules are quality-control procedures used to detect errors or deviations in laboratory testing processes. They are designed to monitor assay performance and ensure the reliability of test results.24 AI can help improve the application of Westgard rules in several ways. AI algorithms can identify variability patterns in laboratory test data to predict deviations from standards. For this purpose, ML approaches like clustering or classification may be helpful.25 Such a monitoring system can assess and interpret results rapidly compared with manual methods, saving considerable time.26 These algorithms can also adjust the thresholds used in Westgard rules based on historical data, in contrast to the fixed thresholds of conventional Westgard rules. This can enhance the sensitivity and specificity of laboratory tests by lowering the chances of false-positive and false-negative results.27 Moreover, AI algorithms can be combined with laboratory information systems to identify additional information on clinical cases for better interpretation of lab results.28 They can also be used to predict future disease patterns for better preventive or therapeutic measures. However, it is essential to understand that AI is a complementary option that enhances the efficiency of laboratory medicine alongside existing conventional quality-control methods.
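As an illustration of the rule-based versus data-driven contrast described above, the following Python sketch applies two classic Westgard rules (1-3s and 2-2s) to a day's quality-control values and derives an alternative, adaptive limit from the empirical distribution of historical QC data. All targets, values, and limits are synthetic and purely illustrative, not a validated QC implementation.

```python
# Hypothetical sketch: two classic Westgard rules on daily QC values,
# plus a data-driven limit estimated from historical in-control QC runs.
import numpy as np

rng = np.random.default_rng(0)
target, sd = 100.0, 2.0                      # assumed QC target mean and SD
history = rng.normal(target, sd, 500)        # simulated historical QC results
qc_run = np.array([101.3, 104.5, 97.2, 106.3, 99.0])  # today's QC values (invented)

z = (qc_run - target) / sd
rule_13s = np.abs(z) > 3                     # 1-3s: one value beyond +/- 3 SD
rule_22s = (np.abs(z[:-1]) > 2) & (np.abs(z[1:]) > 2) \
           & (np.sign(z[:-1]) == np.sign(z[1:]))  # 2-2s: two consecutive beyond 2 SD, same side

# Adaptive alternative: derive limits from historical QC data
# rather than fixed multiples of the SD.
lo, hi = np.percentile(history, [0.5, 99.5])
adaptive_flags = (qc_run < lo) | (qc_run > hi)

print("1-3s violations:", qc_run[rule_13s])
print("2-2s violation pairs:", int(rule_22s.sum()))
print("Adaptive-limit flags:", qc_run[adaptive_flags])
```

In practice, the adaptive limits would be re-estimated periodically and validated against the fixed rules before any clinical use.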
The ultimate goal is to uphold a level of service quality consistently aligned with set standards, ensuring that each patient receives trustworthy test findings.29 The demand for these cutting-edge technologies is driven by the necessity for strict quality assurance. However, it also means that labs have to come up with elaborate plans for the verification and maintenance of automated systems and ML algorithms. This involves ensuring that lab staff members are properly trained and skilled in their work. It is imperative to have a dynamic and reliable quality assurance system that is updated and reviewed frequently to incorporate new technology and adapt to evolving clinical requirements.30
AI encompasses two major types: weak AI and strong AI. Weak AI, or artificial narrow intelligence, refers to systems built on a well-established, pre-trained statistical model that executes a specific task. In contrast, strong AI, also known as artificial general intelligence, describes a system that can function intelligently and independently, learning from any available normalized data.31 ML is generally divided into three categories: supervised learning, unsupervised learning, and incremental learning. In supervised learning, the training data include both the inputs and the desired outputs, so that the computer can be trained from labeled examples under the guidance of a "teacher".32 In essence, supervised learning focuses on finding a mathematical function that maps the inputs to the known outputs. Unsupervised learning, on the other hand, can work with unlabeled data, where the algorithm's role is to find patterns in the data; these underlying patterns may reflect categories or latent structure. Supervised learning algorithms include logistic, LASSO, and ridge regression, support vector machines (SVM), random forests, and neural networks (NN), among others. Examples of unsupervised learning include principal component analysis, Laplacian eigenmaps, t-SNE, and autoencoders. In a clinical application by Dawson et al., unsupervised principal component analysis was used to examine whether xerostomia (dry mouth) data could distinguish high- from low-risk patients after parotid gland radiotherapy.33 Intuitively, supervised learning can often classify information better because of the additional guidance from known answers (labels). In this context, unsupervised learning is therefore generally considered a more difficult problem than supervised learning.34
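To make the distinction concrete, the following sketch trains a supervised classifier on labeled synthetic "analyte" data and, separately, applies unsupervised PCA to the same data with the labels withheld. The analyte values and group structure are invented for illustration only.

```python
# Illustrative sketch (synthetic data): a supervised classifier learns from
# labeled input-output pairs, while unsupervised PCA finds structure without labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Two synthetic "patient groups" described by four hypothetical lab analytes.
healthy = rng.normal([5.0, 140, 4.0, 90], 1.0, size=(100, 4))
disease = rng.normal([9.0, 132, 5.5, 60], 1.0, size=(100, 4))
X = np.vstack([healthy, disease])
y = np.array([0] * 100 + [1] * 100)          # labels: 0 = healthy, 1 = disease

clf = LogisticRegression(max_iter=1000).fit(X, y)   # supervised: uses the labels
print("Supervised training accuracy:", clf.score(X, y))

pca = PCA(n_components=2).fit(X)                    # unsupervised: ignores the labels
print("Variance explained by 2 components:", pca.explained_variance_ratio_.sum())
```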
The type of learning in which the ML model is trained on the whole dataset at once is called batch learning. After training is finalized, the algorithm's weights are fixed, and it can analyze new data in the required production setting. New information obtained during production does not alter the fixed weights of the algorithm; hence, the system is no longer learning. The positive aspects of such systems are that they are stable and robust, and their performance and accuracy can be established in advance. The disadvantage is that the system cannot adapt to newly obtained information: to incorporate new data, it has to be retrained from scratch using both the previous and the new data samples, an approach that requires substantial computing resources and is time-consuming. Incremental (online) learning, by contrast, updates the algorithm continuously as new data arrive; its disadvantages are the difficulty of establishing its accuracy in advance and the instability of system performance caused by the continuously changing algorithm, which creates problems for licensing.35
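A minimal sketch of the batch-versus-incremental distinction, using scikit-learn's SGDClassifier, whose partial_fit method updates the model's weights as each new batch arrives (a batch-trained model would instead be refit from scratch on old plus new data). The data and the gentle drift are synthetic assumptions.

```python
# Sketch of incremental (online) learning: weights adjust to each new batch
# of data without full retraining, in contrast to a fixed batch-trained model.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
X_old = rng.normal(0, 1, size=(500, 5))
y_old = (X_old.sum(axis=1) > 0).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_old, y_old, classes=[0, 1])     # initial training

for _ in range(10):                                 # production data keep arriving
    X_new = rng.normal(0.2, 1, size=(50, 5))        # gently drifting distribution
    y_new = (X_new.sum(axis=1) > 1).astype(int)
    model.partial_fit(X_new, y_new)                 # incremental weight update

print("Accuracy on latest batch:", model.score(X_new, y_new))
```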
This review aims to provide a thorough analysis of the impact of automation and ML on quality assurance in laboratory medicine. Our goal is to provide a comprehensive analysis of developments, a realistic assessment of obstacles, and workable plans for implementing these technologies. The assessment includes an examination of present practices, a review of obstacles ranging from data processing to compliance with regulations, and a discussion of approaches for seamless integration into current systems.36 It is important because it offers a thorough and empirically supported examination of the subject, serving as a fundamental reference for laboratory medicine professionals. We emphasize the role that automation and ML play in raising quality assurance standards by examining their transformational potential.37 In addition, we address how to overcome the obstacles preventing their widespread use and provide practical plans of action for all parties involved, such as scientists, policymakers, and laboratory personnel. The purpose of this review is to make a significant contribution to the discussion on technical developments in laboratory medicine, with an emphasis on improving patient care standards.38 This study aims to impact future research, inform policymaking, and foster innovation in laboratory medicine quality assurance by aligning with current research and technological advancements.
Progress in automation within laboratory medicine
Historical background
The historical evolution of automation in laboratory medicine is characterized by pivotal developments that have reshaped the field. It began with basic mechanization, such as the introduction of automated pipetting, and evolved with the introduction of the first automated analyzers in the 1950s. These initial advancements laid the groundwork for a shift from manual, labor-intensive methods to more efficient, automated processes.39 Driven by the need to handle growing test volumes while ensuring accuracy, significant progress included the incorporation of conveyor systems, barcode-based specimen tracking, and the adoption of computerized systems for analyzing test results.40 The transition from manual to automated methodologies not only enhanced throughput but also reduced human error, leading to standardized operations and setting the stage for today’s high-capacity automated systems that are fundamental in contemporary laboratory medicine.41
Present-day applications of automation
In contemporary laboratory settings, automation manifests through an array of advanced systems, encompassing everything from auto analyzers for biochemical tests to robotic arms for precise sample handling.42 These systems are seamlessly integrated into laboratory information management systems, facilitating an efficient workflow from the initial logging of samples to the final delivery of results.43 Quantitatively, automation has resulted in a marked escalation in laboratory throughput.44 Modern auto analyzers are capable of processing hundreds, if not thousands, of samples daily, a volume that would be unfeasible manually.45 Additionally, there has been a significant reduction in error rates: automated systems boast error rates below 1%, in stark contrast to manual methods, whose error rates can exceed 5%.46 This enhancement in accuracy can be attributed to precise control over aspects like sample volume, reagent addition, and reaction timing, along with the implementation of sophisticated detection and analysis technologies.47
Role of ML in the field of clinical chemistry
Machine learning plays a vital role in clinical chemistry, enabling streamlined analytical assay approaches for rapid detection.
Quality review of laboratory results
The pre-analytical phase is a major step in the sample testing process, accounting for up to 70% of errors in laboratory diagnosis. One common mistake is using the wrong tube for blood sample collection, as demonstrated by Rosenbaum and Baron. ML-based multianalyte delta checks show great potential to supersede earlier single-analyte delta checks. The most promising algorithm is an SVM based on variations in laboratory values between sequential collections among eleven commonly measured chemistry analytes. The proposed algorithm achieved an area under the receiver operating characteristic curve of 0.97 in the identification of wrong blood in tube (WBIT) errors and outperformed univariate delta checks. Assuming a 1% prevalence of WBIT errors and 80% test sensitivity, the most accurate univariate delta check achieved a positive predictive value (PPV) of 13%, while the SVM model achieved a PPV of 52%. Factors like hemolysis can affect numerous laboratory parameters. Benirschke and Gniadek developed a multivariate logistic regression model to detect falsely elevated point-of-care (POC) potassium results caused by hemolysis.20
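A hedged sketch in the spirit of the multianalyte delta-check approach described above: deltas between a patient's sequential collections across eleven analytes are fed to an SVM that flags simulated specimen mix-ups. The data generation, class balance, and effect sizes are assumptions for illustration, not the published model.

```python
# Sketch of a multianalyte delta-check classifier for WBIT detection
# (all data synthetic; analyte deltas and noise levels are assumptions).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 2000
# Feature vector = deltas between sequential collections across 11 analytes.
deltas_ok = rng.normal(0, 1, size=(n, 11))      # same-patient pairs: small deltas
deltas_wbit = rng.normal(0, 3, size=(n, 11))    # specimen mix-up: larger, erratic deltas
X = np.vstack([deltas_ok, deltas_wbit])
y = np.array([0] * n + [1] * n)                 # 1 = simulated WBIT error

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
```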
Role of ML in the field of hematology
Peripheral smear reporting
The peripheral smear is the initial step in classifying anemias and diagnosing more than 80 percent of hematological diseases. Several methodologies, such as Bayes classifiers, K-nearest neighbors, multilayer perceptrons, and multiclass SVMs, have been used for the classification of leukocytes. A public dataset of over 17,000 cell images was used to train one such model. Public datasets can support the development of integrated medical laboratory systems in routine clinical laboratories, bypassing some drawbacks of commercially available testing reagents, such as high costs and low sensitivity. Another recent study reported high accuracy in classifying different types of white blood cells and myeloblast classes in acute myeloid leukemia, with sensitivity and precision above 90%, based on a convolutional neural network (CNN). After the classification of white blood cells, CNNs proved helpful in morphologically classifying red blood cells. CNNs can overcome the variable accuracy and limited specificity of commercial analyzers used for red blood cell classification, such as CellaVision, without the need for reclassification by manual operators.48
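For orientation, here is a minimal CNN architecture of the kind used for leukocyte image classification; the input size, layer widths, and five-class scheme are assumptions for illustration and do not reproduce the cited studies' networks.

```python
# Minimal CNN sketch for single-cell leukocyte image classification
# (architecture and class count are illustrative assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 5   # e.g., neutrophil, lymphocyte, monocyte, eosinophil, basophil
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB crops of single cells
    layers.Conv2D(16, 3, activation="relu"),  # learn local morphological features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would call model.fit(images, labels, ...) on a labeled smear dataset.
```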
In the diagnosis of malaria
The gold standard for the laboratory diagnosis of malaria is a microscopic examination of thick and thin stained blood films. Highly trained professionals are required for the microscopic quantification of parasites present in blood and their different life-cycle stages. It is a time-consuming and laborious procedure. Several ML approaches have been reported to distinguish different parasite stages or species and quantify parasitemia. These systems were primarily developed to differentiate infected and non-infected erythrocytes. Molina et al. developed a framework with an accuracy of 97.7%, based on SVM and linear discriminant analysis, that differentiated red blood cells infected with malaria from non-infected cells containing other inclusions, such as Pappenheimer bodies, Howell–Jolly bodies, and basophilic stippling.20 In another study, Li et al. presented a cost-effective and compact automated microscopy platform with an ML approach to detect Plasmodium falciparum parasites in stained blood smears. The system was efficient enough to screen almost 1.5 million red blood cells per minute for parasitemia quantification, with a simulated diagnostic specificity and sensitivity of over 90%.21 Another study showed that logistic regression was the best-performing model, with 92% accuracy in predicting sole Plasmodium infections and 85% in predicting mixed infections of Plasmodium falciparum and Plasmodium ovale.49
Role of ML in molecular diagnostics
The development of highly advanced and complex high-throughput nucleic acid technologies has increased competency in the field of molecular diagnostics. These processes have been enabled by advances in ML. Massive multiplexity requires sophisticated approaches in order to identify analytically valid interpretations and results. Modern next-generation sequencing assays produce high-dimensional, structured datasets that can provide useful prognostic and diagnostic insights. Previously, techniques assessed sequence similarity and often had low efficacy in predicting clinical impact. However, new technologies have been designed for interpretations spanning functional analysis to clinical impact. ML techniques are used to generate interpretations of complicated findings from broad genomic assays, which are available through both clinician-ordered and direct-to-consumer pathways. Molecular diagnostics in laboratory medicine involve probing with nucleic acid sequences and quantifying specific molecules. These omics-oriented tests can support studies including metabolomics, microbiomics, epigenomics, transcriptomics, and proteomics. These tests often include an ML component in the analysis of the raw data produced and processed, often at a large scale. However, the ability to combine multiple sets of -omics data (i.e., multiomics) as a new clinical diagnostic area and integrate high-fidelity phenotypic data represents a challenging data-driven direction for molecular diagnostics.50
Role of ML in the field of immunology and serology
In immunology, imaging-based studies have been combined with immunofluorescence for the identification and classification of anti-neutrophil cytoplasmic antibodies. Currently, there are only a few notable examples of digital imaging in clinical chemistry analysis using mechanical devices. There are also many emerging detection settings where simple and fast equipment is essential. For example, there is significant interest in integrating mass spectrometry systems into the operating room for biochemical analysis of surgical samples. In this newer application, material liberated from tissue (such as gas-phase ionic species or droplets) is collected from surgical instruments and sent to a mass spectrometer. The mass spectrum is then analyzed in real time to quickly perform biochemical analysis. Although this new in vitro diagnostic (IVD) technology is still in development, the method already uses ML to distinguish tissue types, for example hard from soft tissue. Recent publications describe this method for identifying tumors in various tissue types, including ovary, thyroid, and lung.51
Role of ML in the field of microbiology
In the field of microbiology, ML-based automation streamlines repetitive high-volume tasks, allowing laboratory staff to focus on higher-value work. Generally, the workload in the urine section is high because of the testing required to confirm urinary tract infections. Burton et al. reported the application of supervised ML models to determine whether urine samples need to be cultured. Using XGBoost, the authors reported a 41% decrease in culture workload while still identifying 95.2% of samples with positive cultures. An alternative approach is the use of ML for the analysis of digital images. Faron et al. recently evaluated the WASPLab colony segregation software designed by Copan (Brescia, Italy) to automatically detect significant growth in urine cultures on MacConkey and standard blood agars. The authors found that the software, which showed a sensitivity of 99.8%, significantly reduced the diagnostic workload and can be used in microbiology labs for batch review of negative cultures. ML-based analysis of digital images has also been reported for the microscopic interpretation of stained smears, one of the most time-consuming manual tasks in the microbiology lab. Smith et al. designed a system based on a deep CNN and automated image acquisition to automate Gram stain classification. An overall accuracy of 94.9% was achieved for the classification of Gram-positive cocci in chains, Gram-negative rods, Gram-positive cocci in clusters, and background (without cells).48
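The following sketch illustrates the culture-screening idea behind the Burton et al. study: rank samples by predicted probability of a positive culture, then choose a threshold that preserves a target sensitivity and measure the resulting workload reduction. The data are synthetic, and scikit-learn's GradientBoostingClassifier stands in for XGBoost.

```python
# Sketch of ML-based urine culture triage on synthetic data: estimate the
# workload reduction achievable while retaining ~95% of positive cultures.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 6))   # e.g., flow-cytometry counts, WBC, nitrite (assumed features)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1.2).astype(int)  # 1 = positive culture

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

# Pick the screening threshold that keeps ~95% of positives.
thresh = np.quantile(p[y_te == 1], 0.05)
cultured = p >= thresh                     # only these samples go on to culture
print("Sensitivity:", (cultured & (y_te == 1)).sum() / (y_te == 1).sum())
print("Workload reduction:", 1 - cultured.mean())
```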
Detection and identification of microorganisms
The traditional methods for identifying microorganisms and determining their antimicrobial susceptibility are still considered the gold standard. However, these methods are time-consuming, taking several days from Gram staining through culture and antimicrobial susceptibility testing. Conventionally, macroscopic analysis of colony morphology initiates the classification of bacterial species before confirmatory testing using advanced techniques such as mass spectrometry. To decrease workload and provide a reference tool for microbiological analysis, Huang and Wu designed an automatic bacterial colony morphology identification system using both supervised and unsupervised deep CNNs. The authors achieved 73% classification accuracy across all bacterial species (n = 18) and 90% specificity for each bacterial species. ML algorithms are gaining value in the interpretation and analysis of the complex spectral output of different analytical techniques, including matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS), vibrational spectroscopy, and LC–MS/MS. ML models have been designed for the classification of group B Streptococcus serotypes, distinguishing between Shigella species and Escherichia coli, typing Staphylococcus haemolyticus strains, and differentiating between Clostridium species and Klebsiella species. While MALDI-TOF MS is used for microbial identification in the routine clinical microbiology lab, vibrational spectroscopy (i.e., IR and Raman spectroscopy) is gaining interest as an alternative technique for the classification of different microorganisms due to its rapid, nondestructive nature. Lasch et al. proposed Fourier transform-IR hyperspectral imaging in combination with a hierarchical system of artificial neural networks for rapid and highly accurate identification of Gram-positive (S. epidermidis, S. aureus, B. subtilis, B. cereus and E. faecalis) and Gram-negative bacteria (P. aeruginosa, E. coli and C. freundii). Roux-Dalvai et al. designed a fast, culture-free method for identifying fifteen different uropathogenic bacteria using a combination of ML and LC–MS/MS. Within less than four hours, 97% accuracy was reported in the classification of the predominant infecting bacteria.48–50
Detection of antimicrobial resistance
Traditional methods of identifying and testing pathogens can have lasting downstream effects, such as prolonged use of broad-spectrum antibiotics and the spread of disease. Various analytical methods, including MALDI-TOF MS, vibrational spectroscopy, whole-genome sequencing, microscopy-based platforms, and acoustic-enhanced flow cytometry, play an important role in the rapid and reliable detection of antimicrobial resistance. Several research groups have reported the potential of MALDI-TOF MS and ML algorithms in the classification of methicillin-susceptible Staphylococcus aureus (MSSA) and methicillin-resistant Staphylococcus aureus (MRSA). In addition, an ML classifier was developed to distinguish vancomycin-intermediate Staphylococcus aureus (VISA) and vancomycin-susceptible Staphylococcus aureus (VSSA) from heterogeneous VISA (hVISA) and MRSA isolates. Asakura et al. reported an open-access random forest (RF) model with 99% sensitivity and 88% specificity in hVISA classification. Using an RF model on MALDI-TOF MS spectra, Huang et al. identified 93% of carbapenem-resistant bacteria and all carbapenem-sensitive Klebsiella pneumoniae. Additionally, approaches combining MALDI-TOF MS data and ML algorithms have successfully detected extended-spectrum β-lactamase-producing Escherichia coli, identified β-lactamase enzymes in Bacteroides fragilis strains, and identified fluconazole resistance in Candida albicans.48,49,51
Role of ML in the field of blood bank
Blood banks are facilities that collect, store, process, and distribute blood, and they exist to ensure that there is sufficient blood for hospital patients.21 Despite the efforts of different organizations, blood transfusion and safe delivery remain major challenges in blood supply chain management, especially when demand is high.52 Consequently, the primary goals are reducing uncertainty, meeting blood demand, and avoiding blood wastage. The integration of ML algorithms into blood banking management can offer an efficient blood demand and supply chain solution to overcome these challenges and achieve these goals. ML approaches can be used in forecasting models to develop AI- or ML-based decision support systems for forecasting blood demand, classifying blood donors, and establishing blood donation schedules.23 With such systems, blood shortages and wastage can be reduced.24
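As a sketch of how such demand forecasting might look, the following example fits a regression model to synthetic daily blood-issue counts, using the previous week's demand as features; the horizon, feature set, and data are illustrative assumptions rather than a deployed forecasting system.

```python
# Illustrative blood-demand forecasting on synthetic daily issue counts.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
days = np.arange(730)
weekly = 10 + 3 * np.sin(2 * np.pi * days / 7)   # weekday/weekend demand cycle
demand = rng.poisson(weekly)                     # simulated units issued per day

# Lag features: the previous 7 days of demand predict the next day's demand.
X = np.array([demand[i - 7:i] for i in range(7, len(demand))])
y = demand[7:]
split = 600                                      # train on first ~20 months
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("Mean absolute error (units/day):", np.abs(pred - y[split:]).mean().round(2))
```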
Diagnostic algorithms
The following ML diagnostic algorithms are commonly used.
Whether for prevention, early diagnosis, or corrective treatment, combining AI and ML with Internet of Things (IoT)-enabled wireless sensor networks can provide significant benefits in healthcare. Figure 1 describes the clinical management system, while Table 1 summarizes predictions and characteristics of ML algorithms based on relevant research.35–41 Better and more personalized medical services may be offered in the future. ML is an important technology within AI. Because ML algorithms learn known patterns in order to generalize to new ones, they require large amounts of data for training.24,53 The increasing connectivity of laboratory systems, referred to as Lab 4.0 or the Internet of Laboratory Things, brings a number of security and safety risks due to the integration of devices like sensors and systems in critical areas.54 These risks include data breaches, malware and ransomware attacks, unauthorized access and control, IoT device vulnerabilities, and equipment risks. Various measures can be taken to protect laboratory systems against cyber threats, such as network segmentation, authentication and access control, data security and protection, physical access control, and vulnerability analysis. Regularly updating the software and firmware of test equipment, conducting vulnerability analyses, and periodically applying patches to fix system vulnerabilities can reduce the risk of exploitation by attackers.55
Table 1. Summary of characteristics of machine learning algorithms

| Objectives and machine learning tasks | Major themes | Best model | References |
| --- | --- | --- | --- |
| Predicts iron deficiency and serum iron level from CBC indices | Prediction | Neural network | 35 |
| Predicts liver function test results from other tests in the panel, highlighting redundancy in the liver function panel | Prediction, utilization | Tree-based | 36 |
| Predicts ferritin from other tests in the iron panel | Prediction, utilization | Tree-based | 37 |
| Predicts normal reference ranges of ESR for various laboratories based on geographic and other clinical features | Interpretation | Neural network | 38 |
| Classifies lab results as valid or invalid using other lab values and clinical information | Automation and interpretation | Tree-based | 39 |
| Classifies blood specimens as clotted or not clotted based on coagulation indices | Quality control | Neural network | 40 |
| Automatically identifies mislabeled samples | Assurance and quality control | Neural network | 41 |
Quality assurance in laboratory medicine
Quality assurance in the laboratory comprises three phases.
Pre-analytical phase
This is an essential part of quality assurance. Currently, pre-analytical error is considered the largest contributor to error throughout the testing process. It can be alleviated to some extent by point-of-care testing (POCT), but new possibilities emerge as we move to different matrices and new modes of sample collection and storage (e.g., home-collected dried blood and biobanking).56
Analytical phase
All diagnostic modalities have been greatly impacted by advances in POCT, mass spectrometry, and genomics. The number of publications in this field has increased exponentially in recent years, and this growth must continue. The translation of laboratory technology and analytical methods outside the laboratory will also be further developed. This section discusses our predictions regarding the laboratory "screening" of drugs.24,56
Post-analytical phase
Appropriate interpretation of test results forms the basis for clinical decision-making, which is guided by well-established turnaround times and decision limits. This section discusses three predictions regarding the "post-analytical" phase, including the role of cognitive support.
The influence of automation on quality assurance
The advent of automation has notably transformed quality assurance in laboratory medicine, leading to measurable improvements in quality metrics. For instance, the utilization of automated hematology analyzers has standardized blood cell counting, thereby enhancing reproducibility and accuracy while reducing variability across different operators and institutions.57 A comparison of pre- and post-automation scenarios illuminates the profound impact of automation. Before automation, manual microscopy used for cell counting often resulted in a coefficient of variation (CV) of over 10% in some cases.58 In stark contrast, post-automation, automated counters consistently demonstrate CVs of less than 5%, as detailed by Hawkins.59 The efficiency benefits are equally remarkable; tasks that formerly took minutes per sample can now be executed in mere seconds, significantly increasing the number of samples analyzed without sacrificing, and often improving, the accuracy of the results.
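The coefficient of variation quoted above is simply the standard deviation divided by the mean, expressed as a percentage; the following snippet works through the calculation on illustrative (invented) replicate counts.

```python
# Worked example of the coefficient of variation (CV): CV = SD / mean * 100.
# Replicate values below are invented to illustrate the >10% vs <5% contrast.
import numpy as np

manual_counts = np.array([4.1, 4.9, 5.6, 4.4, 5.8])    # WBC x10^9/L, manual microscopy
auto_counts = np.array([4.9, 5.0, 5.1, 4.95, 5.05])    # same sample, automated counter

def cv_percent(x):
    """Sample coefficient of variation as a percentage."""
    return 100 * x.std(ddof=1) / x.mean()

print(f"Manual CV:    {cv_percent(manual_counts):.1f}%")   # ~15%, i.e., >10%
print(f"Automated CV: {cv_percent(auto_counts):.1f}%")     # ~1.6%, i.e., <5%
```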
The emergence of ML in laboratory data analytics
Machine learning has transformed the analysis of raw laboratory data.
Foundational concepts and application in laboratories
ML has emerged as a pivotal force in the field of laboratory data analysis, founded on algorithms that autonomously learn from data, discern patterns, and make informed decisions with minimal human oversight.60 Central to ML is its ability to recognize patterns and forecast outcomes, harnessing algorithms such as neural networks, decision trees, and support vector machines. These are particularly effective for the multivariate and intricate datasets typical in laboratory medicine.47 In the realm of laboratory medicine, the data amenable to ML spans both structured forms, like test results, and unstructured types, such as textual reports and imaging. Structured data is naturally suited for ML processing, enabling predictive analysis in areas like patient outcomes based on laboratory test patterns. On the other hand, unstructured data is amenable to analysis via natural language processing and advanced deep learning methods, which facilitate the extraction of critical clinical insights that can refine diagnostic and prognostic approaches.61
ML’s role in advancing predictive analytics
ML has significantly elevated predictive analytics in laboratory medicine by enabling more nuanced trend analyses and quality predictions. ML algorithms, including random forests and gradient boosting machines, offer powerful tools for unraveling complex interrelations in laboratory data, leading to enhanced predictive models for patient diagnosis and prognosis.49 For example, random forests have effectively predicted patient outcomes by analyzing various laboratory parameters, demonstrating notable accuracy and providing comprehensive insights into variable significance.62 Furthermore, deep learning, particularly through CNNs, has proven highly effective in image-based assays, excelling in tasks like cell classification and anomaly detection. These algorithms often achieve accuracy rates that exceed those of human evaluators. Their continuous learning capability makes them invaluable for perpetually enhancing quality assurance in laboratory practices.9
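A brief sketch of the random-forest use case described above, including the variable-importance readout that supports "insights into variable significance"; the feature names and synthetic outcome are assumptions for illustration.

```python
# Sketch of outcome prediction with a random forest plus feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
features = ["creatinine", "lactate", "crp", "hemoglobin", "albumin"]  # assumed analytes
X = rng.normal(size=(1000, 5))
# Synthetic adverse-outcome label driven mostly by "lactate" by construction.
y = (1.5 * X[:, 1] + 0.8 * X[:, 0] - 0.5 * X[:, 4] + rng.normal(0, 1, 1000) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>10}: {imp:.2f}")   # lactate should rank highest by construction
```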
ML in diagnosis and prognosis
The adoption of ML in laboratory medicine has markedly advanced diagnostic and prognostic capabilities, significantly refining precision and patient care outcomes. An exemplary case is its application in the early detection of diseases such as diabetes.63 Here, ML algorithms surpass traditional methods in predicting disease onset, analyzing patient data to forecast diabetes development with heightened accuracy.64 In oncology, particularly cancer diagnosis, ML’s impact is profound. Models trained on extensive histopathology image datasets have demonstrated high accuracy in identifying cancerous cells, assisting pathologists in quicker and more precise diagnoses. These advancements elevate patient care standards and streamline lab operations by reducing diagnosis time, enabling earlier treatment interventions.65
Future role of total laboratory automation
Many future visions of the laboratory see robotics and mobile devices playing a major role, following predictions in other areas of industry (commerce and business). Mobile robots have been used to transport diagnostic samples, and two-arm robots have been used in scanning processes. Collaborative robots (cobots) are a new class of robots that are safe to operate alongside humans, easy to deploy and train, and inexpensive, designed to handle light payloads (e.g., 5 kg). A hospital in Denmark used two Universal Robots UR5 cobots to sort blood samples. The first cobot takes the sample, places it on the barcode scanner, identifies the cap color (via a camera), and places the tube on a rack according to the cap color. The second cobot collects the racks and places them on the feeder for centrifugation and subsequent analysis. The footprint of these cobots meets the laboratory's space limitations, eliminates the need for a safety cage, allows processing of 7–8 tubes/minute, and lets the laboratory absorb 20% additional samples without the need for additional personnel. Future momentum for the use of robots in the laboratory may come from their increased use in other areas of the hospital. For example, robots are used in surgery, medication and supply distribution, sterilization, drug dispensing, medical care, and patient consultation. Another application of robotics is total laboratory automation (TLA), in the form of analyzers connected directly to sample transport tracks. This is now routine and is unlikely to change in the 2020s. For example, new TLA systems feature two-way, variable-speed magnetic transport modules, multi-view cameras, and radio frequency identification tracking.48,56
Green technologies and sustainability?
Achieving sustainability is a new goal. Climate change and environmental concerns are national and international issues. The general public is increasingly environmentally conscious, often using sustainability arguments to guide their choices and practices. Governments, businesses, and individuals worldwide are striving to ensure their operations are environmentally friendly. This includes "green" projects focused on energy, waste, water, infrastructure development, and housing. Results from selected hospitals that implemented programs to reduce energy use and waste and improve operating-room efficiency suggest that savings from such interventions could exceed $5.4 billion over five years and $15 billion over ten years. In the 2020s, clinical laboratories will have the opportunity to drive and develop new models and practices for sustainability. Laboratory professionals play an important role in creating a healthy environment. This sustainable thinking will ensure efficient and responsible resource use, creating new value for the health mission. Providing safe and cost-effective care to patients and their families must remain the priority, but environmental stewardship can be achieved alongside it. New technologies will play an important role in this quest. AI and data science can help medical laboratories increase efficiency, optimize the use of reagents and resources, and contribute to environmental leadership. AI can improve energy efficiency and measure and manage carbon and water footprints. "Smart" decisions will be encouraged. Indeed, a sustainable approach includes reducing unnecessary testing. Results from a pediatric cardiac study focused on reducing routine blood testing showed positive results on biochemical monitoring and a reduction in carbon dioxide emissions of approximately 17.8 tons at 32-month follow-up. IVD reagent manufacturers are also addressing environmental concerns, working to reduce reagent packaging in order to shrink both carbon and environmental footprints. Consultation and cooperation with IVD stakeholders ensure better supply chain security, reagent production, and production equipment.66
How will POCT evolve?
Predicting the future balance between laboratory testing and self-testing or self-monitoring is challenging. Factors contributing to this evolution include the important role of mobile health (mHealth) and care-related information, the emergence of new diagnostic tools and tests (e.g., medical scanners and toilet-based tests), and integration with medical records. The emergence of smartphones in 2007 changed many aspects of daily life, offering electronic devices with functions such as telephone, photo/video camera, MP3 player, media and weather forecasts, and easy access to information on the Internet. Moreover, the functionality of smartphones can be expanded through downloadable applications, particularly in the health and wellness sectors, which are expected to grow substantially by 2025.
The next phase of POCT evolution involves devices that connect to smartphones, creating diagnostic tools with capabilities ranging from blood tests to ultrasound scans. Another advancement is diagnostic equipment that connects wirelessly to smartphones (e.g., a Bluetooth-connected pregnancy test; the ClearBlue-connected ovulation test). Another use of smartphones in POCT is reading urine test strips with the built-in camera, using color recognition, computer vision, and AI to make accurate measurements across different lighting conditions and devices. These results can then be securely shared and integrated into patients' electronic medical records. The menu of tests based on this technology is expected to expand to include the urine albumin:creatinine ratio. An example of the proliferation of mobile medical devices is the increase in the number of smart devices, clothing, or appliances that have sensors integrated or woven into their structure to provide health information unobtrusively in daily life. Wearable devices include wrist-worn devices (e.g., the Apple Watch for monitoring heart rate and rhythm; bracelets for diagnosing epilepsy), mouth guards (e.g., measuring linear and rotational acceleration, impact location and direction, and counting impacts), wearable garments (e.g., the CardioInsight non-invasive 3D mapping system), and various types of wireless patches (e.g., the Smartcardia patch for vital signs: temperature, pulse, blood pressure, blood oxygen level, and heart rhythm and electrical activity). These patches are wireless and, in some cases, stretchable (e.g., electronic skin with pressure and temperature sensors). Two other new technologies that may impact the future of non-invasive POCT are breath analysis (volatilomics) and speech analysis. Breath contains compounds, particularly volatile organic compounds, whose composition has been linked to disease. The non-invasive nature of breath sampling is attractive for POCT, and many analytical methods have been developed. Speech analysis is relatively new as a diagnostic test; algorithms for speech analysis have been developed, and some success has been achieved in detecting coronary artery disease. In some settings, pharmacists take fingerstick samples, which are then analyzed for up to 21 tests at the pharmacy (for example, panels covering cholesterol, triglycerides, blood glucose, and liver function). The number and reach of POCT are likely to increase in the 2020s.64–66
Challenges in implementing automation and ML
The following challenges are encountered when implementing machine learning algorithms in data management and analysis.
Data management complexities
In implementing automation and ML in laboratory medicine, managing data, especially with regard to patient privacy and security, poses significant challenges.67 The sheer volume of data demands robust encryption and controlled access to prevent unauthorized exposure. Addressing these concerns involves a combination of technological and policy-based solutions: advanced cybersecurity measures and blockchain technology can secure data transactions, while comprehensive policy frameworks and ongoing staff training are essential to ensure adherence to data security best practices, maintaining trust and integrity in laboratory information systems.68
Algorithmic biases and challenges
Algorithmic bias represents a critical challenge in ML applications in laboratory medicine, impacting diagnostic precision and patient outcomes.69 These biases, arising from unrepresentative training data or algorithm design flaws, can lead to systematic errors in patient care. To bolster algorithmic reliability, it is essential to use diverse and representative datasets and perform thorough validation across various population groups.70 Ongoing monitoring of algorithmic outcomes is crucial for identifying and rectifying biases. Implementing explainable AI can demystify algorithmic decisions, helping to identify and mitigate biases and thus enhancing algorithm reliability and trust in AI-driven laboratory processes.71 Overcoming biases resulting from non-representative data is a key challenge in developing and implementing AI approaches in healthcare. When AI algorithms fail to capture patient diversity because of unrepresentative data, they can produce biased recommendations that seriously impact patient care. AI algorithms risk misrepresenting or amplifying healthcare disparities by suggesting different treatment lines based on race, gender, genetic history, and socio-demographic factors. This can lead to wide discrepancies in health standards and outcomes across patient ethnicities, as well as to misdiagnosed clinical conditions, serious therapeutic errors, and incorrect treatment recommendations. A biased AI system compromises legal and ethical obligations and erodes patients' rights and trust in healthcare systems. It can also affect public health by distorting the allocation of health resources, interventions, and decision-making. There is a need to address biases in AI algorithms through expertise, ethics, and collaborative measures, such as the provision of patient data from different demographics and races and the application of bias detection and reduction techniques such as debiasing algorithms. To ensure transparency and accountability in AI development and implementation, resources, assumptions, and vulnerabilities should be disclosed to stakeholders so that multiple perspectives and skills are integrated into the model. By addressing these biases in AI algorithms, the healthcare system can uphold an efficient and comprehensive healthcare approach that improves patient outcomes and increases public trust in AI systems.72
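One concrete form the ongoing monitoring described above can take is a subgroup performance audit. The following sketch compares a model's sensitivity across two synthetic demographic groups; the data, group definition, and decision threshold are illustrative assumptions, not an endorsed fairness methodology.

```python
# Minimal bias-audit sketch: compare sensitivity across demographic subgroups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 4000
group = rng.integers(0, 2, n)                        # 0/1 = two demographic groups
X = rng.normal(size=(n, 4)) + group[:, None] * 0.5   # group-correlated feature shift
y = (X[:, 0] + rng.normal(0, 1, n) > 0.5).astype(int)

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)
for g in (0, 1):
    mask = (group == g) & (y == 1)                   # true positives denominator
    print(f"Group {g} sensitivity: {pred[mask].mean():.2f}")
# A large gap between the groups would flag a potential bias needing investigation.
```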
Navigating regulatory and compliance challenges
The regulatory landscape for ML and automation in laboratory medicine is constantly evolving, shaped by various international standards and national regulations focused on patient safety and data security.57 In the United States, entities like the Clinical Laboratory Improvement Amendments and the Food and Drug Administration (FDA) oversee laboratory testing, including ML and automated applications, mandating comprehensive validation and quality control.73 The challenge in compliance arises from the dynamic nature of ML models, which continually evolve and adapt, potentially surpassing existing regulatory frameworks designed for static devices.74 The development of adaptive regulatory pathways is crucial in maintaining safety and efficacy while encouraging innovation. Striking a balance between technological progress and stringent regulatory compliance is a key obstacle in the widespread integration of these technologies in clinical settings.75
Infrastructural and economic factors
The integration of automation and ML into laboratory medicine involves a complex cost-benefit analysis. This includes initial investments in technology and training, as well as potential modifications to existing workflows.76 Larger institutions often benefit from economies of scale, being able to distribute costs across a high volume of tests. However, the long-term advantages, such as heightened efficiency, error reduction, and potentially superior patient outcomes, can outweigh the initial costs.77 In resource-limited environments, challenges extend beyond just the financial aspects, encompassing infrastructural deficiencies like inconsistent power supplies, internet connectivity issues, and a lack of adequately trained personnel.78 To overcome these hurdles, a holistic approach is needed, one that not only focuses on technological advancement but also considers the local context, including investments in infrastructure and human resources development.79
Applying regulations in the context of ML models presents unique challenges, especially given the evolving nature of these models. Some of these challenges are described here, among which the legal background is important, as the law often struggles to keep up with the rapid growth of ML. As ML models evolve and improve, regulatory frameworks will lag, leaving organizations to contend with outdated or inappropriate policies.80 Second, many regulations, such as the European Union's General Data Protection Regulation (GDPR), require that automated decision-making processes (including those supported by ML models) be described and explained.81 But as ML models become more complex, full disclosure and explanation become more difficult, and it becomes harder for organizations to comply with these requirements. The third challenge is privacy and security of data: regulations such as the Health Insurance Portability and Accountability Act in the United States and the GDPR in the European Union have introduced strict criteria for the protection of sensitive and private information, including the medical records used to train, validate, or run ML models.82 Compliance with these regulations requires strong security for data storage, access, and management. Impartial and fair implementation of regulations will help prevent the injustice and discrimination that ML systems may cause. However, completely removing bias from ML algorithms is not easy. Technology organizations should step forward to recognize, resolve, and monitor biases in line with regulatory requirements, verifying and validating the precision, accuracy, consistency, transparency, and stability of ML models, especially when dealing with complex ML-based deep neural networks. A multidisciplinary approach is needed, connecting ML, data science, governance, and legal and policy decision-making, with flexibility in the regulatory framework for ML algorithms, as standards vary across regions and industries.83
Tactical frameworks for implementing automation and ML
Establishing validation and standardization frameworks
For ML applications in laboratory medicine, implementing robust validation procedures is crucial to ensure consistent and accurate algorithm performance. A multi-tiered validation strategy is advisable, starting with internal validation against historical data, followed by external validation using data from multiple centers.84 This approach is designed to identify and correct potential overfitting and biases that might not be evident in single-center studies. Additionally, setting international algorithmic standards is vital to ensure consistency and interoperability across various systems and institutions.85 Entities like the International Organization for Standardization and the Clinical and Laboratory Standards Institute could expand their laboratory standards to encompass ML applications, covering aspects such as algorithmic transparency, data quality, and performance metrics. Such standardization is key not only for quality assurance but also for facilitating regulatory approvals globally.86
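A minimal sketch of the multi-tiered validation strategy described above: internal cross-validation on the development center's data, followed by evaluation on a distribution-shifted "external" center. All data are synthetic, and the shift is an assumption chosen to illustrate the internal-to-external performance drop that such validation is designed to expose.

```python
# Sketch of internal (cross-validated) vs external (held-out site) validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X_int = rng.normal(0, 1, size=(800, 5))                  # development center data
y_int = (X_int[:, 0] + rng.normal(0, 1, 800) > 0).astype(int)
X_ext = rng.normal(0.4, 1.2, size=(400, 5))              # external center: shifted distribution
y_ext = (X_ext[:, 0] + rng.normal(0, 1, 400) > 0).astype(int)

model = LogisticRegression()
print("Internal 5-fold AUC:",
      cross_val_score(model, X_int, y_int, cv=5, scoring="roc_auc").mean())
model.fit(X_int, y_int)
print("External AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
# A drop from internal to external performance signals overfitting or site bias.
```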
Encouraging cross-disciplinary collaboration
Cross-disciplinary collaboration is essential in maximizing the potential of automation and ML in laboratory medicine.87 Teams comprising data scientists, laboratory technicians, clinicians, and information technology experts are crucial for the development, validation, and implementation of sophisticated analytical tools. A notable example is the team at Beth Israel Deaconess Medical Center, which created an ML algorithm to predict patient risks by integrating laboratory data with electronic health records, achieving enhanced patient outcomes.88 The partnership between IBM Watson Health and Quest Diagnostics is another instance of successful interdisciplinary collaboration, where cognitive computing is applied to vast lab data, advancing the field of precision medicine.89 These initiatives underscore the importance of merging technological expertise with clinical insights for innovation in healthcare.
Fostering educational and training programs
Adapting educational models to include automation and ML is imperative in laboratory medicine. This includes developing curricula that blend data analytics with clinical acumen, as well as modifying continuous professional development programs to keep pace with technological progress.90,91 Online platforms offering micro-credentials provide flexible, targeted learning opportunities in areas like data analysis and system integration, essential for laboratory professionals to maintain competency in these rapidly advancing technologies. Such educational initiatives are key to ensuring the workforce remains adept in the evolving technologies underpinning quality assurance in laboratory medicine.92
Ethical considerations and regulatory evolution
The increasing prevalence of ML systems in laboratory medicine necessitates the development of specific ethical guidelines. These guidelines should focus on aspects like the transparency of algorithmic decisions, informed patient consent for data usage, and ensuring equity in healthcare outcomes, as emphasized by the American Medical Association in its policy on augmented intelligence.93 Concurrently, it is critical to promote proactive policymaking, fostering collaboration between regulators, technologists, and healthcare professionals. Such a collaborative stance ensures that regulatory measures are both informed by and adaptable to the intricacies of ML applications, facilitating safe and effective integration while keeping ethical considerations at the forefront.94 Regulatory frameworks need to be agile, adapting to the fast-evolving field of ML in laboratory medicine, akin to the FDA’s progressive guidelines on digital health.95
The use of AI in healthcare raises potential ethical concerns around patient privacy and data security, including the risk of data breaches. As AI algorithms accumulate and process data from huge numbers of patients, the risk of data breaches surges. Unauthorized and unsupervised access to private medical histories and data can violate privacy, with serious consequences for patients, and can lead to the misuse of data rather than improved health outcomes. Even with strong privacy controls, there is always a future risk of re-identification of patient data. Sometimes patients may not clearly understand how their data will be used for AI-based clinical testing or research, compromising their right to informed consent. Algorithmic bias, data ownership, and regulatory compliance may also interfere with a fair and ethical healthcare system.92,93 A number of approaches and strategies are required to resolve these ethical issues, including strong and reliable data management policies, encryption and anonymization technologies, communication with patients (informed consent) about data security and use, continuous monitoring of algorithmic biases, and strict implementation of regulatory standards. Moreover, an environment of trust and responsibility among healthcare providers, technology developers, and patients is an essential pillar for ensuring the credibility of medical AI while protecting patient privacy and data security.94
Prospective developments
Emerging trends in laboratory medicine
Looking ahead, laboratory medicine is poised for transformative evolution driven by the amalgamation of ML and automation. Anticipated future trends include the development of self-regulating lab systems, which autonomously adjust based on continual data analysis, thereby boosting accuracy and operational efficiency.95 The integration of IoT devices is expected to enable remote monitoring and management of lab processes.77 Moreover, advancements in ML are likely to facilitate predictive diagnostics, using extensive datasets to foresee potential disease outbreaks or patient-specific health risks, paralleling predictive maintenance techniques used in industrial contexts.78 This integration may also catalyze the decentralization of laboratory services, with point-of-care diagnostics becoming more prevalent and requiring minimal human oversight, thereby extending healthcare reach, especially in under-resourced areas.96
Personalized medicine and its public health implications
Personalized medicine, tailored to individual genetic, environmental, and lifestyle profiles, is increasingly becoming a healthcare priority. ML and automation are pivotal in this shift, enabling the intricate analysis of biological data and the identification of targeted treatment pathways.97 These technologies are expected to revolutionize patient care by customizing therapies and predicting individual responses to various treatments.98
Fostering innovation and flexibility
In the evolving field of laboratory medicine, continuous innovation is crucial to uphold the reliability and validity of diagnostic tests amid advancing technologies. As novel tools and methods emerge, the sector must be agile, updating its protocols, introducing new quality control strategies, and equipping professionals with the skills to manage complex equipment and data analyses.99 This adaptability is vital to ensure that technological advancements yield enhanced health outcomes while upholding the accuracy and ethical standards central to laboratory practice.100
Conclusion
This review has explored the progressive integration of automation and ML in laboratory medicine, underscoring its transformative effect on quality assurance. We have traversed the promising prospects offered by this integration, from enhancing diagnostic accuracy to bolstering analytical performance. Yet, this path is laden with challenges such as data management complexities, biases in algorithms, evolving regulatory scenarios, and economic considerations. To address these challenges, we have proposed several strategic measures: implementing stringent validation protocols, encouraging cross-disciplinary collaboration, advancing educational efforts, and crafting ethical guidelines, all aimed at heralding a new era of technological integration. The convergence of ML, automation, and personalized medicine points towards a future where laboratory diagnostics are not just reactive but increasingly predictive and preventive. The responsibility lies with the contemporary scientific community to implement proactive strategies, ensuring that continuous innovation, adaptability, and a collaborative spirit form the foundation of laboratory medicine. Armed with these principles, the field can not only adapt to but also drive the ongoing wave of technological evolution, enhancing patient care and public health.
Declarations
Funding
None.
Conflict of interest
The authors have no conflict of interest related to this publication.
Authors’ contributions
Conceptualization, study design and writing original draft (QUA), data curation (RN), formal analysis (AN), project administration (HS), writing-review and editing (AD), proofreading, editing and corrections (IUM, MI). All authors have made a significant contribution to this study and have approved the final manuscript.