Postgraduate Medical Journal
-
Generative conversational artificial intelligence (AI) has huge potential to improve medical education. This pilot study evaluated the feasibility of using a 'no-code' generative AI solution to create 2D and 3D virtual avatars that trainee doctors can interact with to simulate patient encounters. ⋯ By providing realistic scenarios, this technology allows trainees to practice answering patient questions regardless of actor availability, and even from home. Furthermore, the use of a 'no-code' platform allows clinicians to create customized training tools tailored to their medical specialties. While overall successful, this pilot study highlighted some of the current drawbacks and limitations of generative conversational AI, including the risk of outputting false information. Additional research and fine-tuning are required before generative conversational AI tools can substitute for actors and peers.
-
Observational Study
Gastroesophageal reflux disease increases the risk of essential hypertension: results from the Nationwide Readmission Database and Mendelian randomization analysis.
The link between gastroesophageal reflux disease (GERD) and essential hypertension (EH), and whether it is causal, remains controversial. Our study examined the association between GERD and the risk of hypertension and further assessed whether this correlation reflects a causal relationship. ⋯ GERD is a causal risk factor for EH. Further research is required to probe the mechanism underlying this causal connection.
-
A lack of transparency is a prevalent issue among current machine-learning (ML) algorithms used to predict mortality risk. Herein, we aimed to improve transparency by using an explainable ML technique, SHapley Additive exPlanations (SHAP), to develop a predictive model for critically ill patients. ⋯ A transparent ML model for predicting outcomes in critically ill patients using the SHAP methodology is feasible and effective. SHAP values significantly improve the explainability of ML models.
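To illustrate the idea behind SHAP (not the authors' actual model or data), the following is a minimal sketch that computes exact Shapley values for a toy linear risk score in pure Python. The model, feature values, and baseline are all hypothetical; real applications would use the `shap` library on a trained clinical model:

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical linear risk score standing in for a trained ML model
    return 3.0 * x[0] + 2.0 * x[1] - 1.0 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values: each feature's contribution to f(x) - f(baseline).
    Features outside the coalition S are held at their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 3.0]          # hypothetical patient features
base = [0.0, 0.0, 0.0]       # hypothetical baseline (reference patient)
phi = shapley_values(model, x, base)

# Local accuracy (the "additive" in SHAP): contributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (model(x) - model(base))) < 1e-9
```

This exhaustive computation is exponential in the number of features, which is why practical SHAP implementations rely on approximations or model-specific shortcuts; the attributions it returns are what make a prediction "transparent" per feature.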
-
Bar charts of numerical data, often known as dynamite plots, are unnecessary and misleading. Their tendency to distort the perceived position of the mean through the within-the-bar bias, and their lack of information about the distribution of the data, are two of many reasons to avoid them. The machine-learning tool Barzooka can be used to rapidly screen journal articles for different graph types. We aimed to determine the proportion of original research articles using dynamite plots to visualize data, and whether their use has changed over time. ⋯ Our results show that the use of dynamite plots in surgical research has decreased over time; however, their use remains high. More must be done to understand this phenomenon and to educate surgical researchers on data visualization practices.