Augmented intelligence is transforming health and fitness

AI is going multimodal, and the world of computer-assisted assessment and decision support is about to accelerate at a breathtaking rate. This article by @erictopol describes the tip of the iceberg.

One of the primary reasons Homo sapiens sits at the top of the food chain is that communication enables cooperation. Another is that the human brain can make decisions and plan ahead based on information processing. The health, fitness and wellness industries depend on good decision-making, communication and cooperation between professionals and their clients, often in the face of uncertainty.

The introduction of large language models (LLMs) is going to transform health and well-being by reducing uncertainty and by taking over some very simple tasks that humans are simply not good at:

1. Ultra-accurate measurement – in pixels and milliseconds, in multiple dimensions (a toy sketch follows this list)

2. Consistent data acquisition – unbiased assessments and re-assessments

3. Uncorrupted recording and recollection of data – for learning and forecasting

4. Correlation between a multitude of interactions – for complex problem-solving

5. Continuous data acquisition – at times and places when the data is most important
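
To make the first point concrete, here is a minimal sketch of measurement "in pixels and milliseconds". It assumes hypothetical pose-estimation output – 2D keypoints in pixels and frame timestamps in milliseconds – and derives a knee angle and its angular velocity. All keypoint values are invented for illustration.

```python
# Toy illustration: a joint angle (from keypoints in pixels) and its angular
# velocity (from timestamps in milliseconds). A real system would smooth
# estimates over many frames; the keypoints below are hypothetical.
import math

def joint_angle(hip, knee, ankle):
    """Angle at the knee (degrees) from three (x, y) keypoints in pixels."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])      # knee -> hip vector
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])  # knee -> ankle vector
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Two consecutive video frames: keypoints in pixels, timestamps in milliseconds
frame_a = {"t_ms": 0,  "hip": (310, 220), "knee": (320, 340), "ankle": (315, 460)}
frame_b = {"t_ms": 33, "hip": (312, 240), "knee": (335, 345), "ankle": (316, 460)}

a1 = joint_angle(frame_a["hip"], frame_a["knee"], frame_a["ankle"])
a2 = joint_angle(frame_b["hip"], frame_b["knee"], frame_b["ankle"])
dt_s = (frame_b["t_ms"] - frame_a["t_ms"]) / 1000.0

print(f"Knee angle: {a1:.1f} -> {a2:.1f} degrees")
print(f"Angular velocity: {(a2 - a1) / dt_s:.1f} deg/s")
```

No human eye can resolve a few pixels of joint displacement across a 33-millisecond frame gap; a machine does it continuously and without fatigue.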

Calculators have been used for decades to make sense of complex calculations and, more recently, ChatGPT for language. Now image recognition and processing are entering the mix with GPT-4, which is capable of working with text, audio, speech, and images – recording, enhancing, analysing, and visualising both still and video images.

"Over the past several years, there has been a torrent of studies that have consistently demonstrated how powerful "machine eyes" can be, not only compared with medical experts but also for detecting features in medical images that are not readily discernable by humans." (1)



As impressive as that may sound…

"…the big shift ahead is the ability to transcend narrow, unimodal tasks, confined to images, and broaden machine capabilities to include text and speech, encompassing all input modes, setting the foundation for multimodal AI." (1)

Granted, much of the input currently comes from books and the internet, but once we put more specific health data – labelled (supervised learning) and fit for high-quality machine learning – through transformer models like GPT-4, LLaMA, PaLM-2 and Bard, we are going to have digital assistants (sometimes called copilots) that help us do our jobs as health and fitness professionals much faster, more accurately, at greater scale and at lower cost. They will also improve our ability to communicate with clients and co-workers, and to coordinate multifactorial assessments and solutions, true to the client-centric biopsychosocial model that we all aspire to but few manage to deliver.
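
As a toy illustration of the labelled-data (supervised learning) idea, the sketch below trains a simple classifier on entirely synthetic, hypothetical client features and clinician-style labels. A real copilot would be built on far richer multimodal data and far larger models; the point is only that labelled examples are what let a machine learn a decision-support task.

```python
# Supervised learning in miniature: labelled examples in, risk estimates out.
# All features, labels and coefficients below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features per client: [age, resting_hr, squat_depth_deg, weekly_sessions]
X = rng.normal(loc=[45, 70, 100, 3], scale=[12, 8, 15, 1.5], size=(500, 4))

# Hypothetical label: 1 = clinician flagged elevated injury risk, 0 = not flagged
risk = 0.03 * X[:, 0] + 0.05 * X[:, 1] - 0.02 * X[:, 2] - 0.4 * X[:, 3]
y = (risk + rng.normal(0, 1, 500) > risk.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on the labelled training set, then check accuracy on held-out clients
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Decision support: estimated probability of the "flagged" label for a new client
new_client = [[52, 78, 95, 2]]
print(f"Estimated risk probability: {model.predict_proba(new_client)[0, 1]:.2f}")
```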

My recommendation to all health and fitness workers is to start integrating technology for decision support NOW or be left in the dark very soon. Health and fitness workers who embrace augmented intelligence now will grow with the technology. The laggards will be stuck in an expensive and unproductive analog past, dreaming of fax machines and waiting rooms, and wondering why they have so many disgruntled clients with misdiagnoses, poor outcomes, long wait times, and chronic illness that should have been prevented in the first place.

Glenn Bilby
MBA, B.Sci (Human Movement), B.Physio

(1) Topol, E. As artificial intelligence goes multimodal, medical applications multiply. Science, 15 Sep 2023, Vol 381, Issue 6663. https://doi.org/10.1126/science.adk6139

 


 