Take a deep breath: your smartphone could help measure blood oxygen levels at home

First, pause and take a deep breath.

When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transport throughout our bodies. Our bodies need a steady supply of oxygen to function, and healthy people maintain an oxygen saturation of at least 95% virtually all the time.

Conditions like asthma or COVID-19 make it harder for bodies to absorb oxygen from the lungs. This leads to oxygen saturation percentages that drop to 90% or below, an indication that medical attention is needed.

In a clinic, doctors monitor oxygen saturation using pulse oximeters — those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.

In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.

The technique involves participants placing their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially bring their blood oxygen levels down, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
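
As a rough illustration of that kind of check (not the authors' actual code), a prediction pipeline might flag any estimated saturation at or below 90% as "low oxygen" and compare that flag against the reading from a clinical pulse oximeter; the function name, threshold handling and example values below are hypothetical.

```python
import numpy as np

def low_oxygen_accuracy(predicted_spo2, reference_spo2, threshold=90.0):
    """Compare model predictions with pulse-oximeter readings.

    A reading at or below `threshold` percent saturation is flagged as
    "low oxygen"; accuracy is the fraction of samples where the model's
    flag agrees with the reference device's flag.
    """
    predicted_low = np.asarray(predicted_spo2) <= threshold
    reference_low = np.asarray(reference_spo2) <= threshold
    return float(np.mean(predicted_low == reference_low))

# Hypothetical example values, not data from the study.
print(low_oxygen_accuracy([95, 88, 92, 85], [96, 87, 89, 86]))  # 0.75
```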

In the study, participants place a finger over a smartphone's camera and flash, and a deep-learning algorithm deciphers blood oxygen levels from the blood flow patterns in the resulting video. Photo: Dennis Wise/University of Washington

“Other smartphone apps that do this were developed by asking people to hold their breath. But people get very uncomfortable and have to breathe after a minute or so, and that’s before their blood-oxygen levels have gone down far enough to represent the full range of clinically relevant data,” said co-lead author Jason Hoffman, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With our test, we’re able to gather 15 minutes of data from each subject. Our data shows that smartphones could work well right in the critical threshold range.”

Another benefit of measuring blood oxygen levels on a smartphone is that almost everyone has one.

“This way you could have multiple measurements with your own device at either no cost or low cost,” said co-author Dr. Matthew Thompson, professor of family medicine in the UW School of Medicine. “In an ideal world, this information could be seamlessly transmitted to a doctor’s office. This would be really beneficial for telemedicine appointments or for triage nurses to be able to quickly determine whether patients need to go to the emergency department or if they can continue to rest at home and make an appointment with their primary care provider later.”

The team recruited six participants ranging in age from 20 to 34. Three identified as female, three identified as male. One participant identified as being African American, while the rest identified as being Caucasian.

To gather data to train and test the algorithm, the researchers had each participant wear a standard pulse oximeter on one finger and then place another finger on the same hand over a smartphone's camera and flash. Each participant had this same setup on both hands simultaneously.

“The camera records how much that blood absorbs the light from the flash in each of the three color channels it measures: red, green and blue,” said senior author Edward Wang, who directs the UC San Diego DigiHealth Lab. “Then we can feed those intensity measurements into our deep-learning model.”
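
A minimal sketch of that first step, assuming OpenCV is used to read the fingertip video (the file name is a placeholder, and the study's actual preprocessing is not described here):

```python
import cv2
import numpy as np

def mean_rgb_per_frame(video_path):
    """Return an (n_frames, 3) array of mean red, green and blue
    intensities, one row per frame of the fingertip video."""
    capture = cv2.VideoCapture(video_path)
    intensities = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # OpenCV stores frames as BGR; reverse to red, green, blue.
        intensities.append(frame.reshape(-1, 3).mean(axis=0)[::-1])
    capture.release()
    return np.array(intensities)

signal = mean_rgb_per_frame("fingertip_recording.mp4")  # hypothetical file
```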

Each participant breathed in a controlled mixture of oxygen and nitrogen to slowly reduce oxygen levels. The process took about 15 minutes. For all six participants, the team acquired more than 10,000 blood oxygen level readings between 61% and 100%.

“Smartphone light can get scattered by all these other components in your finger, which means there’s a lot of noise in the data that we’re looking at,” said co-lead author Varun Viswanath, a UW alumnus who is now a doctoral student advised by Wang at UC San Diego. “Deep learning is a really helpful technique here because it can see these really complex and nuanced features and helps you find patterns that you wouldn’t otherwise be able to see.”
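
The article does not describe the model's architecture, but as an illustrative sketch, a small one-dimensional convolutional network over the three-channel intensity time series could map a window of frames to a single saturation estimate. Everything below (layer sizes, window length, frame rate) is an assumption for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class SpO2Estimator(nn.Module):
    """Toy 1-D CNN: input is (batch, 3 channels, n_frames) of mean
    red/green/blue intensities; output is one saturation estimate."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

model = SpO2Estimator()
dummy_window = torch.randn(1, 3, 300)   # e.g. ~10 s of 30 fps video (assumed)
print(model(dummy_window).shape)        # torch.Size([1, 1])
```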


Mass media linked to childhood obesity

A task force from the European Academy of Paediatrics and the European Childhood Obesity Group has found evidence of a strong link between obesity levels across European countries and childhood media exposure. The experts’ review is published in Acta Paediatrica.

The findings indicate that parents and society need a better understanding of the influence of social media on dietary habits. In addition, health policies in Europe must take account of the range of mass media influences that promote the development of childhood obesity.

“Parents should limit TV viewing and the use of computers and similar devices to no more than 1.5 hours a day, and only if the child is older than four years of age. Moreover, paediatricians should inform parents about the general risk that mass media use poses to their children’s cognitive and physical development,” said senior author Dr. Adamos Hadjipanayis, of the European Academy of Paediatrics.

New software can detect when people text and drive

Computer algorithms developed by engineering researchers at the University of Waterloo can accurately determine when drivers are texting or engaged in other distracting activities.

The system uses cameras and artificial intelligence (AI) to detect hand movements that deviate from normal driving behaviour and grades or classifies them in terms of possible safety threats.

Fakhri Karray, an electrical and computer engineering professor at Waterloo, said that information could be used to improve road safety by warning or alerting drivers when they are dangerously distracted. And as advanced self-driving features are increasingly added to conventional cars, he said, signs of serious driver distraction could be employed to trigger protective measures.

“The car could actually take over driving if there was imminent danger, even for a short while, in order to avoid crashes,” said Karray, a University Research Chair and director of the Centre for Pattern Analysis and Machine Intelligence (CPAMI) at Waterloo.

Algorithms at the heart of the technology were trained using machine-learning techniques to recognize actions such as texting, talking on a cellphone or reaching into the backseat to retrieve something. The seriousness of the action is assessed based on duration and other factors.
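
As a hedged sketch of how such a grading step might look (the actual CPAMI system is not described in detail here; the action names, risk weights and thresholds below are invented for illustration), a per-frame action classifier's output could be aggregated over time and scored by how long the distracting action persists:

```python
from collections import Counter

# Hypothetical base risk weights for actions the classifier can recognise.
ACTION_RISK = {"normal_driving": 0.0, "talking_on_phone": 0.5,
               "texting": 0.8, "reaching_backseat": 1.0}

def grade_distraction(frame_labels, fps=30.0):
    """Aggregate per-frame action labels into a severity grade.

    Severity grows with both the base risk of the dominant distracting
    action and how long (in seconds) it lasted.
    """
    counts = Counter(label for label in frame_labels
                     if label != "normal_driving")
    if not counts:
        return "safe", 0.0
    action, n_frames = counts.most_common(1)[0]
    duration = n_frames / fps
    score = ACTION_RISK[action] * min(duration / 5.0, 1.0)  # cap at 5 s
    grade = "high" if score > 0.6 else "moderate" if score > 0.2 else "low"
    return grade, score

# Three seconds of texting followed by one second of normal driving.
print(grade_distraction(["texting"] * 90 + ["normal_driving"] * 30))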

That work builds on extensive previous research at CPAMI on the recognition of signs, including frequent blinking, that drivers are in danger of falling asleep at the wheel. Head and face positioning are also important cues of distraction. Ongoing research at the centre now seeks to combine the detection, processing and grading of several different kinds of driver distraction in a single system.

“It has a huge impact on society,” said Karray, citing estimates that distracted drivers are to blame for up to 75 per cent of all traffic accidents worldwide.

Another research project at CPAMI is exploring the use of sensors to measure physiological signals such as eye-blinking rate, pupil size and heart-rate variability to help determine if a driver is paying adequate attention to the road.
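
One simple, purely hypothetical way to combine such signals into a single attention estimate (the baselines, weights and thresholds below are placeholders, not values from the CPAMI research):

```python
def attention_score(blink_rate_hz, pupil_diameter_mm, hrv_rmssd_ms):
    """Combine physiological signals into a rough attention score in [0, 1].

    The individual cues are normalised against assumed 'alert' baselines;
    a low score suggests the driver may not be attending to the road.
    """
    # Frequent blinking, constricted pupils and low heart-rate variability
    # are treated here as signs of drowsiness or inattention (illustrative).
    blink_penalty = min(blink_rate_hz / 0.5, 1.0)            # 0.5 Hz = very frequent
    pupil_factor = min(max(pupil_diameter_mm / 4.0, 0.0), 1.0)
    hrv_factor = min(max(hrv_rmssd_ms / 50.0, 0.0), 1.0)
    return max(0.0, (pupil_factor + hrv_factor) / 2.0 - 0.5 * blink_penalty)

print(attention_score(blink_rate_hz=0.2, pupil_diameter_mm=3.5, hrv_rmssd_ms=40))
```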

Karray’s research — done in collaboration with PhD candidates Arief Koesdwiady and Chaojie Ou, and post-doctoral fellow Safaa Bedawi — was recently presented at the 14th International Conference on Image Analysis and Recognition in Montreal.