This transcript has been edited for clarity.
Jagmeet P. Singh, MD, MSc: Hi, everyone. I’m Jag Singh from Medscape Cardiology. It’s a pleasure to be here, and I’m glad to have my close friend and colleague from Massachusetts General Hospital, Dr. Steven Lubitz, here with us. Welcome, Steve.
Steven A. Lubitz, MD, MPH: Thank you, Jag.
Singh: Steve is an associate professor of medicine at Harvard Medical School and a clinical electrophysiologist. Just yesterday, he presented the top-line results of a late-breaking clinical trial, the Fitbit Heart Study. We are very happy to hear about it from him.
Steve, for those who were not there for yesterday’s presentation, I was hoping you could give us an overview of the study design and the results.
Fitbit Heart Study
Lubitz: Of course, happy to. Thank you, Jag. As background, we know that undiagnosed atrial fibrillation and flutter can cause morbidity, including disabling stroke, which can be prevented with early detection of atrial fibrillation. Many individuals these days wear smartwatches or fitness trackers equipped with optical sensors that measure photoplethysmography (PPG) signals to detect heart rate. Software algorithms can now be applied to these PPG signals to infer the presence of atrial fibrillation.
We evaluated a new Fitbit software algorithm that examines frequent, overlapping PPG signals collected by Fitbit wearable devices to detect undiagnosed atrial fibrillation. We tested the positive predictive value of the algorithm for concurrent undiagnosed atrial fibrillation in a large remote clinical trial of existing wearable device users in the United States.
In essence, participants who signed up were evaluated for the presence of an irregular pulse using this algorithm, and those in whom the algorithm detected an irregular rhythm were sent a notification and invited to schedule a visit with a telehealth provider. If they were eligible, they were sent a 1-week ECG patch monitor, which they applied themselves and returned to us for analysis.
Singh: There were, if I remember correctly, about 460,000 participants. That’s a huge number. Before we get further into the results, what were the challenges with remote recruitment, remote consent, and remote follow-up? You’ve changed the landscape of clinical trials out there, even of remote engagement. What did you experience, and what can future clinical trials in this arena learn from yours?
Lubitz: I think that’s a good question. That’s right – over 455,000 participants signed up. We’re grateful to all of the participants who engaged with the study, which enrolled over about a 5-month period in the midst of a pandemic.
There were a number of challenges. One was that there was overwhelming enthusiasm for participating in the study, which is reflected in that sample size. We were initially unsure what the level of engagement or sign-up would look like, but it was overwhelming.
As we moved through the study flow, we saw some attrition among participants who had signed up. This has been observed before in other large remote clinical trials. It presents a real challenge, I think, because in remote trials we do not have the warm, face-to-face, person-to-person interaction. These trials are not embedded in the health care system and with the providers that participants are used to interacting with, so we lose that personal degree of engagement. That remains a great challenge for remote trials in the future, I think.
Comparisons with the Apple Heart Study
Singh: That’s very helpful. One of the other big studies, which everyone knows about, was the Apple Heart Study, which also evaluated a PPG algorithm. What do you think the Fitbit Heart Study contributes beyond what we have already learned from the Apple Heart Study? Besides validating the algorithm, are there other takeaway messages we should have?
Lubitz: I think there are some common themes between the studies and some differences that we can highlight. The biggest common theme is that these algorithms that analyze PPG waveform data can accurately detect atrial fibrillation.
In our study, we saw a very high positive predictive value of 98% for concurrent atrial fibrillation when an irregular heart rhythm was detected during simultaneous wear of the ECG patch monitor. That is higher than has been reported for other algorithms to date.
We also saw that among those who had an irregular heart rhythm detection and then returned an ECG patch monitor after wearing it, about 32% had atrial fibrillation confirmed during the period they were wearing the patch. That can be compared with what the Apple Heart Study observed.
In our study, we also examined the burden of atrial fibrillation. We observed that participants who had atrial fibrillation during ECG patch monitoring had a median burden of 7%, which is not trivial. In other studies where patch monitors were used to screen for atrial fibrillation without any kind of irregular pulse waveform prescreening beforehand, as we did in this study, the burden averages only about 1% or so.
In those studies, atrial fibrillation is detected in only up to about 5% of participants, whereas we saw 32% in our study. That is marked enrichment for atrial fibrillation, and enrichment for people who also have a significant burden of atrial fibrillation.
Singh: One thing that stands out with all of these PPG algorithms is that they do not record ECGs at the same time. Many of these trackers and smartwatches have the ability to do that, but most of these algorithms, I would say, can confirm atrial fibrillation only if it’s more than 30 minutes in duration, right? The algorithm also works better at night, when patients are inactive and not moving their wrists.
Talk to me about the implications of the way the algorithm currently works for large-scale population surveillance studies. Will it or will it not play a role?
Role of Wearables for Patients
Lubitz: I think that’s a good question, too. There are some practical implications here for clinicians who end up talking to their patients about the use of these devices, and for consumers who have these devices: these algorithms detect atrial fibrillation most effectively when wearers are not active. This minimizes the interference and artifacts that can occur during periods of movement and activity.
It is during rest periods that these algorithms are most likely to collect analyzable data and to be useful, and that is important for participants and users to know. If they can wear the device during sleep, so much the better, because the probability of detecting atrial fibrillation may be highest during sleep.
On a broader scale, what we now know is that when these algorithms detect an irregular rhythm, there is a reasonably high probability that the person has atrial fibrillation, and that justifies serious consideration by a clinician. We need to figure out how to bridge the gap between a user receiving a notification of an irregular heart rhythm detection and getting them connected with the right kind of care.
We also need to figure out the right ways for the medical community to monitor and evaluate these types of irregular heart rhythm detections, because we do not have firm guidance. We know what we did in these studies, and that is one way to evaluate these detections, but we really need a more robust way to integrate these data into our existing healthcare structures, and to think about new healthcare structures to handle these data in the future.
Singh: I could not agree with you more. I definitely think it’s a great first step. At the same time, giving watches to patients who are asymptomatic and have episodes of atrial fibrillation lasting less than 30 minutes (which we know can still be clinically meaningful in some patients with high CHA2DS2-VASc risk scores who are prone to stroke) could give them a false sense of security that they do not have atrial fibrillation. I certainly think this needs to be put into perspective, as you just said, and we need larger, longer-term outcome studies in specific populations to really understand what the clinical role and value of this algorithm in larger populations will be.
One thing I noticed that is phenomenal is that 70% of the recruited participants were women. Congratulations to you and your team for that. I think it’s probably one of very few studies – you can count them on one hand – where women have been recruited in much greater numbers than men. Was that deliberate, or did it happen organically? Can you give us some insight into how it happened?
Lubitz: It happened more or less organically. It may reflect the demographics of the user base, and willingness to participate in this type of remote clinical trial may reflect other predilections of that user base. We were also able to recruit about 13% of individuals aged 65 years or older, which is obviously an important subgroup given their increased risk of stroke should atrial fibrillation be detected.
To reiterate your point, we really need to think about equity, especially with remote clinical care and mobile technology, including smartwatches, fitness trackers, and these types of algorithms. In clinical trials and in the real world, we also need to think about disparities in healthcare, which this technology can exacerbate if we are not thoughtful.
Singh: Steve, that’s great. On that wonderful note about diversity, equity, and remote care, I would like to congratulate you and your team on a fantastic study that will truly change the landscape of how we practice clinical care and take care of patients with atrial fibrillation.
Thanks again. It has been great to have you here today. Take care.
Lubitz: Thank you, Jag. I appreciate it.