We often use analogies to health and medicine to talk about financial outcomes for consumers and businesses. We talk about giving ourselves an annual financial "checkup" that examines income, assets, and savings. We describe behaviors like having insurance or being able to pay bills on time as symptoms of "financial health." We think about "chronic" or "acute" financial diseases or injuries (failure to plan versus sudden life change) that must be addressed over the long term or immediately.
So, it makes sense that the health care analogy also applies when thinking about generative AI in payments. Here are four lessons from medicine for those of us working in payments.
End users are self-prescribing
Change is happening fast and outside of expected channels. For example, medical students are picking up ChatGPT to back up all that memorizing or to practice test questions. Is this a useful tool, or does it sidetrack students from gaining the skills they need to be doctors?
We see the same thing in financial services and payments. According to Motley Fool, some people already are using ChatGPT to choose credit cards, banks, insurance brokers, and more. The sudden popularity of generative AI shows that we end users are not waiting for systems to be tested, explainable, or safe before we try them—and when we abdicate decision-making, we may miss an opportunity to educate ourselves about money management.
Human experts' communication is poor
Generative AI makes us feel like we are chatting with a real and charming person. For me, that's much of the allure. I recently read that some doctors are using ChatGPT to help them show more empathy for patients. On the face of it, that sounds horrible: empathy as a deepfake. In practice, however, the task turned out to center on using less technical language. The result: Cutting out the medical jargon helped people feel more connected and cared for.
For financial institutions, as for medicine, more personal and straightforward communication is needed. An instruction to Google's Bard could be something like, "In the style of my fourth-grade teacher, Mrs. Owens, please describe this pricing table for overdraft protection." Think of all the efforts at "plain English" communications for financial services and how they could improve.
Fairness and social benefit must be defined by humans
As with all kinds of technology, there will be variability in the quality of training data sets and implementation: garbage in, garbage out. For example, COVID-19 prediction models designed to allocate intensive care unit beds and ventilators at the height of the pandemic used training data that reflected existing racial bias in health care delivery, potentially worsening disparities for people of color.
This research into medical outcomes has implications for credit scoring, where poorly designed or opaque training data have the potential to introduce bias into credit card approvals and friction into account opening. It's critical to understand the inputs, as made clear by a recent FTC investigation into whether a generative AI tool has created statements about real individuals that are "false, misleading or disparaging." Otherwise, depending on the model, you could end up with digital redlining—that is, discriminatory targeting of financial products and services.
Distrust the hype cycle
Medical professionals are moving cautiously on AI when it comes to diagnosis and treatment; for now, they see applications in patient records, patient communications, and insurance claims.
Payments professionals also should take their time rather than rush to bolt generative AI onto existing technologies. As our doctors pledge, "First, do no harm."