AI can help developers work more efficiently and meet deadlines, but it takes additional tools to build ethical AI: AI that makes transparent, explainable decisions rather than dragging users into murky ethical waters.
That's according to Don Schuerman, Pegasystems' chief technology officer and vice president of product strategy and marketing, who said Pega has built anti-bias and explainability mechanisms into its own AI. As a result, Schuerman said, Pega's AI avoids some of the problems that have dogged AI-driven development tools, such as sex discrimination and copyright infringement.
In Schuerman's view, transparency and explainability are vital to avoiding bias in AI decision-making, especially for organizations that must meet regulatory requirements, such as financial institutions subject to fair lending laws.
In this Q&A, Schuerman discusses the challenges of AI development, how ethical AI can take customer service back to the 1970s and what's next for Pega.
How is Pega’s approach to AI different from other AI systems, such as GitHub Copilot, that have copyright issues?
Don Schuerman: We don't train global models that are then applied generally to every single client we have. We provide a decision engine into which our clients put their own data and then build models unique to their business.
Another term you hear a lot in the industry now is explainability, which is important when you're making decisions about how you interact with a customer: what products you offer, whether to suggest a particular service or how you assess the risk of a particular transaction. You need to be able to explain how you made the decision, both for organizational reasons and, in some cases, because it's important that users learn to trust the models and can see how decisions are made.
We built [what] we call the "transparency switch," which allows you to control the level of transparency and explainability of your models. Let's say I'm selecting the type of image that will be shown for a particular ad offer; I probably don't need much explainability behind choosing that image. But if I'm deciding whether to offer someone a loan, boy, it had better be really explainable why and how I made that decision.
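Pega has not published the internals of this switch, so the sketch below illustrates only the general pattern, not Pega's implementation. Every name in it (the Transparency levels, the decision-type policy, the model registry) is a hypothetical stand-in: each decision type declares a minimum required transparency level, and only model families that meet it are eligible.

```python
# Hypothetical sketch of a "transparency switch"; all names are
# illustrative assumptions, not Pega's actual API.
from enum import IntEnum

class Transparency(IntEnum):
    OPAQUE = 1             # e.g., picking an ad image: little explanation needed
    MODERATE = 3
    FULLY_EXPLAINABLE = 5  # e.g., loan decisions: must show why and how

# Policy: each decision type carries a minimum required transparency level.
REQUIRED_LEVEL = {
    "ad_image_selection": Transparency.OPAQUE,
    "loan_offer": Transparency.FULLY_EXPLAINABLE,
}

# Each candidate model family advertises how explainable it is.
MODEL_TRANSPARENCY = {
    "gradient_boosted_trees": Transparency.MODERATE,
    "deep_neural_net": Transparency.OPAQUE,
    "scorecard": Transparency.FULLY_EXPLAINABLE,  # additive points, auditable
}

def allowed_models(decision_type: str) -> list[str]:
    """Return only the model families transparent enough for this decision."""
    needed = REQUIRED_LEVEL[decision_type]
    return [m for m, level in MODEL_TRANSPARENCY.items() if level >= needed]

print(allowed_models("ad_image_selection"))  # all three model families qualify
print(allowed_models("loan_offer"))          # only the auditable scorecard
```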
One of the biggest challenges in the world of AI is that AI is trained on data, and data can contain, for better or worse, our society's biases. So even if your AI predictors, the pieces of information you use, don't directly map to protected categories, you can still make decisions or pick up on trends that correlate with something protected, such as race or sexual orientation. That's why we built ethical bias testing into the platform, so that our customers have the tools to test and validate their algorithms to ensure that they don't discriminate.
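The proxy effect Schuerman describes can be checked numerically. Below is a minimal, self-contained illustration, not Pega's ethical bias test: it computes the disparate impact ratio (the four-fifths rule used in U.S. fair lending analysis) across groups for a protected attribute the model never received as an input. The data is made up for the example.

```python
# Illustrative bias check, not Pega's implementation: even when a protected
# attribute is excluded from the model's features, its decisions can still
# correlate with that attribute through proxies (e.g., ZIP code).
def disparate_impact(decisions, groups, favorable=1):
    """Ratio of favorable-outcome rates between groups. Under the
    four-fifths rule, values below 0.8 are commonly treated as
    evidence of adverse impact."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(o == favorable for o in outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates

# Loan approvals (1 = approved) alongside a protected attribute the
# model never saw as an input.
approved = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
group    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact(approved, group)
print(rates)  # {'A': 0.8, 'B': 0.2}
print(ratio)  # 0.25, well below 0.8, so this model would fail the test
```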
But the people who created the ethical bias testing feature have their own biases. So how do you make sure that feature itself isn't biased?
Schuerman: I think part of that is getting as broad a range of perspectives into the conversation as possible, both from our in-house developer community and from our client community, which is our advisory community: the people we talk to across the industry.
The first step in addressing our biases is to recognize them. That won't guarantee that no customer in the world will ever do something that doesn't align with bias best practices, but we do put it front and center. We ask people to think about it when they're deciding how to deploy their AI models. We also ask people to think about empathy for customers.
If you give customers a lot of AI-generated suggestions, won't they tune out?
Schuerman: One of the clients we work with talks about his goal of bringing banking back to the 1970s. What he meant by that was not that everyone should wear bell-bottoms and go to the disco, but that you would use AI, as far as possible, to restore the personal relationship you once had with your local banker, someone you knew as an individual and saw at the local branch every week. You don't have that anymore.
We need to use AI tools to bring that knowledge and understanding of the customer to whoever the customer is dealing with in the company. It might be someone in the call center. It might be someone in the branch who started this week. The customer might be in self-service. We still convey that "I understand you, I know you, I'm aware of your goals and needs" [message]. What we're trying to do is make it a human experience, but at scale.
What can last month’s release of Pega Infinity 8.8 do for developers that they couldn’t do before?
Schuerman: This latest release gives developers the ability to apply AI decision-making at scale across a process, to see: "Are there opportunities to drive efficiency? Can I predict that a process is going to miss its service-level [agreement]? Wait, we're going to miss a deadline." They have an AI model that predicts this and automatically escalates the process before they miss the deadline and either have to explain it to the client or, at worst, run afoul of regulators.
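As a rough sketch of the predict-and-escalate pattern Schuerman describes (not Pega Infinity's actual API; the case fields and the constant-throughput "model" here are assumptions), the idea is to compare forecast remaining effort against the time left in the SLA window and escalate before the breach happens:

```python
# Schematic of predict-and-escalate; names and the toy risk model are
# illustrative, not Pega Infinity's API.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    hours_remaining: float  # time left before the SLA deadline
    steps_remaining: int    # work items still open on the case

AVG_HOURS_PER_STEP = 4.0    # assumed historical throughput

def predicted_to_breach(case: Case) -> bool:
    """Predict an SLA miss by comparing forecast effort to time remaining.
    A production system would use a trained model instead of a constant."""
    forecast = case.steps_remaining * AVG_HOURS_PER_STEP
    return forecast > case.hours_remaining

def monitor(cases: list[Case]) -> None:
    for case in cases:
        if predicted_to_breach(case):
            # Escalate *before* the deadline passes, not after.
            print(f"Escalating {case.case_id}: forecast exceeds SLA window")

monitor([Case("PAY-101", hours_remaining=6, steps_remaining=3),
         Case("PAY-102", hours_remaining=48, steps_remaining=2)])
```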
What's Pega's next step in the world of ethical AI?
Schuerman: We work with a lot of companies that are now in a world where they can't deploy a single application for payment exceptions, because they need to keep their Swiss customers' data in Switzerland, their UK customers' data in the UK and their Singapore customers' data in Singapore. They have to run distributed versions of that app. We support that from an architectural standpoint, but what we're also thinking about is actually connecting those distributed applications. How do you bring that back into one overall experience?
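One simple way to picture the data-residency constraint is a routing layer that pins each customer's case to the regional deployment that must own that customer's data. The endpoints and region codes below are hypothetical, a sketch of the pattern rather than Pega's architecture:

```python
# Sketch of region-pinned routing for the data-residency scenario above;
# endpoints and region codes are hypothetical.
RESIDENCY = {
    "CH": "https://ch.example-pega.internal",  # Swiss data stays in Switzerland
    "GB": "https://gb.example-pega.internal",  # UK data stays in the UK
    "SG": "https://sg.example-pega.internal",  # Singapore data stays in Singapore
}

def route_payment_exception(customer_region: str) -> str:
    """Pick the regional deployment that must own this customer's data.
    A cross-region layer would see only this routing decision, never the
    case payload itself, preserving residency while unifying the experience."""
    try:
        return RESIDENCY[customer_region]
    except KeyError:
        raise ValueError(f"No deployment satisfies residency for {customer_region}")

print(route_payment_exception("CH"))  # https://ch.example-pega.internal
```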
Editor’s note: These questions and answers have been edited for clarity and brevity.