by Nuno Sebastiao
8 minutes • AI & Technology • March 28, 2025
The Future of AI Depends on Trust—And That Starts Now
Feedzai recently wrapped up a whirlwind week at HumanX, the leading industry AI conference. And what a week it was! The top highlights included a fireside chat with a former US vice president and the debut of new initiatives designed to build responsibility and trust into AI innovation.
If you weren’t able to join us at HumanX, don’t worry. Read on and I’ll proudly share the exciting news from Feedzai’s time at the show.
Key Takeaways
- At the HumanX show in Las Vegas, Feedzai CEO Nuno Sebastiao discussed AI’s opportunities and challenges during a fireside chat with former Vice President Kamala Harris.
- It’s time to shift focus to the role of AI stewards—those who are building the technology.
- HumanX debuted the Responsible AI Landscape (RAIL) initiative, an effort to map organizations taking a fair, safe, and trustworthy approach to AI development.
- Feedzai launched its TRUST Framework based on five key pillars—Transparent, Robust, Unbiased, Secure, and Tested—to guide building responsible AI from the beginning.
- The AI TRUST Innovation Challenge highlighted several startups already embodying responsible AI practices in their solutions.
“I think it is an absolute false choice to suggest we can either have safety or have innovation…We can and we must have both.”
Former Vice President and AI Czar Kamala Harris on AI’s Future
AI has the ability to revolutionize industries, reshape economies, and redefine how we interact with the world. But as AI moves from promise to reality, the question remains: Will people trust it?
At HumanX, I had the honor of sharing the stage with the show’s distinguished guest, former US Vice President and AI czar Kamala Harris, for a fireside chat. Ms. Harris and I discussed AI’s exciting opportunities and its most significant challenges.
She noted that AI’s immense potential can be undercut if it is used to spread misinformation or render jobs obsolete. As AI expands, she said, it’s essential to consider people’s concerns with respect and dignity, not dismiss them.
“We need more empathy on this point,” Harris said. “It’s got to come from those who understand where the technology is going.”
Listening to these viewpoints is critical to building trust because it puts people at the center of technological change. Building trust in AI systems is not an obstacle to progress; it is a powerful innovation engine.
“I think it is an absolute false choice to suggest we can either have safety or have innovation,” Harris said. “We can and we must have both.”
It’s Time to Shift from AI Builders to AI Stewards
During our discussion, Kamala Harris said it’s time to prioritize the role of AI stewards, those who are responsible for building and developing AI technology.
AI stewardship means taking proactive steps to ensure that AI systems work reliably for society as a whole. It’s about embedding accountability, security, and fairness directly into the design and deployment of AI, so that these technologies not only drive profit but also deliver real public benefits.
Trust in AI isn’t automatic—it has to be earned through diligent, transparent practices. AI stewards, therefore, will play a critical role in ensuring the technology is safe and trustworthy.
Here’s how to think about AI stewardship.
- AI stewardship is the active responsibility of ensuring AI serves humanity—not just profit.
- AI stewardship means embedding trust, fairness, and security into AI’s foundation.
- AI stewardship is about guiding AI toward outcomes that benefit people, businesses, and society.
Remember that trust in AI isn’t a given—it’s earned. Earning that trust requires action.
Ms. Harris issued a clear challenge to the HumanX audience: AI must be built with responsibility at its core.
Putting compassion and empathy at the center of AI innovation will be essential to making the technology trustworthy. I was excited by two new initiatives that were unveiled at HumanX that promise to fuse trust with AI innovation.
Mapping a Responsible AI Ecosystem with RAIL
One of the discussions at HumanX centered on a new concept: the Responsible AI Landscape (RAIL), introduced by HumanX with Feedzai as an early contributor and champion.
RAIL is a first-of-its-kind effort to map the ecosystem of companies that are building AI in a fair, safe, and trustworthy way. Many are doing amazing work in responsible AI, and RAIL creates an industry forum to recognize these innovators and share critical lessons.
Feedzai’s contribution to the RAIL mapping effort includes:
- HumanX Companies. Organizations that participated in HumanX are engaging firsthand with the call to action for responsible AI and contributing to the conversation on transparency, trust, and governance.
- Key AI Leaders. Beyond HumanX attendees, RAIL should be shaped by companies that are actively integrating responsible AI principles across industries and advancing the broader AI landscape through innovation and accountability.
This is a critical milestone because AI that isn’t trusted won’t be adopted. And AI that isn’t adopted won’t change the world.
If AI is going to shape economies, financial systems, and everyday decision-making, we need a clearer picture of who is building it, how, and with what values.
Here are a few important points about RAIL.
- It’s not a final list. It’s a conversation designed to foster greater industry collaboration and visibility.
- It’s a collective effort. The goal is not just to map responsible AI but to ensure the companies shaping AI’s future are held to higher standards.
- No one owns RAIL. It’s a concept that is meant to evolve with time. We want the AI community to actively participate in the dialogue to refine and improve it.
Remember, the responsible AI conversation is ongoing. The goal is to spark dialogue, not dictate conclusions.
But what does it mean to be a responsible AI builder—and how do we measure it? That’s where Feedzai’s new TRUST Framework comes in, ensuring responsible AI is built into the development process.
Introducing Feedzai’s TRUST Framework
As an AI-native company, Feedzai has long embraced the responsibility of building trust in AI. This isn’t an afterthought for us; it’s in our DNA. In fact, at HumanX, Feedzai Co-founder and Chief Science Officer Pedro Bizarro unveiled the TRUST Framework.
The TRUST Framework is a set of clear, practical principles for ensuring AI is trusted in real-world conditions. Five key pillars—Transparent, Robust, Unbiased, Secure, and Tested—provide guideposts and core questions to guide organizations on adopting or building responsible AI from the beginning.
Here’s how the TRUST principles work in practice.
Transparent
Transparency ensures AI’s inner workings are clearly understood and openly communicated, making AI-driven decisions auditable and explainable.
- Users should clearly know when they’re interacting with AI.
- Models and decision-making processes are documented, accessible, and easy to understand.
- Example: Feedzai uses explainability tools like TimeSHAP and DiConStruct to visualize and clarify AI decision-making.
Robust
Robustness focuses on AI’s resilience, consistency, and adaptability to real-world conditions and unexpected challenges.
- Models must deliver stable results even under changing conditions.
- Continuous monitoring ensures AI systems handle data shifts without compromising performance.
- Feedzai’s models demonstrated this robustness during major market disruptions, including the early COVID-19 pandemic, maintaining high accuracy despite unusual data patterns.
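To make the monitoring idea concrete, here is a minimal sketch of one common drift check, the Population Stability Index (PSI). This is an illustrative example in plain Python, not Feedzai’s actual monitoring tooling; the score distributions and thresholds are hypothetical.

```python
import math
import random
from bisect import bisect_right

def psi(reference, current, bins=10):
    """Population Stability Index between two score samples.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant shift worth investigating.
    """
    ref = sorted(reference)
    # Bin edges taken from the reference distribution's quantiles
    edges = [ref[int(len(ref) * i / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[bisect_right(edges, x)] += 1
        # Floor each proportion to avoid log(0) on empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    p = proportions(reference)
    q = proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
baseline = [random.gauss(0.30, 0.10) for _ in range(5000)]  # reference window
shifted = [random.gauss(0.45, 0.12) for _ in range(5000)]   # disrupted window

print(f"PSI, stable window:  {psi(baseline, baseline[:2500]):.3f}")
print(f"PSI, shifted window: {psi(baseline, shifted):.3f}")
```

In practice, a check like this would run on a schedule against live score streams, with alerts feeding a review or retraining workflow.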
Unbiased
Unbiased AI systematically reduces algorithmic bias to deliver equitable outcomes across all user groups.
- Regular fairness audits using tools like Feedzai’s open-source Aequitas Flow identify and mitigate biases.
- Techniques such as data rebalancing ensure fairness in diverse populations.
- Inclusion of diverse perspectives in AI development prevents biases before they occur.
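As a concrete illustration of what a fairness audit measures, the sketch below computes per-group false positive rates and a disparity ratio on hypothetical data. It is a simplified stand-in for what tools like Aequitas Flow automate, not their actual API; the group names and counts are invented.

```python
from collections import defaultdict

def group_fpr(records):
    """False positive rate per group:
    legitimate cases wrongly flagged / all legitimate cases."""
    fp = defaultdict(int)   # legitimate transactions wrongly flagged
    neg = defaultdict(int)  # all legitimate (label == 0) transactions
    for group, label, flagged in records:
        if label == 0:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Hypothetical audit data: (group, true_label, model_flagged)
records = (
    [("group_a", 0, True)] * 30
    + [("group_a", 0, False)] * 970
    + [("group_b", 0, True)] * 90
    + [("group_b", 0, False)] * 910
)

fprs = group_fpr(records)
# Disparity ratio: worst-off group's FPR relative to the best-off group
disparity = max(fprs.values()) / min(fprs.values())
print(fprs)                  # {'group_a': 0.03, 'group_b': 0.09}
print(round(disparity, 2))   # 3.0
```

A real audit would cover several metrics (false negatives, selection rates) and statistically meaningful samples, but the disparity ratio captures the core question: does the model’s error rate differ materially across groups?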
Secure
Security ensures AI systems are safeguarded against manipulation, data breaches, and unauthorized access.
- Feedzai consistently undertakes rigorous penetration testing and independent audits.
- Privacy-centric solutions such as federated learning keep sensitive data secure while enabling powerful analytics and collaboration.
- Systems must respect privacy regulations and ensure data integrity at all times.
Tested
Thorough testing validates AI under real-world, high-stakes scenarios.
- Continuous and rigorous integration testing ensures reliability and alignment with user expectations.
- AI performance benchmarks confirm adherence to high standards under various operational scenarios.
- Tools such as RailVal and Xplainz allow AI system testing under realistic operational conditions, ensuring effectiveness at scale.
The TRUST Framework gives banks a roadmap that isn’t theoretical. It can be implemented in operations immediately.
Meet the Leading Responsible AI Startups
With the launch of the TRUST Framework, a vital question still faces the AI community: How do we practically ensure AI is trustworthy, ethical, and beneficial?
The great news is that there are already companies that are putting the TRUST framework pillars into practice. At HumanX, we sponsored the AI TRUST Innovation Challenge to highlight companies already pioneering and practicing responsible AI solutions.
Four remarkable startups took the stage, each showcasing innovations aligned with the core pillars of the TRUST Framework: Transparent, Robust, Unbiased, Secure, and Tested. Here’s how they’re shaping responsible AI across sectors.
Ferrum Health: Real-world Tested AI in Healthcare
Ferrum Health provides a platform that enables hospitals to safely deploy AI tools without compromising patient data. Their commitment to rigorous, real-world validation of AI systems earned them the competition’s top spot, exemplifying the Tested pillar of the TRUST Framework.
Stratyfy: Unbiased AI in Finance
Stratyfy is transforming financial decision-making by eliminating bias in loan assessments through transparent and interpretable AI. Their UnBias™ tool helps lenders ensure fair access to financial services, embodying the Framework’s Unbiased principle.
CalypsoAI: Securing AI Systems at Scale
CalypsoAI focuses on AI security, enabling organizations to test, defend, and monitor AI applications against vulnerabilities and threats. By safeguarding AI models from manipulation and misuse, CalypsoAI strongly represents the Secure pillar.
Arcanna.ai: Robust AI for Cybersecurity
Arcanna.ai enhances cybersecurity through AI-assisted decision-making, building adaptive systems that remain stable even under continuous real-world attacks. Their solution emphasizes the Robust principle, providing resilient AI that adapts to new threats.
The AI TRUST Innovation Challenge wasn’t just about recognizing winners—it was about elevating the broader community of responsible AI innovators. Each startup demonstrated how embedding responsible AI principles creates solutions with genuine, lasting impact.
A Clear Road Forward to Responsible AI
In artificial intelligence, trust is the goal. Responsibility is how we get there. The TRUST Framework and the RAIL initiative provide clear roadmaps for building AI that can responsibly improve the world.
But these frameworks are just the beginning. What happens next depends on all of us.
The companies that will define AI’s future are those that take responsibility for it today. Will you build AI that people trust? Will you commit to transparency? Will you push for responsible AI? Will you be part of this movement?
The question of whether people will trust AI isn’t just for technologists. It’s a question for business leaders, policymakers, researchers, and anyone invested in AI’s future. Trust isn’t something that appears overnight—it’s built through transparency, accountability, and a commitment to responsibility from day one.
All expertise and insights are from human Feedzians, but we may leverage AI to enhance phrasing or efficiency. Welcome to the future.