The ethical development of AI components and systems is essential. Read how NXP, a Partner of the Charter of Trust, has built its comprehensive framework for AI ethics principles that are rooted in its corporate values, ethical guidelines, and a long tradition of building some of the world's most sophisticated secure devices.

Excerpt from NXP’s whitepaper “The Morals of Algorithms – A Contribution to the Ethics of AI Systems”

It’s hard to deny that artificial intelligence has come so far so fast. It’s working its way into our lives in ways that seem so natural that we find ourselves taking it for granted almost immediately. It’s helping us get around town, keeping us fit and always inventing new ways to help around the house. The ever-increasing availability of affordable sensors and computing elements will only accelerate this trend. Thinking of what’s next, which was once the domain of science fiction writers, is now our day-to-day reality.

AI applications will be designed for the greater good, but there will be outlier cases. The increase in autonomous applications that carry the potential to put humans in danger drives home the need for a universal code of conduct for AI development. A few years ago, this would have sounded preposterous, but things are changing fast.

The industry, together with the governments of several of the world’s leading nations, is already developing policies that would govern AI, even going so far as to discuss a “code of conduct” for AI that focuses on safety and privacy. But how does one make AI ethical? First, you have to define what is ethical, and the definition isn’t as cut and dried as we may hope. Without even considering the vast cultural and societal differences that could impact any such code, in practical terms, AI devices require complicated frameworks in order to carry out their decision-making processes.

The integrity of an AI system is just as important as its ethical programming: once a set of underlying principles has been decided on, we need to be sure they are not compromised. Machine learning can be used to monitor data streams and detect anomalies, but it can also be used by hackers to make their cyberattacks more effective. AI systems also have to process input data without compromising privacy. Encrypting all communications maintains the confidentiality of data, and Edge AI systems are starting to use some of the most advanced cryptography techniques available.
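
To make the anomaly-detection idea concrete, the short Python sketch below shows one simple way a numeric sensor stream could be monitored for outliers using a rolling z-score. It is only an illustration under assumed values (a hypothetical window of 50 samples and a threshold of 3 standard deviations), not a method described in NXP’s whitepaper.

# Illustrative sketch only: flag values that deviate strongly from the
# recent history of a sensor stream. Window size and threshold are
# hypothetical example values, not taken from NXP's whitepaper.
from collections import deque
import statistics

def detect_anomalies(stream, window=50, threshold=3.0):
    history = deque(maxlen=window)          # rolling window of recent readings
    for i, value in enumerate(stream):
        if len(history) >= 10:              # wait for a minimal baseline
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / stdev > threshold:
                yield i, value              # report index and anomalous value
        history.append(value)

# Example: a steady reading with one injected spike at position 120.
readings = [20.0 + 0.1 * (i % 5) for i in range(200)]
readings[120] = 45.0
print(list(detect_anomalies(readings)))     # -> [(120, 45.0)]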

Perhaps the biggest challenge is that the AI ecosystem is made up of contributions from various creators. Accountability and levels of trust between these contributors are not uniformly shared, and any breach could have far-reaching implications if systematic vulnerabilities are exploited. It is therefore the responsibility of the entire industry to work towards interoperable and assessable security.

Agreement on a universal code of ethics will be difficult, and basic provisions around safety and security still need to be resolved. In the meantime, certification of silicon, connectivity and transactions should be a focus for stakeholders as we collaborate to build the trustworthy AI systems of the future.

NXP, a Partner of the Charter of Trust, believes in upholding ethical AI principles, including non-maleficence, human autonomy, explicability, continued attention and vigilance, and privacy and security by design. You can read NXP’s whitepaper “The Morals of Algorithms – A Contribution to the Ethics of AI Systems” here.

You may also like

Chairwoman Natalia Oropeza in Brandeins Magazine
External Engagement

We're thrilled to announce that our Charter of Trust Chairwoman Natalia Oropeza has been featured in the annual IT edition of the brand eins magazine!

In an interview with Dorit Kowitz, Natalia dives deep into the pressing issues facing the cybersecurity landscape, explaining how the Charter of Trust bundles the expertise of different businesses across several regions to stay resilient in the face of evolving threats. As Natalia Oropeza says: "We all win if cybercrime doesn’t win."

Here are three key insights from her interview:
🔑 Collaboration is essential: No single organization can tackle cyber threats alone. The Charter of Trust is a prime example of how businesses have become more transparent about attacks and of how beneficial sharing information in this field can be.
🔑 Addressing the digital skills gap: The Charter of Trust is working to address the global shortage of cybersecurity professionals by encouraging diversity and actively promoting opportunities for women to join the field.
🔑 Unified regulations: Harmonizing global cybersecurity standards will reduce vulnerabilities, helping businesses and governments combat threats more effectively.

The full interview is available here: https://lnkd.in/gRm6ZDGC
October 19, 2024
Cybersecurity Awareness Month
External Engagement

We are in the middle of #CybersecurityAwarenessMonth, and many of our Charter of Trust Partners are marking it with great initiatives. One program we want to highlight is last week’s panel organized by Allianz on “Security in light of (gen)AI”.

The complexity and urgency of this topic attracted a lot of interest, with 600+ attendees throughout the panel, which featured Jon-Paul Jones, COO at AZ Commercial; Firas Ben Hassan, GenAI expert & Manager of AllianzGPT at AZ Technology; Dr. Martin J. Krämer, External Security Awareness Advocate at KnowBe4; and Dr. Sumit Chanda, Global CISO at Eviden & Chair of the Global External Engagement Working Group at the Charter of Trust.

We are pleased to see Dr. Sumit Chanda from Eviden bringing his unique CISO insight into what these emerging technologies mean for day-to-day cybersecurity practice, along with the Charter of Trust perspective on the topic.

Thank you, Ervin Cihan and Haydn Griffiths, for inviting other CoT Partners and for the great initiatives that Allianz is putting together for this year’s Security Awareness Month. And special thanks to Heather Armond for the great moderation.
October 15, 2024
UK/EU Summit - “Risk to Resilience”
External Engagement

Detlef Houdeau, Senior Director, Business Development at Infineon Technologies, was a speaker at the inaugural UK/EU Summit organized by our newest Associated Partner, Shared Assessments.

💡 Under the theme “Risk to Resilience”, the first event of this series was held in London and brought together professionals from different industries and regions. Detlef participated in the panel on the complex regulatory landscape and emphasized that new legislation such as the EU AI Act, DORA and #NIS2 continues to push the standard of care on cybersecurity and other risks.

Thanks to Shared Assessments for organizing such an amazing event and inviting the Charter of Trust to participate in this high-class panel alongside Andrew Moyad, CEO at Shared Assessments.
October 08, 2024