TalkSecure

Artificial Intelligence Embedded in Code: Do’s and Don’ts for Commercial Developers


In this video-cast, Deb hosts two experts: Diana Kelley, CISO at Protect AI, a company focused on machine learning lifecycle security, and Tracy Bannon, senior principal/software architect and DevOps advisor at MITRE. Kelley concentrates on AI in commercial products, while Bannon focuses on AI in DevOps.

“Arguably, AI easily goes back to the 1960s, when the ELIZA solution was introduced at MIT. It was essentially an early chatbot,” Kelley explains. Machine Learning (ML) and AI are already widely used in many different systems today, but with the explosion of ChatGPT, generative AI is being productized at a rapid pace.

For developers of commercial products, AI and ML open layers of issues that they should be preparing for today. One example is faulty assumptions or output feeding into decisions: say a medical scanner incorrectly reports a clean result when the patient truly does have cancer.

As product developers focus on the supply chain and open source, they should also consider the layers of decisions and data that AI and ML models are trained on, Bannon advises. “Who made the ML model in the first place? How is it controlled and contained? And how do I make sure that those models are the right ones that are getting into production?”

Kelley adds that security scanning and testing must become part of the ML lifecycle, including dedicated ML-BOMs (machine learning bills of materials) to identify open source in the ML pipeline. Bannon adds that product developers need to ask questions about lineage. “What was it trained on? What was the lineage of the data it was trained on in the repositories of the world? Do I have the bug that is in that flawed open-source package?” Bannon asks. “We’re looking at turtles on top of turtles on top of turtles.”
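One way the "right model into production" question can be enforced in a pipeline is to record a cryptographic digest of each model artifact alongside its lineage metadata, then verify that digest before deployment. The sketch below is a minimal illustration of that idea, assuming a hypothetical manifest format; the field names are invented for the example and are not taken from any ML-BOM standard.

```python
import hashlib
from pathlib import Path

# Hypothetical lineage manifest, standing in for one ML-BOM entry.
# The field names here are illustrative only.
manifest = {
    "model": "sentiment-classifier-v3.onnx",
    "sha256": None,  # recorded at build time, filled in below
    "base_model": "distilbert-base-uncased",
    "training_data": ["imdb-reviews-2021"],
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path, entry: dict) -> bool:
    """Deployment gate: the artifact must match the recorded digest."""
    return sha256_of(path) == entry["sha256"]

# Demo: record the digest at "build time", then check it at "deploy time".
model = Path("model.onnx")
model.write_bytes(b"pretend these are model weights")
manifest["sha256"] = sha256_of(model)
print(verify_model(model, manifest))  # True: artifact unchanged

model.write_bytes(b"tampered weights")
print(verify_model(model, manifest))  # False: block the deployment
```

This only answers "is this the exact artifact I recorded?"; the lineage questions Bannon raises (who built the base model, what data trained it) still have to be captured as metadata and reviewed by humans or policy checks.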

Resources: 

OWASP Top Ten ML Security Risks 

MITRE ATLAS – Adversarial Threat Landscape for Artificial-Intelligence Systems 

Difference between Machine Learning and AI

AI Prompt injection attacks

Data Poisoning in AI and ML 

Using SBOMs for Better Software Security Decisions
