Software product companies should prepare for 2023 trends to carry over into 2024, according to experts who contributed to this post. In a nutshell, they predict that:
- Supply chain attacks will continue into 2024 and beyond.
- Artificial intelligence (AI) used in code development and testing will continue to mature even as AI is increasingly leveraged for new ways to attack critical infrastructure applications.
- Risks to critical embedded applications, along with the increasingly complex regulatory landscape, will prompt more product companies to seek the expertise of a Chief Product Security Officer (CPSO).
Increase in Supply Chain Attacks
With the 2021 SolarWinds breach, attackers proved the value of manipulating the codebase of a popular enterprise monitoring program to infiltrate and spread along the software supply chain. Juniper Research predicted supply chain attacks would cost businesses $46 billion in 2023 (the numbers are still being counted). Juniper also predicts that, by 2026, supply chain attacks will cost the global economy almost $81 billion—with healthcare, finance, government, and automotive industries bearing the majority of these costs.
“Expect software supply chain attacks to accelerate as attackers increasingly realize the high ROI of impacting a single target and having a cascading impact across the entire software ecosystem,” says Chris Hughes, president and co-founder of cybersecurity services firm Aquia, who is also a cyber innovation fellow at the Cybersecurity and Infrastructure Security Agency (CISA) focused on software supply chain security.
Open-source code, which is used in almost all commercial software products, is particularly valuable to supply chain attackers. Sonatype recorded a 742% increase in supply chain malware planted in upstream open-source packages between 2019 and 2022, and the trend will continue as long as these attacks net big gains. Hughes also warns of more software supply chain attacks spreading downstream from large proprietary software vendors, just as we saw in 2023 with the Okta breach, the Lazarus Group attack reported by Microsoft, and the Cl0p ransomware group's exploitation of MOVEit Transfer.
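The upstream-tampering risk Sonatype measures is one reason many teams pin not just dependency versions but cryptographic digests, so a package silently modified upstream fails installation. A minimal sketch of that verification step (the artifact contents and pinned digest here are hypothetical, for illustration only):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical known-good artifact and its pinned digest
# (in practice the digest would come from a lockfile, e.g. pip's
# hash-checking mode or a package manager's lock format).
GOOD = b"example-package-1.0.tar.gz contents"
PINNED = hashlib.sha256(GOOD).hexdigest()

assert verify_artifact(GOOD, PINNED)             # untampered artifact passes
assert not verify_artifact(GOOD + b"!", PINNED)  # any upstream modification fails
```

Digest pinning does not stop a maintainer-account takeover before the digest is recorded, but it does block the post-publication tampering pattern described above.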
As a result, he adds, “Software liability will continue to gain traction as the industry grapples with trying to hold software suppliers responsible for due diligence with the products and software they push out to the market. This will align with the push for secure-by-design advocated by CISA, which shifts the burden onto software vendors rather than downstream consumers and customers.” (See our previous piece about CISA’s recommendations for software product vendor liability here.)
Artificial Intelligence Embedded in Apps and Testing
Tracy Bannon is a Senior Principal Software Architect and DevOps Advisor with MITRE’s Advanced Software Innovation Lab. She has long advocated for secure-by-design principles in software products, and she is glad that organizations like CISA are making secure-by-design a rallying call. However, she warns that the same principles need to be followed when developing AI-embedded products. As she said in a recent TalkSecure podcast, there will be layers and layers of testing and review related to AI sources, training of the language models, and new types of threat models, among other concerns.
In a recent LinkedIn post, she pointed out several developments to help developers leverage AI in their development and products, including the CISA Roadmap for AI and the U.K.’s NCSC Guidelines for Secure AI System Development, which provide practical guidance for developing secure and resilient AI systems. Of the CISA Roadmap, she wrote, “One example that I find particularly valuable is the emphasis on secure-by-design principles in AI adoption. Our emphasis must be to make sure AI software tools are not vulnerable to attacks or abuse. This aligns with fundamental security practices, and it’s crucial for the responsible and secure development of AI systems.”
Expect more developer resources and regulations for secure AI development in 2024, she says, especially after the EU rolled out comprehensive AI legislation, the EU Artificial Intelligence Act, in late 2023.
As a software architect, she says she is more focused on how AI is used to support the entire software development process from the moment the mission or business vision is set, through building code, testing, and deployment. Large Language Models (LLMs) are trainable and can be fine-tuned for each project. But she also cautions that AI development and testing tools are not ready for ‘prime time,’ and their output will need human scrutiny for the foreseeable future.
“We’ll see continued emphasis on code generation because LLMs are statistically non-deterministic and can put together words that make them perfect for language—and programming is a language. I can leverage different types of AI tools to assist my application security testing and show me where there are defects,” she notes.
In 2024, she predicts a surge in AI use for development and testing as these tools improve in accuracy, relevancy, and threat modeling. She also sees AI helping IT Ops evaluate, prioritize, and route tickets, just as it will help development teams. She adds, “Code development, testing, security, and IT ops will be widely impacted by these LLMs in 2024.”
More Demand for Chief Product Security Officers
In 2024, expect to see more machine-generated attacks leveraging AI, notes Ian Ferguson, VP of Marketing at Lynx Software Technologies, which provides edge software platforms used in high-assurance systems, including aviation, armored vehicles, satellites, and other critical infrastructure. He also predicts that AI will be used on the defensive side to prevent attacks. “GenAI [generative AI] can help identify vulnerabilities and fix weaknesses in protection shields,” he explains. “So, now it will become more of a machine versus machine attack rather than human versus human.”
Given all these factors, along with hacktivist-related attacks due to wars in Ukraine and Palestine, Ferguson predicts that chief product security officers (CPSOs) will be more in demand in 2024. “This new role has formed in the last two to three years and has come about because of rapid digital transformation that’s exposing products and organizations to cyber threats,” he explains.
“A lot of today’s technology relies on embedded systems, and as a result embedded software product security has growing importance,” Ferguson continues. “However, there is a lack of individuals willing to take on this challenging role, which has created a skills gap. It’s critical to have this bridging role between developers and security to ensure that products are built safely and securely.”