By Phillip Ivancic

At the Black Hat Asia 2023 cyber security conference, held in Singapore from 11 to 12 May, everyone will be talking about the security, privacy, and intellectual property implications of Artificial Intelligence (AI)-developed software.

Some organisations will be considering policies, some have simply banned ChatGPT on their networks, and others will be blissfully unaware of the security and licensing implications of its use.

It’s simply too late. The genie is out of the bottle and your software developers are already using it!

And why not? AI large language model (LLM) tools can write software code in seconds, saving what would take a human hours, if not days. The important thing to know is that, by their nature, these LLM tools have been trained on open-source repositories and datasets.

Therefore, if your developers have used AI to help speed up their work, there will be open-source components or sub-components (known as snippets) in your organisation's software.

Typically, an AI tool will recommend a code snippet to implement a common function, and that snippet is, in turn, likely to be replicated and widely reused in your organisation. If a vulnerability is later discovered in that snippet, it becomes a systemic risk across many organisations, scaling vulnerable code far and wide.
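
To make this concrete, here is a minimal sketch (the function names and the query are invented for illustration) of the kind of database-lookup snippet an AI assistant might suggest, alongside a parameterised version that avoids the injection flaw the first one would replicate everywhere it is copied:

```python
import sqlite3

# Hypothetical AI-suggested snippet: builds the query with string
# formatting, so a crafted username can alter the SQL (injection risk).
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Safer equivalent: a parameterised query keeps user input as data only.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

If the first version is copied into dozens of services, every one of them inherits the same flaw.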

One of the most important steps organisations can take to stay safe is automatically maintaining a Software Bill of Materials (SBOM) that also identifies and tracks open-source snippets, using technology such as Synopsys' Black Duck Software Composition Analysis (SCA).
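
As a rough illustration of what that tracking enables (the file name is hypothetical, and the field layout assumes a CycloneDX-style SBOM), a generated SBOM can be queried automatically for components whose declared licenses warrant review:

```python
import json

# License identifiers that typically trigger a legal review before shipping.
REVIEW_LICENSES = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"}

def flag_components(sbom_path: str):
    """Return (name, version, license) tuples from a CycloneDX-style SBOM
    whose declared license is on the review list."""
    with open(sbom_path) as f:
        sbom = json.load(f)

    flagged = []
    for component in sbom.get("components", []):
        for entry in component.get("licenses", []):
            license_id = entry.get("license", {}).get("id", "unknown")
            if license_id in REVIEW_LICENSES:
                flagged.append((component.get("name"),
                                component.get("version"), license_id))
    return flagged

# Example usage with a hypothetical SBOM exported by an SCA tool:
# print(flag_components("bom.json"))
```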

Another important consideration with AI-generated code is that the output often lacks licensing information.

Failing to comply with open-source licenses can be very costly to an organisation, depending on the license requirements. One of the most famous examples is Cisco, which failed to comply with the GNU General Public License, under which the Linux software in its routers and other open-source programs were distributed.

After the Free Software Foundation brought a lawsuit, Cisco was forced to make that source code public. The amount of money it cost the company was never disclosed, but most experts say it was substantial.

AI tool providers recognise that the effectiveness of their tools is directly linked to the quality of the datasets used for training, and that quality still depends on code written by people. This is good news for developers worried about job security: despite their capabilities, AI tools are not currently capable of completely replacing developers.

However, these tools can be valuable in helping developers with tasks such as creating unit tests, troubleshooting stack traces, and automating repetitive work. Human supervision, complemented by an automatically generated Software Bill of Materials, is still essential, for example to ensure compliance with license terms.
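
As a small, hypothetical illustration (both the function and the tests are invented for this example), an assistant can draft a baseline unit test in seconds, while a human reviewer adds the edge case the tool is likely to miss:

```python
import pytest

# Function under test (hypothetical business logic, invented for this example).
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Baseline test of the kind an AI assistant might generate.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

# Edge case added on human review: invalid input must be rejected.
def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```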

At the end of the day, AI-generated code requires the same testing scrutiny as code written by a human. In practice, that means a full suite of automated testing tools for static and dynamic analysis, software composition analysis (to identify vulnerabilities and licensing conflicts in open-source code), and penetration testing before code goes into production.

Phillip Ivancic is the APAC Head of Security Solutions for Synopsys Software Integrity Group
