M340: Unveiling the Dark Side: Navigating the Risks and Dangers of Artificial Intelligence (ChatGPT came up with this title)
In our session you will hear from Tigunia's VP of IT and Director of Security as we discuss the following topics:
Sensitive data breaches:
The ability to quickly generate answers from user-provided inputs is something most employees and employers see as an advantage. However, as Samsung discovered earlier in 2023, proprietary code pasted into these tools can be consumed and fed back into them, a real risk that can translate into lost revenue, loss of intellectual property, and damaged consumer confidence.
Misinformation and corrupted data:
Because large dataset models depend on being trained on valid data, there is a risk that a data source, whether contributed to a public dataset or generated internally, can become corrupted and intentionally or unintentionally introduce misinformation or invalid answers. Microsoft saw this firsthand in 2016 when it used Twitter as a data source and ended up with an AI fueled by racist language.
Legal compliance objectives:
Many countries, such as France, are weighing the dangers of large data model AIs and suggesting that global regulation is needed to avoid "skynet" disasters. Proposals range from blanket bans and extreme tech sanctions on the technology to Asimovian hardcoded kill switches mandated in every AI.
For each of these we will establish the risk and discuss how we can operate in this rapidly evolving technology space, while applying controls to keep your organization and users safe.
- Learn how large data model AI tools like ChatGPT, Azure Cognitive Services, and Google Bard have become widely and easily available to the average person.
- Explore how, in a world where AI tooling can save labor, tackle large problems, and integrate into consumer spaces, workplaces can address concerns about the data provided to AI data models.
- Learn why the ability to quickly generate answers from inserted inputs, which most employees and employers see as an advantage, also carries real risk.
To watch the webinar in the Media Library, CLICK HERE!