BD&A - Thinks and Links | November 20, 2023
Big Data & Analytics - Thinks and Links | News and insights at the intersection of cybersecurity, data, and AI
OpenAI’s Not So Good Weekend
OpenAI injected three seasons of Succession-meets-Silicon-Valley drama into the weekend news feeds for all of us. If you happened to shut your phone off Friday afternoon and are just checking in now, here are the highlights:
Friday
OpenAI’s Board of Directors announces the firing of CEO Sam Altman for “misleading the board”
OpenAI President and Co-founder Greg Brockman resigns
News quickly emerged that investors were blindsided, including Microsoft (which had just completed its Ignite and GitHub events showcasing OpenAI-powered innovation)
Saturday
Continued backlash as everyone who has ever used ChatGPT becomes an expert on corporate governance
Rumors emerge that OpenAI staff are threatening to quit unless Altman is brought back
Late in the day rumors are confirmed that the board is in talks to reinstate Altman
Meanwhile, discussions and confusion about the reason for the firing continue to circulate: was Altman double-dealing? Has AGI (AI able to replace humans) been discovered? What could possibly tank a company on a meteoric rise to a $100B+ valuation and central to a huge AI ecosystem?
Sunday
Without much more revealed, we learn that Altman is in active negotiations with the board
A 5PM PST deadline comes and goes
OpenAI board announces the appointment of a new CEO, likely signaling a deal is off
12AM PST: Satya Nadella, Microsoft's CEO, announces that Altman, Brockman, and the OpenAI leadership team will be joining Microsoft to lead a new Advanced AI Research team
OpenAI as it existed Friday afternoon is over. As of this writing, Microsoft has acqui-hired a significant chunk of its leadership for free.
The situation is dynamic and has likely evolved further since this was written, but I wanted to share some thoughts on the situation that apply to this audience:
AI and Security Are Inseparable
Without more concrete facts on the reason for the firing, the main theory about what went wrong at OpenAI is that the board determined the security of AI is more important than commercialization. We don't know what the board saw that led them to that conclusion, but the parallel in the non-AI world is worth a reminder: Risk Management is an important part of any business effort, and it comes down to the steps below (a minimal code sketch follows the list):
Cataloging risks – their likelihood and their impact
Implementing controls
Continually evaluating the effectiveness of the risk management systems, including by third parties who can be objective
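As a loose illustration of cataloging risks by likelihood and impact, here is a minimal Python sketch of a risk register; the risk entries, scores, and controls are hypothetical examples, not anything drawn from OpenAI's actual process.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (near certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    controls: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; real programs use richer models
        return self.likelihood * self.impact

# Hypothetical entries for an AI-enabled product
register = [
    Risk("Prompt injection exfiltrates customer data", 4, 4,
         ["input filtering", "output monitoring"]),
    Risk("Model supply-chain tampering", 2, 5,
         ["artifact hash verification", "trusted registry"]),
]

# Review the register, highest score first, to prioritize controls
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}  controls: {', '.join(risk.controls)}")
```

Real programs add ownership, review dates, and richer scoring, but even this much turns "continually evaluating the effectiveness" into something you can actually review and report on.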
If we allowed the ever-present risk of cyber attack to slow the progress of our companies, we would all be out of a job. Risk is unavoidable, but how we respond and professionalize around it is completely under our control. The people most concerned about risk at OpenAI have now seen a significant portion of their people and IP captured by Microsoft. That is an outcome co-founder Elon Musk warned about and part of the reason he left the company years ago. Risk management is no joke in our AI world.
It is also something that cybersecurity brings to the table. We have a lot to offer Data Science and other business units. If/when an AGI starts misbehaving, which business unit at OpenAI would see it first? What about lesser forms of scary AI risk, like providing instructions for bioweapons or hallucinating in important business functions? Wouldn't cybersecurity tools, people, and processes be useful in catching the people behind these issues?
I'm looking forward to learning more about the reasoning behind this board decision. And to me, it is not about whether we slow down or speed up AI. AI is coming via Microsoft, Google, the Open Source community, and many others. The question is how we learn from other forms of computer security to be safer and better with AI.
As AI continues to march forward, we can’t rely on non-profit organizations to keep us safe. We need to lead by owning security and enabling acceleration.
Ecosystems > Monoliths
GPT-4 is still the best LLM for most people, and Microsoft has the ability to continue serving it no matter what happens with OpenAI. The rumors about GPT-5 make it sound incredible, but if this past weekend has delayed it, that is OK; there are plenty of innovations happening at the current capability level. It is also a good time to look at the 72%+ of companies experimenting with generative AI and which models they are using. GPT-4 can be the training-wheels bike that gets you started, but you will want to consider diversity in model sources and the dependencies of what you build. Orchestration tools like LangChain make it easier to build AI solutions that can swap out the underlying models.
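To make the "swap out the underlying models" point concrete, here is a small, library-agnostic Python sketch of the pattern that orchestration tools like LangChain formalize; the ChatModel interface and the two adapter classes are hypothetical placeholders, not real SDK or LangChain calls.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Minimal interface the rest of the application codes against."""
    def complete(self, prompt: str) -> str: ...

class OpenAIChat:
    # Hypothetical adapter; a real one would wrap the OpenAI SDK
    def __init__(self, model: str = "gpt-4") -> None:
        self.model = model

    def complete(self, prompt: str) -> str:
        return f"[{self.model} response to: {prompt}]"

class LocalLlama:
    # Hypothetical adapter for a self-hosted open-source model
    def complete(self, prompt: str) -> str:
        return f"[local model response to: {prompt}]"

def summarize_ticket(model: ChatModel, ticket_text: str) -> str:
    # Application logic depends only on the ChatModel interface,
    # so the underlying provider can be swapped without changes here.
    return model.complete(f"Summarize this support ticket: {ticket_text}")

if __name__ == "__main__":
    for backend in (OpenAIChat(), LocalLlama()):
        print(summarize_ticket(backend, "User cannot reset password."))
```

The design choice is the point: if your application only ever sees the narrow interface, a vendor shakeup like this weekend's becomes a configuration change rather than a rewrite.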
If a portfolio of Generative AI models is where businesses are going, it will be important to evaluate each of them separately for security review and monitoring. Models can be trained in different ways, on different data, and with different requirements. Security will have a huge role in making sure only trusted models are onboarded and that models are guarded as crown jewels, with cryptographic verification that they haven't been tampered with.
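As one concrete (and intentionally minimal) example of that kind of cryptographic verification, the sketch below uses Python's standard hashlib to compare a model artifact's SHA-256 digest against a pinned, approved value; the file path and digest shown are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: pin the digest recorded when the model was approved
MODEL_PATH = Path("models/approved-llm.bin")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Model artifact does not match its approved checksum; refusing to load.")
```

Checksums only catch tampering against a value you already trust, so many teams layer signed artifacts and trusted model registries on top of a simple check like this.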
Don’t Count Google Out
Research can take a long time and a lot of hardware. Regardless of the outcome for Microsoft, the AI research coming out of OpenAI will likely be slowed. Altman has built a reputation for moving extremely fast, but there is also room in this market for Google to keep compounding the massive capabilities it has been building. Low drama and consistent innovation will likely pay dividends. Stability and ease of integration are key, and right now Google looks like the calm port in a storm. That is also a little ironic, because Google announced an incredible new weather forecasting model this week, adding to a portfolio of equally impressive AI tools.
Optiv’s Predictions for 2024
https://www.linkedin.com/video/live/urn:li:ugcPost:7130236958391312385/
A great discussion with Alan Mayer (Optiv's head of Partnerships), Woodrow Brown (Optiv's head of R&D), and me about what we see coming in 2024. We did not predict any of the drama at OpenAI, although I did predict Microsoft would continue to dominate the space.
Adversarial Attacks on LLMs
https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/
A comprehensive guide from a researcher at OpenAI (as of last week). A thorough threat model, classification of attacks, and potential mitigation strategies. Serious review of security themes to understand, detect, and stop abuse of these models.
Over a Dozen Exploitable Vulnerabilities Found in AI/ML Tools
https://www.securityweek.com/over-a-dozen-exploitable-vulnerabilities-found-in-ai-ml-tools/
Since August 2023, members of the Huntr bug bounty platform for artificial intelligence (AI) and machine learning (ML) have uncovered over a dozen vulnerabilities exposing AI/ML models to system takeover and sensitive information theft.
Identified in tools with hundreds of thousands or millions of downloads per month, such as H2O-3, MLflow, and Ray, these issues potentially impact the entire AI/ML supply chain, says Protect AI, which manages Huntr.
“GenAI is a once-in-a-lifetime opportunity for CISOs to be the lead in exponentially transforming the productivity, growth and revenue of their organization.”
https://www.thefastmode.com/expert-opinion/33873-generative-ai-is-the-ciso-s-big-moment
Blog from Arti Raman, CEO of Optiv partner Portal26 (formerly Titaniam). Arti covers the significant impact GenAI is having on business productivity and growth, highlighting the rapid adoption of technologies like ChatGPT and potential economic benefits in the trillions of dollars. The piece also addresses the challenges and opportunities CISOs face in implementing GenAI, including security threats, skill gaps, and the importance of developing a unified strategy.