Big Data & Analytics - Thinks and Links | May 22, 2023
BD&A - Thinks and Links
Big Data & Analytics - Thinks and Links | News and insights at the intersection of cybersecurity, data, and AI
Happy Friday!
Some of the more interesting AI news seems to come out on Fridays immediately after I hit send on this newsletter. That was the case last week as OpenAI finally released the Plug-in and Web Search functionality on top of ChatGPT that had been in limited beta previously. If you haven’t checked in on the various chat services now would be a great time to see how far they’ve come. Let’s compare some prompts:
1. What was the Log4j vulnerability and how can I be sure I’ve protected my environment?” – how do the leading chat AI’s work with that now?
Winner: ChatGPT. And that’s without even searching the internet.
2. What is the latest research on the optimal architecture for Large Language Model based autonomous AI agents?
Winner: Bing – provides both a variety of articles, and links to explore.
What are the five most interesting things to happen in AI and Cyber Security this week?
Winner: ChatGPT with browsing not only found new stories and summarized them well, but it also knew its limits to avoid hallucinating fake things.
This has been fun, but what’s the point? Even with web access, these models are still highly unpredictable and often wrong. The launch of all these formerly limited release applications is helpful to put context around the AI hype. Ultimately it sees those of us in the information gathering and distribution business are going to be safe from this generation AI. (Every week I try to get it to write this newsletter. Not today, AI!)
One final test for today:
10 Types of AI Attacks CISOs Should Track
https://www.darkreading.com/threat-intelligence/10-types-of-ai-attacks-cisos-should-track
This is a carousel article, so I’ve pulled out the headlines, so you don’t have to click 11 times:
But if you do click through there’s lots of detail and articles linked within.
UK Telecom Company to cut ~11,000 Jobs Due to AI
https://www.bbc.com/news/business-65631168
BT announced it will cut up to 55,000 jobs by the end of the decade, mostly in the UK, as it cuts costs and replaces staff with technologies like AI. The company estimates about 1/5 of these cuts will be driven by AI. Once existing build work is completed, BT believes they will not need as many staff to maintain its networks.
Apple Restricts ChatGPT and Microsoft CoPilot Use
Apple is in the news for restricting use of some external AI tools. They’re rapidly developing their own similar technology. It has restricted the use of ChatGPT and other external AI tools for some employees, while also warning them not to use Microsoft-owned GitHub’s Copilot. Apple is also working on its own large language models and has acquired several AI startups.
OpenAI is working on an Open-Source Model
https://www.artisana.ai/articles/openai-readies-open-source-model-as-competition-intensifies
As discussed in previous weeks of this newsletter, the Open Source community is creating some impressive advances in LLM technology. OpenAI will be contributing here as well and it should be interesting to see how it compares.
Deep Dive on Prompt Injection and Why Many Proposed Solutions Won’t Work
https://simonwillison.net/2023/May/2/prompt-injection-explained/
A lot to unpack here, but the summary is that Prompt Injection is a serious vulnerability, and not one that can just be solved with better hidden instructions (see the following article as an example). Instead, the use of multiple LLMs potentially checking each other for Prompt Injections may be a path forward. We are still in the Research and Development of this concept, however, so please tread carefully in Production.
Microsoft Copilot’s Confidential Rules
https://twitter.com/marvinvonhagen/status/1657060506371346432
Prompt injection reveals the rules behind the Copilot chat.
Covered Extensively Elsewhere, but Obligatory AI CEO Testifies for Congress & Asks for Regulation Link
https://www.theguardian.com/technology/2023/may/16/ceo-openai-chatgpt-ai-tech-regulations
At the Senate judiciary committee hearing this week, OpenAI CEO Sam Altman called for regulation of AI to enable its benefits while minimizing its harms. He proposed licensing and testing requirements for development and release of AI models, as well as allowing independent auditors to examine the models before they are launched. He also argued for a new regulatory agency for the technology, and for existing frameworks like Section 230 to be re-examined. Altman’s testimony was well-received, and senators drew parallels between social media and generative AI, and the lessons lawmakers had learned from the government’s failure to act on regulating social platforms. This hearing is just the first step in understanding the technology, and its potential risks and rewards.
Have a Great Weekend!