Thinks & Links | March 23, 2024

Big Data & Analytics - Thinks and Links | News and insights at the intersection of cybersecurity, data, and AI

đź“« Subscribe to Thinks & Links direct to your inbox

Happy Weekend

Jensen’s Law

Big news this week from Nvidia’s GTC conference including the introduction of the new Blackwell platform, a supercomputing platform that will be 30 times faster than the current generation of model training and inference hardware. It packs an astounding 208 billion transistors and 10 terabytes per second of data shuttling between its two dies. This beast is purpose-built for the generative AI era, delivering massive performance gains for training trillion parameter models as well as incredible efficiency for inference and token generation. Expect Blackwell to power the next wave of AI assistants, chatbots, and creative tools as companies race to build AI factories in the cloud.

But hardware is only part of the story. Nvidia is positioning itself as an “AI Foundry”, providing not just chips but also software like the Omniverse simulation platform, AI models and microservices called NIMs, and tools for customizing and deploying them. The goal is to make it easy for any company to spin up capable AI agents that can interface with their proprietary data and systems. We’re entering an age of AI-human collaboration where generalist foundation models are fine-tuned into skilled specialist agents - like teaching a new employee. Companies sitting on troves of digital data now have a path to transform it into interactive, intelligent assets.

Huang also gave a glimpse of an AI-powered future where the digital and physical worlds converge. Factory robots will train in photorealistic simulations to gracefully navigate unpredictable environments. Delivery drones will tap weather AIs to plot optimal routes around storms. Nvidia’s Project Groot aims to create humanoid robot assistants that learn by watching videos of humans, while Q-cartoonish figures from Disney Research hinted at AI-enabled animatronics imbued with interactive intelligence. As physical machines become programmable with AI models that can reason and adapt, our relationship with robotics will be radically redefined.

The impact of Nvidia’s relentless hardware advances on the AI landscape is reminiscent of how Moore’s Law transformed computing. With each new generation of GPUs like Blackwell delivering staggering leaps in performance, what seemed like sci-fi a year ago will quickly become achievable, then routine, then table stakes. Models that once required massive supercomputers will run on a single chip. Training costs and times will plummet even as model sizes soar into the trillions of parameters. Like a Moore’s Law for AI, Jensen’s Law is a driving force in the revolution in making AI bigger, better, and cheaper.

But with great power comes great responsibility - and major security risks. As AI agents proliferate and grow more capable, securing them becomes paramount. Today’s cutting-edge AI wunderkinds are tomorrow’s script kiddies. Bad actors will seize upon ever more convincing chatbots for social engineering attacks. AI-powered malware will learn to hide its tracks. Deepfakes will fool our eyes and ears. AI that interacts with the physical world through robotics will be vulnerable to “prompt injection” attacks that hijack their behavior. Securing these systems will make the task of locking down conventional software seem like child’s play in comparison.

Security can’t be an afterthought. It needs to be baked into the AI lifecycle from day one - data ingestion, model training, deployment, monitoring, the works. The same advances that put powerful AI tools in the hands of every company will also be eagerly exploited by hackers. CISOs need to ride this wave, not be bowled over by it. Partner with the data science teams to red team AI and harden it against attacks. Develop AI-powered countermeasures to detect and deflect pernicious AI. Educate users to be savvy about AI risks. With a strategic approach, security leaders can ensure that their companies harness the incredible potential of technologies like Blackwell and Omniverse without putting data, systems, and ultimately lives in danger as AI models exponentially grow in scale and wield increasing power over our digital and physical worlds.


AI Detection and Response

https://hiddenlayer.com/aidr/

Optiv Partner Hidden Layer has launched a new tool to integrate AI risk detection into the security Detection and Response stack. Hidden Layer is arming security teams to fend off prompt injection, data spillage, model heists, and other AI threats in real-time, all while providing for current and anticipated regulatory compliance needs. Detections are based on MITRE ATLAS and OWASP Top 10 for LLMs, plus additional supervised and unsupervised learning and behavior analysis. The launch video alone is a great recap of the need for AI security and the importance solutions like this will play as more and more businesses turn to AI functions.

”As an AI Language Model” is Everywhere

https://www.theverge.com/2023/4/25/23697218/ai-generated-spam-fake-user-reviews-as-an-ai-language-model

The phrase “as an AI language model” has become a wide-spread indicator revealing where AI is generating spam, fake reviews, and low-quality content across the web. A simple search for this phrase on sites like Twitter, Amazon, and LinkedIn exposes countless examples of malfunctioning spambots, shoddy AI-generated products, and copied-and-pasted AI responses masquerading as genuine content. While these examples provide early signs of AI’s potential to flood the internet with automated spam, the true extent and impact of this practice remains unclear. The real challenge lies in detecting AI-generated text that lacks obvious tells, as software to identify such content is currently nonexistent and may even be mathematically impossible. As an AI language model, I hesitate to speculate on the future implications of this phenomenon, but it’s clear that the phrase “as an AI language model” has become a shibboleth for the growing presence of machine learning spam online.

University of South Florida’s College of AI, Cybersecurity, and Computing

https://www.usf.edu/news/2024/usf-plans-to-launch-college-focused-on-artificial-intelligence-cybersecurity-and-computing.aspx

The University of South Florida is launching an exciting new college focused on AI, cybersecurity, and computing. USF is pioneering this important field of study to prep students for skyrocketing demand in these fields, turbocharge research breakthroughs, and forge industry partnerships. By marshaling its 200-strong faculty brain trust and tapping into the tech and defense ecosystem of the Tampa Bay area, USF is angling to become a global heavyweight in this space. With societal shifts and job markets clamoring for these skills, USF’s trailblazing move looks like a savvy bet on the future.

Microsoft Continues to Dominate in AI

https://www.ft.com/content/1045edfb-f06b-4162-bab9-e2f019f5dec4

Microsoft made another strategic move this week in the race to dominate the generative AI market by hiring Mustafa Suleyman, co-founder of DeepMind and CEO of AI start-up Inflection, to lead a new division focused on consumer-facing AI applications. This surprising appointment, which comes with most of Inflection’s 70 staff, follows Microsoft’s successful $13bn investment in ChatGPT maker OpenAI and underscores CEO Satya Nadella’s strategy of identifying and partnering with promising AI start-ups to gain a competitive edge. While Microsoft has established a strong presence in the enterprise AI segment with its Copilot assistants, it now aims to extend its lead by targeting the vast potential of consumer-oriented AI products, despite facing competition from rivals like Apple and Google. The hiring of Suleyman, one of the most prominent figures in the AI world, is expected to bolster Microsoft’s efforts to create a broad-based AI assistant for consumers and further its ambitions in the cloud computing market. This move further solidifies Microsoft’s position as a frontrunner in the rapidly evolving generative AI landscape, although regulatory scrutiny of its dealings with start-ups is likely to intensify.

How Generative AI Is Poised To Transform Enterprise Back-Office Functions

https://www.forbes.com/sites/forbestechcouncil/2024/03/20/how-generative-ai-is-poised-to-transform-enterprise-back-office-functions/?sh=4c13fbab19ff

While the spotlight shines bright on consumer-facing AI like ChatGPT and Copilot, the back office is where the real action is. Generative AI is set to turn business processes upside down, and companies should jump in now to build skills, experience, and critically, the governance and risk management foundations for responsible AI adoption. This is especially true for data curation and search, where AI can slash time from hours to minutes. But here’s the kicker: it’s the business users, not the techies, who often have the keenest eye for AI opportunities, so getting them involved early is key. Above all, clear security policies and iterative governance must be in place before the first model is deployed.

A Remote Keylogging Attack on AI Assistants

https://blog.cloudflare.com/ai-side-channel-attack-mitigated

Cloudflare recently worked with researchers from Ben Gurion University who have discovered a critical vulnerability in AI assistants like ChatGPT-4 and Copilot that could potentially expose private user conversations. By observing the lengths of encrypted message packets, attackers can infer the contents of the AI’s responses through a novel side-channel attack. The research team developed a sophisticated model using large language models (LLMs) that can reconstruct entire paragraphs from just the sequence of token lengths with surprising accuracy. Experiments show the attack can successfully infer the topic of over half of ChatGPT-4’s responses and can sometimes recreate the AI’s messages word-for-word. The researchers have disclosed the vulnerability to affected vendors, including Cloudflare who has built a mitigation for this serious privacy risk.

Department of Homeland Security Leads in AI

https://www.meritalk.com/articles/dhs-caio-aims-for-national-leadership-in-ai-tech/

The Department of Homeland Security is charging ahead on AI, with the agency’s first Chief AI Officer Eric Hysen aiming to make DHS a national leader in the technology. Hysen, who also serves as CIO, has been encouraged by the progress since the White House AI executive order last year, including the launch of an “AI Corps” hiring spree and a trio of generative AI pilots. He’s taking a holistic approach, recognizing that AI is becoming ubiquitous in software and requires not just technical chops but also deep understanding of mission and operations. Hysen sees a unique opportunity for government to be at the forefront of AI adoption alongside industry, learning together and establishing best practices.

Learn About AI: Total Noob’s Intro to Hugging Face Transformers

https://huggingface.co/blog/noob_intro_transformers

This article provides a beginner-friendly guide to understanding and using the Hugging Face Transformers library, which simplifies the implementation of transformer AI models for natural language processing and other tasks. The tutorial walks through the process of running Microsoft’s Open Source Phi-2 language model in a notebook on a Hugging Face space, covering key concepts such as the Hugging Face Hub, notebooks, classes, tokenizers, and model inference. Understanding how to use transformers on Hugging Face is crucial for teams leveraging cutting-edge open-source large language models (LLMs) in their projects. However, it is important to note that the ease of importing and using these models also means that vulnerabilities can be quickly introduced into your environment if proper precautions are not taken.


Have a Great Weekend!


You can now chat with the newsletter archive at **https://chat.openai.com/g/g-IjiJNup7g-thinks-and-links-digest