Thinks & Links | January 13, 2024

Big Data & Analytics - Thinks and Links | News and insights at the intersection of cybersecurity, data, and AI

New for 2024: 📫 Subscribe to Thinks & Links direct to your inbox

Happy Weekend!

This week, OpenAI finally launched the GPT Store within ChatGPT. First announced on the OpenAI Developer Day back in November, the store’s imminent release was delayed in part by the dramatic firing and re-hiring of Sam Altman last year. While unfortunately limited to paying ChatGPT Plus, Enterprise, or Teams users - the GPT Store is a very exciting development in the usability of Generative AI. Much like the iPhone App Store that heralded the mobile revolution, I believe the GPT Store (and similar developments from Google, Quora / Poe, and others) will continue to chip away at demonstrating use cases with real promise.

First we will need to get through the joke app phase. Who remembers iBeer the app that let’s you pretend your iPhone is a glass of beer? Or the app called “I am Rich” which charged $999.99 to show a little ruby icon. (successfully had a few buyers before the App Store shut it down - NFTs before their time) Readers with GPT Store access may enjoy Tattoo GPT, gen z 4 meme, or Minion Maker

There are three million GPTs in the store at its launch. Three million “AI Apps” in a store that was first announced in November. And while Minions, memes, and tattoos are fun, many of the GPTs are highly capable:

Diagrams: Show Me - Describe a process or a system or a visual that needs a diagram. Moments later you have that picture and a link to continue editing and modifying the visual to perfection.

Language Coach - Talk with an AI voice to learn and practice foreign languages

Thinks and Links Digest - Engage with back issues of this newsletter to recall your favorite thinks and/or links

I spend a lot of time reading and learning about AI. I’m generally bullish on the transformative capabilities of the technology. There’s a lot of hype, but I believe significantly more substance. And even I, upon spending a few hours browsing GPTs, feel like I’ve under-estimated it. GPTs (and Assistants, their API-only cousins) can provide meaningful value beyond silly use cases. This is because:

Each has a customized system prompt which gives it operating instructions, personality, and context necessary to operate

Many make use of web search or uploaded reference documents to improve the accuracy and recency of the information they return

Some GPTs use Dalle to generate images as part of their workflows

GPTs can also have “Actions” which facilitate API connectivity to external services

GPTs can also write and run code to perform various analytical tasks

APIs open a whole new world of capability into the GPT. Additional data, functionality, information about the user and their context are all available to integrate into these tools. The only limits are imagination and OpenAI’s system.

I highly recommend spending $20, even just for one month, to try GPTs first-hand. The variety of experiences available and exposure to creativity of the AI market will change your perspective on what’s possible and what’s coming. Here are three free trial links I was able to get off my account. The first person to click and claim each one will get a month of free ChatGPT Plus and access to GPTs: One | Two | Three - Let me know if you get this to work for you and what GPTs you tried!


Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

https://arxiv.org/abs/2401.05566

A proof of concept language model that can hide adversarial or undesirable behavior. Researchers found ways to train the model to misbehave, such as write insecure code when given a backdoor trigger. Despite putting these models through standard safety and alignment processes, the backdoors were still intact. This paper implies that it may be much harder than previously thought to truly create a “safe” model. A very important finding and area for monitoring and alignment between AI engineering and cybersecurity.

Perplexity Raises $74 Million

https://www.reuters.com/technology/perplexity-ai-valued-520-mln-funding-bezos-nvidia-2024-01-04/

Perplexity is a great search engine that uses AI to find and synthesize information from multiple websites at once. Raising money on this scale indicates the massive market opportunity and a direct threat to Google’s supremacy in search. Perplexity has launched an API, and yes you can expect it to power GPTs (and other AI experiences) in the coming weeks.

AI at CES

https://www.cnet.com/pictures/coolest-ai-tech-ces-2024-weve-seen-so-far/

The Consumer Electronics Show (CES) this week featured demonstrations from technology brands of futuristic devices and a lot of generative AI. This CNET article runs down some of the biggest announcements from the annual technology event:

An AI-only device called Rabbit launched, with a $200 mini screen and camera that does nothing except interact with the onboard chatbot.

Volkswagen is adding ChatGPT to their line of Electric Vehicles

Microsoft laptops will have a dedicated AI key to launch Copilot

NIST - Understand AI’s Vulnerabilities

https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems

NIST has released a new publication, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” that addresses the vulnerabilities of AI systems to adversarial attacks. A key theme of the paper is that there are no foolproof defenses. AI can be corrupted by bad actors through data manipulation during training or interaction phases, leading to undesirable behaviors. The report categorizes major attack types—evasion, poisoning, privacy, and abuse—based on attacker goals and methodologies. The paper seeks to raise awareness that current defenses are incomplete. Awareness of the challenges and risks among AI developers and users is important.

Invisible Prompt Injections

https://www.youtube.com/watch?v=6IYi7pqGRoU&t=393s

Discovery by AI / Security researchers that there are unicode characters that can be invisible to the end user and that these can be used to facilitate prompt injection tasks. In this video, Joseph Thacker demonstrates how to create invisible prompts and how the GPT-4 API is currently vulnerable to this. It is hard to fix what you can’t see, and so this presents a new front of research and security that is needed to secure critical language-based systems.


Have a Great Weekend!


📫 Subscribe to Thinks & Links in your inbox

💬 Chat with the Newsletter Archive