Big Data & Analytics - Thinks and Links | July 15, 2023

News and insights at the intersection of cybersecurity, data, and AI

Happy Saturday!

Last weekend OpenAI granted paid subscribers general access to the “Code Interpreter” version of ChatGPT. The new model is more than just a content generation engine: it uses the LLM to reason behind the scenes, providing an agent that can solve many coding and data analysis problems.

Here’s an example: I’ve created a PDF that contains all my prior newsletters from this year. (I reviewed these to make sure they contained only my views and links to public sites. Don’t send private or proprietary information into this model.[1]) With a short prompt I can make magic happen:

Create a word cloud from this pdf:

[Image: word cloud generated from the newsletter PDF]
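For the curious, here is a minimal sketch of the kind of Python the agent writes and runs behind the scenes for a prompt like this. The library choices (pypdf, wordcloud) and the file name are my assumptions, not a transcript of the actual session:

```python
# A sketch of the script Code Interpreter might generate for this prompt.
# Assumes pypdf and wordcloud are installed; the file name is made up.
from pypdf import PdfReader
from wordcloud import WordCloud

# Pull the text out of every page of the uploaded PDF
reader = PdfReader("newsletters_2023.pdf")
text = " ".join(page.extract_text() or "" for page in reader.pages)

# Render the word cloud and save it as an image
wc = WordCloud(width=800, height=400, background_color="white").generate(text)
wc.to_file("word_cloud.png")
```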

But the full chat process is instructive, and I’ve included screenshots below for you to see for yourself. In short, here’s what the chatbot does:

[Screenshots: the full Code Interpreter chat session, step by step]

This showcases a new capability for LLMs: reasoning.

Reasoning is an emergent super-use case for Large Language Models. AutoGPT and BabyAGI are two of the many open-source projects innovating on AI Agent development. These make for impressive demonstrations but have often run into snags. So far, ChatGPT Code Interpreter works really well.

This is a preview of what will very likely soon come to other AI providers and be built as proprietary models within companies. Agents that can analyze data and write code will provide even greater utility.
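To make the “agent” idea concrete, here is a minimal sketch of the plan-act-observe loop these tools are built around. The llm() and run_code() helpers are hypothetical stubs standing in for a model API and a sandboxed executor; this illustrates the pattern, not any project’s actual code:

```python
def llm(prompt: str) -> str:
    """Hypothetical stub for a chat-completion call to a language model."""
    return "print('analysis complete')  # DONE"

def run_code(code: str) -> str:
    """Hypothetical stub that executes model-written code in a sandbox."""
    return "analysis complete DONE"

def agent(task: str, max_steps: int = 5) -> str:
    """Loop: the model proposes code, we run it, and it sees the result."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        code = llm("\n".join(history) + "\nWrite Python for the next step.")
        result = run_code(code)
        history.append(f"Code:\n{code}\nResult:\n{result}")
        if "DONE" in result:  # the model signals it has finished
            break
    return llm("\n".join(history) + "\nSummarize the findings.")

print(agent("Create a word cloud from this PDF"))
```

The key design choice is the feedback loop: the model sees the output of the code it just wrote, so it can fix its own errors and decide the next step rather than generating everything in one shot.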

Here’s another example: I’ve loaded up a CSV file that represents an extract from a ticketing system (not real company data[2]). You can see the agent at work in the screenshots below:

[Screenshots: the agent profiling, cleaning, and analyzing the ticket extract, step by step]
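To give a flavor of what the agent generates, here is a hypothetical reconstruction of its first pass. The column names are my assumptions; the real schema appeared only in the screenshots:

```python
# A hypothetical reconstruction of the agent's first-pass analysis.
# Column names (opened_at, closed_at, priority) are assumptions.
import pandas as pd

df = pd.read_csv("ticket_extract.csv", parse_dates=["opened_at", "closed_at"])

# Profile the data before analyzing it, as the agent does on its own
print(df.shape)
print(df["priority"].value_counts())

# How long do tickets take to resolve, by priority?
df["resolution_hours"] = (
    df["closed_at"] - df["opened_at"]
).dt.total_seconds() / 3600
print(df.groupby("priority")["resolution_hours"].median())
```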

This is the entire data analyst lifecycle, accelerated and documented thoroughly. Is it perfect? No. Are human analysts perfect? Also no. And humans often provide much less documentation about the logic we follow as we build an analysis. AI models like this will be coming to tools like Microsoft Excel to do analysis like this. Data analysis in 2025 will likely include at least some speaking to the computer… like in Star Trek.

Don’t run out and fire your data analytics team just yet. A few caveats remain.

But wait. One more thing!

The ticket analysis I shared earlier was on dummy data. But there’s one more step I can do to make this safe for enterprise use:

[Screenshot: asking the agent to hand back the full analysis as a standalone Python script]

When I run that script on my own computer, no data leaves the firm.[3] I get the benefits of this incredible data analysis tool to speed up my own analysis.
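The script it hands back looks something like this hypothetical sketch: it reads the CSV from a local path and writes its report locally, so nothing is uploaded:

```python
# A sketch of the kind of standalone script the model can hand back.
# Everything runs locally: the CSV never leaves your machine.
import pandas as pd

df = pd.read_csv("ticket_extract.csv", parse_dates=["opened_at"])

# Monthly ticket volume, saved as a local report
monthly = df.groupby(df["opened_at"].dt.to_period("M")).size()
monthly.to_csv("ticket_volume_by_month.csv")
print(monthly.tail())
```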

This model is incredible, and after a week of playing with it I’m surprised every day by the creative use cases we are finding. This is just the beginning, and future versions from OpenAI and others will likely continue to push the boundaries of what we understand an AI model can do. Companies will build similar models (with greater security and data controls) that will forever change how they think about data analysis. It is worth the $20 a month for ChatGPT Plus to experience the future.

Footnotes:

[1] Don’t upload sensitive data to ChatGPT or any other AI tool you don’t control.

[2] Seriously, don’t use real company data. You’ll be tempted. It is very, very easy.

[3] Also, you probably shouldn’t run code you don’t understand that came from an AI website… although the Python will be well commented and easy to modify.


New Optiv AI Service Brief: Security Tools for AI

https://www.optiv.com/insights/discover/downloads/big-data-analytics-ai-dgpp

Optiv knows security. Optiv knows partner tools. We are tracking closely how organizations are monitoring public AI and building secure private AI. We can help you find the right approach for your organization’s risk profile: one that captures the productivity enhancements of AI while applying the same data security standards you use for traditional applications.

Articles Describing How Generative AI Will Enhance Security Operations

https://www.darkreading.com/vulnerabilities-threats/how-to-put-generative-ai-to-work-in-your-security-operations-center

https://securityboulevard.com/2023/07/ais-impact-on-security-risk-and-governance-in-a-hybrid-cloud-world/

These articles offer a few great examples of how the SOC can use generative AI and Large Language Models.

Counter-Argument (sort of)

https://www.darkreading.com/threat-intelligence/hackers-say-generative-ai-unlikely-to-replace-human-cybersecurity-skills-according-to-bugcrowd-survey

Bugcrowd’s annual “Inside the Mind of a Hacker” report for 2023 found that 72% of hackers believe AI will not replace the creativity of humans in security research and vulnerability management. I agree: it won’t replace creativity, but it will certainly change the kinds of tasks you need to be creative in and how quickly creative ideas can be deployed.

The report also found that generative AI technologies have increased the value of ethical hacking and security research, and that most hackers are Gen Z or Millennials. To stay safe from AI-enabled threats, organizations need to understand what AI can and can’t do, check their security readiness, include AI in their risk plans, and keep watch on how AI is being used.

The risks of AI are real but manageable

https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable

Bill Gates’s perspective on how to handle the risks surrounding AI without becoming overwhelmed. To ensure that AI is used for good, we need to understand the risks and take steps to mitigate them. This includes developing tools to detect and prevent deepfakes, creating global regulations to prevent an AI arms race, and making sure that workers are not left behind as AI changes the workplace. We also need to be aware of the biases that AI models can inherit and take steps to ensure that AI is used responsibly. With the right approach, AI can be a powerful force for good.

Keeping an Eye on AI Regulations in Europe – Google Bard Finally Launches There

https://techcrunch.com/2023/07/13/eu-privacy-watch-ai-chatbots/

Google’s AI chatbot Bard has launched in the European Union after making changes to boost transparency and user controls. The Irish Data Protection Commission will be continuing to engage with Google on Bard post-launch and Google has agreed to carry out a review and report back to the watchdog in three months. At the same time, the European Data Protection Board has a taskforce looking into AI chatbots’ compliance with the GDPR.


Have a Great Weekend!
