Big Data & Analytics - Thinks and Links | June 10, 2023

News and insights at the intersection of cybersecurity, data, and AI

Happy Weekend!

Conversations about Data and AI this week included a lot of discussion about getting Enterprise AI programs off the ground. Investors and boards are asking about it, and people in every business function are grappling with how AI can impact the business. The benefits that investors expect from AI will not come simply from email autocomplete and better chatbots. We’ve been looking at studies showing that the significant value AI can deliver is only possible when significant investments in capabilities have been made.

It is also clear that regulatory and operational risks are only going to increase from here. Government bodies at the state, national, and continental level are all putting forth proposed regulations and laws that could impact how firms work with AI. Legal challenges to firms using Generative AI have already started and are likely to increase.

All this opportunity and risk translates to a clear need for Strategy, Governance, and Execution around Data and AI. Enterprises need to have a plan. Leading organizations are drafting AI policies and governance processes and building out the staffing plans and technology infrastructure to be ready for the surge in demand for AI capabilities and the associated risk. Here are some steps every organization should be considering:

Watch this space for updates on service briefs that address these market needs. For now, feel free to request an AI Executive Briefing where we cover these topics and more in depth.


Want AI? How’s your data?

https://www.wsj.com/articles/rush-to-use-generative-ai-pushes-companies-to-get-data-in-order-c34a7e13

AI is all about data, and this Wall Street Journal article describes just that, with several examples of companies getting their data in order so they can out-innovate the competition with AI. The article also walks through Syneos Health’s data journey.

Shadow IT is Increasing; Risks Too

https://www.csoonline.com/article/3698277/shadow-it-is-increasing-and-so-are-the-associated-security-risks.html

Gartner found that 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022, and it expects that number to climb to 75% by 2027. Shadow IT introduces major risks, including data breaches, compliance violations, and loss of visibility and control. Digital transformation, expansion of cloud services, and remote work are all driving this trend. One topic the article doesn’t discuss: Artificial Intelligence. I believe estimates that don’t account for AI should be revised upward. AI makes it easier to build your own shadow IT, and it accelerates the paths to data breaches, compliance violations, and loss of visibility and control. Organizations should adopt a proactive approach to managing shadow IT (and AI) by identifying, assessing, and securing it. They should also give teams clear guidance on the rules for shadow IT and on secure, sanctioned alternatives for solving their business problems.
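As a concrete illustration of the "identify" step, here is a minimal, hypothetical Python sketch that counts requests to generative-AI endpoints in an outbound proxy log. The log format, column names, and domain list are illustrative assumptions rather than an authoritative inventory; a real program would lean on CASB, secure web gateway, or DNS telemetry.

# Hypothetical sketch: surface possible "shadow AI" usage from an outbound proxy log.
# The CSV columns ('user', 'destination_host') and the domain list are assumptions.
import csv
from collections import Counter

ASSUMED_AI_DOMAINS = {"api.openai.com", "chat.openai.com", "bard.google.com"}

def summarize_shadow_ai(log_path: str) -> Counter:
    """Count requests to known generative-AI endpoints, grouped by user."""
    hits = Counter()
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row.get("destination_host", "").lower() in ASSUMED_AI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in summarize_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user}: {count} requests to AI services")

The point is not the specific domains; it is that visibility comes first, and only then can teams assess and secure what they find.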

Nasty Zero Day Found in Popular MLOps System

https://protectai.com/blog/hacking-ai-system-takeover-in-mlflow-strikes-again-and-again

Machine Learning Operations (MLOps) is the process by which AI/ML models are developed, deployed, managed, and monitored. It is a core function of machine learning engineering that streamlines the model development lifecycle from data ingestion to model production. MLflow is a very popular open-source tool for MLOps. The vulnerability was discovered and patched weeks ago, but it would allow system and cloud takeover because the tool could be exposed to the internet without authentication. No technology is immune to zero days, but with many organizations rushing to adopt AI capabilities, vulnerabilities like this could become a larger issue. And if AI development is happening as a form of shadow IT, a vulnerable instance could be sitting in your environment, running without the latest update.
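If you are not sure whether a lab or shadow-IT MLflow install is patched, a quick version check is a reasonable first step. The sketch below is only illustrative: the minimum version shown is an assumption, so verify the actual fixed release against the Protect AI advisory and the MLflow release notes, and keep any tracking server off the public internet (bind it to localhost or front it with an authenticating reverse proxy).

# Minimal sketch: flag MLflow installs that may predate the patched release.
# ASSUMED_PATCHED_VERSION is a placeholder, not the authoritative fixed version.
# Requires the third-party 'packaging' library (pip install packaging).
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

ASSUMED_PATCHED_VERSION = Version("2.4.0")  # assumption; verify against the advisory

def check_mlflow_version() -> None:
    try:
        installed = Version(version("mlflow"))
    except PackageNotFoundError:
        print("mlflow is not installed in this environment")
        return
    if installed < ASSUMED_PATCHED_VERSION:
        print(f"mlflow {installed} may be unpatched; upgrade and do not expose "
              "the tracking server to the internet without authentication")
    else:
        print(f"mlflow {installed} is at or above the assumed patched version")

if __name__ == "__main__":
    check_mlflow_version()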

2 Ways to Block AI and 1 Way to Use AI from Palo Alto

Firewall Settings: https://www.paloaltonetworks.com/blog/2023/05/securing-and-managing-chatgpt-traffic/

Enterprise DLP configuration: https://docs.paloaltonetworks.com/enterprise-dlp/enterprise-dlp-admin/configure-enterprise-dlp/enterprise-dlp-and-ai-apps/how-enterprise-dlp-safeguards-against-chatgpt-data-leakage

How to use ChatGPT within XSOAR Playbooks: https://www.paloaltonetworks.com/blog/security-operations/using-chatgpt-in-cortex-xsoar/

Lariar on AI – My Better Half Writing About AI in Search

https://media.monks.com/articles/how-ai-influencing-future-search

Shameless spousal promotion. My wife has worked with Google and Bing for years and has seen first-hand the AI developments going into their flagship products. In this article she shares the likely impacts on her industry, where AI is accelerating parts of the work while introducing new risks.

AI Will Save The World

https://a16z.com/2023/06/06/ai-will-save-the-world/

This long essay by Marc Andreessen is worth a read. Andreessen notably coined the phrase “software is eating the world” twelve years ago, and it has. For Andreessen, the potential of artificial intelligence (AI) to augment human intelligence and improve life far outweighs the existential risks. He argues that AI is a tool that can help us solve problems, create new knowledge, and enhance our creativity. He gives examples of how AI can assist us in education, health, law, art, and other domains. He also addresses some of the common fears and misconceptions about AI, such as its impact on jobs, ethics, and privacy. He concludes by urging us to embrace AI to make everything we care about better.

In another hopeful (and shorter) story: amazing software engineering R&D that uses Google’s DeepMind AI to speed up code.

Errata: The Air Force Did Not Simulate a Drone Killing Its Operator

https://arstechnica.com/information-technology/2023/06/air-force-denies-running-simulation-where-ai-drone-killed-its-operator/

Oops. I got caught up sharing an article last week that fit a narrative about AI running off the rails. It turns out there was significant human error in the reporting. The Air Force now denies that the simulation ever took place. The original source “misspoke” and the story went viral. Perhaps more evidence in favor of Marc Andreessen’s thesis above.


Have a Great Weekend!



If you’re still reading down this far, reply back to share which AI hype story should be deconstructed next week!