Big Data & Analytics - Thinks & Links | September 8, 2023

Big Data & Analytics - Thinks and Links | News and insights at the intersection of cybersecurity, data, and AI

ChatGPT Enterprise

Ten months. That’s how long it took ChatGPT to go from launch, to one of the fastest-growing products of all time, to an enterprise-grade version of the tool. Ten months feels like an eternity when every week brings exciting new AI models and use cases along with stories of misuse and risk, yet it’s within a single budget cycle. If individuals in your firm have been getting proficient with ChatGPT over the past year, you can now purchase an option that aligns with company policy.

Here’s how ChatGPT Enterprise aligns with many of our AI best practices:

OpenAI guarantees that it will not train models on any provided business data or conversations

SOC 2-compliant platform

Encryption of data at rest and in transit

Single Sign-on integration

Monitoring of usage patterns

In addition to letting you use the chatbot features more confidently, ChatGPT Enterprise provides secure access to the “Code Interpreter” functionality (now renamed “Advanced Data Analysis”). OpenAI has also promised the ability to connect company data directly to the platform soon, which is appealing for a variety of use cases.

It will be very interesting to see how adoption of ChatGPT Enterprise goes. I suspect that many organizations will consider this the “easy button” for enabling the benefits of the tool while complying with security and privacy policies. And while it is an excellent option, there are other ways that large language models will impact businesses beyond ChatGPT:

Custom applications using API-based models or self-hosted ones

Tools that integrate with ChatGPT or Bard, or that bring their own LLM

Anything employees build themselves with a few lines of code, i.e. “Shadow AI” (the sketch below shows just how little code that takes)
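To show just how low the barrier is, here is a minimal sketch of the kind of “Shadow AI” script an employee might write on their own. It assumes the pre-1.0 openai Python package and an API key in the environment; the model name and prompts are placeholders, not a recommendation:

```python
# shadow_ai.py - hypothetical example of an employee-built "Shadow AI" helper
# Requires: pip install openai (pre-1.0 interface) and OPENAI_API_KEY set in the environment
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def summarize(text: str) -> str:
    """Send text (potentially sensitive company data) to a hosted LLM and return a summary."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system", "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(summarize("Paste meeting notes, a contract clause, or a customer email here..."))
```

A dozen lines like these bypass every control in the list above, which is exactly why “Shadow AI” deserves its own line in your policies and monitoring.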


Optiv Q&A on Safe Adoption of Generative AI

https://betanews.com/2023/09/04/how-organizations-can-safely-adopt-generative-ai-qa/

Introducing generative AI tools like ChatGPT can be a game-changer for organizations, but it’s crucial to take proactive steps to ensure safe adoption. These include sharing guidance on what AI can (and should not) do, writing clear policies, and establishing governance processes. LLMs and other AI can provide a lot of power, but they also bring risks that call for new controls. And let’s not forget the human role in AI adoption: training and user involvement are just as essential as the policies themselves. Embrace AI capabilities while managing risks to stay competitive in the evolving AI landscape.

OpenAI / ChatGPT in More European Privacy Hot Water

https://techcrunch.com/2023/08/30/chatgpt-maker-openai-accused-of-string-of-data-protection-breaches-in-gdpr-complaint-filed-by-privacy-researcher/

A complaint has been filed with the Polish data protection authority, alleging that OpenAI is violating the EU’s General Data Protection Regulation (GDPR). The complaint contends that OpenAI’s practices, particularly with its ChatGPT technology, infringe on GDPR principles related to lawful basis, transparency, fairness, data access rights, and privacy by design. It further accuses OpenAI of failing to engage in prior consultation with EU regulators before launching ChatGPT in Europe. This is not the first GDPR-related concern directed at OpenAI, and the complaint highlights ongoing regulatory scrutiny of AI chatbots and the potential consequences, including substantial fines if GDPR violations are confirmed.

Microsoft’s Copilot Copyright Commitment

https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/

Microsoft has announced a Copilot Copyright Commitment to address concerns about intellectual property (IP) infringement claims when using generative AI like its Copilot services. Microsoft will assume responsibility for the legal risks if customers are sued for copyright infringement over their use of Copilots or the outputs they generate. This commitment extends Microsoft’s existing IP indemnity support to cover copyright infringement claims, provided customers use the built-in guardrails and content filters. Microsoft aims to stand behind its customers, address authors’ concerns, and advance the responsible use of AI while respecting copyright and promoting competition and innovation.

AI Reached Peak Hype in August

https://www.gartner.com/en/newsroom/press-releases/2023-08-16-gartner-places-generative-ai-on-the-peak-of-inflated-expectations-on-the-2023-hype-cycle-for-emerging-technologies

Generative AI has hit the Peak of Inflated Expectations on Gartner’s 2023 Hype Cycle for Emerging Technologies. It’s part of the broader theme of emergent AI and is expected to bring major benefits in two to five years. We can now expect the Trough of Disillusionment to set in as companies discover that significant security and governance focus is needed to safely realize those benefits. We will see whether the normal timelines apply to generative AI, or whether AI-driven automation ends up accelerating its own hype cycle.

Advanced Analytics is Still Hard

https://www.techtarget.com/searchbusinessanalytics/feature/Advanced-analytics-challenges-limit-analytics-AI-benefits

While everyone is focusing on generative AI, don’t forget that truly succeeding in analytics requires robust data management practices. Organizations must address existing advanced analytics challenges to get the most out of generative AI (and other forms of innovation). These include:

Clean, well-governed data is essential for both traditional analytics and generative AI (see the sketch after this list for what basic quality checks look like)

Poor data quality will hinder AI’s performance

Companies also need to focus on the value analytics provides rather than just capabilities

Teams must adopt a human-centered design approach

Businesses should foster a data culture that extends beyond the analytics team
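To make the “clean, well-governed data” point concrete, here is a minimal sketch of the kind of automated quality checks a team might run before a dataset feeds traditional analytics or an LLM pipeline. It uses pandas, and the column names and the business rule are hypothetical:

```python
# data_quality_checks.py - hypothetical example of basic data-quality gates
# Requires: pip install pandas
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return simple data-quality metrics; a real pipeline would log these and alert on thresholds."""
    return {
        "row_count": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_rate_by_column": df.isna().mean().round(3).to_dict(),
        # Hypothetical business rule: order amounts must be positive
        "non_positive_order_amounts": int((df["order_amount"] <= 0).sum()),
    }


if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "customer_id": [1, 2, 2, None],
            "order_amount": [120.0, -5.0, 80.0, 40.0],
        }
    )
    checks = run_quality_checks(sample)
    print(checks)
    if checks["non_positive_order_amounts"] > 0 or any(
        rate > 0.1 for rate in checks["null_rate_by_column"].values()
    ):
        print("Warning: data-quality issues found; fix these before expecting good model output")
```

Nothing here is glamorous, but catching nulls, duplicates, and out-of-range values early is most of what “clean, well-governed data” means day to day.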

Walmart Built an AI App

https://www.axios.com/2023/08/30/walmart-generative-ai-app

Walmart has rolled out its own internal ChatGPT-like system, called “My Assistant,” to roughly 50,000 employees. The app, which is also available on mobile, can assist with various tasks such as summarizing documents and creating content. Walmart has not disclosed which LLM and AI tools it is using, but the launch will enable employees to automate repetitive tasks while maintaining security over data and use.

How to Build an Enterprise LLM Application – Lessons Learned by the GitHub Copilot Team

https://github.blog/2023-09-06-how-to-build-an-enterprise-llm-application-lessons-from-github-copilot/

I really enjoyed this article, which shares how one of the most-used LLM applications in the world (behind ChatGPT) was brought into production. There are a lot of lessons here if you’re considering building an LLM application.

The Copilot development process consisted of three stages: “find it,” “nail it,” and “scale it.” The team started by finding a problem that balanced potential value against difficulty, quality, and risk. They nailed it with iterative development and very tight user feedback loops. They then scaled it by prioritizing quality, usability, and responsible AI use, setting performance metrics, and optimizing costs. The GitHub Copilot team also engaged with the developer community to address concerns and ensure responsible AI usage. The article shares a bit about their go-to-market strategy as well: Copilot initially targeted individual users before businesses, emphasizing simplicity and predictability in pricing.


Have a Great Weekend!