The Power Duo: How Platforms and Governance Can Shape Generative AI

By Kirsten Hoogenakker

Solutions Engineer, Dataiku 

There’s no shortage of information, whether published or learned firsthand, describing the governance challenges of implementing Large Language Models (LLMs) or Generative AI. Whether you are considering the downstream ethical impacts of Generative AI, the data security and privacy implications of training a model in house or using a third-party model, or the audit logging and documentation required for regulatory compliance or readiness, putting these models into production comes with significant overhead.

According to Gartner, the top three challenges in implementing Generative AI are as follows:

  1. Fitting Generative AI into existing and future business and operating models.
  2. Learning how to experiment productively with Generative AI use cases.
  3. Preparing for the longer-term disruptions and opportunities resulting from Generative AI trends.

Any organization can begin addressing these challenges and finding efficiencies by providing data professionals with a platform that is both governed and technology agnostic.

Oftentimes when I’m talking with customers and prospects, the dealbreaker conversations center on whether the platform I’m representing will fit within their current architecture. This question is critical from an implementation standpoint. However, it doesn’t consider the future implications, or the tech debt incurred if the software is incompatible with future technologies. Less than a year ago, ChatGPT and all of the surrounding considerations, such as compute requirements, security and data privacy, and responsible LLM use, were unknown or bucketed under heavily technical teams. Now that Generative AI is mainstream, there are additional governance aspects to bring to the table.

ChatGPT has challenged a lot of current infrastructure and created friction for IT and architects alike. Below are the top two challenges I have encountered over the past six months:

  1. Receiving requests to connect to open source APIs (hello, privacy concerns!)
  2. Looking for best practices for prompt engineering (do you choose to upskill or hire new and relatively unavailable talent?). A sketch of one such practice follows below.
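
On the prompt-engineering point, one practice that consistently helps is treating prompts as versionable assets rather than ad hoc strings. Below is a minimal sketch in Python; the template wording, field names, and `build_prompt` helper are illustrative assumptions, not any platform’s API.

```python
# A reusable prompt template that pins down role, grounding constraints, and
# output format, so prompts can be reviewed and versioned like any other asset.
# The template text and field names here are illustrative, not a standard.

PROMPT_TEMPLATE = """You are a {role}.
Answer using ONLY the context below. If the answer is not in the context,
say "I don't know" rather than guessing.

Context:
{context}

Question: {question}

Respond in {output_format}."""

def build_prompt(role: str, context: str, question: str,
                 output_format: str = "two concise sentences") -> str:
    """Render the template; keeping it in one place makes prompts auditable."""
    return PROMPT_TEMPLATE.format(role=role, context=context,
                                  question=question, output_format=output_format)

if __name__ == "__main__":
    prompt = build_prompt(
        role="support analyst for an internal IT helpdesk",
        context="VPN access requires the GlobalConnect client, version 6.2+.",
        question="Which VPN client do employees need?",
    )
    print(prompt)  # send to whichever LLM endpoint your organization governs
```

Because the template lives in code, it can be diffed, reviewed, and upskilled on, which eases the hire-versus-train question.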

The list of challenges to implementing LLMs is long, so instead of focusing on what’s not working, let’s examine how to make developing these models easier for the inevitable future.

A Tech-Agnostic Future That Is Responsible and Governed

Tech-agnostic architecture could solve some of these issues when we think about preparing for the future stack. First, users need access to the data. Consider rolling over user- or group-based roles and SSO from existing Active Directory groups to prevent data security headaches when granting access to a variety of data sources. Once access is granted, it’s time to connect. Tech agnosticism requires the ability to connect to any data source that might be used for fine-tuning or as reference augmentation for an LLM (on-prem, in the cloud, APIs, even uploaded CSV files or unstructured data). Lastly, this data should be wrapped in a governance layer that tracks versioning and metadata. The ability to document along the way is vital for understanding the decision-making process and provides transparency for traceability and project handoffs.
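
To make this concrete, here is a minimal sketch, assuming a simple in-house registry, of what a tech-agnostic connection layer with group-based access and governance metadata could look like. The class and field names (`DataConnection`, `ConnectionRegistry`, `owner_group`) are hypothetical, not any vendor’s API.

```python
# Every data source, whether an on-prem database, a cloud bucket, an API, or
# an uploaded CSV, registers behind one interface that records governance
# metadata (version, owner group, registration time) alongside the connection.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataConnection:
    name: str
    kind: str                # e.g. "postgres", "s3", "rest_api", "csv_upload"
    uri: str
    owner_group: str         # mapped from an Active Directory / SSO group
    version: int = 1
    metadata: dict = field(default_factory=dict)

class ConnectionRegistry:
    """Central registry so access, versioning, and lineage live in one place."""
    def __init__(self):
        self._connections: dict[str, DataConnection] = {}

    def register(self, conn: DataConnection) -> None:
        conn.metadata["registered_at"] = datetime.now(timezone.utc).isoformat()
        self._connections[conn.name] = conn

    def can_access(self, conn_name: str, user_groups: set[str]) -> bool:
        conn = self._connections[conn_name]
        return conn.owner_group in user_groups  # role check rolls over from SSO

registry = ConnectionRegistry()
registry.register(DataConnection(
    name="claims_warehouse", kind="postgres",
    uri="postgresql://warehouse.internal/claims",
    owner_group="analytics-team",
))
print(registry.can_access("claims_warehouse", {"analytics-team"}))  # True
```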

Wrapping data connections, transformations, modeling, deployment, and governance into a single orchestration layer may sound like an impossible task. However, there are platforms that provide a space where IT feels secure, data scientists and engineers can happily code with visibility and control, analysts and business users can interact through a point-and-click interface, and analytics leaders and compliance officers can oversee and control the project portfolio. This means all users are on the same platform, able to collaborate and break down the silos that often exist because of tooling complexity.

There are some great solutions out there for data-specific governance, like Alation or Collibra. However, when it comes to AI projects, and LLMs specifically, it’s bigger than governing data. It’s also about governing projects and models. When governance for data, models, and projects converges in a single platform, it’s like magic. A sense of transparency and trust is built into the Analytics and AI lifecycle, allowing for efficient hand-offs between teams. Governance starts with project ideation and qualification and follows through to model iteration, deployment, monitoring, and cycling back into the preparation phase. LLMs require the same level of documentation, if not more: an understanding of the prompt engineering, the business justification, and why a certain model was called over another.
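
As a rough illustration of governing models rather than just data, here is a sketch of an audit record for LLM calls that captures exactly those three things: the prompt, the model chosen, and the justification. The schema and the `log_llm_call` helper are assumptions for illustration, not a formal standard.

```python
# Each LLM call appends one auditable record: which model was called, why it
# was chosen over alternatives, the prompt used, and the business reason.
# JSON Lines keeps the log greppable and easy to review at hand-off time.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LLMCallRecord:
    project: str
    model: str                  # which model was called...
    model_rationale: str        # ...and why it was chosen over another
    prompt: str
    business_justification: str
    timestamp: str = ""

def log_llm_call(record: LLMCallRecord, path: str = "llm_audit_log.jsonl") -> None:
    """Append one record per call so decisions stay traceable."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_llm_call(LLMCallRecord(
    project="claims-summarization",
    model="in-house-llm-v2",
    model_rationale="PII cannot leave our network; hosted APIs were ruled out",
    prompt="Summarize the attached claim in three sentences.",
    business_justification="Reduce adjuster triage time",
))
```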

If you aren’t sure where to start with governance or Responsible AI, especially around Generative AI, here is the RAFT framework. This short ebook provides organizations a reference point for identifying how they will approach responsible LLM production. Responsible AI is the beginning of a robust AI Governance framework, which tackles how an organization might keep data privacy and ethical guidelines at the forefront of its project and model development and require approvals and sign-offs. It also includes, depending on the use case, making sure a human is in the loop or initiating regular model drift checks.
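
To show what one of those controls can look like in practice, here is a minimal sketch of a scheduled drift check that escalates to a human reviewer when the model’s score distribution shifts. Population Stability Index (PSI) and the 0.2 threshold are common conventions, not requirements; the function name and windows are illustrative.

```python
# Compare this week's model scores against a reference window captured at
# deployment time; if the distribution shift (PSI) exceeds a threshold,
# route the model to a human reviewer instead of silently continuing.

import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference window and the current window of scores."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range scores
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)       # avoid log(0)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.5, 0.1, 5000)           # scores at deployment time
current = rng.normal(0.58, 0.12, 5000)           # scores observed this week

psi = population_stability_index(reference, current)
if psi > 0.2:                                    # rule of thumb: > 0.2 = drift
    print(f"PSI={psi:.3f}: drift detected, routing to human review")
else:
    print(f"PSI={psi:.3f}: within tolerance")
```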

Utilize Your Organization’s Existing Skill Sets

As you catalog the tools in your organization, consider where most of your development takes place. Is it happening solely in notebooks that require code knowledge? Are you versioning your work through a tool like GitHub, which is often confusing to a non-coding audience? How is documentation handled and maintained over time? Oftentimes, business stakeholders and consumers of the model are locked out of the development process because of a lack of technical understanding and documentation. When work happens in a silo, hand-offs between teams can be inefficient and result in knowledge loss or even operational roadblocks. This leads to results that are not trusted or, even worse, outputs that are never adopted. Many organizations wait too long before leveraging business experts during the preparation and build stages of the AI lifecycle.

Business stakeholders have the context to understand whether or not the LLM outputs are in line with the expectations or needs of the business, yet they are often excluded from validation. This might be because only some of the glued-together infrastructure is understood by the business unit, the hand-off between teams is clunky and poorly documented, or the steps aren’t clearly laid out in an understandable manner. Folding these users into the development cycle earlier in the process, by providing tooling and proper guardrails, decreases the overall workload required to get robust models into production. Collaboration starts at project description, not once a model is being validated and recoded.

As artificial intelligence continues to evolve and integrate into our daily work and lives, the importance of a tech-agnostic approach becomes increasingly clear. This approach empowers organizations to embrace technological advances without accumulating unnecessary technical debt. However, it’s vital to emphasize that this doesn’t eliminate the need for robust governance around new models’ end-to-end lifecycle. As we move forward into the era of Generative AI and Large Language Models, a responsible and adaptable approach, coupled with comprehensive governance, will be the cornerstone of successful implementations. By fostering collaboration, leveraging diverse skill sets, and staying committed to Responsible AI guidelines, organizations can navigate the ever-changing AI landscape with confidence and efficiency, ensuring AI’s positive impact on both business and society.