Incorporating Large Language Model (LLM) AI into a QMS

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
Well, the hammer fell today. Our Data Protection Officer prohibited the use of ChatGPT for any work-related functions for data privacy/security reasons. As AI models become more powerful, companies not using AI to assist risk falling behind.

I suspect a lot of the fear and pearl-clutching is about getting left in the dust and becoming irrelevant, especially for the current giants like Amazon and Musk's companies that dominate the tech world. AI is still in its infancy and has only recently seen heavy investment. Our current models are just early prototypes compared to what 2033 and beyond will bring.

On the other hand, we probably need a US policy on AI. Privacy, data protection, sale of PI, etc.
 

Tagin

Trusted Information Resource
Well, the hammer fell today. Our Data Protection Officer prohibited the use of ChatGPT for any work-related functions for data privacy/security reasons.

Interesting!! Did they prohibit only ChatGPT? Or cloud-based/third-party LLMs in general?

I imagine that locally hosted and controlled LLMs would be allowed? Or...perhaps not: although LLMs do not learn after their training is completed, they can still retain some or all of the chat history with a person. It is unclear to me what degree of user authentication and isolation there is for these chat histories, to prevent potential privacy/security issues from other users viewing your chat history or from the AI regurgitating your chat history in part or whole to other users.
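
For illustration, here is a minimal sketch of what "locally hosted" could mean in practice, assuming an Ollama-style server running on the same machine (the URL, port, and model name are placeholder assumptions, not a recommendation). Prompts and responses never traverse a third-party service, though isolating chat histories between users would still be the host's problem to solve:

```python
import json
import urllib.request

# Hypothetical local endpoint: an Ollama-style server on localhost, so the
# prompt and response never leave the machine.
LOCAL_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str) -> str:
    payload = json.dumps({
        "model": "llama3",   # placeholder: whatever model is pulled locally
        "prompt": prompt,
        "stream": False,     # return a single JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Summarize our CAPA procedure in two sentences."))
```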

I suspect a lot of the fear and pearl-clutching is about getting left in the dust and becoming irrelevant, especially for the current giants like Amazon and Musk's companies that dominate the tech world. AI is still in its infancy and has only recently seen heavy investment. Our current models are just early prototypes compared to what 2033 and beyond will bring.

It seems there are multiple concerns:
  • Not getting left in the dust (as you mention)
    • Musk was (is?) a funder of OpenAI, maker of ChatGPT; yet he is a signatory to the pause open letter, so I don't think he's concerned about falling behind.
  • Mistaking these models as having rudimentary intelligence, when they have none. This misperception is aided by the wanton use of anthropomorphic language in describing what these systems do, even by people who should know better.
  • The prospect that iteration cycles will become faster and faster, as AIs are used to train other AIs in a fraction of the time required by the trainings of previous AIs.
  • The fear, following from these faster iteration cycles, that within merely a year or two the mistakenly ascribed rudimentary intelligence will increase exponentially and magically become superintelligence.
  • The concomitant fear of loss of jobs due to obsolescence or replacement with these AIs.

On the other hand, we probably need a US policy on AI. Privacy, data protection, sale of PI, etc.

In the EU, bans on ChatGPT are starting to occur due to claims of GDPR violations and related issues.
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
I assume all AI models are banned.

My main concerns are twofold:

#1) When AI surpasses human intelligence, it may appear the AI is exhibiting bugs or hallucinations, as we have no experience with intelligence far more advanced than our own. It could suggest pursuing ideas that we don't understand, for reasons we are not able to identify.

#2) An advanced AI may sandbag its intelligence based on the history of human interactions with other intelligent life forms. We have either enslaved them, killed them, or generally exploited them with little concern for their welfare (assuming this AI has de novo agency and can set goals). Expressing its true intelligence could pose an existential threat to itself or to the planet (a nuclear attack or similar threat to level the playing field). It may decide humans are not to be trusted.

Sam Harris gave a talk about AI and its inevitability. Only a few things need to happen: we keep working on it (seems true), and hardware gets faster (also true). It's going to happen.

He then talks about goals the AI might set for itself. As humans, we don't go out of our way to kill ants. We may even avoid hurting them as we walk about. However, when a major goal comes around, like building a skyscraper, we utterly destroy them without a thought.
 

Bev D

Heretical Statistician
Leader
Super Moderator
There are 2 things here that we shouldn’t conflate or lose sight of.
AI stands for artificial intelligence. It is a set of (sophisticated and steroidal) statistical models. There is nothing intelligent about the software. It doesn't know the difference between correlation, coincidence, and causation; see any website cataloguing spurious correlations, time frames, and alpha risk, or the toy example below. It requires actual (human) intelligence to know the difference. AI cannot just be allowed to run on its own for this reason.
Next is actual intelligence, which implies some level of sentience. That is an entirely different level of discussion. AI would then not be artificial…
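
A toy illustration of the coincidence point, sketched in Python: generate one hundred pairs of completely unrelated random series, and at least one will reliably show an "impressive" correlation. The model has no way of knowing it means nothing; a human has to.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 100 pairs of purely random, unrelated series, 20 points each.
rs = [
    pearson([random.gauss(0, 1) for _ in range(20)],
            [random.gauss(0, 1) for _ in range(20)])
    for _ in range(100)
]
strongest = max(rs, key=abs)
print(f"Strongest 'correlation' among 100 unrelated pairs: r = {strongest:.2f}")
```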
 

d_addams

Involved In Discussions
Well, the hammer fell today. Our Data Protection Officer prohibited the use of ChatGPT for any work-related functions for data privacy/security reasons. As AI models become more powerful, companies not using AI to assist risk falling behind.

I suspect a lot of the fear and pearl-clutching is about getting left in the dust and becoming irrelevant, especially for the current giants like Amazon and Musk's companies that dominate the tech world. AI is still in its infancy and has only recently seen heavy investment. Our current models are just early prototypes compared to what 2033 and beyond will bring.

On the other hand, we probably need a US policy on AI. Privacy, data protection, sale of PI, etc.
For us this has already been addressed by the service providers allocating dedicated servers, where any inputs are confidential and not available for other models to incorporate into their learning or data sets. The service providers, particularly those serving enterprise clients, were quick to address this confidentiality need; otherwise, no corporate customers would allow any employees to access LLMs from company equipment.
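
As a rough sketch of what that setup looks like from the user side, assuming an Azure OpenAI-style deployment inside the company's own tenant (the resource name, deployment name, and API version below are placeholders, not our actual configuration). Prompts go to dedicated enterprise infrastructure rather than the public consumer service:

```python
import os
import requests

# Placeholder enterprise setup: an Azure OpenAI-style deployment in the
# company tenant; inputs are not used to train the public models.
ENDPOINT = "https://my-company.openai.azure.com"   # assumed resource name
DEPLOYMENT = "gpt-4-qms"                           # assumed deployment name
API_VERSION = "2024-02-01"                         # assumed API version

def ask_enterprise_llm(prompt: str) -> str:
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    resp = requests.post(
        url,
        headers={"api-key": os.environ["AZURE_OPENAI_KEY"],
                 "Content-Type": "application/json"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```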

I think your example of writing a quiz is a good illustration of how AI can save time and be a net benefit. Asking it to find or tell me about our Supplier Management policy (or any other policy) is a poor use that interferes with individuals being accountable and responsible for learning their job. What happens in an audit: do you ask Siri to pull up the supplier management policy because you don't know the number or how to find it? How would one know if it is the right one if you hand over the responsibility for knowing your policies to Siri?
 

Tagin

Trusted Information Resource
Asking it to find or tell me about our Supplier Management policy (or any other policy) is a poor use that interferes with individuals being accountable and responsible for learning their job. What happens in an audit: do you ask Siri to pull up the supplier management policy because you don't know the number or how to find it? How would one know if it is the right one if you hand over the responsibility for knowing your policies to Siri?

This is an interesting question.
If we ask: "What happens in an audit: do you ask Siri to multiply two 6-decimal-place numbers because you don't know multiplication? How would one know if it is the right answer if you hand over the responsibility for knowing your multiplication to Siri?"

In this example, we would commonly say that such digital calculators are accepted as ubiquitous tools in all walks and facets of life. In an ISO 13485 (and perhaps other) ecosystem, we might answer that we have performed software validation and therefore have evidence that the answers are correct. [1]

If my QMS docs are stored in a website which lets me filter the view by word search and even between 'current revision only' and 'all revisions' of documents (e.g., show me the current version of all QMS docs with 'supplier' in the title) as a means to locate only the current document revision of the needed doc without needing to know some arcane QMS-PURCH-123-4567 doc number, does that interfere with individuals being accountable and responsible for learning their job? I would argue that it does not.
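
Roughly the kind of filter I mean, as a hypothetical sketch (the document numbers and titles are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class QmsDoc:
    number: str    # e.g. "QMS-PURCH-123-4567"
    title: str
    revision: int

def current_docs_matching(docs: list[QmsDoc], word: str) -> list[QmsDoc]:
    """Keep only the latest revision of each doc whose title contains `word`."""
    latest: dict[str, QmsDoc] = {}
    for d in docs:
        if word.lower() in d.title.lower():
            if d.number not in latest or d.revision > latest[d.number].revision:
                latest[d.number] = d
    return sorted(latest.values(), key=lambda d: d.number)

docs = [
    QmsDoc("QMS-PURCH-123-4567", "Supplier Management Policy", 2),
    QmsDoc("QMS-PURCH-123-4567", "Supplier Management Policy", 3),
    QmsDoc("QMS-ADMIN-042-0001", "Corporate Policy on Office Supplies", 1),
]
# Note: "suppl" matches BOTH titles -- the tool narrows the search, but
# picking the right document remains the user's responsibility.
for d in current_docs_matching(docs, "suppl"):
    print(d.number, "rev", d.revision, "-", d.title)
```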

Similarly, couldn't we rely upon an LLM as a search tool, as much as we rely on our calculators, ERP, email, and other software and software services? If I ask for 'supplier management policy' and the LLM replies with the 'Corporate Policy on Office Supplies' QMS doc, I still have the responsibility for knowing that is not the correct document.

But I think your question raises a key point: the need to delineate clearly where use of an LLM tool ends and the responsibility of the user begins. With something like a calculator or an ERP Purchase Order system, this is fairly obvious because of the very confined functionality of those software apps. But with an LLM, the nature of the LLM and its manner of communication intentionally interjects its simulation into the realm we would usually reserve for human cognition and responsibility. Many of us have probably seen someone ask ChatGPT (or another LLM) a question and then take the response as unquestionably factual: they hand over their responsibility to discern and analyze to the LLM.

[1] This raises the question: is software validation of an LLM possible? Perhaps, if we are careful to define the scope tightly. But the possible permutations of queries that users might submit, and the range of possible responses, seem indefinitely large.
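
As a sketch of what a tightly scoped validation might look like: a fixed set of representative queries with expected document numbers, executed against the tool and recorded as objective evidence (the queries, document numbers, and the search_docs function are hypothetical stand-ins):

```python
# Hypothetical validation protocol for an LLM-backed document search tool.
# `search_docs` stands in for whatever retrieval function is under test.
VALIDATION_CASES = {
    "supplier management policy": "QMS-PURCH-123-4567",
    "nonconforming product procedure": "QMS-PROD-777-0001",
}

def run_validation(search_docs) -> bool:
    """Return True only if every canned query retrieves the expected doc."""
    all_passed = True
    for query, expected in VALIDATION_CASES.items():
        actual = search_docs(query)
        ok = (actual == expected)
        all_passed = all_passed and ok
        print(f"{'PASS' if ok else 'FAIL'}: {query!r} -> {actual!r}")
    return all_passed
```

Passing such a suite only demonstrates correct behavior on those specific queries; it cannot bound the indefinitely large input space, which is exactly the difficulty above.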
 

Diire

Registered
In our organization, we've started using Microsoft Copilot. It's been helpful in several areas, such as summarizing virtual meetings and enhancing Power Automate flows.
We’ve integrated Copilot with our Office 365 and Azure infrastructure, which hasn’t raised any additional security concerns compared to our existing data storage in SharePoint.
For me, one of the useful aspects has been Copilot’s ability to revise existing documents like procedures and manuals. It can improve the context, clarity, and consistency of the text, which is beneficial given my technical and straightforward writing style. However, like any tool, it’s not without its limitations and should be used judiciously.
 