
Navigating AI in Your Business

Addressing Your Concerns

Recently, we’ve been receiving a lot of inquiries from our Managed Services clients about artificial intelligence (AI) technologies, particularly AI language models like ChatGPT. The questions often revolve around data privacy and the behaviour of these AI systems, including the phenomenon of AI ‘hallucinations.’ Given the growing interest and the pivotal role of AI in IT services, we think it’s crucial to address these concerns.

Data Interaction and Privacy in AI

ChatGPT, like other AI models, processes user input in real time to generate responses. Importantly, these models shouldn’t ‘remember’ or store the data they process, at least not in any publicly accessible way. However, while the model itself may not retain data beyond your private session, the broader system in which it operates, such as the hosting platform or application, might log, store, or monitor user inputs and AI responses. If in doubt, don’t share anything private or sensitive, and remember that not all AI tools operate in the same way or to the same standards.

It’s critical to be aware that even if the AI model itself doesn’t store the sensitive data you input, the interface you’re interacting with could still pose a data privacy risk. Here are some ways to safeguard your data in an AI environment:

  1. Avoid Sharing Sensitive Information: Treat AI interactions as if they were public and refrain from inputting confidential data (a simple screening sketch follows this list).
  2. Understand the Hosting Platform: Investigate the data handling policies of any third-party platform that hosts the AI tool. Ensure the data is encrypted and securely stored.
  3. Data Protection Compliance: Use AI tools and platforms that comply with data protection regulations like GDPR, CCPA, PIPL and others.
  4. Limit Access: Implement access controls to prevent unauthorised use of AI interfaces and the data passing through them.
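
For point 1, one practical first line of defence is to screen prompts for obviously sensitive patterns before they leave your network. The Python sketch below is purely illustrative: the patterns and the redact_sensitive function are our own hypothetical examples, and a real deployment would use a dedicated data loss prevention (DLP) tool rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for common sensitive data; illustrative only.
# A production setup would use a proper DLP tool, not ad-hoc regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
    "card_number": re.compile(r"\b\d{13,16}\b"),
}

def redact_sensitive(prompt: str) -> str:
    """Replace anything that looks like sensitive data with a placeholder
    before the prompt is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact_sensitive("Summarise the complaint from jane.doe@example.com, tel 07700900123."))
# -> Summarise the complaint from [REDACTED EMAIL], tel [REDACTED UK_PHONE].
```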

AI ‘Hallucinations’ and Responsible Use of AI

AI models like ChatGPT, while powerful, aren’t perfect and may sometimes generate ‘hallucinations’: plausible-sounding outputs that aren’t grounded in their training data and are consequently false or inaccurate. If not managed properly, these hallucinations can spread misinformation and pose reputational risks.

Here are some strategies to prevent and manage AI hallucinations:

  1. Staff Training: Ensure staff members who use AI tools understand the concept of hallucinations and know how to verify the information generated by AI models.
  2. Implement Safeguards: Carefully monitor the outputs of AI tools to detect and manage hallucinations (see the logging sketch after this list).
  3. Set Clear Usage Policies: Develop internal policies outlining how and when AI tools should be used, addressing the potential for hallucinations and providing guidelines to manage them.
  4. Feedback Mechanism: Encourage users to provide feedback about AI outputs, especially if they spot potential hallucinations.
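
To illustrate points 2 and 4 above, even a lightweight logging wrapper makes AI outputs easier to monitor and gives users a place to record feedback. The Python sketch below is a hypothetical example: ask_model stands in for whichever AI call your tooling actually makes, and the CSV log format is our own assumption.

```python
import csv
from datetime import datetime, timezone
from typing import Callable

def reviewed_completion(ask_model: Callable[[str], str], prompt: str,
                        log_path: str = "ai_output_log.csv") -> str:
    """Call an AI model, log the exchange, and return the answer so a
    human reviewer can later verify it and flag suspected hallucinations."""
    answer = ask_model(prompt)
    with open(log_path, "a", newline="", encoding="utf-8") as f:
        # Columns: timestamp, prompt, answer, reviewer verdict (blank until checked).
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), prompt, answer, ""]
        )
    return answer

# Example usage with a stand-in model; swap in your real AI call.
answer = reviewed_completion(lambda p: "stub answer", "Draft a summary of GDPR.")
```

A reviewer can then work through the log periodically and fill in the verdict column, turning the feedback mechanism in point 4 into a routine task rather than an afterthought.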

AI tools like ChatGPT offer immense potential benefits but also come with unique challenges. By understanding these challenges, implementing safeguards, and promoting responsible usage, we can navigate them successfully. As always, we’re here to support our clients in understanding and leveraging these powerful technologies effectively and safely. As with any other cloud-managed system, PTS will never input private client information into these tools unless they meet stringent privacy standards.


If you need help or advice related to this topic, please get in touch with us here.