That chat is going places. Specifically, Virginia.
Yesterday, I mentioned the privacy risks of internal AI chatbot use. Today I’m going to talk about putting one on your website.
Many businesses add one to their site without much thought beforehand. The dev team gets asked to install it and, the next thing you know, visitors are typing their names, email addresses, order details, and complaints into a text box that beams it all to a third party, often outside Europe.
Once again: that’s personal data processing under the GDPR. You’re the data controller, the chatbot company is the data processor. You need a lawful basis, transparency, and yet another Data Processing Agreement. You also need to handle access and deletion requests from users and, thanks to the EU AI Act, you must clearly tell visitors that they’re talking to an AI.
Here’s the tricky part though: loads of these chatbot vendors market themselves as European or EU-hosted, but many of them are still sending your visitors’ data to OpenAI, Google or Anthropic behind the scenes. Running large language models takes heavy infrastructure, and most companies outside the big Silicon Valley giants simply don’t have the capacity for it.
If you want to know where the data actually goes (and you should), ask the vendor for their sub-processor list. If you get a vague answer like “we use trusted partners”, run away.
And then we come back to that same question: do you really need one? Surveys regularly show the vast majority of people prefer talking to real flesh-based humans.
When chatbots do work well, it’s for basics: FAQs, order status, things like that. As soon as things get even slightly complex, the best systems work by handing the conversation off to a human.
And if a bot’s main job is to route visitors to a human, wouldn’t a well-organised help section and a good contact form do the same thing without all the overhead, risk, and compliance issues?
Colin