
Chatbot privacy and the law: how good design helps us avoid getting into trouble

Chatbot privacy has become a sensitive issue in the last few years.

While it is true that conversational AI gives organizations the opportunity to gather a far larger amount of data from customers and users, it is also true that chatbots carry a higher risk of violating privacy legislation.

This is the reason why any business planning to use chatbots in conversational marketing should give due consideration to the possible legal concerns.

Chatbots pose a myriad of legal issues that need to be addressed by adopting appropriate internal policies and by taking great care in choosing the external agencies and suppliers that offer chatbot services.

Which privacy risks does the use of chatbots create?

The main problem with chatbot privacy is that bots are not as smart as we expected.

For example, if a chatbot has to identify someone, it needs to collect a fairly large amount of personal data, which conflicts with the principle of data minimization ("the less you ask, the better") of the European General Data Protection Regulation (GDPR).
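To make the tension concrete, here is a minimal sketch of data minimization in practice, written in Python; the task names and fields are hypothetical examples, not a prescription:

```python
# Minimal sketch of data minimization: the bot only accepts the fields
# the current task actually requires, and drops everything else.
# Task names and field names are hypothetical examples.

REQUIRED_FIELDS = {
    "order_status": {"order_id"},           # no name, email, or address needed
    "book_appointment": {"name", "email"},  # only what booking requires
}

def collect_user_data(task: str, submitted: dict) -> dict:
    """Keep only the fields the task needs; discard the rest."""
    allowed = REQUIRED_FIELDS.get(task, set())
    return {k: v for k, v in submitted.items() if k in allowed}

# A user who over-shares still leaves only the minimum on record.
print(collect_user_data("order_status", {
    "order_id": "A-1029",
    "email": "jane@example.com",   # not needed for this task -> dropped
}))
# {'order_id': 'A-1029'}
```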

Customer profiling is another activity at the edge of legality. The bot needs to analyze and combine large sets of personal data, which collides with the GDPR principle of purpose limitation.

The rule of purpose limitation states that personal data may be used only for a specific goal. This sets a rigid limit on any data mining.
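As a hedged illustration of purpose limitation, personal data can be tagged at collection time with the one purpose it serves, and every later use checked against that tag (all names below are illustrative):

```python
from dataclasses import dataclass

@dataclass
class PersonalRecord:
    """Personal data tagged with the single purpose it was collected for."""
    value: str
    purpose: str

def use_record(record: PersonalRecord, requested_purpose: str) -> str:
    # Purpose limitation: data collected for one goal cannot be reused
    # for another (e.g. support data cannot silently feed profiling).
    if record.purpose != requested_purpose:
        raise PermissionError(
            f"Data collected for '{record.purpose}' cannot be used for "
            f"'{requested_purpose}'"
        )
    return record.value

email = PersonalRecord("jane@example.com", purpose="support_ticket")
use_record(email, "support_ticket")      # allowed
# use_record(email, "marketing_profile") # raises PermissionError
```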

Other limits come from the ban on collecting sensitive personal data (such as ethnicity, religion, or health) and the obligation to process data transparently.

Unfortunately, artificial intelligence cannot be fully transparent, as in some ways it is opaque even to the software developers who built it.

Last, people have the right to object to the use of their personal data, and they may ask for its deletion. Respecting chatbot privacy therefore also means safeguarding these two rights.
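A minimal sketch of how a chatbot backend might honor those two rights, assuming a simple in-memory store (a real system would also need to purge backups and any downstream copies):

```python
# Illustrative sketch of the right to object and the right to erasure.
# 'store' stands in for whatever database the chatbot writes to.

store = {"user-42": {"email": "jane@example.com", "chat_logs": ["..."]}}
objections = set()  # users who objected to further processing

def handle_objection(user_id: str) -> None:
    """Stop processing this user's data going forward."""
    objections.add(user_id)

def may_process(user_id: str) -> bool:
    """Processing stops once a user has objected."""
    return user_id not in objections

def handle_deletion_request(user_id: str) -> None:
    """Erase the user's personal data on request."""
    store.pop(user_id, None)

handle_objection("user-42")
handle_deletion_request("user-42")
assert not may_process("user-42") and "user-42" not in store
```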

Chatbot privacy and bot mistakes

The right of people to ask for the deletion of their personal data deserves further reflection. Chatbots are far from perfect or infallible, which creates risks of infringing chatbot privacy rules.

The leading case is Amazon's AI recruitment chatbot, which was found to be gender-biased. The bot came to believe that men were better candidates than women simply because the resumes it was fed came overwhelmingly from men!

Another case is Tay, an AI chatbot introduced by Microsoft to increase engagement on social networks. Unfortunately, Tay started using racist language and expressing far-right and Nazi opinions.

The last case is Beauty.AI, a chatbot that was supposed to judge beauty impartially. After some time, it was discovered that Beauty.AI did not like dark faces.

The above-mentioned cases of AI failure should make us reflect. Nobody wants to hand sensitive data to someone who is prejudiced against them. The problem of privacy is therefore common to both humans and bots, especially when bots make mistakes like the three above.

Chatbot privacy and AI service suppliers

Another chatbot privacy issue is that most organizations do not have the in-house ability to process the data or manage the complex workings of bots.

The most popular solution is to outsource the whole process of building, managing, and maintaining the chatbot. Unfortunately, under privacy law, handing the job to a specialized IT company does not release an organization from legal liability.

This creates an awkward situation from a chatbot privacy standpoint: the organization remains liable for privacy violations, even when those violations are committed by a third-party company over which the organization has no control.

Chatbots: how does good design help with privacy?

There are several design strategies that an organization can implement to reduce the legal risks around chatbot privacy.

Establishing a policy on chatbots

An organization should create internal policies and protocols that establish the permitted scope of its chatbots' activities. Other important points to consider (sketched as configuration after this list) are:

  • what information is collected and how
  • where information is sent and stored
  • the other usual points that any good privacy policy should cover
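To keep such a policy enforceable rather than purely declarative, it can be mirrored in a machine-readable form that the bot checks at runtime. A hedged sketch, where every key and value is a hypothetical example:

```python
# Hedged sketch: an internal chatbot policy mirrored as configuration,
# so the bot's code can check it at runtime. All values are examples.

CHATBOT_POLICY = {
    "collected_fields": ["name", "email", "order_id"],  # what is collected
    "collection_channels": ["web_widget"],              # how it is collected
    "storage_region": "eu-west-1",                      # where data is stored
    "third_party_recipients": [],                       # where data is sent
    "retention_days": 30,                               # how long it is kept
}

def is_field_allowed(field: str) -> bool:
    """The bot refuses to record anything the policy does not list."""
    return field in CHATBOT_POLICY["collected_fields"]

assert is_field_allowed("email")
assert not is_field_allowed("date_of_birth")
```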

Design your chatbot so that your privacy policy is embedded in it

Chatbot biases can be prevented, and any conversational chatbot can be designed to respect privacy policy rules.

In particular, it is possible to avoid collecting data about religion, health, ethnicity, or other kinds of sensitive personal data.

Utilize data encryption and data filtering

Obscuring sensitive personal data may be necessary to implement an effective chatbot privacy policy, and these two technologies allow chatbot designers to achieve that goal.
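As a hedged sketch of how the two can work together, the filtering side can be a simple redaction pass and the encryption side a standard symmetric scheme; the example below assumes the Python `cryptography` package is installed, and the redaction patterns are illustrative rather than production-grade:

```python
import re
from cryptography.fernet import Fernet

# Filtering: redact obviously sensitive patterns before storing a message.
# These regexes are illustrative, not production-grade detection.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{16}\b"), "[CARD REDACTED]"),               # card numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL REDACTED]"), # emails
]

def redact(text: str) -> str:
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

# Encryption: whatever survives redaction is stored encrypted at rest.
key = Fernet.generate_key()   # in practice, held in a key management system
fernet = Fernet(key)

message = "My card is 4111111111111111 and my email is jane@example.com"
stored = fernet.encrypt(redact(message).encode())
print(fernet.decrypt(stored).decode())
# My card is [CARD REDACTED] and my email is [EMAIL REDACTED]
```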

Allow human control over AI chatbots

AI chatbots must be set up in a way that allows human control over them, in line with Article 25 of the EU GDPR.
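One possible shape of such human control, sketched under assumptions (the threshold, topic list, and handler are all hypothetical): the bot escalates to a human reviewer whenever its confidence is low or the topic is sensitive, instead of deciding on its own.

```python
# Hedged sketch: human-in-the-loop control over an AI chatbot's replies.
# The threshold and topic list are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.85
ESCALATE_TOPICS = {"health", "complaint", "data_deletion"}

def respond(intent: str, confidence: float, draft_reply: str) -> str:
    """Route low-confidence or sensitive exchanges to a human agent."""
    if confidence < CONFIDENCE_THRESHOLD or intent in ESCALATE_TOPICS:
        return escalate_to_human(intent)
    return draft_reply

def escalate_to_human(intent: str) -> str:
    # In a real system this would open a ticket for a human agent.
    return f"A human colleague will follow up on your {intent} request."

print(respond("order_status", 0.95, "Your order ships tomorrow."))
print(respond("data_deletion", 0.99, "Deleted."))  # always human-reviewed
```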

Chatbot privacy: final considerations

As we have seen in the previous paragraphs, most problems with chatbot privacy can be addressed by taking care of chatbot design.

In other words, chatbot design and chatbot privacy should become part and parcel of your best practices, along with the list of mistakes to avoid with chatbots.

If you are a board director or a CEO, a good starting point could be getting a basic understanding of what a chatbot is and how it works.

A grounding in these topics is the only way to be able to ask developers and lawyers the right questions and correctly understand their answers.

Knowing the right questions to ask means starting on the right foot, which is essential to avoid getting into trouble later with chatbot privacy.

“Tactics without strategy is the noise before defeat”, said Sun Tzu. And gathering the necessary information on the enemy is the first step to working out a good strategy, even in the world of chatbots.


Written by
Marco La Rosa

Marco La Rosa is a blogger, writer, content creator, and UX designer. Keenly interested in new technologies, he wrote Neurocopywriting, a book about neuroscience and its applications to writing and communication.

LinkedIn