
OpenAI Function Calling: Bridging AI and Customer Support


When the OpenAI models first became available via API, you were amazed by everything you could suddenly do. There was one problem, however: your chatbot could answer people, but it could not act. The simplest example was human handover. If the chatbot could not accomplish its task, there was no easy way to transfer the request to a human being. OpenAI function calling solved exactly that.

In the realm of artificial intelligence (AI) and chatbots, one concept that is gaining significant traction is OpenAI function calling. Function calling is best understood as the AI's ability to execute specific tasks or commands during an interaction. In essence, it allows chatbots to 'call' predefined functions based on user inputs. These functions can be anything: querying a database, calling an external API, or executing a particular action.

OpenAI has brought this concept to the forefront with its latest models, which can intelligently choose to output a JSON object containing arguments to call certain functions. This advancement provides a new way to connect the AI's capabilities more reliably with external tools and APIs, thereby enhancing the chatbot's ability to respond to user needs and perform tasks.

Function calling in chatbots brings us closer to a world where users can interact with AI in a more natural, intuitive, and productive way. It's not just about asking questions or getting answers; it's about having an intelligent assistant that can perform actions, make decisions, and provide services based on our instructions. This exciting development holds immense promise for the future of customer support.

The importance of function calling in customer support

Function calling plays a pivotal role in customer support, dramatically enhancing the scope of services and solutions that can be provided to customers. Instead of merely responding with pre-defined messages, AI chatbots equipped with function calling can carry out actions to resolve customer queries in real-time.

For instance, if a customer asks about the status of their order, the chatbot can call a function to fetch the latest information directly from the database, providing an immediate and accurate response. Similarly, queries about product availability or booking appointments can be handled in a much more streamlined manner.

Not only does this speed up the resolution process, but it also reduces the need for human intervention, allowing support teams to focus on more complex issues. The result is an improved customer experience that is quick, efficient, and highly personalized.

Moreover, OpenAI function calling has the potential to revolutionize the way businesses handle customer data. With the ability to convert natural language into API calls or database queries, AI chatbots can extract, process, and utilize data in ways that were previously challenging or time-consuming.

GPT models enabling OpenAI's function calling

This part can be quite technical.

OpenAI's GPT models, such as gpt-4-0613 and gpt-3.5-turbo-0613, have been instrumental in integrating function calling into AI chatbot operations. These models have been fine-tuned to not only detect when a function needs to be called based on user input but also to respond with JSON that adheres to the function signature.

For instance, a user query like 'Email Anya to see if she wants to get coffee next Friday' can be converted into a function call like send_email(to: string, body: string). Similarly, a question like 'What’s the weather like in Boston?' can be translated to get_current_weather(location: string, unit: 'celsius' | 'fahrenheit').
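To make this concrete, here is a minimal sketch of the email example using OpenAI's Python library as it worked with the 0613 models (the functions parameter of the Chat Completions API). The send_email definition, its field names, and descriptions are illustrative, not an official schema:

    import json
    import openai  # assumes the openai Python package (pre-1.0 API style) and OPENAI_API_KEY set

    # Describe the function to the model. The model never sends an email itself;
    # it only returns a JSON object whose arguments match this signature.
    functions = [
        {
            "name": "send_email",
            "description": "Send an email to a recipient",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "Recipient name or email address"},
                    "body": {"type": "string", "description": "Plain-text body of the email"},
                },
                "required": ["to", "body"],
            },
        }
    ]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "Email Anya to see if she wants to get coffee next Friday"}],
        functions=functions,
        function_call="auto",  # let the model decide whether a function call is needed
    )

    message = response["choices"][0]["message"]
    if message.get("function_call"):
        # The arguments arrive as a JSON string that adheres to the schema above.
        args = json.loads(message["function_call"]["arguments"])
        print(message["function_call"]["name"], args)

Instead of free text, the model hands back a function name and arguments that your own code can execute.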

In the context of customer support, a question such as 'Who are my top ten customers this month?' could lead to an internal API call like get_customers_by_revenue(start_date: string, end_date: string, limit: int). This seamless translation of natural language into precise API calls or database queries allows AI chatbots to provide more accurate, context-specific responses.

Furthermore, these models can even extract structured data from text. For instance, a function like extract_people_data(people: [{name: string, birthday: string, location: string}]) can pull out all people mentioned in a Wikipedia article. This capability opens up new possibilities for chatbots to interact with, and understand, a wide range of content.
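The extraction use case is described to the model in the same way. The sketch below mirrors the extract_people_data signature above; every field name and description in it is an illustrative assumption:

    # Illustrative function description for structured extraction.
    # The field names and descriptions are assumptions, not an official API.
    extract_people_data = {
        "name": "extract_people_data",
        "description": "Extract all people mentioned in a piece of text",
        "parameters": {
            "type": "object",
            "properties": {
                "people": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "birthday": {"type": "string"},
                            "location": {"type": "string"},
                        },
                        "required": ["name"],
                    },
                }
            },
            "required": ["people"],
        },
    }

Passed in the functions list of a chat completion request alongside the article text, this lets the model return a people array that your code can parse with json.loads.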

The GPT models' ability to utilize function calling amplifies the power and potential of AI chatbots, enabling them to perform a wider array of tasks and provide more detailed and accurate responses.

The great thing when you combine it with Ideta is that you can use it with zero code: you just call the right bubble that performs the action you need.

Understanding OpenAI Function Calling

In the context of OpenAI's models, function calling is a unique capability that allows the AI to 'call' or execute specific tasks during an interaction. This feature enables developers to describe functions to the models and have them intelligently choose to output a JSON object containing arguments that call these functions. The result is a more reliable connection between the AI's capabilities and external tools or APIs.

A function call typically begins with a user's input. The AI model detects when a function needs to be called based on this input, and responds with a JSON object that adheres to the function's signature. This object contains the function name and the necessary arguments, structured in a way that the function can understand and execute.

For example, a user input like 'I want to talk to a human being about cancelling my account' could result in a function call to human_handover(reason: string). Here, human_handover is the function name, while the 'reason' is an argument that the function requires.
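Sketched end to end, handling such a request could look roughly like this. The human_handover implementation is a stand-in for whatever your support tooling actually does, while the wiring follows the functions / function_call pattern of the 0613-era Chat Completions API:

    import json
    import openai  # pre-1.0 API style, assumes OPENAI_API_KEY is set

    # Stand-in implementation: in a real product this would open a ticket
    # or notify a live agent; here it only simulates the handover.
    def human_handover(reason: str) -> dict:
        return {"status": "queued", "reason": reason}

    functions = [
        {
            "name": "human_handover",
            "description": "Transfer the conversation to a human agent",
            "parameters": {
                "type": "object",
                "properties": {"reason": {"type": "string", "description": "Why the user wants a human"}},
                "required": ["reason"],
            },
        }
    ]

    messages = [{"role": "user", "content": "I want to talk to a human being about cancelling my account"}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages, functions=functions, function_call="auto"
    )
    message = response["choices"][0]["message"]

    if message.get("function_call"):
        args = json.loads(message["function_call"]["arguments"])
        result = human_handover(**args)  # execute the real action on our side
        # Feed the function's result back so the model can phrase a reply to the user.
        messages.append(message)
        messages.append({"role": "function", "name": "human_handover", "content": json.dumps(result)})
        followup = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613", messages=messages, functions=functions
        )
        print(followup["choices"][0]["message"]["content"])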

Function calling thus opens up a range of possibilities for developers. They can create chatbots that answer questions by calling external tools, convert natural language into API calls or database queries, and extract structured data from text, among other things.

A word about JSON Schema

JSON Schema is a declarative language designed to annotate and validate JSON documents. It is a tool that allows for a more reliable and confident use of the JSON data format. The benefits of JSON Schema include its ability to describe existing data formats, provide clear human- and machine-readable documentation, and validate data, which is crucial for automated testing and ensuring the quality of client-submitted data.

The JSON document being validated is referred to as the instance, while the document containing the description is called the schema. The most basic schema is a blank JSON object. By adding validation keywords to the schema, you can apply constraints on an instance. For instance, the "type" keyword can be used to restrict an instance to an object, array, string, number, boolean, or null. JSON Schema documents are identified by URIs, which can be used in HTTP Link headers and inside JSON Schema documents to allow recursive definitions.

For example, the following schema describes the parameters of the get_current_weather function mentioned earlier, i.e. getting the weather in a given location. It is a sketch rather than an official definition; the property names and descriptions are illustrative.
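    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. Boston, MA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }

Here the instance being validated is the JSON object of arguments the model produces: the "type" and "enum" keywords constrain what may go into each argument, and "required" tells the model that a location must always be supplied.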

Safety and Considerations

Despite the benefits of function calling in AI-powered customer support, it's crucial to acknowledge the safety considerations and potential challenges. A major concern is the risk of untrusted data instructing the model to perform unintended actions. This can happen when a model is fed with manipulated or malicious data, potentially leading it to carry out actions that are inappropriate or harmful.

For instance, a proof-of-concept exploit has demonstrated how untrusted data from a tool’s output can instruct the model to perform actions that it wasn't intended to. This issue is particularly challenging because of the open-ended nature of language models and their capacity to generate diverse responses based on the input they receive.

In response to these challenges, OpenAI is actively working on mitigating these risks. One of the recommended practices for developers is to protect their applications by only consuming information from trusted tools. Developers can also include user confirmation steps before performing actions with real-world impact, such as sending an email, posting online, or making a purchase. These measures can help ensure that the function calling is used safely and responsibly, preserving the integrity of the application and the security of the user data.

Protecting applications while using OpenAI function calling

Developers have an array of tools at their disposal to safeguard their applications when using function calling with AI models. A primary strategy is to consume information only from trusted tools. This means that the data fed into the model should come from reliable sources, which can significantly reduce the risk of the model being manipulated by untrusted or malicious data.

Another critical measure is to include user confirmation steps before the execution of actions that have real-world consequences. For instance, if a chatbot is designed to send emails or make purchases based on user input, it should ask for explicit user confirmation before performing these actions. This can be as simple as having the chatbot generate a response like, 'Are you sure you want to send this email?' or 'Do you confirm this purchase?'
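As a rough sketch of what such a confirmation gate can look like in code (the helper names and the list of sensitive actions below are hypothetical, not part of any SDK):

    # Hypothetical confirmation gate. Actions with real-world impact are listed
    # explicitly and are never executed until the user has approved them.
    ACTIONS_REQUIRING_CONFIRMATION = {"send_email", "make_purchase", "post_online"}

    def confirm_with_user(function_name: str, arguments: dict) -> bool:
        """Ask the user to approve the action. In a chatbot this would be a message
        like 'Are you sure you want to send this email?'; here we read from stdin."""
        answer = input(f"Confirm {function_name} with {arguments}? (yes/no) ")
        return answer.strip().lower() in {"yes", "y"}

    def execute_function_call(function_name: str, arguments: dict, registry: dict):
        """Dispatch a model-requested call, inserting a confirmation step when needed."""
        if function_name in ACTIONS_REQUIRING_CONFIRMATION and not confirm_with_user(function_name, arguments):
            return {"status": "cancelled_by_user"}
        handler = registry.get(function_name)
        if handler is None:
            # Never run a function the model invented; only trusted, registered ones.
            return {"status": "unknown_function"}
        return handler(**arguments)

The registry maps each function name declared to the model onto the trusted implementation on your side, so the model can only ever trigger code you explicitly registered.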

These precautions might seem straightforward, but they are incredibly effective in enhancing the security and reliability of applications that utilize function calling. By following these practices, developers can ensure their applications are not only powerful and user-friendly but also secure and trusted by users.

Conclusion

In conclusion, the use of function calling in AI models, particularly OpenAI's GPT models, presents a significant opportunity for revolutionizing customer support services. It allows for the creation of more responsive, versatile, and efficient chatbots that can effectively handle a variety of customer inquiries.

However, as with all powerful tools, careful and responsible use is paramount. Developers must be aware of the potential risks associated with untrusted data and take necessary steps to protect their applications. By consuming information only from trusted sources and incorporating user confirmation steps, developers can ensure the safe and efficient operation of their AI-powered customer support services.

If you're looking to harness the power of OpenAI function calling in your customer support or have a project in mind that could benefit from these advancements, feel free to reach out to us at contact@ideta.io. Our team is ready to help you navigate these new waters and leverage AI to enhance your customer support services.


Written by
Ideta

We help companies build their own chatbots, voicebots, or callbots by providing them with powerful software.
