
    How to integrate the Dialogflow CX API to add NLP capabilities to your Chatbot?

16 May 2023


    Dialogflow API integration

To access Dialogflow CX via the API, we first need the gcloud CLI installed on our system. If you don't have gcloud installed, follow the installation guide at https://cloud.google.com/sdk/docs/install.

After installing gcloud, follow these steps to grant authorization (a quick Python check of the resulting credentials is sketched after the list).

• Select the project with the command gcloud config set project <ProjectID>
• Run the command gcloud auth application-default login
• A browser login page will open. After you log in successfully, the message "You are now authenticated with the gcloud CLI!" will appear.
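
Once the login succeeds, you can optionally verify from Python that the application-default credentials are being picked up. This is a small sketch of our own (not from the Dialogflow documentation), assuming the google-auth package is installed:

import google.auth

# Loads the Application Default Credentials created by
# `gcloud auth application-default login`.
credentials, project = google.auth.default()
print(f"Application default credentials loaded for project: {project}")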

Next, we use a function adapted from the official Dialogflow API documentation. The function used to detect the intent takes the project ID, location, session ID, agent ID, request text, and language code as arguments.

from google.cloud.dialogflowcx_v3.services.sessions import SessionsClient
from google.cloud.dialogflowcx_v3.types import session
from google.protobuf.json_format import MessageToDict


def detect_intent_disabled_webhook(project_id, location, session_id, agent_id, text, language_code):
    client_options = None
    if location != "global":
        # Regional agents must use the regional API endpoint.
        api_endpoint = f"{location}-dialogflow.googleapis.com:443"
        print(f"API Endpoint: {api_endpoint}\n")
        client_options = {"api_endpoint": api_endpoint}
    session_client = SessionsClient(client_options=client_options)

    session_path = session_client.session_path(
        project=project_id, location=location, agent=agent_id, session=session_id
    )

    # Prepare the request.
    text_input = session.TextInput(text=text)
    query_input = session.QueryInput(text=text_input, language_code=language_code)
    # Setting 'disable_webhook' to True prevents any webhook attached to the agent
    # from being called; set it to False if you want the webhook to run.
    query_params = session.QueryParameters(disable_webhook=True)
    request = session.DetectIntentRequest(
        session=session_path, query_input=query_input, query_params=query_params
    )
    response = session_client.detect_intent(request=request)

    # Convert the protobuf response to a plain dict and print it.
    response_dict = MessageToDict(response._pb)
    print(response_dict)


project_id = "Your Project ID"
location_id = "Your Location ID"
agent_id = "Your Agent ID"
session_id = "test_1"
text = "Hello"
language_code = "en-us"
detect_intent_disabled_webhook(project_id, location_id, session_id, agent_id, text, language_code)

Here, the project ID is the ID of the Google Cloud project that hosts your Dialogflow agent, and the agent ID identifies the bot itself. We define a session ID to maintain the session between Dialogflow and the user. The request text is passed as "text", and "language_code" is the code of the language the bot works in.

To set the input phrase in the code, we define a static variable "text". The function is called in the last line of the code.

Sample response

After executing the above function, the Dialogflow API detects the intent from the text entered by the user and returns a response object for our request.

Below is a sample response generated by the Dialogflow API. To get the reply to our message, we need to extract the response text from the responseMessages field of the returned object (a small extraction sketch follows the sample response).

{
  "responseId": "d91c430d-7b4e-44c1-a5f5-f4563fdd4e6f",
  "queryResult": {
    "text": "Hello",
    "languageCode": "en",
    "responseMessages": [
      {
        "text": {
          "text": [
            "Welcome to Dialogflow CX."
          ]
        }
      }
    ],
    "currentPage": {
      "name": "projects/appointment-cxx/locations/us-central1/agents/fcc53e9b-5f20-421c-b836-4df53554526c/flows/00000000-0000-0000-0000-000000000000/pages/START_PAGE",
      "displayName": "Start Page"
    },
    "intent": {
      "name": "projects/appointment-cxx/locations/us-central1/agents/fcc53e9b-5f20-421c-b836-4df53554526c/intents/00000000-0000-0000-0000-000000000000",
      "displayName": "Default Welcome Intent"
    },
    "intentDetectionConfidence": 1,
    "diagnosticInfo": {
      "Transition Targets Chain": [],
      "Session Id": "test_1",
      "Alternative Matched Intents": [
        {
          "Type": "NLU",
          "Score": 1,
          "DisplayName": "Default Welcome Intent",
          "Id": "00000000-0000-0000-0000-000000000000",
          "Active": true
        }
      ],
      "Execution Sequence": [
        {
          "Step 1": {
            "Type": "INITIAL_STATE",
            "InitialState": {
              "FlowState": {
                "FlowId": "00000000-0000-0000-0000-000000000000",
                "Name": "Default Start Flow",
                "PageState": {
                  "Status": "ENTERING_PAGE",
                  "Name": "Start Page",
                  "PageId": "START_PAGE"
                },
                "Version": 0
              },
              "MatchedIntent": {
                "Type": "NLU",
                "Score": 1,
                "DisplayName": "Default Welcome Intent",
                "Active": true,
                "Id": "00000000-0000-0000-0000-000000000000"
              }
            }
          }
        },
        {
          "Step 2": {
            "FunctionExecution": {
              "Responses": [
                {
                  "text": {
                    "redactedText": [
                      "Welcome to Dialogflow CX."
                    ],
                    "text": [
                      "Welcome to Dialogflow CX."
                    ]
                  },
                  "responseType": "HANDLER_PROMPT",
                  "source": "VIRTUAL_AGENT"
                }
              ]
            },
            "Type": "STATE_MACHINE",
            "StateMachine": {
              "FlowState": {
                "FlowId": "00000000-0000-0000-0000-000000000000",
                "PageState": {
                  "Status": "TRANSITION_ROUTING",
                  "Name": "Start Page",
                  "PageId": "START_PAGE"
                },
                "Version": 0,
                "Name": "Default Start Flow"
              },
              "FlowLevelTransition": true,
              "TriggeredIntent": "Default Welcome Intent",
              "TriggeredTransitionRouteId": "f48cf8b5-c147-42a2-b967-12d2a3c0fcad"
            }
          }
        },
        {
          "Step 3": {
            "Type": "STATE_MACHINE",
            "StateMachine": {
              "FlowState": {
                "FlowId": "00000000-0000-0000-0000-000000000000",
                "Name": "Default Start Flow",
                "PageState": {
                  "Status": "TRANSITION_ROUTING",
                  "Name": "Start Page",
                  "PageId": "START_PAGE"
                },
                "Version": 0
              }
            }
          }
        }
      ],
      "Triggered Transition Names": [
        "f48cf8b5-c147-42a2-b967-12d2a3c0fcad"
      ]
    },
    "match": {
      "intent": {
        "name": "projects/appointment-cxx/locations/us-central1/agents/fcc53e9b-5f20-421c-b836-4df53554526c/intents/00000000-0000-0000-0000-000000000000",
        "displayName": "Default Welcome Intent"
      },
      "resolvedInput": "Hello",
      "matchType": "INTENT",
      "confidence": 1
    }
  },
  "responseType": "FINAL"
}
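
To pull just the agent's reply out of that structure, you can walk the responseMessages list. A minimal sketch, assuming response_dict holds the dictionary produced by MessageToDict in the function above (you could, for example, have the function return it instead of only printing it):

# Collect every text reply from queryResult.responseMessages.
messages = response_dict.get("queryResult", {}).get("responseMessages", [])
reply_texts = [" ".join(msg["text"]["text"]) for msg in messages if "text" in msg]
print(" ".join(reply_texts))  # -> Welcome to Dialogflow CX.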

    Voice recognition using the Dialogflow API

The Dialogflow API can also detect intents from speech audio input, using the audio-input support described in its documentation.

The function takes input similar to the text version; the only change is that instead of the request text, we pass the path of the audio file to be processed.

The API accepts the audio file as input, converts the audio to query text, and passes that text to Dialogflow. The response comes back with the transcript and all of the agent's replies for our audio input.

The function we use to detect intent from an audio file is shown below, adapted from the Dialogflow API documentation. The audio file must be in WAV format with a single (mono) channel; otherwise, the code will raise an error.

The following script converts a WAV recording to a mono channel (a sketch for converting from other formats follows the snippet):

from pydub import AudioSegment

# Load the WAV file, downmix it to a single (mono) channel,
# and write it back out as WAV.
input_audio = AudioSegment.from_wav("YOUR-AUDIO-FILE-PATH")
input_audio = input_audio.set_channels(1)
input_audio.export("YOUR-AUDIO-FILE-PATH", format="wav")
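
The snippet above assumes the source file is already a WAV recording and only needs to be downmixed to mono. If your recording is in another format (say, a hypothetical input.mp3), pydub can handle the conversion too, provided ffmpeg is installed; a minimal sketch:

from pydub import AudioSegment

# Hypothetical example: decode an MP3 (requires ffmpeg), downmix it to mono,
# and export it as a WAV file that the Dialogflow audio request can use.
source_audio = AudioSegment.from_file("input.mp3", format="mp3")
source_audio = source_audio.set_channels(1)
source_audio.export("input.wav", format="wav")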
import uuid

from google.cloud.dialogflowcx_v3.services.agents import AgentsClient
from google.cloud.dialogflowcx_v3.services.sessions import SessionsClient
from google.cloud.dialogflowcx_v3.types import audio_config
from google.cloud.dialogflowcx_v3.types import session


def detect_intent_audio(agent, session_id, audio_file_path, language_code):
    """Returns the result of detect intent with an audio file as input.

    Using the same `session_id` between requests allows continuation
    of the conversation."""
    session_path = f"{agent}/sessions/{session_id}"
    print(f"Session path: {session_path}\n")
    client_options = None
    agent_components = AgentsClient.parse_agent_path(agent)
    location_id = agent_components["location"]
    if location_id != "global":
        # Regional agents must use the regional API endpoint.
        api_endpoint = f"{location_id}-dialogflow.googleapis.com:443"
        print(f"API Endpoint: {api_endpoint}\n")
        client_options = {"api_endpoint": api_endpoint}
    session_client = SessionsClient(client_options=client_options)

    # The audio must be 16-bit linear PCM (mono WAV); you can also set
    # sample_rate_hertz here to match your file's sample rate.
    input_audio_config = audio_config.InputAudioConfig(
        audio_encoding=audio_config.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    )

    with open(audio_file_path, "rb") as audio_file:
        input_audio = audio_file.read()

    audio_input = session.AudioInput(config=input_audio_config, audio=input_audio)
    query_input = session.QueryInput(audio=audio_input, language_code=language_code)
    request = session.DetectIntentRequest(session=session_path, query_input=query_input)
    response = session_client.detect_intent(request=request)

    print("=" * 20)
    print(f"Query text: {response.query_result.transcript}")
    response_messages = [
        " ".join(msg.text.text) for msg in response.query_result.response_messages
    ]
    print(f"Response text: {' '.join(response_messages)}\n")


project_id = "YOUR-PROJECT-ID"
location_id = "YOUR-LOCATION-ID"
agent_id = "YOUR-AGENT-ID"
agent = f"projects/{project_id}/locations/{location_id}/agents/{agent_id}"
session_id = str(uuid.uuid4())
audio_file_path = "YOUR-AUDIO-FILE-PATH"
language_code = "en-us"
detect_intent_audio(agent, session_id, audio_file_path, language_code)

Sample response for audio

After executing the above function, below is the sample response received from the Dialogflow API for the given audio:

Query text: hello
Response text: Welcome to Dialogflow CX.

If you need the full JSON response for the audio request as well, you can get it by converting and printing the response variable, just as in the text example. These JSON responses from the Dialogflow API make it easy to integrate it with other chatbot platforms. We hope this guide helps you use the Dialogflow API with your chatbot.
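
A minimal sketch of that conversion, assuming the lines are added inside detect_intent_audio after the detect_intent call so that response is in scope:

from google.protobuf.json_format import MessageToDict

# Convert the protobuf response to a plain dict so it can be serialized as JSON.
response_dict = MessageToDict(response._pb)
print(response_dict)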

Please let us know in the comments section if you face any difficulty implementing the above features; we will be happy to help. If you are looking for chatbot development or natural language processing services, contact us or send your inquiry to letstalk@pragnakalp.com, and we will be happy to offer our expert services.


