
Saturday, December 8, 2018

Customizing Your Api.ai Assistant with Intent and Context



If you are interested in learning more about AI, check out our screencast on Microsoft Cognitive Services and the Text Analytics API.



Api.ai is a service that allows developers to create their own AI assistant / chatbot that works much like Amazon's Alexa or Apple's Siri. I recently covered how to build your own AI assistant using Api.ai, where I showed the basics of setting up an assistant and teaching it some simple things. In this article, I want to go one step further and introduce "intents" and "contexts", a way of teaching our AI assistant custom tasks personalized to our specific needs. This is where things can get really exciting.

Note: This article was updated in 2017 to reflect recent changes to Api.ai.


Building an AI Assistant with Api.ai:
This post is one of a series of articles aimed at helping you get a simple personal assistant running with Api.ai:

You can also read the other articles in this series:



What is an Intent?
An intent is a concept your agent will be able to understand and react to. An intent contains a range of example phrases, the sorts of things a user might say to our assistant. A few examples could include "Order me lunch", "Show me today's daily Garfield comic strip", "Send a random GIF to the SitePoint team on Slack", "Cheer me up" and so on. Each of these would be a custom intent we could train our assistant to understand.

Creating an Intent:
To create an intent, log into the agent you'd like to add the new functionality to in the Api.ai console and click the "Create Intent" button next to the "Intents" heading at the top of the page, or the plus icon next to "Intents" in the left-hand menu:



The sample goal for this demo's assistant is to teach it to cheer people up when they're feeling sad, using movie quotes, jokes and other pick-me-ups. To get started, name the new intent "Cheer me up" and write your first trigger sentence under "User says". The first phrase below is "Cheer me up". Press the Enter key or click "Add" to add your sentence:



Generally, there's a range of different ways to say the same thing. Add a series of statements representing the various ways users might indicate they want cheering up, such as "Make me smile" and "I'm feeling sad":



Your assistant now has a range of sentences to understand, but you haven't told it what to do when it hears them. To do this, create an "Action". The assistant will return the action name to your web app, allowing your app to respond to it.

In this case, you won't act upon this first action, named "cheerup", just yet, but having it in place will make things easier when you respond to actions in your web app down the track. I always recommend including action names in your intents.
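To give a sense of how that plays out, here's a minimal sketch (my own illustration, not from the article; handleResponse() is a hypothetical helper) of a web app branching on the action name Api.ai returns. The data.result.action path matches the JSON response format shown later in this series:

// Hypothetical sketch: branching on the action name returned by
// an Api.ai /query response. "cheerup" is the action defined above.
function handleResponse(data) {
  var action = data.result.action;
  var speech = data.result.fulfillment.speech;

  if (action === "cheerup") {
    // Respond however your app likes: show a GIF, play a sound, etc.
    console.log("Time to cheer the user up! Agent says: " + speech);
  } else {
    console.log(speech);
  }
}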



You can also add parameters to your actions, but I'll cover those in detail in my next Api.ai article!

Guiding the Conversation via Speech Responses:
Once your user has told the agent they want to be cheered up, you'll want to direct the conversation by having the agent ask a question. Guiding the user like this helps avoid confusion by limiting how much the chatbot needs to handle. To do this, provide speech responses in the form of questions within the "Speech Response" section. For example: "Let's make you happy! Would you like a joke or a movie quote?"



Finally, click the "Save" button next to your intent's name to save your progress.

Testing Your Agent:
You can test your new intent by typing a test statement into the test console on the right. Test it by saying "Cheer me up":



The agent responds with one of your trained responses. As phrases are provided, Api.ai learns! Over time it picks up variations on your trained phrases, so your intent will also trigger for statements like "Please make me smile", "Say something cheerful to me" or "I feel sad":



These variations seemed to trigger only after the original trigger phrases had been used. I'm not sure whether it takes time to build up an understanding of similar phrases, but if a variation doesn't work, try the basic statements first. If your variation is very different, you should add it to the intent's statements directly. The more statements you add here, the better your agent will be able to respond.

One thing you might notice, if you used a statement like "I am sorry to hear that! How can I help you feel better?", is that it's not specific enough to guide the user. If they aren't aware of the "movie quote" or "joke" options, they might ask for something you haven't covered! Over time, you can train your agent to understand many other concepts. For now, though, I recommend being specific with your questions!

Using Contexts:
By guiding the conversation with your speech responses, your agent needs to keep track of the subject when the user speaks again. If a user says "a joke" or even "either one" without any prior conversation, the sentence isn't clear enough for the agent to respond to out of context. If I came up to you and just said "a joke", how would you react? Your assistant is equally stuck at the moment, because it has no way of remembering how the conversation got to this point.

This is where contexts in Api.ai come in. You set contexts to keep track of what the user and agent are currently talking about. Without contexts, each sentence would be treated completely independently of the ones before it.

To create a context, click the "Define contexts" link at the top of the Api.ai console for your intent:



Here you'll have a section for input contexts and a section for output contexts. Input contexts tell the agent which context must be active for the intent to trigger. For your first intent, you want it to trigger at any time, so leave the input context blank. Output contexts are the contexts to be raised for future messages once this intent runs. This is what you want here:



Now create an output context named "cheering-up". When naming a context, Api.ai recommends an alphanumeric name without spaces. Type in your context and press the Enter key to add it. Then click "Save" to save your changes:



If you test your agent once again by saying "Cheer me up", the result shows that your context is now visible:
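If you open the JSON view in the test console at this point, the raised context should appear in the response's result.contexts array, roughly like this (a sketch for illustration; the exact fields and the lifespan value may differ):

"contexts": [
  {
    "name": "cheering-up",
    "parameters": {},
    "lifespan": 5
  }
]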



Filtering Intents With Contexts:
Your agent now understands that a "cheering up" conversation is in context. Now you can set up intents that only trigger when that context is active. For example, create a potential response for when the user asks your agent for "a movie quote". Go back to the menu on the left and click the plus icon to create a new intent:



Call your intent "Movie Quotes" and set the input context to "cheering-up". This tells your agent that it should only accept this input if the user has previously asked to be cheered up. Then add some sample ways users might say "I want a movie quote":



Then scroll down and include a range of movie quotes in your responses (feel free to include your favorites):



To save your Movie Quotes intent, click "Save" next to the intent's name once again. Then try entering "Cheer me up" in the test console and following it with "a movie quote". The agent should now tell you a movie quote!

You can then follow the same procedure to add a response for the "a joke" request.

You aren't required to give your agent a hardcoded list of responses. Instead, you can set an action name for each intent and respond to that action within your web app. This is another concept I'll cover in future articles! You can get ready for those future additions by giving your movie quote intent an action named "cheermeup.moviequote" (the dot helps ensure the action won't get mixed up with a generic "moviequote" action you might add in future).
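As a quick sketch of why the dot-namespacing helps (again hypothetical, building on the handleResponse() idea from earlier), your web app can catch the whole "cheermeup." family of actions with a simple prefix check:

// Hypothetical: handle all "cheer me up" actions together by
// checking the "cheermeup." prefix of the returned action name.
var action = data.result.action; // e.g. "cheermeup.moviequote"

if (action.indexOf("cheermeup.") === 0) {
  var subAction = action.split(".")[1]; // "moviequote", "joke", ...
  console.log("Cheering up via: " + subAction);
}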

In Action:
If you added these intents to the same agent you used for your web app in the previous article, the new functionality should appear automatically! If you made a new agent, you'll first need to update the API key in your web app. Open your personal assistant web app in your browser and try it out by asking your assistant to cheer you up:



Then tell it that you want a movie quote and see what happens:




Conclusion:
There are plenty of ways to personalize your assistant using the concepts of intents and contexts. Chances are you already have a few ideas in mind! There's more we can do to train our Api.ai assistant, such as identifying concepts (known as entities) within our custom intents, which we'll cover in the next article in this series!

If you've been following along and building your own personal assistant using Api.ai, I'd love to hear about it! What custom intents have you come up with? Please let me know in the comments below.



How to Create Your Own AI Assistant Using Api.ai


If you are interested in learning more about AI, check out our screencast on Microsoft Cognitive Services and the Text Analytics API.





Artificially intelligent assistants are taking over the world. Siri, Cortana, Alexa, OK Google, Facebook M, Bixby: all the big players in technology have their own. However, many developers don't realize that it's quite easy to build your own AI assistant too! You can customize it to your own needs, your own IoT-connected devices, your own custom APIs. The sky's the limit.



Note: This article was updated in 2017 to reflect recent changes to Api.ai.


Earlier in 2016, I put together a guide on five simple ways to build artificial intelligence, where I covered a few of the easy options for building an AI assistant. In this article, I want to look at one particular service that makes it incredibly easy to get a fully featured AI assistant running with very little initial setup: Api.ai.



Building an AI Assistant with Api.ai:

This post is one of a series of articles aimed at helping you get a simple personal assistant running with Api.ai. The series so far:

  1. How to Create Your Own AI Assistant Using Api.ai (this one!).
  2. Customizing Your Api.ai Assistant with Intent and Context.
  3. Empowering Your Api.ai Assistant with Entities.
  4. Connecting Your Api.ai Assistant to the IoT.
What is Api.ai?
Api.ai is a service that allows developers to build speech-to-text, natural-language-processing, artificially intelligent systems that you can train up with your own custom functionality. It also has a range of existing knowledge bases, called "domains", that systems built with Api.ai can automatically understand, and these are what we'll focus on in this article. Domains provide a whole knowledge base of encyclopedic knowledge, language translation, weather and more. In future articles, I'll cover some of the more advanced aspects of Api.ai that allow you to personalize your assistant further.

Getting Started With Api.ai:
To begin, go to the Api.ai website and click either the "Start Free" button or the "Sign up free" button in the top right corner.

You're then taken to a registration form that's quite straightforward: enter your name, email and password and click "Sign up". For those avoiding yet another set of login credentials, you can also use the buttons on the right to sign up with an existing GitHub or Google account.

Since Api.ai was purchased by Google, it has fully migrated to using Google accounts for login. So if you are new to Api.ai, you'll need to sign in with your Google account:



Click "Allow" on the screen that follows to grant Api.ai access to your Google account:



You'll also need to read and agree to their Terms of Service:



Once signed up, you'll be taken straight to the Api.ai interface where you can create your virtual AI assistant. Each assistant you create and teach specific skills to is called an "agent" in Api.ai. So, to begin, create your first agent by clicking the "Create Agent" button on the left-hand side:



You may need to authorize Api.ai once more to get additional permissions for your Google account. This is normal and fine! Click "Authorize" to continue:



And allow:



On the next screen, enter your agent's details, which include the following:

Name: This is just for your own reference, to distinguish agents in the api.ai interface. You could call the agent anything you like, either a person's name (I chose Barry) or a name that represents the tasks it's helping with (such as light controller).

Description: A human-readable description so you can remember what the agent is responsible for. This is optional and may not be necessary if your agent's name is self-explanatory.

Language: The language the agent works in. This can't be changed once you've chosen it, so choose wisely! For this tutorial, choose English, since English has access to the most Api.ai domains. You can see which domains are available for each language in the languages table in the Api.ai docs.

Timezone: As you'd expect, this is the timezone for your agent. Chances are it will have already detected your current timezone.

It will also automatically set up a Google Cloud Platform project for your agent, so you don't need to do anything in that regard; it's all automatic! It's good to know this is happening, though: if you do a lot of testing and create many agents, be aware that many Google Cloud Platform projects are being created, and you may want to clean them up eventually.

When you've entered your agent's settings, select "Save" next to the agent's name to save everything:



The Test Console:
Once you've created your agent, you can test it out using the test console on the right. You can enter queries at the top, and it will send them to your agent, telling you what would be returned after hearing those statements. Enter a question such as "How are you?" and see what it returns. Your results should appear below it:



If you scroll down on the right-hand side of the results, you'll see more detail on how Api.ai interpreted your request (as seen in the screenshot above). Below that is a button labeled "Show JSON". Click it to see how the API would return this response to your app.

Api.ai will open a JSON viewer and show you a JSON response that looks like this:

{
   "id": "21345678",
   "timestamp": "2017-05-12T08: 04: 49.031Z",
   "lang": "en",
   "result": {
     "source": "agent",
     "resolvedQuery": "How are you?",
     "action": "input.unknown",
     "actionIncomplete": false,
     "parameters": {},
     "contexts": [],
     "metadata": {
       "intentId": "6320071",
       "webhookUsed": "false",
       "webhookForSlotFillingUsed": "false",
       "intentName": "Default Fallback Intent"
     },
     "fulfillment": {
       "speech": "Sorry, can you say that again?",
       "messages": [
         {
           "type": 0,
           "speech": "Sorry, could you say that again?"
         }
       ]
     },
     "score": 1
   },
   "status": {
     "code": 200,
     "errorType": "success"
   },
   "sessionId": "243c"
}
As you can see, your agent doesn't know how to answer yet! Right now, this isn't exactly "intelligent" artificial intelligence: the intelligent bit still needs to be added. The input.unknown value in the action field tells you it isn't sure how to proceed. Above, it's returning the message "Sorry, could you say that again?", which is one of its default fallbacks. Rather than telling the human that it doesn't understand, it just asks them to say it again... over and over. This isn't ideal, and I prefer to change it to something that tells the user the bot doesn't understand. If you're picky about this sort of thing and want to change what it says here, you can do so on the "Intents" page by clicking the "Default Fallback Intent" item there.
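If you'd rather have your web app detect this case itself, a minimal sketch (my own suggestion, not from the article) is to check the action field for input.unknown before displaying the response:

// Hypothetical check for the fallback case in your web app:
if (data.result.action === "input.unknown") {
  console.log("The agent didn't understand: " + data.result.resolvedQuery);
}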

A note for those who used Api.ai some time ago (or saw it in action): you might have expected a little more to be available out of the box. Previously, it was able to answer questions like "Who is Steve Jobs?" by default. This is no longer the case! You now need to add your own integrations with third-party APIs to take actions and source information. Api.ai provides the sentence parsing and interpretation side of things.

Adding Small Talk with Api.ai:

There is one piece of default functionality you can add that gives your bot a small hint of intelligence: the "Small Talk" feature. This provides responses to commonly asked questions, including "How are you?". It isn't turned on by default, though. To turn it on, go to the "Small Talk" menu item on the left and click "Enable":




Once it's enabled, if you scroll down you'll see a range of categories of common small-talk phrases. Find the "Hello/Goodbye" section and click it to expand it. Add a few different responses to the question "How are you?" and then click "Save" at the top right. Once you've added phrases, you'll see a percentage figure next to the "Hello/Goodbye" section showing how much of your chatbot you have customized.





If you then go to the test console and ask it "How are you?" again, it should now answer with one of the responses you entered!





If it doesn't respond correctly, check that you actually clicked "Save" before testing! It doesn't save automatically.


Ideally, you'll want to customize as many of the small-talk responses as you can: this is what gives your Api.ai assistant a unique personality. You can choose the tone and structure of its responses. Is it a grumpy chatbot that hates humans talking to it? Is it a chatbot obsessed with cats? Or maybe a chatbot that responds in pre-teen internet/text speak? You decide!


Now that you have at least some small-talk elements up and running, your agent is ready for you to integrate into your own web app interface. To do this, you'll need to get an API key to give your web app remote access to your agent.


Finding Your Api.ai API Keys:


The API key you need will be on the agent's settings page. To find it, click the cog icon next to your agent's name. On the page that appears, copy the "Client access token". This is what you'll need in order to send queries to Api.ai's service:
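As a quick sanity check that your client access token works, here's a minimal sketch using the browser's fetch API, mirroring the jQuery request shown later in this article (the sessionId value here is arbitrary):

// Hypothetical one-off test of your client access token.
fetch("https://api.api.ai/v1/query", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOURACCESSTOKEN",
    "Content-Type": "application/json; charset=utf-8"
  },
  body: JSON.stringify({query: "How are you?", lang: "en", sessionId: "token-test"})
})
  .then(function(response) { return response.json(); })
  .then(function(data) { console.log(data.result.fulfillment.speech); });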


The Code:
If you'd like to see the working code and play around with it, it's available on GitHub. Feel free to use it and expand on the idea for your own AI personal assistant.

If you'd like to try it out for yourself, I have Barry running at https://barry.glitch.me. Enjoy!

Connecting to Api.ai Using JavaScript:
You currently have a working personal assistant running in Api.ai's cloud. You now need a way to talk to it from your own interface. Api.ai has a range of platform SDKs that work with Android, iOS, web apps, Unity, Cordova, C++ and more. You can even integrate it into a Slack bot or a Facebook Messenger bot! For this example, you'll use HTML and JavaScript to create a simple personal assistant web app. My demo builds upon the concepts shown in Api.ai's HTML + JS gist.

Your app will do the following:

  • Accept typed commands in an input field, submitting the command when you press the Enter key.
  • Or, using the HTML5 Speech Recognition API (which works only in Google Chrome 25 and up), if the user clicks "Speak", they can speak their commands and have them written into the input field automatically.
  • Once a command is received, use jQuery to submit an AJAX POST request to Api.ai. Api.ai returns its knowledge as a JSON object, as you saw in the test console above.
  • Read in that JSON using JavaScript and display the results in your web app.
  • If available, your web app will also use the Web Speech API (available in Google Chrome 33 and up) to respond verbally.
The whole web app is available at the GitHub link above. Feel free to look at how I've styled things and structured the HTML. In this article I'll mainly explain each piece, focusing on the Api.ai SDK side of things. I'll also briefly point out which bits use the HTML5 Speech Recognition API and the Web Speech API.

Your JavaScript contains the following variables:
var accessToken = "YOURACCESSTOKEN",
    baseUrl = "https://api.api.ai/v1/",
    $speechInput,
    $recBtn,
    recognition,
    messageRecording = "Recording...",
    messageCouldntHear = "I could not hear you, could you say that again?", 
    messageInternalError = "Oh no, there has been an internal server error",
    messageSorry = "I'm sorry, I don't have the answer to that yet.";
Here's what each of these is for:


  • accessToken. This is the API key you copied from the Api.ai interface. It gives you access to the SDK and also says which agent you're accessing. I want to access Barry, my personal agent.
  • baseUrl. This is the base URL for all calls to the Api.ai SDK. If a new version of the SDK comes out, you can update it here.
  • $speechInput. This stores your <input> element so you can access it in your JavaScript.
  • $recBtn. This stores your <button> element that users will click when they want to speak to the web app instead of typing.
  • recognition. This stores your webkitSpeechRecognition() functionality. It's for the HTML5 Speech Recognition API.
  • messageRecording, messageCouldntHear, messageInternalError and messageSorry. These are messages to show while the app is recording the user's voice, when it couldn't hear their voice, when there's an internal error, and when your agent doesn't understand what was said. You store them as variables so you can change them easily at the top of your script, and so you can specify later which of them the app shouldn't speak aloud.

These lines of code watch for when the user presses the Enter key in the input field. When that happens, the send() function runs to send the data to Api.ai:


$speechInput.keypress(function(event) {
  if (event.which == 13) {
    event.preventDefault();
    send();
  }
});
After that, watch for the user clicking the record button to tell the app to listen (or to stop it listening if it's mid-recording). When they click it, run the switchRecognition() function to switch between starting and stopping recording:


$recBtn.on("click", function(event) {
  switchRecognition();
});
Finally, for your initial jQuery setup, set up a button that will show and hide the JSON response at the bottom right of your screen. This is just to keep things clean: most of the time you won't want to see the JSON data, but every now and then, if something unexpected happens, you can click this button to toggle whether the JSON is viewable:


$(".debug__btn").on("click", function() {
  $(this).next().toggleClass("is-active");
  return false;
});

Using the HTML5 Speech Recognition API:

As mentioned above, you'll use the HTML5 Speech Recognition API to listen to the user and transcribe what they say. This currently works only in Google Chrome.

Your startRecognition() function looks like this:

function startRecognition() {
  recognition = new webkitSpeechRecognition();

  recognition.onstart = function(event) {
    respond(messageRecording);
    updateRec();
  };
  recognition.onresult = function(event) {
    recognition.onend = null;

    var text = "";
    for (var i = event.resultIndex; i < event.results.length; ++i) {
      text += event.results[i][0].transcript;
    }
    setInput(text);
    stopRecognition();
  };
  recognition.onend = function() {
    respond(messageCouldntHear);
    stopRecognition();
  };
  recognition.lang = "en-US";
  recognition.start();
}
This runs the HTML5 Speech Recognition API. It all uses functions within webkitSpeechRecognition(). Here are a few pointers on what's happening:

  • recognition.onstart. Runs when recording from the user's microphone begins. You use the respond() function (which I'll cover in more detail soon) to tell the user that you're listening. updateRec() switches the text on your recording button from "Speak" to "Stop".

  • recognition.onresult. Runs when you have a result from the voice recognition. You parse the result and set your text field to use it via setInput() (this function just adds the text to the input field and then runs your send() function).

  • recognition.onend. Runs when the voice recognition ends. You set this to null inside recognition.onresult when you have a successful result, to prevent it from running. That way, if recognition.onend does run, you know the voice recognition API didn't understand the user, and you respond to tell them they weren't heard correctly.

  • recognition.lang. Sets the language you're listening for. In the demo's case, it's US English.

  • recognition.start(). Starts the whole process!
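Two helpers referenced above, setInput() and updateRec(), aren't shown in this article's snippets. Here are minimal sketches of what they might look like, based purely on how the bullets describe them (treat these as assumptions rather than the original code):

// Hypothetical: put the transcribed text into the input field,
// then submit it to Api.ai, as the onresult bullet describes.
function setInput(text) {
  $speechInput.val(text);
  send();
}

// Hypothetical: toggle the record button's label depending on
// whether speech recognition is currently running.
function updateRec() {
  $recBtn.text(recognition ? "Stop" : "Speak");
}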

Your stopRecognition() function is much simpler. It stops your recognition and sets the variable to null. Then it updates the button to show that you're no longer recording:
function stopRecognition() {
  if (recognition) {
    recognition.stop();
    recognition = null;
  }
  updateRec();
}
switchRecognition() toggles between starting and stopping recognition by checking the recognition variable. This lets your button turn recognition on and off:
function switchRecognition() {
  if (recognition) {
    stopRecognition();
  } else {
    startRecognition();
  }
}

Communicating Through Api.ai:

To send your query to Api.ai, you use the send() function, which looks like this:

function send() {
  var text = $speechInput.val();
  $.ajax({
    type: "POST",
    url: baseUrl + "query",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    headers: {
      "Authorization": "Bearer " + accessToken
    },
    data: JSON.stringify({query: text, lang: "en", sessionId: "runbarry"}),

    success: function(data) {
      prepareResponse(data);
    },
    error: function() {
      respond(messageInternalError);
    }
  });
}

This is a typical AJAX POST request to https://api.api.ai/v1/query using jQuery. You make sure you're sending JSON data, and that you're expecting JSON data back. You also need to set the Authorization header to "Bearer" followed by your Api.ai client access token. You send your data in the format {query: text, lang: "en", sessionId: "runbarry"} and wait for the response.

When you receive a response, prepareResponse() runs. In this function, you format the JSON string you'll put into the debug section of the web app, and you take out result.speech from Api.ai's response, which gives you your assistant's text response. You display each message via respond() and debugRespond():
function prepareResponse(val) {
  var debugJSON = JSON.stringify(val, undefined, 2),
      spokenResponse = val.result.speech;

  respond(spokenResponse);
  debugRespond(debugJSON);
}
Your debugRespond() function puts the text into the field for the JSON response:
function debugRespond(val) {
  $("#response").text(val);
}
Your respond() function has a few more steps to it:
function respond(val) {
  if (val == "") {
    val = messageSorry;
  }

  if (val !== messageRecording) {
    var msg = new SpeechSynthesisUtterance();
    var voices = window.speechSynthesis.getVoices();
    msg.voiceURI = "native";
    msg.text = val;
    msg.lang = "en-US";
      window.speechSynthesis.speak(msg);
  }

  $("#spokenResponse").addClass("is-active").find(".spoken-response__text").html(val);
}
At the start, you check whether the response value is empty. If it is, you set it to say that it isn't sure of the answer to that question, since Api.ai hasn't returned a valid response:
if (val == "") {
  val = messageSorry;
}
If you do have a message to output, and it isn't the one saying that you're recording, then you use the Web Speech API to speak the message out loud using a SpeechSynthesisUtterance object. I found that without setting voiceURI and lang, my browser's default voice was German! That made its speech rather hard to understand until I changed it. To actually speak the message, you use the window.speechSynthesis.speak(msg) function:
if (val !== messageRecording) {
  var msg = new SpeechSynthesisUtterance();
  msg.voiceURI = "native";
  msg.text = val;
  msg.lang = "en-US";
  window.speechSynthesis.speak(msg);
}
Note: it's important not to speak the "Recording..." text: if you do, the microphone will pick up that speech and add it to the recorded query.

Finally, display your response box and add that text to it so the user can read it too:
$("#spokenResponse").addClass("is-active").find(".spoken-response__text").html(val);
Hosting Your Web Interface:
For best results, host your web interface on an HTTPS-enabled web server. Your requests to Api.ai go over HTTPS, so it's much better to host your web interface over HTTPS too. If you're just looking to use this as a prototype and don't have an HTTPS-secured web server readily available, try Glitch.com! It's a new service that can host code snippets that include both front-end and back-end (Node.js) code.

As an example, I have Barry hosted at https://barry.glitch.me. Glitch hosting is completely free at the time of writing! It's a great service and I highly recommend giving it a go.

If you want to make this a bigger project, consider Let's Encrypt for a free SSL/TLS certificate, or look into purchasing one from your web host.

In Action:
If you run the web app using my styles from the GitHub repo, it looks something like this:



If you click the "Speak" button and ask "How are you?", it initially shows that it's recording:



(You may need to allow Chrome access to your microphone when you click that button. This will happen every time unless you serve the page over HTTPS.)

It then responds with visual feedback (and speaks it too, which is hard to show in a screenshot), like so:



You can also click the button at the bottom right to see the JSON response Api.ai gave you, in case you'd like to debug the result:



If you constantly seem to get the "I could not hear you, could you say that again?" message, check your microphone permissions in your browser. If you're loading the page locally (for example, if your address bar begins with file:///), Chrome won't give any access to the microphone at all, so you'll hit this error no matter what! You'll need to host it somewhere. (Try Glitch.com, mentioned above.)



Personally, I'm not a fan of some of the preset small talk, like this:



I customized a bunch of those in the settings we looked at earlier. For example, I found this small-talk statement in the list quite odd and decided to customize it like so:



So get out there and customize your own chatbot! Make it unique and have fun!

Having Issues?

I've found that, occasionally, if the Web Speech API tries to say something too long, Chrome's speech stops working. If that's the case for you, close the tab and open a new one to try again.
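One workaround some developers use (my suggestion, not from the original article) is to cancel any queued speech before speaking again, which can nudge Chrome's speech synthesis back into life:

// Hypothetical workaround: clear any stuck utterances before speaking.
window.speechSynthesis.cancel();
window.speechSynthesis.speak(msg);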

Conclusion:
As I'm sure you can see, Api.ai is a really easy way to get a chatbot-style AI personal assistant up and running.

Want to develop your Api.ai bot further? There's more that can be done: here's the whole series written on allinonedownload.

If you build your own personal assistant using Api.ai, I'd love to hear about it! Did you name yours Barry too? What questions have you set up for it? Let me know in the comments below.





