To configure the Alexa Skill discussed in the User Interface section, follow the steps below:

  1. Sign in to the Alexa developer portal at https://developer.amazon.com/. You can create a free Amazon account if you don't have one.
  2. Create a new skill using the Create Skill option in the Alexa Developer Console, and choose the Custom template from the tiles.
  3. Once your new skill project is created, you will be navigated to the skill builder screen. One of the most important settings in the previous step is the Default language, which should match your physical location. If the language does not match your location, you will not be able to invoke the skill from an Alexa device while it is in the development stage.
  4. Next, provide an Invocation name. This is crucial because it is how the Alexa service identifies your skill among the many skills in its catalogue. When you talk to an Alexa device such as an Echo Dot or Echo speaker and say "Alexa, open {your skill name}", the Alexa skills service looks up the invocation name to route the request.
  5. Next, set up an Intent. An intent represents a user action or request that your skill can handle. Alexa provides a long list of built-in intents, and you can also build custom intents of your own.
  6. A number of default intents are available out of the box. For this blog I will create a custom intent, which I will then extend with input parameters to be processed by the skill's backend API.
  7. Within the intent, define sample Utterances. These are the spoken phrases that map to the intent, and you should include as many representative phrasings as possible.
  8. In the utterances you can add Slots. A slot is a typed placeholder, with built-in types such as airlines, cities, countries, states, artists, colours, and so on. Alexa offers a long list of slot types, and if none of them fits, you can use AMAZON.SearchQuery, which is what I use in this blog. This slot type passes the user's spoken words as-is to the skill's backend API.
  9. Once you have completed step 8, the basic configuration is done. You can review and validate the configuration as JSON using the JSON Editor (see the interaction model sketch after this list).
  10. With the skill setup done, we next need to set up an API endpoint that will process the incoming user intents. There are two options at this stage for the backend service endpoint:
    • AWS Lambda ARN
    • HTTPS
    For this demo, I have used Azure Functions to build the API backend that serves requests. In Part 2 of this blog I will explain the API endpoints built with Azure Functions (a sketch of the request and response JSON the endpoint must handle follows this list).
  11. The last step is to save the model and select Build Model. Building the model generates the machine learning model that the platform uses to interpret user utterances and extract slot values for processing.
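
For reference, here is a minimal sketch of the kind of interaction model JSON you would see in the JSON Editor from step 9. The invocation name, intent name, slot name, and sample utterances are hypothetical placeholders, not the actual values used in this skill; they simply illustrate a custom intent carrying an AMAZON.SearchQuery slot.

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my demo skill",
      "intents": [
        {
          "name": "AMAZON.HelpIntent",
          "samples": []
        },
        {
          "name": "GetInfoIntent",
          "slots": [
            {
              "name": "query",
              "type": "AMAZON.SearchQuery"
            }
          ],
          "samples": [
            "tell me about {query}",
            "I want to know about {query}"
          ]
        }
      ],
      "types": []
    }
  }
}
```

Note that the invocation name must be lowercase, and utterances that use an AMAZON.SearchQuery slot need some carrier words around the slot (such as "tell me about"), since the slot captures free-form speech.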
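For step 10, the HTTPS endpoint receives a JSON request from the Alexa service and must return a JSON response. Below is a trimmed sketch of an IntentRequest and a plain-text response, showing only the fields relevant here; the intent and slot names follow the hypothetical model above, and a real request also carries session and context objects that are omitted for brevity.

```json
{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "intent": {
      "name": "GetInfoIntent",
      "slots": {
        "query": {
          "name": "query",
          "value": "weather in london"
        }
      }
    }
  }
}
```

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Here is what I found about weather in london."
    },
    "shouldEndSession": true
  }
}
```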
With these 11 steps, the setup and configuration of the Alexa Skill is complete. Next, we need to build the API endpoint that processes user requests; the next part of this blog series will cover the Azure Functions based API used in step 10.