
Alexa, Remind Me To Blog

Who wants to go to all the trouble to look at things? You have to hold those heavy eyelids open, point both eyes in the same direction, and then apply brain power to perceive what you’re seeing! No thank you, sir. I’ll take my information the old-fashioned way: by shouting to my servant, demanding an answer right away.


Until relatively recently, interactions with computers were designed visual-first: here’s a screen, look at text and pictures, click, tap, or type, done. But the world of chatbots and virtual assistants is blowing up. All the big players are producing voice-enabled and text-enabled bots that answer human-style questions, and anything with an API is becoming fair game to integrate into those bots. This fits my continuing thesis: machine brains surround us, and they will become ubiquitous and powerful in short order.

One of the most powerful current applications of voice interfaces is asking direct questions with simple phrases. Amazon’s Echo device with the Alexa Skills Kit fits the bill perfectly. That’s why I built this demonstration Alexa skill out of our Analytics for Fiori application. It was surprisingly easy to build the Alexa part, which made it a no-brainer to attach it to something with a lot of power.

If you want to either really impress people with your hacking skills or really annoy coworkers while testing a voice-powered interface, read on to see how I did it.


Here’s what you need before you write a single line of code:

  • An Amazon developer account. Sign up at the Amazon Developer portal.
  • AWS enabled on your account – you’ll use it for Lambda below.
  • You can test out the work you’ve done without any extra pieces of hardware, but to get the full effect, grab any of the Echo hardware from Amazon.
  • An SAP system with a working OData service. Technically speaking, you could also use some other sort of web interface into your SAP system…but OData is kind of designed for that. So just use what’s easy. I’m using the OData service that powers our Analytics for SAP BW Fiori application.
  • Then you can start designing the skill and enter the basic skill information.
    • For this example, use the skill type “Custom Interaction Model”.
    • Choose a name that is short but distinct. “Mindset Business Dashboard” fits nicely for this one.
    • The invocation name is what Alexa listens for in the voice interface. It should be even shorter and easy to remember – “Mindset dashboard” for this example, so the skill will be invoked by saying “Alexa, ask Mindset dashboard…”

The next part of the wizard is the voice interaction model.

Design Alexa Skill

Alexa has several paths to handle requests that come from an Echo device. You can create a custom skill, which provides a web service to Alexa that does more or less whatever you want conversationally; a smart home skill, which uses an adapter to control devices in your home; or a flash briefing skill, which lets Alexa read off items from a designated RSS feed. Since I control the OData web service, we’ll use the custom skill path.

Someone who uses this Alexa skill will activate it by voice, so your first job is to define how the voice interface works. It’s actually fairly simple: you define an intent, which is like a header for an action, and then one or more slots attached to that intent. The slots are placeholders that define a list of possible values within the context of the intent. Mine is in intents.json.
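My actual intents.json isn’t reproduced here, but a minimal custom-skill intent schema in that shape might look like the following sketch. The intent name GetQueryIntent is an illustrative stand-in, while the Query slot and the custom LIST_OF_QUERIES type are the ones described above:

```python
import json

# Hypothetical reconstruction of an intent schema like the one described:
# one intent, with a single slot backed by the custom LIST_OF_QUERIES type.
# The intent name "GetQueryIntent" is illustrative, not the original.
intent_schema = {
    "intents": [
        {
            "intent": "GetQueryIntent",
            "slots": [
                {"name": "Query", "type": "LIST_OF_QUERIES"}
            ]
        }
    ]
}

# The skill builder takes this as plain JSON text.
print(json.dumps(intent_schema, indent=2))
```

The custom slot type itself is just a list of accepted values entered separately – for this example, the two query names.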

You then define what goes in LIST_OF_QUERIES separately. For this example, I picked two BEx queries – one for materials and one for customers – with the slot values “Materials” and “Customers”, respectively. We’ll add the intent schema and the slot configuration to the Alexa skill setup later.

To finish setting up the voice interface, you provide a list of sample “utterances” to Alexa. This is a list of typical phrases people might use to interact with your skill, and it’s used to train a model so Alexa can be flexible in interpreting what users say. Each line I provided starts with the intent name and includes a placeholder for the Query slot.
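My original utterances list isn’t shown here, but entries in that format look roughly like the following – the intent name GetQueryIntent and the phrasings are illustrative, not the originals:

```text
GetQueryIntent how are my {Query} doing
GetQueryIntent what do my {Query} look like
GetQueryIntent tell me about my {Query}
```

Each line pairs an intent name with a phrase, and the curly braces mark where a slot value like “Customers” or “Materials” gets filled in.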


To create the web service for your skill, use Lambda. It’s low-touch, free for a ridiculous number of requests, and lets you plop your code right in to spin up a web function. Simple.

  • Sign into your AWS console and start here.
  • Click “Create a function” on the start screen and you’ll get to the blueprint page.
  • Click the “blank function” template and you’ll see this:
  • Click inside the dotted line box and choose “Alexa Skills Kit”.
  • In the next screen, provide a function name in the first field, and choose Python 2.7 as your runtime.
  • Paste the code from lambda_skill.py into the editor. You’ll need to edit it later, because I left some blanks and comment placeholders for you to fill in your own function code and authentication details.
  • Set the Handler field to [name of your function].lambda_handler. So if your function name is “getCoolStuff”, your handler is “getCoolStuff.lambda_handler”
  • Choose the “lambda_basic_execution” role.
  • Leave the advanced settings as-is, unless you want to increase the timeout value to more than 3 seconds. That depends on how long it might take for your SAP OData service to respond.
  • Click “Next”, then choose “Create function” to finish the setup.
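The full lambda_skill.py isn’t reproduced here, but a minimal sketch of an Alexa custom-skill handler in that shape might look like this. The function and slot names mirror the examples above, and fetch_query_result is a stand-in for the real OData call against your SAP system:

```python
import json


def build_response(speech_text, should_end_session=True):
    """Wrap plain speech text in the response envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": should_end_session,
        },
    }


def fetch_query_result(query_name):
    """Placeholder for the OData call. In the real skill, this would issue an
    authenticated HTTP GET against your SAP Gateway OData service and turn the
    JSON result into a sentence; the URL and credentials are yours to fill in."""
    return "Sales for %s are looking good." % query_name  # stub value


def lambda_handler(event, context):
    """Entry point Lambda invokes; this is the lambda_handler half of the
    Handler field described in the setup steps above."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Which query would you like to hear about?",
                              should_end_session=False)
    if request["type"] == "IntentRequest":
        # The slot name here mirrors the Query slot from the intent schema.
        query = request["intent"]["slots"]["Query"].get("value", "unknown")
        return build_response(fetch_query_result(query))
    return build_response("Goodbye.")
```

Keeping the OData call in its own function makes the timeout concern from the advanced settings easy to reason about: only fetch_query_result ever waits on your SAP system.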

Finishing Touches

Now that you have a Lambda function, you can finish the Alexa skill setup.

  • From the Lambda functions dashboard, click on your new function.
  • The top right of the screen will have an ARN field you can copy.
  • Go back into the Alexa skills kit dashboard, click your skill, and go to the “Configuration” wizard step.
  • Choose “AWS Lambda ARN” as your “Service Endpoint Type”.
  • Click “North America” in the region section, and paste your ARN into the text box that appears.

You’ve now done enough to test your skill! I recommend using the “Test” step of the skill wizard to be able to type out a text utterance and see the response your skill makes. If you have an Echo Dot or Echo device connected to your developer account, you’ll also be able to use the device to test things out, as I did in the video above.

It’s helpful to log and view information about each request when things are failing. A simple print() statement anywhere in your Lambda code will output debug information to CloudWatch, which you can review from the “Monitoring” tab of your Lambda function.

Have fun!


If you’re interested in similar content, visit our blog here.

View our LinkedIn here.

Paul Modderman loves creating things and sharing them. He has spoken at SAP TechEd, multiple ASUG regional events, ASUG Fall Focus, Google DevFest MN, Google ISV Days, and several webinars and SAP community gatherings. Paul's writing has been featured in SAP Professional Journal, on the SAPinsider blog, and the popular Mindset blog. He believes clear communication is just as important as code, but also has serious developer chops. His tech career has spanned from web applications with technologies like .NET, Java, Python, and React to SAP solutions in ABAP, OData, and SAPUI5. His work integrating Google, Fiori, and Android was featured at SAP SAPPHIRE. Paul was principal technical architect on Mindset's certified solutions CloudSimple and Analytics for BW. He's an SAP Developer Hero, honored in 2017. Paul is the author of two books: Mindset Perspectives: SAP Development Tips, Tricks, and Projects, and SAPUI5 and SAP Fiori: The Psychology of UX Design. His passion for innovative application architecture and tech evangelism shines through in everything he does.
