Hello K5!

Please note that this article is being used as reference material for an internal competition I contributed to, but you’re more than welcome to read on!

Meet team 4L3K5a

See what we did there? We have to thank Simon Bromley for that clever team name!  We’re a team of four, all from different backgrounds, but all enthusiastic about giving the Hello K5 challenge a go!  Here’s our line-up:

  • David Taylor – Technical Consultant
  • Simon Bromley – Software Engineer
  • Chris Williams – Open Source Developer
  • Tejal Patel – Business Consultant

So what’s the idea and where did it come from?

It all started when we received an email about overdue internal training courses; attached to it was a data extract listing everyone with courses marked as ‘overdue’.  Fujitsu generates a great deal of Business Intelligence (BI) data, which has to be accessed through a specific web portal.  Not always ideal – especially when you’re juggling a couple of things and just need that vital piece of data!

This is where our ‘what if’ question came into play… what if we could create an Alexa skill which would allow us to easily query the BI data and retrieve the information we want, without having to navigate a variety of web pages?

Meet Fubiq!

We’ve come up with what we’ve named the “Fujitsu BI Querying Skill”, or Fubiq for short.

Fubiq – Your Fujitsu BI Friend

This allows us to demonstrate, using real Fujitsu BI data (the data we received in the overdue courses email), that it’s possible to query vast amounts of data in real time without having to look for it manually.

Here are some questions you can ask Alexa using Fubiq:

  • How many courses are overdue for BAS?
  • How many courses does David Taylor still need to complete?
  • List the courses that David Taylor has to complete.
  • Send an email to remind David Taylor to complete his outstanding courses.
  • Tell me who manages David Taylor.
  • Who in my team has overdue courses?

Stitching it all together

We looked at a number of technical approaches to prototyping a solution, and hit on a really great way of rapidly constructing an Alexa skill – Node-RED.  We’d previously put Node-RED to good use when needing to quickly “glue” systems together (some live Fujitsu Forum 2016 demos had Node-RED behind them), so it fitted perfectly here.

We stood up a CentOS VM on K5, deployed Node-RED, created a DNS entry, obtained an SSL certificate from Let’s Encrypt, and locked the service down.  K5 was perfect for getting a fast prototyping system in place.  We then deployed Postgres and imported the training data.

It was then a case of building the skill out with Node-RED flows, and defining the interaction model which would “point” at the service running on K5.

When Node-RED met Amazon Alexa

So how does Fubiq work in the real world?

We were asked for a video blog demonstrating what we’d built, so here it is.

Problematic Problems and Serious Successes

The Good: A K5 deployment with Node-RED on top allowed extremely simple and rapid development of Alexa skill functionality: it’s extensible (with many Node packages available), easy to understand, and makes it easy to visualise the system being created.

The Bad: Recognition of people’s names (using the AMAZON.Person slot type) was a little hit and miss – “David Taylor” could be heard as “David Taylor’s” or “David Talus”.  We found this could be counteracted (somewhat) by defining a custom slot type containing all known valid names.  Sometimes no name was passed at all from the Alexa call, meaning we needed to build in a lot more validation.
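As a rough sketch of the name validation we ended up needing (the helper name and cleanup rules here are illustrative, not lifted from our actual flow), the idea is to normalise whatever Alexa heard and match it against the custom list of known valid names:

```javascript
// Known valid names, as you'd list them in a custom slot type.
const KNOWN_NAMES = ["David Taylor", "Simon Bromley", "Chris Williams", "Tejal Patel"];

// Normalise the heard slot value and match it against the known list.
// Handles the possessive mishearing ("David Taylor's") and the case
// where Alexa passes no slot value at all.
function resolveName(heard) {
  if (!heard) return null;                          // no name passed from the Alexa call
  const cleaned = heard.replace(/'s$/i, "").trim().toLowerCase();
  return KNOWN_NAMES.find(n => n.toLowerCase() === cleaned) || null;
}
```

A genuine mishearing like “David Talus” still resolves to null, so the flow can prompt the user to repeat the name rather than querying with garbage.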

Additionally, we were forced to use the US implementation of the interaction model because it contains significant enhancements (such as the AMAZON.Person slot type).  This meant our Echo expected to hear a US accent – and responded in one too!

Future Possibilities

We’ve thought about some next steps for how Fubiq could be significantly enhanced.

Our ideas centred on building a connector into the BI system, which would allow for a wider range of questions.  We would then design a more flexible interaction model (with specific skills for specific datasets) and focus on session-based (i.e. Q&A) discussion – this would still likely be faster for retrieving snippets of data than navigating the BI system.  Our experience of building verbal queries has shown that it’s certainly possible to build more generic slot-based interactions.

In addition, we thought about how a user could ask for instant content (a bit like we did here with emails), whereby graphs, data summaries and tables could be delivered to the desktop.  More use of the cards support in the Alexa app might also be helpful.

Appendix (All the Techy Stuff)

Whilst we’ve summarised the approach we took above, we’ve included further detail below.

Development Procedure

Architecture

We initially set up Node-RED to operate over SSL so that the Echo service could talk to it, obtaining a free certificate from Let’s Encrypt.  Practically, we used Apache as the front end, terminating SSL there and proxying on to Node-RED.  We then added a Postgres module to Node-RED to allow it to query a Postgres database.  All of this was hosted on a modest K5 virtual machine (S2).
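As a rough sketch of that front end (the hostname below is a placeholder, not our real one):

```shell
# Obtain a free Let's Encrypt certificate and wire it into Apache.
# "fubiq.example.com" stands in for the real DNS entry.
sudo certbot --apache -d fubiq.example.com

# Inside the resulting SSL virtual host, Apache terminates SSL and
# reverse-proxies requests on to Node-RED (default port 1880):
#
#   ProxyPass        /  http://127.0.0.1:1880/
#   ProxyPassReverse /  http://127.0.0.1:1880/
```

Terminating SSL at Apache keeps Node-RED itself simple: it only ever sees plain HTTP on the loopback interface.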

Requests and Flows

We next started to look at the requests coming in from the Echo service: these break down into three main request.type values: “LaunchRequest”, “IntentRequest” and “SessionEndedRequest”.  For our flows we simply switched on the request.type property, and only expanded the handling for IntentRequests.  Next in the flow there’s another switch, this time on the request.intent.name property.  Its value comes from the Intents created in the Amazon Developer Console, where the Interaction Model’s Intent Schema is defined, so there is one branch in the switch per intent type.
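In Node-RED this routing lives in switch nodes, but it can be sketched in plain JavaScript like this (the function and intent names are illustrative, not taken from our actual flow):

```javascript
// First switch: route on the top-level request.type property.
function routeRequest(body) {
  switch (body.request.type) {
    case "LaunchRequest":
      return "launch";
    case "SessionEndedRequest":
      return "sessionEnded";
    case "IntentRequest":
      // Second switch: one branch per intent defined in the
      // Interaction Model's Intent Schema.
      return routeIntent(body.request.intent);
    default:
      return "unknown";
  }
}

function routeIntent(intent) {
  switch (intent.name) {
    case "OverdueCoursesIntent":   // illustrative intent names
    case "ListCoursesIntent":
    case "SendReminderIntent":
      return intent.name;
    default:
      return "unhandledIntent";
  }
}
```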

DB Queries

Next in our flows comes the handling of the individual intent types: processing the incoming request data, which generally consists of a person’s name.  This data needed massaging to fit the DB queries that come next in the flow, which return the records in the data set for the type of intent being processed.
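A minimal sketch of that massaging step, assuming a hypothetical table and columns (training_records, person, status – the real schema came from the overdue-courses extract):

```javascript
// Turn a spoken name slot into a parameterised Postgres query
// (parameterised so a misheard name can't inject SQL).
function buildOverdueQuery(personSlot) {
  // Alexa sometimes appends a possessive ("David Taylor's"),
  // so strip a trailing "'s" and tidy whitespace.
  const person = (personSlot || "").replace(/'s$/i, "").trim();
  if (!person) {
    return null; // no name heard – caller must prompt the user again
  }
  return {
    text: "SELECT course_name FROM training_records " +
          "WHERE person = $1 AND status = 'overdue'",
    values: [person],
  };
}
```

The returned `{text, values}` pair is the shape the Node-RED Postgres node (and node-postgres underneath it) accepts for parameterised queries.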

Responses

Data from the DB is then formatted into a textual version of the answer we want to send back to Alexa.  That text is then substituted into the complete formatted JSON message that is returned to the Echo service.  At any point the flow can be split, producing both the normal Echo service response and triggering other functionality – for example, sending someone an email with the result of a query.
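As a sketch, the two steps above – turning rows into speech text, then wrapping that text in the Alexa response envelope – look roughly like this (the helper names are ours; the JSON shape follows the Alexa Skills Kit response format):

```javascript
// Format query rows into the spoken answer.
function coursesToSpeech(person, rows) {
  if (rows.length === 0) {
    return person + " has no overdue courses.";
  }
  return person + " has " + rows.length + " overdue course" +
         (rows.length === 1 ? "" : "s") + ": " +
         rows.map(r => r.course_name).join(", ") + ".";
}

// Wrap the speech text in the JSON envelope the Echo service expects.
function buildAlexaResponse(speechText, endSession) {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text: speechText },
      shouldEndSession: endSession,
    },
  };
}
```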

Testing

To test the flows, we used Node-RED’s ability to inject data into a flow, simulating the data the Echo service would send for each type of intent.  This let us see the responses our flows would generate for Echo and confirm they behaved as expected.
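The injected payloads looked something like the sketch below – the overall shape follows what the Echo service posts, but the session/request IDs are dummies and the intent name is illustrative:

```javascript
// A simulated IntentRequest, as you'd paste into a Node-RED inject node.
const sampleIntentRequest = {
  version: "1.0",
  session: { new: true, sessionId: "amzn1.echo-api.session.test-0001" },
  request: {
    type: "IntentRequest",
    requestId: "amzn1.echo-api.request.test-0001",
    intent: {
      name: "OverdueCoursesIntent",   // illustrative intent name
      slots: {
        Person: { name: "Person", value: "David Taylor" },
      },
    },
  },
};
```

Feeding this into the flow exercises the full switch → query → response path without needing an Echo (or a US accent) in the loop.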
