Talking to Tintri's Alexa speechbot might not actually be all that crazy
Users can do more as management goes automatic
Interview Tintri's Alexa speechbot is no piece of eye-candy gimmickry. CTO and founder Dr Kieran Harty says it will enable users to do more with less hassle as system management gets automated. We quizzed Harty on the how-and-why of its development.
What is the overall background case for implementing an Alexa interface for Tintri array management?
The Alexa and Slack interfaces are examples of self-service. They introduce opportunities for non-experts to interact with infrastructure. The key here, though, is the level of abstraction at which Tintri operates. You couldn't have a non-expert interacting through these channels with LUNs and volumes. They wouldn't know how to carve up LUNs, set RAID types and more. With Tintri they can take actions on individual virtual machines — a common language throughout the data centre. Now they can complete tasks and gain access to valuable data without dependencies on the IT team.
Could you explain the components involved here, such as the Alexa skill, Slack chatbot, and Tintri OS Web services API? How do they relate to each other and interoperate?
The Tintri foundation is our web services architecture – something that has defined our technology from its origins. It includes a set of REST APIs that allow us to automate and orchestrate tasks through these interfaces.
With Alexa, we are using an Amazon SDK to create a "skill" that lives in AWS. That skill communicates with Tintri's services and uses our APIs to perform actions. Slack is even simpler. We have developed a layer that understands text typed into Slack and then leverages those same APIs to complete tasks.
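The Slack-side layer Harty describes — text in, REST call out — can be pictured with a minimal sketch. The command grammar and the API paths below are purely illustrative assumptions, not Tintri's actual REST API:

```python
# Hypothetical sketch of a chat layer that maps typed commands such as
# "detail vm web-01" to REST calls. Endpoint paths are invented for
# illustration; a real integration would use Tintri's documented API.

def parse_command(text):
    """Turn chat text like 'snapshot vm web-01' into (method, path)."""
    words = text.strip().lower().split()
    if len(words) == 3 and words[1] == "vm":
        verb, _, name = words
        actions = {
            "snapshot": ("POST", f"/api/vm/{name}/snapshot"),
            "detail":   ("GET", f"/api/vm/{name}"),
            "delete":   ("DELETE", f"/api/vm/{name}"),
        }
        if verb in actions:
            return actions[verb]
    raise ValueError(f"unrecognised command: {text!r}")

print(parse_command("detail vm web-01"))  # ('GET', '/api/vm/web-01')
```

The Alexa path would differ only in the front end: the skill hosted in AWS resolves the spoken utterance to the same kind of intent, then calls the same APIs.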
How do you see an Alexa "speechbot" helping Tintri array admin staff and/or users? What is the pay-off for them?
The pay-off is removing dependencies on storage specialists. Non-experts can now access and consume data about the performance of their virtual machines. They can gather this data and take actions in an easy way, no matter where they are — from an Amazon Echo in their home office, the Slack app on their mobile device, etc.
Some have suggested it's a gimmick because a speechbot wouldn't work in noisy environments or would make mistakes such as misinterpreting "detail" as "delete". How would you respond to these issues?
First, storage actions can be permissioned. We don't expect organisations would allow any employee to take any action on the infrastructure. But within permissions, the line of business or DevOps could spin up or tear down VMs as needed (e.g. for a test exercise). Second, a request can trigger a follow-up; for example if an end user request for "detail" was misinterpreted as "delete", it could kick off an email to the admin to approve the request. The bottom line is there are multiple ways to introduce controls that prevent these types of errors.
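The control Harty outlines — destructive verbs never execute directly, but instead trigger an approval step — can be sketched as a simple gate. The verb list and the email hook are assumptions for illustration:

```python
# Hedged sketch of a confirmation gate: safe actions run immediately,
# destructive ones are parked until an admin approves. The DESTRUCTIVE
# set and the approval callback are illustrative, not Tintri's design.
DESTRUCTIVE = {"delete", "teardown"}

def handle(verb, vm, user, send_approval_email):
    if verb in DESTRUCTIVE:
        # Don't act; ask a human first.
        send_approval_email(f"{user} asked to {verb} {vm} - approve?")
        return "pending approval"
    return f"{verb} executed on {vm}"

requests = []
print(handle("delete", "web-01", "alice", requests.append))
# 'pending approval' -- and requests now holds the approval message
```

This is also why a "detail"/"delete" mix-up is recoverable: the misheard verb lands in the approval queue rather than destroying anything.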
Do you see the "speechbot" working alongside the existing Tintri GUI and CLI interfaces and, if so, how?
The Tintri UI and interfaces like Alexa and Slack are complements. These examples of ChatOps will not be nearly as comprehensive as the Tintri UI. For the foreseeable future, you are going to want to log in to the UI to see trends and patterns, perform what-if analytics and much more. But simple requests for data and actions can be effectively accomplished through Alexa, Slack and similar interfaces.
Do you see the "speechbot" being able to act on an entire system and not just the Tintri storage component? How might this happen?
From day one, Tintri's web services architecture has allowed us to orchestrate actions beyond storage. For example, our realtime analytics include visibility into compute and network, and automation scripts enable actions that span an organisation's infrastructure. So yes, it is possible to use Tintri to take action on other pieces of the enterprise cloud ecosystem.
How might Tintri add its own intelligence to questions for the "speechbot" such as "How can I improve system performance?" and "What could be wrong with the system?" Is there a Tintri machine learning aspect to this?
Initially, we will be able to map certain questions to specific types of insights offered by Tintri. Consider Tintri VM Scale-out, which already identifies the optimal location for every virtual machine across an organisation's entire Tintri footprint. Recommendations from that engine could be fed through the Alexa or Slack interface.
So for example, a user could ask, "How can I save storage space?" and trigger an analysis that produces a response like "You have 100 VMs that have not been used in the past 90 days — would you like me to delete them?"
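The space-saving query in that example amounts to filtering VMs by last-use date. A minimal sketch, assuming a toy VM record with a `last_used` field and the 90-day threshold from the interview (not a real Tintri data model):

```python
from datetime import date, timedelta

# Illustrative version of "How can I save storage space?": find VMs
# idle for a threshold number of days. Records are invented examples.
def stale_vms(vms, today, threshold_days=90):
    cutoff = today - timedelta(days=threshold_days)
    return [vm["name"] for vm in vms if vm["last_used"] <= cutoff]

vms = [
    {"name": "build-01", "last_used": date(2017, 1, 2)},
    {"name": "web-01", "last_used": date(2017, 6, 1)},
]
print(stale_vms(vms, today=date(2017, 6, 15)))  # ['build-01']
```

A chat front end would then phrase the result as the follow-up question Harty describes — "would you like me to delete them?" — rather than acting on its own.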
In five years' time how do you see system use, operation and management affected by "speechbot" and AI-related technologies?
Most of the operations of the system will be managed automatically without human intervention. You won't need people to perform load balancing, most troubleshooting or tracking of wasted resources. Software will monitor the system, take action and inform the relevant people when necessary. For example, consider a user who wants to look at some documents that are two years old, but doesn't currently have the right permissions to access the documents.
He makes a request by text to a bot. The bot detects that his manager is currently driving and calls her, rather than forwarding the text. She (a non-IT person) tells the bot whether or not to grant access. If access is granted, the bot requests the old documents from a public cloud archive and notifies the user by text when the documents are available. The normal cases will be handled completely by software and the exceptions will be handled by bot-mediated communication. It will take less than five years for this to happen.
Does Tintri have any initiatives related to patenting its "speechbot" technology?
Some elements of our approach to ChatOps can be patented – for example, the use of multiple technologies in concert to complete these actions. We are currently examining opportunities to patent these innovations. ®