Talk to your Tableau Dashboard

Shouldn't an author be able to explain their Tableau Dashboard to every person who views it? Of course they should! This capability should be available to authors and accessible to their end users, regardless of those users' abilities. We should also make Tableau's awesome interactive capabilities as accessible as we can, since web users have vastly diverse abilities.

I encourage you to read through the details below; however, you can visit the working example here if you prefer to skip the write-up.

Background

A while ago, Jeffrey Shaffer and I traded emails around his Tabitha post from 2016. We discussed the fact that implementing his technique still required writing some code, which could end up being a barrier to entry for some. As usual, Jeff did a great job of explaining what is needed in his post, but the coding requirement remained. We noodled around some ideas for how we could enable the Tabitha project for those who did not want to write any code. This blog is (hopefully) the first step toward that effort, and thanks again to Jeff for the consults and inspiration along the way. I also reached out to Chris Toomey early on for guidance on leveraging React for the project and how to structure things.

Also, thanks to Matt Francis and Adam McCann, who let me use their vizzes for some of the built-in examples.

Approach

The approach is fairly simple: we decided to build a JS API wrapper around our (or your, or any) Tableau viz which extends the end user's mouse-click UX to include voice recognition and response. This is centered on the Tableau JS API and getting as much information out of the workbook as we can once it is rendered (this also honestly left me wanting a good deal more out of the Tableau JS API). Lastly, I decided to build off the previous Tableau + React work I have done and leverage React for the build-out of this "wrapper". While I have used React, I am sure you could leverage other frameworks as well.

Implementation

The pieces leveraged for this project are as follows:

  • Tableau JS API - instantiates the viz and pulls information out of the workbook once rendered

  • React - provides the structure for the wrapper itself

  • ResponsiveVoice - handles the spoken responses back to the user

  • Browser voice recognition - listens for the user's commands

At a high level, the process followed in the code is:

Have Tabitha introduce itself and deactivate listening temporarily while speaking - With React, having Tabitha introduce itself was as easy as setting the initial state of the app with an introductory message and sending that to my ResponsiveVoice component. This could potentially be opened up to the Tableau author via a Parameter as well (see the DESCRIPTION parameter below).
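
To make that concrete, here is a minimal sketch of the pattern, assuming the responsivevoice.js script is loaded globally; the component, state, and message names are my own placeholders, not necessarily those in the repo:

```javascript
import React from 'react';

// Minimal sketch: speak an intro message on mount and flag when
// Tabitha is talking so the listener can ignore her own voice.
class Tabitha extends React.Component {
  state = {
    intro: 'Hi, I am Tabitha. Load a viz and start talking to it.',
    speaking: false,
  };

  componentDidMount() {
    this.speak(this.state.intro);
  }

  speak(text) {
    // ResponsiveVoice's onstart/onend callbacks toggle the flag.
    window.responsiveVoice.speak(text, 'UK English Female', {
      onstart: () => this.setState({ speaking: true }),
      onend: () => this.setState({ speaking: false }),
    });
  }

  render() {
    return <div>{this.state.speaking ? 'Speaking...' : 'Listening...'}</div>;
  }
}
```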

Instantiate any Tableau viz submitted by an end user - This follows previous examples you have seen, nothing new here. I did push first interactive to state in my React app so that I could leverage that indicator in more than one component via props. Also, I leveraged the React lifecycle to determine when a new viz URL has been provided and, in that case, trigger disposal of the current viz and loading of the new one.
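
For illustration, a sketch of that lifecycle handling, assuming the Tableau JS API (v2) script is loaded on the page; the prop and callback names here are illustrative:

```javascript
import React from 'react';

// Instantiate a viz from a submitted URL, dispose and reload when
// the URL prop changes, and surface first-interactive to app state.
class TableauViz extends React.Component {
  componentDidMount() {
    this.initViz(this.props.url);
  }

  componentDidUpdate(prevProps) {
    // A new viz URL was provided: dispose of the current viz first.
    if (prevProps.url !== this.props.url) {
      if (this.viz) this.viz.dispose();
      this.initViz(this.props.url);
    }
  }

  initViz(url) {
    this.viz = new window.tableau.Viz(this.container, url, {
      hideTabs: true,
      onFirstInteractive: () => {
        // Push first-interactive up to app state so multiple
        // components can leverage the indicator via props.
        this.props.onFirstInteractive(this.viz);
      },
    });
  }

  render() {
    return <div ref={(el) => (this.container = el)} />;
  }
}
```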

Use the JS API to pull all relevant information possible from the viz (based on the JS API version at the time of this post) - This is a bunch of JS API calls triggered by first interactive, all of which store their results into an array. That array is then referenced to validate voice recognition and to provide context in voice responses. For example, this powers the information about the rendered viz given back to the user after the viz is loaded for the first time.
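
As one example of what those calls can look like (these are real JS API v2 methods, but the shape of the collected array is my own illustration):

```javascript
// Harvest filters and parameters from the rendered viz into an
// array used to validate voice commands and narrate the viz.
function harvestVizInfo(viz) {
  const info = [];
  const workbook = viz.getWorkbook();
  const active = workbook.getActiveSheet();

  // A dashboard exposes member worksheets; a lone sheet is its own target.
  const worksheets =
    active.getSheetType() === 'dashboard' ? active.getWorksheets() : [active];

  const filterCalls = worksheets.map((ws) =>
    ws.getFiltersAsync().then((filters) =>
      filters.forEach((f) =>
        info.push({ type: 'filter', sheet: ws.getName(), field: f.getFieldName() })
      )
    )
  );

  const paramCall = workbook.getParametersAsync().then((params) =>
    params.forEach((p) =>
      info.push({ type: 'parameter', name: p.getName(), value: p.getCurrentValue().value })
    )
  );

  return Promise.all([...filterCalls, paramCall]).then(() => info);
}
```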

Start listening for voice commands and respond with confirmation when a request succeeds or guidance when it fails - I honestly struggled with this one for a while. I was fighting the voice listener, turning it on and off based on whether ResponsiveVoice was speaking. It currently works via a callback from ResponsiveVoice which toggles state and thus determines whether or not to act on any phrase consumed. I ended up leaving the listener active at all times, but toggling whether or not to take action based on the aforementioned voice response callback.
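
Here is a sketch of that always-on listener using the browser's Web Speech API (Chrome's webkitSpeechRecognition); the speaking flag is the one toggled by the ResponsiveVoice callbacks, and handleCommand is a placeholder:

```javascript
// Keep recognition running at all times, but only act on phrases
// heard while Tabitha is not speaking, so she ignores herself.
function startListening(isSpeaking, handleCommand) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true; // stay active rather than stopping after one phrase
  recognition.interimResults = false;

  recognition.onresult = (event) => {
    const last = event.results[event.results.length - 1];
    const phrase = last[0].transcript.trim();
    if (!isSpeaking()) {
      handleCommand(phrase);
    }
  };

  // Chrome ends recognition periodically; restart to remain always-on.
  recognition.onend = () => recognition.start();
  recognition.start();
}
```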

Note: I decided not to paste the project's actual code into this blog, as this blog is more about use of the JS API wrapper and not actually having to write code (see the section below). Having said that, all code is readily available on my GitHub; fork the repo and reverse engineer it, or better yet, extend the capabilities to your own needs. I would be thrilled to see some PRs roll through to help increase capabilities faster than I can on my own (which is pretty slow). I am always happy to answer questions or provide more details on the code written for this project, so don't hesitate to reach out.

Interacting with Tabitha

Here is how you can go about talking to your own Tableau viz...

  • Go to my page hosted on GitHub Pages (the same link is at the bottom of this post) or fork your own copy to work with locally (instructions for getting up and running locally are in the readme within the Git repo).

  • Put in your Tableau Public viz URL (use the share link, not the browser URL) and click the submit button... THAT IS IT!!

  • Feel free to take advantage of the limited API exposed via Tableau Parameters (a sketch of how these parameters could be consumed follows this list):

  • DESCRIPTION

    • Type: String, Any Value

    • Usage: include this parameter in your workbook and provide a brief description that you want Tabitha to read out to your user.

  • SELECT CONFIGURATION

    • Type: String, List

    • Usage: include this parameter with Sheet/Field pairs in the parameter list's Value and "Display As" entries, respectively, in order to enable mark selection from voice recognition. I took this approach to try to ensure that we don't overload the browser, but that is still a possibility if you submit massive data to this config. Target a summarized sheet to be the most efficient with this. See examples of this in the workbooks I embedded into the Tabitha app.

  • If Tabitha is not responding, check that the listening toggle button at the top left is in the listening state; otherwise, click it to toggle it back into listening mode. Worst case, refresh the page and try again.

  • Just want to play around? Try the command “Tabitha Show Example” a few times.
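
To make the parameter contract above concrete, here is a hedged sketch of how a wrapper like this could consume the two parameters. The JS API calls are real v2 methods, but the parsing conventions (reading Sheet/Field pairs from a list parameter's raw and formatted values) are assumptions for illustration:

```javascript
// DESCRIPTION: read the author-provided text aloud after load.
// SELECT CONFIGURATION: collect Sheet (Value) / Field (Display As)
// pairs that define which marks voice commands may select.
function applySpecialParameters(workbook, speak, selectTargets) {
  return workbook.getParametersAsync().then((params) => {
    params.forEach((p) => {
      if (p.getName() === 'DESCRIPTION') {
        speak(p.getCurrentValue().value);
      }
      if (p.getName() === 'SELECT CONFIGURATION') {
        p.getAllowableValues().forEach((dv) =>
          selectTargets.push({ sheet: dv.value, field: dv.formattedValue })
        );
      }
    });
  });
}

// A recognized command then resolves to a mark selection:
function selectByVoice(worksheets, target, value) {
  const ws = worksheets.find((w) => w.getName() === target.sheet);
  return ws.selectMarksAsync(
    target.field,
    value,
    window.tableau.SelectionUpdateType.REPLACE
  );
}
```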

Where can this go next?

  • This project is open source and available on my GitHub. It is not perfect (nor is it meant to be at this point). See something you don't like? Please go ahead and try to address it and submit a PR.

  • I did not do anything from a CSS perspective, so styling could be greatly improved.

  • I only tested this in Chrome on my MacBook; I doubt it will work in older browsers or on phones.

  • Extend this to additional types of filters, allow multi-selection, etc.

  • Improve the lifecycle and async-related interplay between Tableau and voice response/recognition, as this can get tripped up sometimes (and then the computer talks to itself).

  • Augment the process to parse voice commands and enable less stringent commands to work well. For example, my daughter asked for two selections by saying "Tabitha select Anna and Elsa"; the current state of the project only selects the first, Anna in this case. (A sketch of one possible approach follows this list.)
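
On that last point, a hypothetical sketch of multi-selection parsing; selectMarksAsync already accepts an array of values, so the main work is splitting the phrase:

```javascript
// Split "Tabitha select Anna and Elsa" into ['Anna', 'Elsa'].
function parseSelections(phrase) {
  const match = phrase.match(/select (.+)/i);
  if (!match) return [];
  return match[1].split(/\s+and\s+/i).map((v) => v.trim());
}

// e.g. parseSelections('Tabitha select Anna and Elsa') -> ['Anna', 'Elsa']
// worksheet.selectMarksAsync(field, values, tableau.SelectionUpdateType.REPLACE);
```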

Lastly, click here to enjoy talking to your viz!