The 10 apps I want to see up their voice assistant game
The average user has 80 apps installed on their phone but, day to day, uses only 8 of them. Judging by the current top app download charts, the same apps from the tech giants dominate.
These apps run both our personal and work lives, and the two leading mobile platforms (iOS and Android) are now making voice a first-class citizen, with Siri Shortcuts on iOS and Google Assistant on Android. Voice first is going to make people think more in a "jobs to be done" mindset than about which icon looks best on their home screen. Will that change which apps we choose to use daily?
This will open a gap in the market for a new wave of services, but what could the apps we use today do with voice, and why aren't they doing it now rather than waiting for native platform integration?
Food ordering is a repetitive task, and like most people, I mostly order food from the same places and re-order the same things. It would be great to be able to say "Alexa, tell Deliveroo to re-order me chicken" and have it prompt me with a location, or locations, based on where I have ordered chicken before. If I have recently ordered for multiple people, I get prompted to confirm the number of meals; otherwise it just places the order. That means I can do this in the kitchen when I realise there is no food, or safely and hands-free as I drive home.
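A flow like this could be sketched as a simple lookup over past orders. This is a toy illustration with made-up data and helper names, not Deliveroo's actual API:

```python
from collections import Counter

# Hypothetical order history: (restaurant, item, number of meals)
history = [
    ("Chicken Shack", "chicken", 1),
    ("Chicken Shack", "chicken", 3),
    ("Nando's", "chicken", 1),
    ("Pizza Place", "margherita", 2),
]

def reorder_candidates(history, item):
    """Restaurants where `item` was ordered before, most frequent first."""
    counts = Counter(rest for rest, ordered, _ in history if ordered == item)
    return [rest for rest, _ in counts.most_common()]

def suggested_covers(history, item, restaurant):
    """Suggest the number of meals based on the most recent matching order."""
    matches = [n for rest, ordered, n in history if ordered == item and rest == restaurant]
    return matches[-1] if matches else 1

print(reorder_candidates(history, "chicken"))          # ['Chicken Shack', "Nando's"]
print(suggested_covers(history, "chicken", "Chicken Shack"))  # 3
```

The assistant would only need to confirm when the history is ambiguous (multiple restaurants, or a recent multi-person order), which is exactly what the two helpers surface.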
Blueprints is a tool (currently only in the USA) that enables anyone to build a basic Amazon Alexa skill for their household. So why do I want the Alexa skill builder to be more voice enabled? The current version of Blueprints is excellent for basic skills, but what I want, and what I feel most households would benefit from, is something slightly more dynamic: being able to verbally ask what is for dinner each night, or the ability to make a recall game for the kids so that they can practise their homework.
Though these skills don't need to be complex, the tool for creating them has to be simple to follow while making dynamic voice experiences easy and fluid; my kids need to be able to build skills via an app in 10 minutes. Based on the pace and support the Amazon Alexa team has been running at, I would be surprised if Blueprints isn't accelerated quickly.
It is no secret that I love Monzo and the challenger bank model. As the Monzo API becomes production ready, its recent link-up with IFTTT is starting to create exciting integrations. For me, in a voice-first world, budgeting would be fantastic, especially when linked to the soon-to-be-available joint accounts.
Imagine being able to ask for the remaining shopping budget for the month as I write the weekly shopping list. Or, when planning a night out, budgeting for the babysitter and booking a meal: Monzo knowing the average meal price at the restaurant and alerting me that it would send me over my set budget would be special. I think passive payments/donations could be massive here too (Alexa recently added donations natively in the US); imagine watching TV and just saying "Alexa, donate to the cause on TV now".
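The budget check behind that alert is simple arithmetic over a category's transactions. A minimal sketch, with invented figures and helper names (the real Monzo API exposes transactions, not these functions):

```python
def remaining_budget(budget_pence, transactions_pence):
    """What's left of a monthly category budget after a list of spends."""
    return budget_pence - sum(transactions_pence)

def would_exceed(budget_pence, transactions_pence, planned_spend_pence):
    """True if a planned spend (e.g. the average meal price) blows the budget."""
    return planned_spend_pence > remaining_budget(budget_pence, transactions_pence)

# Monthly eating-out budget of £120.00, with £85.50 already spent this month
spent = [2500, 3050, 3000]  # pence
print(remaining_budget(12000, spent))    # 3450 -> £34.50 left
print(would_exceed(12000, spent, 4200))  # True: a £42 average meal is over budget
```

Working in pence avoids floating-point rounding on money, which is the convention Monzo's API itself follows (amounts in minor units).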
Netflix set itself up as the content service you could get anywhere, regardless of platform, device or technology. If you have a screen, Netflix will probably run on it. Netflix itself has no voice integration, though if you use an Amazon Fire TV or Apple TV, the native voice OS integration for searching Netflix shows is OK but limited to title and actor. What if you could say "Alexa, show me films with Daniel Craig where he isn't James Bond" or "Siri, what is that film where Adam Sandler is a singer"? Discovery of new shows would become much easier.
Podcasting is very much on the rise, and I listen to around 25 weekly shows through Pocket Casts because it syncs my listening positions and lets me control playback speeds (I fluctuate between 1.5x and 1.8x depending on the show). Because of this, I mostly listen on my phone via an AirPod. It would be great to have Alexa or Sonos integration so you could jump between devices, similar to how Whispersync works between Kindle and Audible. The car is the killer move: most podcasts I listen to are under 30 minutes, and I tend to forget to backfill. I want to be able to say, without touching my phone, "Siri, start the next Business Wars podcast episode" and not have it go to the Apple Podcasts app, which I don't use. Maybe with Siri Shortcuts this will be possible in Shortcuts v2.
Google Maps is the dominant map service, and it was great to see some new announcements at Google I/O this year on innovations they have coming, in particular the use of AR to overlay your directions on the road around you. A great feature, but it feels lazy, as it means I am still the guy walking around with my phone in the air. What I want is Audible Reality.
I want Google Maps to tell me where to walk using the landmarks around me; telling me to walk 300 metres (how far is 300 metres?) isn't helpful. However, imagine if it said: "You'll see Starbucks on your left; cross the road now (mind the road, as people drive fast along here), and after Starbucks turn right and walk down the road until you're outside Waterstones on your left." No need to take my phone out, which is both annoying and makes it clear I have no idea where I am going. I also get to take in my surroundings while still getting the directions I need to reach my location.
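Rendering landmark-based directions is mostly templating over route steps annotated with nearby places. A sketch under the assumption that such annotations exist (the step data and field names here are invented for illustration):

```python
def spoken_step(step):
    """Turn an annotated route step into a landmark-based spoken instruction."""
    parts = []
    if step.get("landmark"):
        parts.append(f"you'll see {step['landmark']} on your {step['side']}")
    parts.append(step["action"])
    if step.get("until"):
        parts.append(f"until you're outside {step['until']}")
    sentence = ", ".join(parts) + "."
    return sentence[0].upper() + sentence[1:]

route = [
    {"landmark": "Starbucks", "side": "left", "action": "cross the road now"},
    {"action": "turn right and walk down the road", "until": "Waterstones"},
]
for step in route:
    print(spoken_step(step))
```

The hard part in practice is the annotation itself (knowing which storefronts are visible from a given point), not the sentence generation.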
Project management isn't something you would think you could use with voice, and of all the apps this one is perhaps a little way off full integration, but what about getting up to speed with tasks on your way to work?
It would be great if Asana could give me an overview of the tasks I am working on and whether their status has changed since yesterday, reading out the latest comments and how timings are going, while also allowing me to comment back and update task statuses.
Strava is an excellent tool for tracking your exercise, and its audio triggers at points during a run or cycle are good but repetitive and, to be honest, boring in tone. New services such as Aaptiv are showing that workouts are much better in audio form, so what can/should Strava do?
As I am running or cycling, look at my route: if I am about to hit an incline, help push me through it; if my pace is slowing, let me know to recover and then push harder. Through something like Google Maps, calculate my timings and work out whether my route and pace mean I can complete the session in time, or whether I should take an adapted course to get me home or back to work on time.
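That timing check is just arithmetic over remaining distance and current pace. A minimal sketch, with all numbers illustrative:

```python
def can_finish_in_time(remaining_km, pace_min_per_km, minutes_available):
    """Will the rest of the route fit in the time left at the current pace?"""
    return remaining_km * pace_min_per_km <= minutes_available

def pace_needed(remaining_km, minutes_available):
    """Pace (min/km) required to get home on time."""
    return minutes_available / remaining_km

# 5 km to go at 6 min/km needs 30 minutes
print(can_finish_in_time(5, 6.0, 25))  # False: suggest the shorter route
print(pace_needed(5, 25))              # 5.0 min/km to make it on time
```

The interesting coaching decision (push harder vs. reroute) then reduces to whether the required pace is realistic given recent effort data.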
With Office 365 you can already use Cortana as both an audio and a textual assistant, and it is pretty good at the basics, but deeper integration would be fantastic. I can't live without the read-aloud function in Word for blog writing now, but its voice isn't very humanistic, nor does it pick up UK grammar that well; I think tighter integration with Azure's speech service would create a more fluid output.
Aside from wanting Unsplash (royalty-free, high-quality images) added to any service, being able to say "Cortana, show me photos of a bike" and have images pulled from Unsplash and inserted into my document would be excellent. Excel formulas I always find annoying: do you add a + or an &, or have I missed a bracket? What if you could say "Cortana, add the total of 71 and 32 in the cell to the right"? No need for cell references; you reference the data as it stands as you instruct Cortana. Simple, but for complex formulas this could be both more convenient and more productive: "Cortana, sum up column 3 at the bottom and then extract the VAT from the total".
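Turning a spoken instruction like "sum up column 3" into a real formula is mostly translation. A toy sketch of that mapping; the cell ranges and the 20% UK VAT rate are assumptions, and a real implementation would resolve them from the sheet:

```python
def column_letter(n):
    """1-based column index to spreadsheet letter: 1 -> A, 3 -> C, 27 -> AA."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

def sum_column_formula(col_index, first_row=2, last_row=100):
    """Formula for "sum up column 3", assuming data in rows 2-100."""
    col = column_letter(col_index)
    return f"=SUM({col}{first_row}:{col}{last_row})"

def ex_vat(total_cell, vat_rate=0.20):
    """Strip VAT from a gross total, for "extract the VAT from the total"."""
    return f"={total_cell}/{1 + vat_rate}"

print(sum_column_formula(3))  # =SUM(C2:C100)
print(ex_vat("C101"))         # =C101/1.2
```

The voice layer's real job is resolving "column 3" and "the total" to concrete references like these; the formula text itself is the easy part.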
OK, on iOS I get it: Apple has been holding you at bay so that it can grow Apple Music, and with the new Siri Shortcuts, media playback was added, though in a restrictive way. You also lost out on getting Shazam. What I want is an in-app mic button, so I can try to find excellent music.
The soon-to-arrive Siri Shortcuts integration will be tremendous, enabling recent playlists or top tracks to be played. However, I want more complex queries that work around Apple's restrictions, like being able to say "Siri, play me songs with a beat like [INSERT SONG THAT DOES MAKE ME SOUND OLD]"; otherwise, for that voice-first experience, Apple will win me over to the sub-par Apple Music app (you see how voice can change a decision).
I am sure some or all of the services mentioned will support better voice integration as the mobile platforms they sit on top of make voice a first-class citizen. But if the top apps keep waiting as they have until now, reacting rather than seizing the opportunity, they will let new services pierce a hole in their app armour.
Voice is no different from any other interface paradigm change: the services that win will be those that take a "jobs to be done" approach to helping their users get things done rather than driving app addiction (nearly got ranting then).