Do you remember those old cartoons where a mad scientist creates a weapon that can make anything gigantic? He would shoot an innocent, cute rabbit and turn it into a huge, cruel beast the size of Godzilla. I always wondered why an innocent, cute rabbit would become meaner as it grew bigger. Shouldn’t it stay a good boy?
Give a man power, and you will see his true character. This idea has been discussed by philosophers from Plato to the present day, and somehow it seems to apply to User Experience design as well. The gigantizer laser, in its most recent form, is machine learning.
An insight that Machinalis gained after almost a decade of running Machine Learning projects is that we have the ability to seamlessly turn innocent microinteractions into powerful emotional anchors. Microinteractions, a term coined by Dan Saffer, are those moments that revolve around a single use case with only one main task: changing a setting, for example, or synchronizing your devices. The challenge of designing microinteractions is to balance the big picture of product principles and identity with the particular instance we are trying to address. However, since microinteractions are a “utility”, there are increasing levels of standardization in UIs as well as in interaction patterns. Machine Learning has been, despite the hype around the buzzword, a breath of fresh air for designers looking to revamp micro experiences and make them more powerful.
AI is changing the way designers approach experience creation, adding new complexities and enabling situations close to Sci-Fi. The tools and methods of design are strong at finding usable and efficient solutions to user needs, latent or explicit. Imprinting style and personality onto a product, however, is still closer to an art. Machine Learning is now putting prediction and recommendation in the designer’s toolbox, giving them the ability to construct higher-level interactions. Imagine a conversational interface that, knowing you have been thinking about your grandmother lately, echoes an old-fashioned word she always used (a peaceful evocation); or one that shows you the hassle a change in your purchase order causes some clerk at the back end of an online store (empathy); or one that engages in creative games just as you do when you make up stories for your children (creativity loops). The possibilities for taking an interaction’s perceived value from basic utility to a real emotional response are endless.
Back to the giant rabbit: power is a deceitful friend. Take conversational input, for instance, where the current interaction is just a text box, possibly with voice-enabled input. Machine learning can bring a lot of power here through autocompletion. I love having my sentences completed. I hint at it most of the time, since it gives me confirmation that the other person is following me and understanding my point. Sometimes I hint at foul language, so I keep my vocabulary clean while letting others feel dirty. A few years ago, I had a boss with a speech difficulty, a slight stutter that grew more pronounced in heated meetings. I naturally tended to complete other people’s sentences whenever the flow of conversation got choppy, and I learned the hard way that the habit can be irritating to some. So we are talking about the same friendly rabbit of autocompletion, growing in one case into a witty companion and in another into an obnoxious jerk. But… is it not the same experience by design?
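To make that rabbit concrete, here is a minimal sketch of the idea behind next-word autocompletion, using toy bigram counts as a stand-in for a real language model. The corpus and function names are illustrative assumptions, not a production approach:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real system would learn from user data.
corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick brown fox naps under the lazy sun"
).split()

# Count which word tends to follow which: a crude "machine intuition".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word, k=2):
    """Return the k most frequent completions seen after prev_word."""
    return [word for word, _ in bigrams[prev_word].most_common(k)]

print(suggest("the"))  # the words most often seen after "the"
```

Whether the same suggestions feel like a witty companion or an obnoxious jerk depends entirely on how well the underlying counts reflect the user's actual context.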
Machine Learning is, to a designer’s mind, more like machine intuition. This is not about rules upon rules; it is about software deciding how to behave based on data from the context. A number of algorithms, plus the glue code around them, are becoming boilerplate for classical use cases such as recommendation. Some implementations, however, seem to deliver a much better experience on similar technology. The difference lies in how well they are able to quantify all the relevant aspects of their business and value proposition. Namely, in their data.
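As an illustration of that recommendation boilerplate, here is a minimal nearest-neighbor sketch in Python. The ratings matrix, user and item names, and the cosine-similarity choice are all hypothetical stand-ins; the point is that the algorithm is generic, while everything interesting lives in the data it is fed:

```python
from math import sqrt

# Hypothetical user-item ratings; the "quantified business" lives here.
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 4, "book_b": 2, "book_c": 5},
    "carol": {"book_a": 1, "book_b": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[k] * v[k] for k in common)
    return dot / (sqrt(sum(x * x for x in u.values()))
                  * sqrt(sum(x * x for x in v.values())))

def recommend(user, k=1):
    """Suggest items the most similar user rated that `user` has not."""
    _, nearest = max((cosine(ratings[user], ratings[o]), o)
                     for o in ratings if o != user)
    seen = set(ratings[user])
    candidates = {i: r for i, r in ratings[nearest].items() if i not in seen}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(recommend("carol"))
```

Swap in richer, cleaner data and the same boilerplate delivers a noticeably better experience; feed it noise and it recommends noise.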
Today, designing or improving experiences based on Machine Learning depends as much on data sanity as on technological feasibility and sound interaction principles. A regular Design Thinking process eventually takes us to the construction of a prototype that anticipates technical feasibility and is functional enough to validate the experience with users. The envisioned experience, though, does not gain fidelity merely by improving the interaction or even by tweaking the Machine Learning algorithms. From start to end, it depends on the quality of the data. The potential for an interaction to go wrong, based on wrong data, is equally powerful: you might be reminded of an argument with your grandmother instead of a nice moment, or of the idiot who fired you from your first job instead of the troubled clerk. The whole valence could go from neutral to very positive, or very negative, in a second.
So, is it possible to fully assess the quality of your data before you start designing the UX or solving the actual Machine Learning problem? If you guessed the answer is NO, of course you are right. But you certainly can treat data quality as one more (critical) vector of development, and devise an iterative strategy to sanitize the data feeding your Machine Learning algorithms. A trained eye can detect data problems, inconsistencies, and incompleteness very early in the process. Thorough data management work is closely intertwined with a superior machine learning outcome, an engaging experience, and a successful business (more to come on this).
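What that early, iterative sanity pass might look like can be sketched in a few lines. The field names, value ranges, and rule set below are illustrative assumptions for a toy dataset, not a general-purpose validator:

```python
# Hypothetical records with the kinds of problems a trained eye spots
# early: missing fields and out-of-range values.
records = [
    {"user_id": 1, "age": 34, "rating": 4},
    {"user_id": 2, "age": None, "rating": 5},   # missing age
    {"user_id": 3, "age": 29, "rating": 11},    # rating out of range
]

# Illustrative sanity rules; a real project derives these from the
# business and value proposition being quantified.
RULES = {
    "age":    lambda v: v is not None and 0 < v < 120,
    "rating": lambda v: v is not None and 1 <= v <= 5,
}

def audit(rows):
    """Return (row_index, field) pairs that fail the sanity rules."""
    problems = []
    for i, row in enumerate(rows):
        for field, ok in RULES.items():
            if not ok(row.get(field)):
                problems.append((i, field))
    return problems

print(audit(records))  # flags the missing age and the bad rating
```

Running a pass like this on every iteration, before touching the model or the UI, is one way to keep data quality visible as that critical extra vector.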
Your UX differentiator, in the end, is going to lie in the small details of your data. That is where the Devil of this mess actually lives.