What if I told you a tool existed that could make you infinitely more resourceful, MacGyver- or Horatio Alger-level resourceful, so that your earnings potential ratchets way up; that could recommend what you should do today, tomorrow, or three years from now to achieve your most important goals, even amid the fog of real-life uncertainty; that could even maximize your impulse control and introduce you to "better" friends, high-value social capital as it were. Would you use it?
For better or for worse, no one has built it.
Sure, the technical challenges to building this breed of automaton are big: capturing and harmonizing unstructured data, personalizing algorithms, and mitigating the systemic bias baked into data. But these are also the challenges of alleviating systemic poverty in America, and the economics of commercializing solutions for the least economically able is among the hardest nuts to crack, leaving these problems commercially unviable in the short term, absent some kind of market intervention.
At the MIT Technology Review's EmTech Conference in September, a number of the "Innovators Under 35" addressed how AI-equipped technology can help, and has helped, relieve pressing societal challenges: poverty, the global food crisis, wildlife poaching, "dumb" cities, and even the protection of critical American infrastructure.
Fei Fang, a professor at Carnegie Mellon, was one such innovator. She outlined work she and her colleagues did on that last challenge: protecting the 60,000+ passengers who ride the Staten Island Ferry between Manhattan and Staten Island every day. Their system minimizes the risk of terrorist attacks by intelligently scheduling patrol boat routes, using a combination of computational game theory and machine learning.
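The game-theoretic core of this kind of system can be illustrated with a toy sketch. This is my own simplified construction, not Fang et al.'s actual model: a defender commits to a randomized patrol schedule over two hypothetical targets, an attacker observes the schedule and strikes wherever expected payoff is highest, and the defender searches for the coverage mix that minimizes that best-response payoff. This is the basic shape of a Stackelberg security game.

```python
# Toy Stackelberg security game: a defender randomizes one patrol boat
# over two targets; the attacker best-responds to the published mix.
# Target values and the coverage effect below are invented for illustration.

TARGET_VALUES = {"terminal_A": 10.0, "terminal_B": 6.0}  # hypothetical attack payoffs
COVERAGE_EFFECT = 0.9  # fraction of payoff removed when a patrol covers the target


def attacker_best_response(p_cover_a):
    """Attacker picks the target with the highest expected payoff
    given the defender's coverage probabilities."""
    coverage = {"terminal_A": p_cover_a, "terminal_B": 1.0 - p_cover_a}
    payoffs = {t: v * (1.0 - COVERAGE_EFFECT * coverage[t])
               for t, v in TARGET_VALUES.items()}
    target = max(payoffs, key=payoffs.get)
    return target, payoffs[target]


def best_patrol_mix(steps=10001):
    """Grid-search the defender's coverage probability that minimizes
    the attacker's best-response payoff."""
    best_p, best_val = 0.0, float("inf")
    for i in range(steps):
        p = i / (steps - 1)
        _, val = attacker_best_response(p)
        if val < best_val:
            best_p, best_val = p, val
    return best_p, best_val


p, val = best_patrol_mix()
# Optimal coverage roughly equalizes the two targets' attack payoffs,
# so the attacker gains nothing by switching targets.
```

Real deployments solve much larger versions of this optimization (many targets, routes, and time slots) with linear programming rather than grid search, and use machine learning to estimate the payoff parameters; the key idea, committing to an unpredictable but optimized schedule, is the same.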
Fang et al.'s brainchild is actually in use by the US Coast Guard today, but it wouldn't exist without the academic uterus that nourished it with grant money early in its development. I, along with many others, would argue that this reliance on academic funding, and the scant commercial incentive behind it, remains the greatest impediment to applying more machine learning, and tech generally, to social problems.
However, Fang and others at EmTech 2018 pointed to the recent emergence of incubators that step up, acting as market interventions in a free-market economy that prizes profit above social impact. MIT itself built one such incubator, The Engine, focused on providing a stable economic environment for founders working on so-called "tough tech": technical problems that may not be commercially viable in the near-to-mid term, but that need to be solved to advance society. The Engine's CEO and managing partner, Katie Rae, spoke at the conference about that mission. Similar organizations are spinning up too: Stanford's Poverty & Technology Lab, IBM's Science for Social Good, Tech for Good, and, if you're a fellow data scientist, DataKind.
The advent of these pro bono organizations raises a tricky question: how do we decide which social problems get worked on at all, and in what order? As more and more data scientists, and other technologists, devote their skills to our toughest humanitarian problems, no doubt an ethical scaffolding will be erected to inform those decisions about what ought to be worked on when, possibly leading to a future where an algorithm could, in fact, solve poverty.