Once upon a time, the very concept of Open Source was absurd, and only its proponents ever thought it could be other than marginal. Important software could only be built and supported by sophisticated businesses, an expensive industrial component whose blueprints, the source code, were extremely valuable.
But Open Source won. It became clear, to no historian’s surprise, that once knowledge is sufficiently distributed and tools become cheap enough, distributed development by heterogeneous (and heterogeneously motivated) people not only creates high-quality software at zero marginal cost; because it takes only a single motivated individual to leverage existing developments and move them forward, regardless of novelty or risk, it’s also inherently much more creative.
Open Source developers can take risks others can’t, and they begin from further ahead, on the shoulders of other, taller developers. What’s more adventurous than a single individual toying with an idea out of love and curiosity? When has true innovation begun in any other way?
The form of this victory, though, wasn’t the one expected by early adopters. Desktop computers as they were known are definitely on the wane, and it’s still not “the Year of Linux on the Desktop.” Relatively few people knowingly use Open Source software as their main computing environment, and the smartphone, history’s most popular personal computing platform, is, software licenses notwithstanding, as regulated and proprietary an environment as you could imagine.
The social and political promise of Open Source is still unrealized. Things have software inside them now, programs monitoring and controlling them to a larger degree than most people imagine, and this software is closed in every sense of the word. It’s not just for surveillance: the software in car engines lies to pass government regulatory tests, the software controlling electric batteries makes them perform worse than they could so you have the “option” of paying the manufacturer more to flip a software switch and de-hobble them, and so on and so forth. Things work worse than they claim to, do things they aren’t supposed to, and are not really under your control even after you’ve bought them. There’s little you can do about that, and that little is very difficult: not just because the source code is hidden, but because in many cases, through a Kafkaesque global system of “security” and copyright laws, it’s literally a crime to try to understand, never mind fix, what this thing you bought is doing.
Even if you don’t own a smartphone or a computer, finance, government, culture, our entire society has been profoundly shaped by an Internet, and by a computing ecosystem in general, simply unthinkable without Open Source. Like many of the truly influential technological shifts, its invisibility to most people doesn’t diminish, but rather highlights, its ubiquity and power.
“More Open Source” is an obvious prediction: true, but conservative. Of course people, governments, and companies (even those whose business models include selling software) will continue to write, distribute, and use Open Source. Each of them for their own goals, some of them attempting to cheat or break the system, but, most likely, always coming back to the economic attractor of a way of creating and using technology that, for many uses and in many contexts, simply works too well to abandon.
What comes next is what’s happening now. Still not fully exploited, the Internet is no longer the cutting edge of how computing is impacting our societies. Call this latest iteration Artificial Intelligence, cognitive computing, or whatever you like. Silicon Valley throws money at it, popular newspapers write about the danger it poses to jobs, China aims at having the most advanced AI technology in the world as a strategic goal of the highest priority, and even Vladimir Putin, not a man inclined to idealistic whimsy, said that whichever country leads in Artificial Intelligence “will rule the world.”
Unlike Open Source during its critical years, Artificial Intelligence certainly isn’t a low-profile phenomenon. But a lot of the coverage seems to make the same assumptions the software industry used to make: that truly relevant AI can only be built by superpowers, giant companies, or cutting-edge labs.
To some degree this is true: some AI problems are still difficult enough that attacking and solving them requires billions of dollars, and developing the tools needed to build and train AIs often requires extremely specialized knowledge in mathematics and computer science.
However, “some” doesn’t mean “all,” and once the tools used to build AIs are Open Source, which many if not most of them are, using them becomes progressively easier. There’s something happening that has happened before: almost every month it’s cheaper, and requires less specialized knowledge, to make a program that learns from humans how to do something no machine ever could, or that finds ways to do it much better than we can. Ring a bell?
The more intuitive parallel isn’t software, but rather another success story of open, collaborative development that went from a ridiculous proposition to upending a centuries-old industry: Wikipedia. Like Open Source software, and with a higher public profile, Wikipedia went from an esoteric idea with no chance of competing in quality with carefully curated professional encyclopedias, to what’s very often the first (and, too often for too many people, the only) source of factual information about a topic.
What we’re beginning to build is a Wikipedia of Artificial Intelligences, or, better yet, an Internet of them: smart programs highly skilled in specific areas that anybody can download, use, modify, and share. The tools have only just begun to become available, and the intelligences themselves are still mostly built by programmers for programmers, but as the know-how required to build a given level of intelligence shrinks and becomes better distributed, this is beginning to change.
Instead of scores of doctors contributing to a Wikipedia page or a personal site about dealing with a certain medical emergency at home, we’ll have them teaching what they know to a program that will be freely available to anybody, giving perhaps life-saving advice in real time. A program any doctor in the world will be able to contribute to, modify, and enhance, keeping it up to date with scientific advances, adapting it to different countries and contexts.
It won’t replace doctors, lawyers, interior decorators, editors, or other human experts — certainly not the ones who leverage those programs to make themselves even better — but it’ll potentially give each human in the world access to advice and intellectual resources in every profession, art, and discipline known to humankind, from giving you honest feedback about your amateur opera singing, to reading and explaining the meaning of whatever morass of legal terms you’re about to click “I Accept” to. Instantaneously, freely, continuously improving, and not limited to what a company would find profitable or a government convenient for you to know.
If the Internet, whenever and wherever we choose, is or can be something we build together, a literal commons of infinitely reusable knowledge, then we’ll be building, when and where we choose, a commons of infinitely reusable skills at our command.
It will also resemble Wikipedia more than Open Source in the ease with which people will be able to add to it. Developing powerful software has never been easier, but it still demands skills most people lack; contributing to Wikipedia, or posting on a site or social network about something you know, only requires technical knowledge many societies already take for granted: open a web page and start typing about the history of Art Deco, your ideas for a revolutionary fusion of empanadas and Chinese cuisine, or whatever else you want to teach the world about.
Teaching computers about many things will be even easier than that. We’re close to the point where computers will be able to learn your recipe just from a video of you cooking and talking about it, and if besides sending that video to a social network you also give an Open Cook access to it, then it’ll learn from your recipe, mix it with other ideas, and be able to give improved advice to anybody else in the world. You’ll also be able to engage with these intelligences directly to teach them deliberately: just as artificial intelligences can learn to beat games just by playing them, they’ll be able to “pick up” skills from humans by doing things and asking for feedback. And if you don’t like how one does something, you can always teach it to do it a different way, and anybody will be able to use your version if they think it’s better, and in turn modify it any way they want.
Neither Open Source nor Wikipedia, under different names, looks, and motivations, is as new as it seems. They’ve been known for decades, and only seemed pointless or impossible because our shared imagination often runs a bit behind our shared power. We’ve begun to realize we can make computers do an enormous number of things, much sooner than we thought we would, and while we try to predict and shape the implications of this, we’re still approaching it as if revolutionary technology can only work if built and controlled by giant countries and companies.
They are a part of it, but not the only one, and over the long term perhaps not even the most important part. Google matters because it gives us access to the knowledge we — journalists, scientists, amateurs, scholars, people armed with nothing more and nothing less than a phone and curiosity — built and shared. We go to Facebook to see what we are doing.
Some Artificial Intelligences can only be built by sophisticated, specialized organizations; some companies will become wealthy (or even more so) doing it. And some others can and will be built by all of us, together, and over the long term their impact will be just as large, if not larger. The world changed once everybody was able, at least in theory, to read. It changed again when everybody was able, at least in theory, to write something that everybody in the world can read.
How much will it change again once the things around us learn how to do things on their own, and we teach them together?