Imagining what is not.
Fifteen years ago a friend of mine showed up to school with this thing called an iPod. Being my mother's son, I scoffed at how expensive it was. I had a portable music player, too. It was made by a company called iRiver. It cost $50. My friend's $250 iPod just seemed silly to me in comparison. Boy, was I wrong.
The iPod did essentially the same thing as my iRiver and other portable music players of the time. It stored high-quality digital audio files downloaded from a computer. It played them back. It featured a user interface that allowed one to select a specific song, fast-forward, rewind, or group songs together into playlists.
So how did Steve Jobs manage to take an already existing technology that not many people used, and turn it into a technology that everyone could use and wanted to use, and sell it at a significantly higher price? The answer to that question is the inflection that turned Apple from a nearly-bankrupt afterthought into one of the world's most profitable and admired companies.
Early MP3 players worked well enough. They were reliable, reasonably easy to use, affordable by the time of the iPod's release, and increasingly profitable in the years leading up to the iPod's dramatic adoption. At the time, it was difficult to imagine that the experience of listening to music on a portable device could actually be improved. To 12-year-old me, at least, my MP3 player already seemed easy to use.
But Jobs saw what wasn't there. He saw that it still took a basic working knowledge of computer software to upload and arrange songs on the iRiver and other early portable music players. The user interfaces were functional, but not nearly as intuitive as the peripherals people were used to engaging with, like a keyboard and mouse. The learning curve was just steep enough to dissuade the average music listener, and the relative clumsiness of the early devices was a turn-off for everyone who wasn't foaming over the latest and greatest technology.
Apple filed down the rough edges of the MP3 player, creating a seamless, easy user experience in which one could find songs, pick songs and load them onto the device in one friendly space. The clumsy multi-button interface was boiled down into an intuitive, mouse-like scroll wheel. Refinements to the hardware helped assuage some early gripes about sound quality.
Only once Apple broke this threshold did portable digital music players become a standard fixture in American life. Since 2004, Apple products have held at least a 70 percent market share in U.S. digital audio player sales.
The same incremental progression unlocked tremendous potential in other formerly unattractive technologies. Air travel had grown slowly in popularity since commercial airlines began operating early in the 20th century, but it wasn't until Boeing took the 707 jet airliner to market in 1958 that commercial aviation gave railroads a run for their money.
The 707 was the first of a long string of jet airliners, but the airplane was by no means a reinvented wheel. It was an amalgam of technology that already existed at the time, including turbojet engines, cabin pressurization, and resilient aluminum construction techniques. But by deftly incorporating a half-century of aviation advancements, Boeing's 707 became the first aircraft to offer an air travel experience comfortable and economical enough to draw mainstream travelers to the skies.
Today, Boeing remains one of the world's leading manufacturers of commercial airliners. The basic experience of flying on the newest Boeing jets, more than 60 years after the company's first 707 took to the skies, is mostly unchanged from those maiden voyages.
Something similar is happening today with a technology most of us take for granted. Web search engines have been part of the daily life of Americans for nearly three decades now. It's probably difficult to remember how you used the internet before sites like Yahoo! and Google became ubiquitous.
When they first became available, powerful web search engines represented a vast improvement in user experience for just about anyone using the web. Before search engines were available, you either needed to know the site address of a webpage you were looking for, or you needed an available hyperlink to click on. At the time, internet service providers like America Online and Earthlink paid employees to manually curate web content for their users to access when they signed on.
But search technology that represented enormous progress when it was introduced is increasingly inadequate for the massive volume of content and diversity of information sources now available on the internet. Search engines still function mostly as they did 20 years ago: they receive a plain-text query and, through continually adjusted algorithms, return a ranked list of relevant websites. Just as they did 20 years ago, users scroll through search results one page at a time.
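To make that call-and-response model concrete, here is a toy sketch in Python, written purely for illustration: an inverted index over a handful of made-up documents, with results ranked by how many query terms each document matches. Real engines layer link analysis, personalization and continually tuned signals on top, but the basic request-and-ranked-list shape is the same.

```python
# Toy sketch of classic call-and-response search: build an inverted
# index over a few documents, then rank matches for a text query.
# Documents and terms here are invented for illustration only.
from collections import Counter, defaultdict

docs = {
    "ipod": "apple ipod portable music player",
    "707": "boeing 707 jet airliner commercial aviation",
    "search": "web search engine query ranked results",
}

index = defaultdict(set)          # term -> set of doc ids containing it
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    """Return doc ids ranked by how many query terms they contain."""
    hits = Counter()
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            hits[doc_id] += 1
    return [doc_id for doc_id, _ in hits.most_common()]

print(search("portable music"))   # the ipod document ranks first
```

The user supplies terms, the engine returns a ranked list, and anything the user didn't think to type simply never surfaces.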
Imagining what is not there, but could be there, is crucial to understanding how we are limited by present search technology and how advancements could give even average users the power to unlock a more meaningful and useful internet experience.
New tools have already been developed and marketed to expand the accessible scope of the internet. Using professional-grade search tools, it's now easy to trace a website outward through the internet and determine where a specific piece of information has been shared, propagated and iterated across the web. Moz and Ahrefs are among these inbound-link search engines, and they make a formerly awkward technical process easy for just about anyone with a subscription.
So far these tools have been targeted and optimized for marketing professionals, specifically those working in inbound marketing. These professionals spend their days analyzing the success and accessibility of promotional websites, so this type of search tool is both highly relevant and valuable in their work.
Similarly, technologies exist -- mainly used by academics and college professors -- that can find textual similarities among documents stored across a database. These technologies are used to identify plagiarism.
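One common way such similarity detection can work, sketched below in Python with invented example sentences, is to compare documents by the overlap of their word n-grams, or "shingles." Production plagiarism checkers use far more sophisticated fingerprinting, but the core idea is this simple.

```python
# Hedged sketch of plagiarism-style similarity detection: compare
# documents by the overlap of their word n-grams ("shingles").
# The example sentences are invented for illustration.

def shingles(text, n=3):
    """Set of overlapping n-word sequences from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Similarity in [0, 1]: shared shingles / total distinct shingles."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the ipod stored high quality digital audio files"
copied = "the ipod stored high quality digital music files"
unrelated = "commercial aviation gave railroads a run for their money"

print(jaccard(original, copied))     # high overlap despite one changed word
print(jaccard(original, unrelated))  # no overlap at all
```

A score near 1 flags near-verbatim copying; a score near 0 means the documents share essentially no phrasing.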
While both of these tools remain out of reach for the average internet user, that doesn't mean they have to stay that way. Remember, high-quality digital audio was once attractive and usable only to dedicated audiophiles and music professionals.
By combining these more advanced search technologies and channeling them into a user interface that lets a layperson harness their collective power, it could soon be possible to give everyone access to information they currently struggle to find, and to more accurate and trustworthy news and online content.
The present limitations of our search capabilities make it difficult to take advantage of the network effect of the internet. The internet, as a network, is vast and seemingly limitless. If you're looking for information about a subject, chances are it exists in cyberspace. Search engines can help comb this vast trove of files, pages and documents and organize them in a way that allows us to at least sort through some of it.
But what if a user isn't quite sure what they're looking for? How can you find something in a call-and-response search engine if you aren't even sure what your search terms should be?
An example from my own present challenges is the arduous task of finding investors and business partners who could help grow my fledgling company. I know that the people I need to connect with to make our ambitious concepts a reality are out there, somewhere, perhaps within the second or third degree of my online social network. But given the limited search tools I have, it's difficult to find them.
The search engines we have now show us only the first layer of any available network of information, based on results delivered through a complex algorithm that maps not only the web, but also you and your browsing and searching habits. All of the other relevant metadata -- the information about a web page that carries clues about how it connects to the greater network of information on a given subject -- sits beneath the surface, largely unused in any way that's visible to the user.
The contacts I'm looking for might be out there in the vast web of information I have access to online, but in order to find them I have to manually search back through each potential connection in that information web. I have to scroll through potential contacts, then read each one individually, just to glean information that is publicly available on their profile. This isn't impossible using the current search tools I have available, but my goodness is it time consuming.
The same problem exists with news research. It's easy to find content about a topic, whether it be cancer or politics, but search engines deliver only a single page of results at a time. The power of the network, and all of its hidden clues about where news information originates, where it is shared, how it is updated and how it is verified, remains untapped beyond that single page of results your conventional search engine returns for any given query.
The idea for my software program, Grapple, started when I was still working as a journalist at a metropolitan newspaper in Cleveland, Ohio. As a reporter, I knew how to search questionable articles and webpages to see where they came from, and to make a judgment about whether or not that content was legitimate. But it became apparent as fake news overtook the 2016 election cycle that most people couldn't do this, or at least didn't have the time to invest in doing so.
If computers can read articles, and if professional-grade search tools already connect related articles using metadata and inbound links, why can't we combine these pre-existing technologies in a way that makes this type of sophisticated research easy and available to anyone with a curious mind and a computer trackpad to click on?
A search tool like the one we have designed would deliver results based not just on the keywords of a story, but on its full text, its metadata, and the connections that exist between a single source of information and all of the other potential sources that information might come from.
By crawling the web laterally and pulling in relevant content that a basic call-and-response search platform like Google might omit, such a tool could give users easy access to insights that aren't presently available. Is a viral news story originating from a single, questionable source? Is there corroborating information coming from independent sources? Is there more up-to-date information on the subject that your social media networks or news websites might be omitting, or might not yet include? By creating a singular, powerful search interface, all of this information can be made available with a single click.
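As a rough illustration of what that lateral checking could look like, here is a toy Python sketch that walks a made-up citation graph backward from a viral story to the sources it ultimately rests on. The site names and the graph itself are invented; a real implementation would build this graph from inbound-link and metadata crawls like those described above.

```python
# Hypothetical sketch of lateral source-tracing: follow a story's
# citations back to the pages that cite nothing further, to see
# whether it rests on one origin or several independent ones.
# All site names and links below are invented for illustration.

# page -> list of pages it cites (outbound links)
cites = {
    "viral-blog.example": ["aggregator.example"],
    "aggregator.example": ["lone-source.example"],
    "newswire.example": ["lone-source.example"],
    "lone-source.example": [],
    "indep-report.example": [],
}

def root_sources(page, graph):
    """Follow citations to the pages that cite nothing further."""
    seen, stack, roots = set(), [page], set()
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        links = graph.get(current, [])
        if not links:
            roots.add(current)
        else:
            stack.extend(links)
    return roots

# Every citation path from the viral story leads back to one origin:
print(root_sources("viral-blog.example", cites))
```

If two outlets that appear independent both trace back to a single root source, the "corroboration" is an illusion; the interface could surface exactly that.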
By bringing to light the connectedness of online content, a search tool like ours could also cut down the time it takes for newsrooms, business analysts, laboratories, universities and law offices to perform sophisticated research, a task that is still very labor intensive (I would know; I've been paid to do this type of research throughout my recent career).
But such a tool isn't immediately obvious. It is not just a plug-in, or a patch to fix some glitch or inadequacy in the economy, like Uber did for vehicular transport or Venmo did for money transfers.
Rethinking how we find content on the internet requires us to take a gamble on imagination, to see what's not here yet, and how people can better use what we already have in front of us. All of the tools we need are already at hand; they're just lonely and starved of their collective potential.