Nicholas Napp and Sophia Napp-Vega
Published: 18 Sep 2020
CTN Issue: September 2020
A note from the editor:
The Internet has changed everyday life dramatically. During this global pandemic, it has become even more central to people's lives, with many more people working remotely and kids attending "virtual online schools".
In this article, Nick and Sophia argue that the digital divide, the uneven distribution of information between those who have access to the Internet and those who do not, is part of a much larger problem: the Technology Divide. As the rate of technology development accelerates, the Technology Divide will widen and weigh most heavily on those with socio-economic disadvantages. The article calls on the technology industry to be aware of the issue and to proactively encourage and support engagement with technology.
This article is the first in what we hope will be a series of articles in cooperation with our colleagues in the future technology society of IEEE. Special thanks to Doug Zuckerman for his support. As always, your comments are most welcome.
Yingzhen Qu, CTN Editor
The Digital Divide: Humanity's Greatest Challenge?
The Law of Accelerating Returns
In 2001, Ray Kurzweil published an essay called "The Law of Accelerating Returns". Kurzweil is a pioneer in the field of pattern recognition. He is also a technologist and futurist, known for thoughtful essays. In this particular essay, he suggests that the rate of technological change is not linear, but exponential.
His conclusion is startling.
"We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate)."
Twenty thousand years! Let that sink in for a moment. Twenty thousand years ago, humans were living in the late Stone Age. The wheel was a crazy futuristic dream. Thousands of years would pass before its invention. The peak of human technology was a sharpened rock.
Photograph by José-Manuel Benito Álvarez
The difference between 20,000 years ago and today is unimaginable. And yet that represents the magnitude of change Kurzweil predicts in the next 100 years.
As you read this, and the idea sinks in, you may think it all sounds absurd. But there is a reason why you may not be able to judge the situation clearly. As writer Tim Urban explains in a blog post about Artificial Intelligence, any curve, no matter how steep, looks flat when you zoom in close enough.
We all live in the present. Change is all around us. In terms of history, we are all zoomed in and too close to the curve.
In the same blog post, Tim proposes a thought experiment. He suggests we build a time machine. We use the machine to kidnap someone from just before the industrial revolution. We bring them to the modern day to see what they think of our world. He points out that our whole society would be utterly confusing to them. A person from 1750 would recognize almost nothing, from cities to cars to electronics. Many things would seem magical and beyond all explanation.
Tim then proposes that the person we kidnapped decides to get their revenge. They steal our time machine and play the same trick on someone from 1500, bringing them to 1750. The person from 1500 recognizes almost everything. Society as a whole is not that different.
This illustrates Kurzweil's point. Far more technological change happened between 1750 and 2000 than 1500 and 1750.
Let's look at more recent history. The past forty years have seen massive change, far more than the previous forty. In 1980, computers were still expensive and large. The World Wide Web didn't exist. ARPANET, the precursor to the Internet, had only just begun experimenting with TCP/IP. Metal-oxide-semiconductor integrated circuits, the foundation of most computing devices today, were still uncommon. Most consumer products had little or no processing power. Not even the military was using GPS. Genetic editing of any kind was new. Genetic testing didn't exist. Black holes were still theoretical. It was a very different world.
The point is that technology development compounds over time. One development begets many more. It’s like compound interest gone berserk. If you have ten breakthroughs today, each one can lead to ten new breakthroughs tomorrow. Those hundred lead to a thousand. And so it goes on. In every field of human study, technological development is accelerating. And that presents some enormous societal challenges.
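The compounding described above can be sketched in a few lines of Python. The branching factor of ten is purely an illustrative assumption from the text, not a measured quantity:

```python
# Toy model of compounding breakthroughs: each breakthrough in one
# generation enables `branching` new breakthroughs in the next.
# This is an illustration of exponential growth, not a forecast.

def total_breakthroughs(initial: int, branching: int, generations: int) -> int:
    """Count breakthroughs in the final generation after compounding."""
    count = initial
    for _ in range(generations):
        count *= branching
    return count

# Ten breakthroughs today, each enabling ten more:
print(total_breakthroughs(10, 10, 1))  # 100
print(total_breakthroughs(10, 10, 2))  # 1000
```

The same multiplication, iterated, is why "compound interest gone berserk" is an apt description: the growth at each step is proportional to everything that came before.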
Falling Behind the Curve
We are all familiar with the stereotype of a child helping a grandparent use the internet. In the 1980s, it took a child to program a VCR. Presumably, in 20,000 BCE, it was Ugg's granddaughter who demonstrated how to use a sharp-edged rock.
Younger generations typically have a better grasp of new technologies than older generations. It takes considerable effort on the part of older generations to stay up to date.
But age is not the only reason why some people are technology laggards. Economic and societal challenges play a role too.
The economic challenges are obvious. New technology usually costs money. Furthermore, access to new technology often relies on foundational infrastructure. Someone who wants to use Wikipedia has to have access to a computer, smartphone or similar device. That device needs an internet connection. The internet connection requires infrastructure. And so the list of requirements goes on. Similarly, the latest smartphone apps need a recent model smartphone. The latest medical advancements are unusable without supporting infrastructure.
Societal challenges also contribute to the problem of falling behind. Some societies limit access to, or use of, technology to certain groups. Or they reject technology entirely. Examples include restrictions based on gender, ethnicity, religion, and sexual orientation.
But societal challenges aren't always imposed by third parties. Many of us know individuals who "can't use" or "won't use" technology. Every generation has groups that pride themselves on willful ignorance and rejecting change. In the 1800s, it was the Luddites destroying machinery. Today, we have 5G conspiracy theorists and Flat Earthers. In the latter case, it is ironic that the tools they use to spread misinformation can only work if they are wrong!
Regardless of why someone falls behind the curve, the effect is the same. It is a simple truth that the further behind an individual is, the more difficult it is to catch up.
Picture yourself in a broken-down car by the side of a road. Another car passes and accelerates away without stopping. Not only is it getting farther away, it is pulling away faster with every passing second. Every second, it covers more ground than the second before. It quickly disappears from sight.
It is the same with technological progress. If the rate of technology development is accelerating, it is easier than ever to fall behind. Falling behind also happens faster, and the separation grows faster too. The gap between technology adopters and laggards will grow exponentially.
It is hard to predict how long it will take for the gap to become irrecoverable. However, it is relatively easy to predict many of the impacts such a gap will have.
History as Precedent
The Digital Divide is a relatively new concept. It has received increasing attention with the rollout of computers in schools and the availability of broadband internet access. However, there is a clear historical example of a government directly funding the education and technological advancement of a significant percentage of the population. That example is the USA’s GI Bill.
The GI Bill
The Servicemen's Readjustment Act, now known as the GI Bill, was passed by the U.S. Congress in 1944. It was by far the largest direct intervention to improve education and skills in the country's history. It aimed to help World War II veterans by giving them money for educational and vocational training, as well as access to low-interest mortgages. The GI Bill is broadly credited with the increase in college attendance, homeownership, wealth, and birth rate in the United States. In the first seven years of the bill's existence, around eight million veterans took advantage of it.
While the U.S. government wrote this bill to help veterans, many were left behind. Implementation of the GI Bill was decided at the state level, which meant that most Black veterans did not receive the same benefits as their White counterparts. Many banks refused to give loans to Black veterans. This left many Black veterans in decaying inner cities, while their White counterparts moved to the suburbs commonly associated with 1950s America. Many believe the exclusion of Black World War II veterans contributed to the current socio-economic gap in the USA. The effects have lasted across multiple generations, as White World War II veterans passed on their acquired wealth to their children.
The GI Bill was designed to stimulate learning and training. It was incredibly successful for those who were allowed to take advantage of it. However, it is also an example of the impact of wide scale exclusion of a group of people. It showcases what happens to those who are left behind by education and technology, and the corresponding socio-economic impact.
The Digital Divide Examined in the Modern Era
Numerous studies have been conducted in the USA by the Pew Research Center and other research teams. Their conclusions are clear:
- “Teenagers who have access to home computers are 6–8 percentage points more likely to graduate from high school than teenagers who do not”.
- Students with internet access at home earn $2 million USD more over their lifetimes.
- In 2009, broadband internet access accounted for $32 billion USD in net consumer benefits, including annual savings on purchases of more than $12,000 USD per household.
- 82% of middle skill jobs require digital skills.
Similar studies conducted in Europe and elsewhere broadly agree with these findings.
As demonstrated by the GI Bill, the advantages and disadvantages of one generation are often passed on to the next. That leads to a lasting, multi-generational, socio-economic impact.
Predicting the future is always difficult. Predicting future problems is even more so. Yet the scale of the problem is likely even greater than implied here, for one simple reason: our definition of technology.
Throughout this article, we have used a limited definition of technology. The implied focus has been computers and computer-related applications. But if the rate of technology development is increasing, so is its breadth.
We have already seen broad adoption of early versions of wearable technology. Consumers have access to devices like fitness trackers, smartwatches and cameras. More exotic wearables, like smart clothing and exoskeletons, seem right around the corner. Researchers have already demonstrated glasses with autofocus. Many other novel devices are in active development. Robotics also seems to be on the verge of much broader adoption. Quantum Computing, Quantum Simulators and Quantum Sensors are developing rapidly.
And yet even these examples stick with a familiar computing paradigm.
What about implantable technology? Implants without technology have existed in many forms for decades. Examples include hip and knee replacements, bone pins and many others. Implants with technology, such as pacemakers and cochlear implants, are still somewhat rare.
But many more implants are under development. Examples include in vivo biosensors and brain-computer interfaces. There are also a growing number of "bio-hackers" exploring implantable technology.
It was not long ago that using a headset with a smartphone seemed unusual. Now it is commonplace. Some of the acceptance and normalization of headsets is generational. Is it a huge jump from a wearable headset to an implanted device that performs the same function? History implies that "no" is the likely answer.
Another emerging field to consider is genetics. The Human Genome Project cost more than $2.7 billion USD. Today, an individual can get their genome sequenced for less than $200. Genetic testing is now routine during pregnancy in many parts of the world. Gene editing techniques such as CRISPR are becoming quite accessible. CRISPR appears to be useful for everything from pest control to alternative fuels. It may even help with allergies.
Some novel surgical procedures are now considered routine. Cosmetic procedures, tattoos and body piercing are all routine. Are implantable technology and genetic engineering a giant leap? It would seem not.
Pharmaceutical technology is also developing rapidly. Commercially available drugs have been shown to improve memory, focus and cognitive function. Considerable resources are focused on developing drugs to prevent or even reverse the effects of aging. For an increasing number of diseases, odds of survival correlate directly with access to technology and adequate financial resources.
Furthermore, technology often takes unexpected turns.
One small example of an unexpected technology that is now mainstream is the consumer drone. The rapid rise of consumer drones owes much to the smartphone. Smartphone adoption drove rapid technology development in several key areas. Some components became dramatically cheaper, more functional and more reliable. Around 2010, low cost drones became a practical consumer proposition. Today, you can find the technology that once cost millions embedded in a child's toy that costs $30 USD. As late as 2005, such an outcome within 15 years seemed inconceivable.
It is clear that any discussion of the digital divide has to look beyond computing. Humans will soon have opportunities to upgrade themselves. These upgrades will seem straight from the pages of a sci-fi novel. They will go far beyond anything we currently expect or imagine. The people that stand to benefit the most are those that can afford the technology and have access to it.
The Haves versus Have Nots
As discussed, the digital divide transcends our traditional notions of "digital technology". For the purposes of clarity, we propose a new, broader term: the Technology Divide.
For those of us deeply embedded in the world of technology, it can be hard to picture the impact of the Technology Divide. In the following paragraphs, we present some examples designed to illustrate its potential impact.
During the past few months, one of the authors was involved in an electronics project that required a significant amount of soldering. Due to a change in PCB design and the relative cost of some of the components used, a number of PCBs had to be desoldered. Desoldering was attempted using inexpensive tools such as wire braid and a hand-operated solder pump. Both methods were tedious, time consuming and somewhat ineffective. The author invested in a powered desoldering gun with a built-in vacuum pump. The effect was transformative. Desoldering became quick and easy, taking no more than 15 seconds per component. In short, for an investment of $250 USD, a task that had been taking 5-10 minutes became 20-40 times faster. If the author were paid per piece for this work, the increase in income would have been dramatic. While this is a trivial example, the key takeaway is that a modest investment in technology resulted in a sizable increase in effectiveness and efficiency.
Imagine such a boost in efficiency replicated hundreds, or thousands, of times during someone’s life. That is the type of long term compounded advantage we are facing. And of course, the comparable disadvantage for those that cannot keep up will be just as significant.
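A rough calculation makes the point concrete. The yearly percentage edge and career length below are illustrative assumptions, not data from any study:

```python
# Toy model: two workers start with equal output. One compounds a
# small technology-driven efficiency edge each year; the other does
# not. The 3% edge and 40-year career are illustrative assumptions.

def relative_output(annual_edge: float, years: int) -> float:
    """Cumulative output multiplier from compounding a yearly edge."""
    return (1 + annual_edge) ** years

# A modest 3% yearly edge, sustained over a 40-year career:
multiplier = relative_output(0.03, 40)
print(f"{multiplier:.2f}x the output of someone with no edge")  # ~3.26x
```

Even a small recurring advantage, compounded for a working lifetime, more than triples relative output; larger or more frequent advantages diverge far more sharply.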
To move away from electronics, consider the rapidly emerging area of Nootropics. An increasing number of chemical compounds have been shown to modify brain function, improving cognition, memory and learning. One study explored the effects of a commercially available citicoline-caffeine-based beverage in 60 adults. The experiment was a fully randomized, double-blind, placebo-controlled trial. Participants drinking the beverage exhibited significantly improved cognitive performance. In one task, Maze Completion Time, the group taking the supplement completed the task more than 50 seconds faster than the control group!
Of course, such supplements are relatively expensive. But for those that can afford them, they offer an easy way to boost productivity when needed. This could clearly impact the outcome of critical examinations, presentations or other mentally intensive activities.
Moving into the near future, imagine the abilities of an individual with an always connected augmented reality headset. It is easy to picture a headset coupled with cloud and fog computing services, making it a formidable everyday tool. The owner could automatically recognize every executive at their place of work, be instantly informed of new and vital information, and continuously process and model vast amounts of data. Their career could be dramatically enhanced through the use of Digital Twins assigned to different tasks. One could monitor and automatically apply for new job opportunities. Another could deal with all non-essential communication and task follow-up. It is easy to see how such technology would enhance the career of the wearer, inevitably leading to an increase in socio-economic status.
Conversely, the person without such technology would have to expend enormous amounts of time and energy just to try and keep up. And they would fail, just like a traditional typist trying to keep up with a high speed laser printer.
Moving further into the future, those with more disposable income will be able to invest in highly personalized medicine. This will likely include in-vivo wellbeing sensors, dynamically customized diets and genetically tailored medications that lead to a longer, healthier life. Ample data and improved wellbeing will lead to additional benefits such as lower costs for insurance (car, life and healthcare). Considerable research is underway into anti-aging. It is no longer science fiction to say that we may be able to significantly slow, if not prevent, various aspects of aging. And of course, individuals that stay healthier for longer have greater earning potential.
Such advantages will doubtless be expensive and only available to a subset of the population. Those that are unable to access highly personalized medical care and wellbeing support, or anti-aging therapies, will clearly be at an increasing disadvantage. They are likely to succumb to common maladies that simply no longer exist for the technologically advantaged.
And life for the elderly will be radically different too. Sophisticated in-home monitoring and adaptive technologies such as exoskeletons will allow the Advantaged to live independently for far longer.
When you consider the impact of the Technology Divide on disposable income and wellbeing, it is clear that the Advantaged will prosper. Over time, they will gain increasing financial stability and financial resilience, which typically leads to increased wellbeing. Conversely, much has been written about the correlation between lower income, poverty, and poorer wellbeing.
All of the technologies discussed are already available, or likely to be available within the decade. Many of the circumstances, both advantageous and disadvantageous, already occur in some form in today’s society. The disparity is stark. The Technologically Advantaged will live longer, have a better quality of life and are likely to amass significantly more wealth. That wealth will be passed on to their descendants, giving them a considerable head start. Conversely, the Technologically Disadvantaged will have shorter, harder lives. As will their descendants.
The outcomes seen with the beneficiaries of the GI Bill, and their children, support our notion of multi-generational advantage. They also support the notion that investment in the future pays great dividends.
But the GI Bill is hardly the only source of such evidence. As already discussed, in an increasingly technology-driven society, it is inevitable that socio-economic success is tied to technological advantage.
Individuals with socio-economic benefit can invest in additional technology, further enhancing their advantages. It is obvious that this effect compounds over time and can be significant.
It is also clear that getting left behind will become more commonplace. This will be particularly true for those with socio-economic disadvantages and marginalized groups. For example, individuals requiring assistive technology are often inadvertently excluded and left behind.
In other words, those that are ahead will get further ahead, faster. Conversely, those that are left behind will fall behind further, faster.
The impact of our rate of technological advancement will dwarf the effects of the industrial revolution in every respect. The gap between rich and poor will be magnified to an almost unfathomable degree. Without radical course correction, it would appear that humanity is well on its way to a dystopian future. One group will increasingly accrue advantages, while the other falls further and further behind.
Given the rapid rate of technological advancement, the window of opportunity to remedy the situation is likely to be quite small. However, there are some bright spots to consider. Again, history can be our guide.
Improving the Future
In 2010, the European Union adopted the “Digital Agenda for Europe”. The agenda identified improvements in broadband internet access as a critical step in addressing the digital divide. It also set targets for internet usage and digital inclusion. While not entirely successful, this program has had a significant impact.
The South Korean government has made technology access a key part of the country’s culture. Internet usage rates are some of the highest in the world. As much as 89% of the broader population routinely uses broadband internet. South Korea claims to have the smallest digital divide among 40 major countries. Other countries such as Singapore, Finland and Sweden have also made shrinking the digital divide a priority.
The term “Digital Divide” came into common use in the late 1990s. Interestingly, all four countries with a strong focus on the Digital Divide have shown significant growth in GDP per capita since then. It is unclear whether there is a direct causal relationship, but it would be an interesting topic for further exploration.
South Korea GDP Per Capita
While programs addressing the Digital Divide have shown some success, their focus has been very narrow: internet access and computer usage. We must expand our approach to consider and address the broader Technology Divide. This will be a far more difficult challenge to overcome.
To address such a sizable and broad problem, we must start with small goals.
The first step, as individuals working in technology, is simply to be aware of the issue.
The next step is for all of us to make a conscious effort to explain our work and its benefits to broader, more diverse audiences. Proactively embracing diversity and youth is a key component in addressing the broader issue.
Furthermore, our organizations, like IEEE, must take a leadership role in bringing science and technology back to the forefront of society in a meaningful and engaging way.
We must all make a long term commitment to education, engagement and enlightenment. Simply publishing information is not enough. We must proactively encourage and support engagement with technology in all of its forms. While the level of complexity in every field encourages separation between disciplines, we must fight against the siloing of information. The idea of a “Renaissance Man”, albeit inappropriately gendered for today’s world, is a useful model to follow. Our goal can only be achieved with a significant overhaul of public policy and educational systems. And that has to start with each of us.
Like climate change and a global pandemic, addressing the Technology Divide requires unprecedented coordinated action. We must start a meaningful conversation about its potential impact and how it can be addressed. Failure to do so will create an unsolvable problem that could dramatically re-write the structure of society.
Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the authors(s), not IEEE nor the IEEE Communications Society.