IEEE CTN
Written By:

Nicholas Napp and Sophia Napp-Vega

Published: 18 Sep 2020

CTN Issue: September 2020

A note from the editor:

The Internet has changed everyday life dramatically. During this global pandemic, it has become even more central to people’s lives, with many more people working remotely and kids attending “virtual online schools”.

In this article, Nick and Sophia argue that the digital divide, the uneven distribution of information between those who have access to the Internet and those who do not, is part of a much larger problem: the Technology Divide. As the rate of technology development accelerates, the Technology Divide will widen and weigh most heavily on those with socio-economic disadvantages. The article calls on the technology industry to be aware of the issue and to proactively encourage and support engagement with technology.

This article is the first in what we hope will be a series of articles in cooperation with our colleagues in the future technology society of IEEE. Special thanks to Doug Zuckerman for his support. As always, your comments are most welcome.

Yingzhen Qu, CTN Editor

The Digital Divide: Humanity's Greatest Challenge?

Nicholas Napp

CEO

Xmark Labs, LLC

Skyrocketing Technology

The Law of Accelerating Returns

In 2001, Ray Kurzweil published an essay called "The Law of Accelerating Returns"[1]. Kurzweil is a pioneer in the field of pattern recognition. He is also a technologist and futurist, known for thoughtful essays. In this particular essay, he suggests that the rate of technological change is not linear, but exponential.

His conclusion is startling.

"We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate)."

Twenty thousand years! Let that sink in for a moment. Twenty thousand years ago, humans were living in the late Stone Age. The wheel was a crazy futuristic dream. Thousands of years would pass before its invention. The peak of human technology was a sharpened rock.

[Image: a sharpened stone tool]
The peak of Stone Age technology.

Photograph by José-Manuel Benito Álvarez
Source: https://en.wikipedia.org/wiki/Later_Stone_Age#/media/File:Pieza_foli%C3%A1cea_africana.jpg

The difference between 20,000 years ago and today is unimaginable. And yet that represents the magnitude of change Kurzweil predicts in the next 100 years.
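
To see where a number like 20,000 can come from, here is a toy calculation; it is our sketch of the compounding, not Kurzweil's exact derivation. Assume the rate of progress doubles every ten years, and call the year-2000 rate one "year of progress" per calendar year. In Python:

    # Toy model of accelerating returns (an assumption-laden sketch):
    # the rate of progress doubles every decade, and the year-2000
    # rate is defined as 1 "year of progress" per calendar year.

    DOUBLING_PERIOD = 10  # years per doubling (assumed)

    # Add up each calendar year's progress, measured at today's rate.
    total = sum(2 ** (year / DOUBLING_PERIOD) for year in range(100))
    print(f"Progress over the century: ~{total:,.0f} years at today's rate")
    # Prints ~14,253: the same order of magnitude as Kurzweil's
    # 20,000-year figure, versus just 100 years under a linear model.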

As you read this, and the idea sinks in, you may think it all sounds absurd. But there is a reason why you may not be able to judge the situation clearly. Writer Tim Urban explains the issue in a blog post about Artificial Intelligence[2]. Any curve, no matter how steep, looks flat when you zoom in close enough.

We all live in the present. Change is all around us. In terms of history, we are all zoomed in and too close to the curve.

In the same blog post, Tim proposes a thought experiment. He suggests we build a time machine. We use the machine to kidnap someone from just before the industrial revolution. We bring them to the modern day to see what they think of our world. He points out that our whole society would be utterly confusing to them. A person from 1750 would recognize almost nothing, from cities to cars to electronics. Many things would seem magical and beyond all explanation.

Tim then proposes that the person we kidnapped decides to get their revenge. They steal our time machine and play the same trick on someone from 1500, bringing them to 1750. The person from 1500 recognizes almost everything. Society as a whole is not that different.

This illustrates Kurzweil's point. Far more technological change happened between 1750 and 2000 than between 1500 and 1750.

Let's look at more recent history. The past forty years have seen massive change, far more than the previous forty. In 1980, computers were still expensive and large. The World Wide Web didn't exist. ARPANET, the precursor to the internet, was only beginning its transition to TCP/IP. The metal-oxide-semiconductor integrated circuits at the foundation of most computing devices today were rare in everyday products; most consumer products had little or no processing power. GPS was not yet operational, even for the military. Genetic editing of any kind was new. Consumer genetic testing didn't exist. Black holes had never been directly observed. It was a very different world.

The point is that technology development compounds over time. One development begets many more. It’s like compound interest gone berserk. If you have ten breakthroughs today, each one can lead to ten new breakthroughs tomorrow. Those hundred lead to a thousand. And so it goes on. In every field of human study, technological development is accelerating. And that presents some enormous societal challenges.
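
As a minimal sketch of that branching arithmetic (the factors of ten are the article's illustrative numbers, not measured values):

    # Branching model of breakthroughs: each discovery enables more.
    # BRANCHING = 10 mirrors the illustrative numbers in the text.

    BRANCHING = 10
    breakthroughs = 10  # "ten breakthroughs today"

    for step in range(1, 4):
        breakthroughs *= BRANCHING
        print(f"Step {step}: {breakthroughs:,} breakthroughs")
    # 100 -> 1,000 -> 10,000: the frontier grows geometrically,
    # so every missed step leaves ten times more to catch up on.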

Falling Behind the Curve

We are all familiar with the stereotype of a child helping a grandparent use the internet. In the 1980s, it took a child to program a VCR. Presumably, in 20,000 BCE, it was Ugg's granddaughter who demonstrated how to use a sharp-edged rock.

Younger generations typically have a better grasp of new technologies than older ones, and it takes considerable effort on the part of older generations to stay up to date.

But age is not the only reason why some people are technology laggards. Economic and societal challenges play a role too.

The economic challenges are obvious. New technology usually costs money. Furthermore, access to new technology often relies on foundational infrastructure. Someone who wants to use Wikipedia has to have access to a computer, smartphone or similar device. That device needs an internet connection. The internet connection requires infrastructure. And so the list of requirements goes on. Similarly, the latest smartphone apps need a recent model smartphone. The latest medical advancements are unusable without supporting infrastructure.

Societal challenges also contribute to the problem of falling behind. Some societies limit access to, or use of, technology to certain groups. Or they reject technology entirely. Examples include restrictions based on gender[3], ethnicity[4], religion[5], and sexual orientation[6].

But societal challenges aren't always imposed by third parties. Many of us know individuals who "can't use" or "won't use" technology. Every generation has groups that pride themselves on willful ignorance and rejecting change. In the 1800s, it was the Luddites destroying machinery[7]. Today, we have 5G conspiracy theorists[8] and Flat Earthers[9]. In the latter case, it is ironic that the tools they use to spread misinformation can only work if they are wrong!

Regardless of why someone falls behind the curve, the effect is the same. It is a simple truth that the further behind an individual is, the more difficult it is to catch up.

Falling Faster

Picture yourself in a broken-down car by the side of a road. Another car passes and accelerates away from you without stopping. Not only are they getting further away, they are getting further away faster. Their speed relative to you continues to increase. Every second, they cover more ground than the second before. They quickly disappear out of sight.

It is the same with technological progress. If the rate of technology development is accelerating, it is easier than ever to fall behind. Not only is it easier, but it will happen faster and the separation will become greater, faster. The gap between technology adopters and laggards will grow exponentially.
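
To put a number on that intuition, here is a minimal sketch under the same assumption as the earlier one: that overall capability, in whatever units we measure it, doubles every decade. A constant adoption lag then produces an absolute gap that itself doubles every decade:

    # Adopter vs. laggard under a doubling-every-decade assumption
    # (illustrative units, not a real measurement of capability).

    def capability(years_since_2000: float) -> float:
        return 2 ** (years_since_2000 / 10)

    LAG = 10  # the laggard adopts each technology 10 years late (assumed)

    for t in (10, 20, 30, 40):
        gap = capability(t) - capability(t - LAG)
        print(f"Year {2000 + t}: gap = {gap:.1f}x year-2000 capability")
    # The gap doubles every decade (1.0, 2.0, 4.0, 8.0)
    # even though the adoption lag stays constant.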

It is hard to predict how long it will take for the gap to become irrecoverable. However, it is relatively easy to predict many of the impacts such a gap will have.

History as Precedent

The Digital Divide is a relatively new concept. It has received increasing attention with the rollout of computers in schools and the availability of broadband internet access. However, there is a clear historical example of a government directly funding the education and technological advancement of a significant percentage of the population. That example is the USA’s GI Bill.

The GI Bill

The Servicemen's Readjustment Act, now known as the GI Bill, was passed by the U.S. Congress in 1944. It was by far the largest direct intervention to improve education and skills in the country's history. It aimed to help World War II veterans by giving them money for educational and vocational training, as well as access to low-interest mortgages. The GI Bill is broadly credited with the increase in college attendance, homeownership, wealth, and birth rate in the United States. In the first seven years of the bill's existence, around eight million veterans took advantage of it.[10]

While the U.S. government wrote this bill to help veterans, many were left behind. The implementation of the GI Bill was decided at a state level, which meant that most Black veterans did not receive the same benefits as their White counterparts. Many banks refused to give loans to Black veterans. This left many Black veterans living in decaying inner cities, while their White counterparts lived in the suburbs commonly associated with the American 1950s.[11] Many believe the exclusion of Black World War II veterans contributed to the current socio-economic gap in the USA. The effects have lasted across multiple generations, as White World War II veterans passed on their acquired wealth to their children.

The GI Bill was designed to stimulate learning and training. It was incredibly successful for those who were allowed to take advantage of it. However, it is also an example of the impact of wide scale exclusion of a group of people. It showcases what happens to those who are left behind by education and technology, and the corresponding socio-economic impact.

The Digital Divide Examined in the Modern Era

Numerous studies have been conducted in the USA by groups such as the Pew Research Center[12] and other research teams. Their conclusions are clear:

  • “Teenagers who have access to home computers are 6–8 percentage points more likely to graduate from high school than teenagers who do not”.[13]
  • Students with internet access at home earn $2 million USD more over their lifetimes[14].
  • In 2009, broadband internet access accounted for $32 billion USD in net consumer benefits, including annual savings on purchases of more than $12,000 USD per household[15].
  • 82% of middle-skill jobs require digital skills[16].

Similar studies conducted in Europe[17] and elsewhere broadly agree with these findings.

As demonstrated by the GI Bill, the advantages and disadvantages of one generation are often passed on to the next. That leads to a lasting, multi-generational, socio-economic impact.

Exponential Challenges

Predicting the future is always difficult. Predicting future problems is even more so. Yet the scale of the problem is likely even greater than implied so far, for one simple reason: our definition of technology.

Throughout this article, we have used a limited definition of technology. The implied focus has been computers and computer-related applications. But if the rate of technology development is increasing, so is its breadth.

We have already seen broad adoption of early versions of wearable technology. Consumers have access to devices like fitness trackers, smartwatches and cameras. More exotic wearables, like smart clothing[18] and exoskeletons[19], seem right around the corner. Researchers have already demonstrated glasses with autofocus[20]. Many other novel devices are in active development. Robotics also seems to be on the verge of much broader adoption. Quantum Computing, Quantum Simulators and Quantum Sensors are developing rapidly.

And yet even these examples stick with a familiar computing paradigm.

What about implantable technology? Implants without technology have existed in many forms for decades. Examples include hip and knee replacements, bone pins and many others. Implants with technology, such as pacemakers and cochlear implants, are still somewhat rare.

But many more implants are under development[21]. Examples include in vivo biosensors and brain-computer interfaces. There are also a growing number of "bio-hackers" exploring implantable technology.

It was not long ago that using a headset with a smartphone seemed unusual. Now it is commonplace. Some of the acceptance and normalization of headsets is generational. Is it a huge jump from a wearable headset to an implanted device that performs the same function? History implies that "no" is the likely answer.

Another emerging field to consider is genetics. The Human Genome Project cost more than $2.7 billion USD. Today, an individual can get their genome sequenced for less than $200[22]. Genetic testing is now routine during pregnancy in many parts of the world[23]. Gene editing techniques such as CRISPR are becoming quite accessible. CRISPR appears to be useful for everything from pest control[24] to alternative fuels[25]. It may even help with allergies[26].

Some once-novel surgical procedures are now considered routine, as are cosmetic procedures, tattoos and body piercings. Are implantable technology and genetic engineering really such a giant leap? It would seem not.

Pharmaceutical technology is also developing rapidly. Commercially available drugs have been shown to improve memory, focus and cognitive function[27],[28]. Considerable resources are focused on developing drugs to prevent or even reverse the effects of aging. For an increasing number of diseases, odds of survival correlate directly with access to technology and adequate financial resources.

Furthermore, technology often takes unexpected turns.

One small example of an unexpected technology that is now mainstream is the consumer drone. The rapid rise of consumer drones owes much to the smartphone. Smartphone adoption drove rapid technology development in several key areas. Some components became dramatically cheaper, more functional and more reliable. Around 2010, low-cost drones became a practical consumer proposition. Today, you can find technology that once cost millions embedded in a child's toy that costs $30 USD. As late as 2005, such an outcome within 15 years seemed inconceivable.

It is clear that any discussion of the digital divide has to look beyond computing. Humans will soon have opportunities to upgrade themselves. These upgrades will seem straight from the pages of a sci-fi novel. They will go far beyond anything we currently expect or imagine. The people who stand to benefit the most are those who can afford the technology and have access to it.

The Haves versus the Have-Nots

As discussed, the digital divide transcends our traditional notions of "digital technology". For the purposes of clarity, we propose a new, broader term: the Technology Divide.

For those of us deeply embedded in the world of technology, it can be hard to picture the impact of the Technology Divide. In the following paragraphs, we present some examples designed to illustrate its potential impact.

During the past few months, one of the authors was involved in an electronics project that required a significant amount of soldering. Due to a change in PCB design and the relative cost of some of the components used, a number of components had to be desoldered from their PCBs. Desoldering was attempted using inexpensive tools such as wire braid and a hand-operated solder pump. Both methods were tedious, time consuming and somewhat ineffective. The author invested in a powered desoldering gun with a built-in vacuum pump. The effect was transformative. Desoldering became quick and easy, taking no more than 15 seconds per component. In short, for an investment of $250 USD, a task that had been taking 5-10 minutes became 20 to 40 times faster. If the author were paid per piece for this work, the increase in income would have been dramatic. While this is a trivial example, the key takeaway is that a modest investment in technology resulted in a sizable increase in effectiveness and efficiency.

Imagine such a boost in efficiency replicated hundreds, or thousands, of times during someone’s life. That is the type of long term compounded advantage we are facing. And of course, the comparable disadvantage for those that cannot keep up will be just as significant.

To move away from electronics, consider the rapidly emerging area of nootropics. An increasing number of chemical compounds have been shown to modify brain function, improving cognition, memory and learning[29]. One study explored the effects of a commercially available citicoline-caffeine-based beverage in 60 adults. The experiment was a fully randomized, double-blind, placebo-controlled trial. Participants drinking the beverage exhibited significantly improved cognitive performance. In one task, maze completion time, the group taking the supplement completed the task more than 50 seconds faster than the control group![30]

Of course, such supplements are relatively expensive. But for those who can afford them, they offer an easy way to boost productivity when needed. This could clearly impact the outcome of critical examinations, presentations or other mentally intensive activities.

Moving into the near future, imagine the abilities of an individual with an always-connected augmented reality headset. It is easy to picture a headset coupled with cloud and fog computing services, making it a formidable everyday tool. The owner could automatically recognize every executive at their place of work, be instantly informed of new and vital information, and continuously process and model vast amounts of data. Their career could be dramatically enhanced through the use of Digital Twins assigned to different tasks. One could monitor and automatically apply for new job opportunities. Another could deal with all non-essential communication and task follow-up. It is easy to see how such technology would enhance the career of the wearer, inevitably leading to an increase in socio-economic status.

Conversely, the person without such technology would have to expend enormous amounts of time and energy just to try and keep up. And they would fail, just like a traditional typist trying to keep up with a high-speed laser printer.

Moving further into the future, those with more disposable income will be able to invest in highly personalized medicine. This will likely include in-vivo wellbeing sensors, dynamically customized diets and genetically tailored medications that lead to a longer, healthier life. Ample data and improved wellbeing will lead to additional benefits such as lower costs for insurance (car, life and healthcare). Considerable research is underway into anti-aging. It is no longer science fiction to say that we may be able to significantly slow, if not prevent, various aspects of aging. And of course, individuals who stay healthier for longer have greater earning potential.

Such advantages will doubtless be expensive and only available to a subset of the population. Those that are unable to access highly personalized medical care and wellbeing support, or anti-aging therapies, will clearly be at an increasing disadvantage. They are likely to succumb to common maladies that simply no longer exist for the technologically advantaged.

And life for the elderly will be radically different too. Sophisticated in-home monitoring and adaptive technologies such as exoskeletons will allow the Advantaged to live independently for far longer.

When you consider the impact of the Technology Divide on disposable income and wellbeing, it is clear that the Advantaged will prosper. Over time, they will gain increasing financial stability and financial resilience, which typically leads to increased wellbeing. Conversely, much has been written about the correlation between lower income, poverty, and poorer wellbeing[31].

All of the technologies discussed are already available, or likely to be available within the decade. Many of the circumstances, both advantageous and disadvantageous, already occur in some form in today's society. The disparity is stark. The Technologically Advantaged will live longer, have a better quality of life and are likely to amass significantly more wealth. That wealth will be passed on to their descendants, giving them a considerable head start. Conversely, the Technologically Disadvantaged will have shorter, harder lives. As will their descendants.

Conclusions

The outcomes seen with the beneficiaries of the GI Bill, and their children, support our notion of multi-generational advantage. They also support the notion that investment in the future pays great dividends.

But the GI Bill is hardly the only source of such evidence. As already discussed, in an increasingly technology-driven society, it is inevitable that socio-economic success is tied to technological advantage.

Individuals with socio-economic advantages can invest in additional technology, further enhancing those advantages. This effect compounds over time and can be significant.

It is also clear that getting left behind will become more commonplace. This will be particularly true for those with socio-economic disadvantages and marginalized groups. For example, individuals requiring assistive technology are often inadvertently excluded and left behind[32].

In other words, those that are ahead will get further ahead, faster. Conversely, those that are left behind will fall behind further, faster.

The impact of our rate of technological advancement will dwarf the effects of the industrial revolution in every respect. The gap between rich and poor will be magnified to an almost unfathomable degree. Without radical course correction, it would appear that humanity is well on its way to a dystopian future. One group will increasingly accrue advantages, while the other falls further and further behind.

Given the rapid rate of technological advancement, the window of opportunity to remedy the situation is likely to be quite small. However, there are some bright spots to consider. Again, history can be our guide.

Improving the Future

In 2010, the European Union adopted the “Digital Agenda for Europe”[33]. The agenda identified improvements in broadband internet access as a critical step in addressing the digital divide. It also set targets for internet usage and digital inclusion. While not entirely successful, this program has had a significant impact.

The South Korean government has made technology access a key part of the country’s culture. Internet usage rates are some of the highest in the world. As much as 89% of the broader population routinely uses broadband internet. South Korea claims to have the smallest digital divide among 40 major countries[34]. Other countries such as Singapore, Finland and Sweden[35] have also made shrinking the digital divide a priority.

The term “Digital Divide” came into widespread use in the mid-1990s. Interestingly, all four countries with a strong focus on the Digital Divide have shown significant growth in GDP per capita since then. It is unclear whether there is a direct causal relationship, but it would be an interesting topic for further exploration.

[Charts: GDP per capita over time]

Finland GDP Per Capita[36]
Sweden GDP Per Capita[37]
Singapore GDP Per Capita[38]
South Korea GDP Per Capita[39]

While programs addressing the Digital Divide have shown some success, their focus has been very narrow: internet access and computer usage. We must expand our approach to consider and address the broader Technology Divide. This will be a far more difficult challenge to overcome.

To address such a sizable and broad problem, we must start with small goals.

The first step, as individuals working in technology, is simply to be aware of the issue.

The next step is for all of us to make a conscious effort to explain our work and its benefits to broader, more diverse audiences. Proactively embracing diversity and youth is a key component in addressing the broader issue.

Furthermore, our organizations, like IEEE, must take a leadership role in bringing science and technology back to the forefront of society in a meaningful and engaging way.

We must all make a long-term commitment to education, engagement and enlightenment. Simply publishing information is not enough. We must proactively encourage and support engagement with technology in all of its forms. While the level of complexity in every field encourages separation between disciplines, we must fight against the siloing of information. The idea of a “Renaissance Man”, albeit inappropriately gendered for today's world, is a useful model to follow. Our goal can only be achieved with a significant overhaul of public policy and educational systems. And that has to start with each of us.

Like climate change and a global pandemic, addressing the Technology Divide requires unprecedented coordinated action. We must start a meaningful conversation about its potential impact and how it can be addressed. Failure to do so will create an unsolvable problem that could dramatically re-write the structure of society.

Footnotes

[1] https://www.kurzweilai.net/the-law-of-accelerating-returns
[2] https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
[3] https://www.aljazeera.com/news/2016/02/india-banning-women-owning-mobile-phones-160226120014162.html
[4] https://www.wsj.com/articles/u-s-adds-chinese-firms-to-blacklist-citing-repression-of-muslim-minorities-11570488642
[5] https://www.britannica.com/topic/Amish
[6] https://www.theguardian.com/technology/2019/sep/26/tiktoks-local-moderation-guidelines-ban-pro-lgbt-content
[7] https://www.history.com/news/who-were-the-luddites
[8] https://www.snopes.com/news/2020/06/12/how-the-5g-coronavirus-conspiracy-theory-began/
[9] https://www.cnn.com/2019/11/16/us/flat-earth-conference-conspiracy-theories-scli-intl/index.html
[10] https://www.defense.gov/Explore/Features/story/Article/1727086/75-years-of-the-gi-bill-how-transformative-its-been/
[11] https://www.khanacademy.org/humanities/us-history/postwarera/postwar-era/a/african-americans-women-and-the-gi-bill
[12] https://www.pewresearch.org/fact-tank/2018/10/26/nearly-one-in-five-teens-cant-always-finish-their-homework-because-of-the-digital-divide/
[13] https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1465-7295.2009.00218.x
[14] https://www.federalreserve.gov/pubs/ifdp/2008/958/ifdp958.pdf
[15] https://internetinnovation.org/special-reports/savings/
[16] https://www.burning-glass.com/research-project/digital-skills-gap/
[17] https://www.europarl.europa.eu/RegData/etudes/BRIE/2015/573884/EPRS_BRI(2015)573884_EN.pdf
[18] https://www.lifewire.com/what-are-smart-clothes-4176103
[19] https://www.nbcnews.com/mach/innovation/robotic-exoskeletons-are-changing-lives-surprising-ways-n722676
[20] https://www.sciencedaily.com/releases/2019/07/190701144250.htm
[21] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4987398/
[22] https://www.wired.com/story/whole-genome-sequencing-cost-200-dollars/
[23] https://www.webmd.com/baby/pregnant-genetic-testing
[24] https://www.wired.com/story/heres-the-plan-to-end-malaria-with-crispr-edited-mosquitoes/
[25] https://www.sciencealert.com/gene-editing-algae-doubles-biofuel-output-potential
[26] https://www.abc.net.au/catalyst/gene-editing-made-simple/11016800
[27] https://www.scientificamerican.com/article/a-safe-drug-to-boost-brainpower/
[28] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2690227/
[29] https://www.psychologytoday.com/us/blog/understanding-nootropics/202008/how-nootropics-boost-mental-clarity-and-focus
[30] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4517431/
[31] “Poverty and Well-Being”, Society for the Study of Economic Inequality (ECINEQ), http://www.ecineq.org/milano/WP/ECINEQ2013-291.pdf
[32] https://www.pewresearch.org/fact-tank/2017/04/07/disabled-americans-are-less-likely-to-use-technology/
[33] https://ec.europa.eu/digital-single-market/en/broadband-europe
[34] http://m.koreatimes.co.kr/pages/article.amp.asp?newsIdx=199287
[35] https://www.weforum.org/agenda/2015/04/which-nations-are-top-for-digital/
[36] https://www.macrotrends.net/countries/FIN/finland/gdp-per-capita
[37] https://www.macrotrends.net/countries/SWE/sweden/gdp-per-capita
[38] https://www.macrotrends.net/countries/SGP/singapore/gdp-per-capita
[39] https://www.macrotrends.net/countries/KOR/south-korea/gdp-per-capita

Statements and opinions given in a work published by the IEEE or the IEEE Communications Society are the expressions of the author(s). Responsibility for the content of published articles rests upon the author(s), not IEEE nor the IEEE Communications Society.

Comments

There are multiple issues. Digital literacy is a difficult issue to address, but digital inclusion, in the sense of enabling people to be connected 24x7 from anywhere, is very simple. We just need to shift from the 19th-century idea of networking as a service (modeled on postal services and railroads) to funding a common infrastructure, more like roads and sidewalks. This is possible because the current system was designed around dumb endpoints. Now that we have intelligent endpoints, the very concept of telecommunications must be revisited. We don't ask if people can afford to take a walk - it is just assumed that there is a path.

More at https://rmf.vc/IEEEBBToInfrastructure and other writings.

Bob Frankston, IEEE Fellow, BOG CTSoc.

Submitted by ieee@bob.ma on 18 September 2020

I have been trying to drive a meaningful conversation about the emerging future. Please consider, for example, the story “Intelligent leadership conversation vocabulary ( https://medium.com/@gmh_upsa/intelligent-leadership-conversation-vocabu… ),” whose introduction says:

Dear Jose I may agree, I may not.

You are using some phrases that I am unfamiliar with. They are clearly slightly different than similar sounding things, but I don't know how. #TheWealthOfGlobalization #BrightGlobalization #SystemicCivilization

Dear friend, thanks for your timely and useful doubt.

Please add, for example, #CulturalSingularity #GlobalState #Pragmaticism #HomoPragmaticist #APosterioriBehavior #MPPMethod that you and others need to become familiar with.

Unlike many other managers (not leaders) you are not afraid to ask. If we ask now, “What is leadership?” then we can legitimately claim that helping people engage in intelligent leadership talk, to foster a vocabulary for intelligent leadership conversations or dialogue, is in fact to promote leadership effectiveness. – [ http://www.pib.net/articles/anxiety.htm ]

In that context, please consider the post "Is #Drucker's Management Challenges for the #SystemicCivilization on the opposite side force field of academic privilege? [ https://grupomillenium.blogspot.com/2016/10/is-druckers-management-chal… ]" and its "First update. Provocation #1 about The Expertise Conundrum."

José Antonio Vanderhorst-Silverio, PhD.
System Leader
Creator of the EWPC-AF (also Value Added Electricity Architecture Framework)
Ex-IEEE Dominican Republic Subsection’s Interim Chair
IEEE Senior Life Member
BS ´68, MS ´71 & PhD ´72, all from Cornell University.

Submitted by javs@ieee.org on 18 September 2020

That a person from 1800 would find 2020 most amazing does not equate to 20,000 years across the 21st C. Not even a tenth of that. Hey, we're already 20 years in. Just consider trains, still crawling down the tracks. Or planes, not much faster since the 707! Any big changes in store here? Hot topics and occasional (yes, great) breakthroughs do inspire sci fi thinking, and perhaps Kurzweil's futurist inclinations. Yes, Kurzweil is probably chugging that tired aphorism "paradigm shift." He must be thinking shifts are going to be coming fast and furious. They haven't been lately. And history is not encouraging. [Yes, I know, he's maybe saying that history in this sense is becoming obsolete, but reports of the death of history have been exaggerated for centuries. Just 30 years ago, consider Fukuyama.]

We have had "fusion in 30 years," and it's always been 30 years, for 60 years. We have had quantum computing "soon" since 2000, but quite possibly many times that remain before a practical machine. Now, there's no doubt that a readily-programmable quantum computer would razzle-dazzle Babbage, and be mind-blowing to, say, Archimedes, two millennia back. So maybe there's a one-tenth to grant? But no. [And I admit, these are not average folks!] If we get that useful quantum computer in the next 30 years, the results are going to be eye-opening, but certainly not mind-blowing, to power users from 2000, a 50-year gap. And certainly not magic, as it would be to a hunter-gatherer (apologies, hunter-gatherers, but we need some reference point). [Yes, we recall Arthur C. Clarke's three laws behind sci fi.] That 2000-era pop of quantum computing might compare to the eye-popping-ness from, say, large IBMs of 1970 to now, but far short of magic. Hmm, that would be linear change in eye-popping-ness, not exponential. So is Kurzweil claiming we're at the (dreaded) Inflection Point, a linear-to-exponential discontinuity here in 2020?

Breakthroughs do puncture the linear pace of wow. Little else does. But it's hard to predict breakthroughs. Always has been. They just arrive when they do; that's about all you can say. In my field, it took 30 years for the first really big breakthrough since sequential decoding of convolutional codes in 1962 to arrive, discounting Viterbi (sorry, Andy): iterative decoding of convolutional codes in about 1992. We are now almost 30 years down the road from that, and we have - on a log scale - almost no progress, just faster (and incrementally ever more slowly) processors, but no breakthrough! I'm not saying there won't be one, but Shannon rocks!

So I would be surprised (but not shocked) if 2100 found, say, mobile communications rates doubling even every decade from now to then; that's 2^8 times, suggesting 100 Gbps into your hand. You say miraculous (maybe you don't)! I say kind of boring, because fiber is already there. No unimaginable-ness. Certainly not magic. Most certainly not hunter-gatherer-to-fiber change. It is more like what mobile carrier execs wish - even plan - for, and will probably not get in outdoor mobile by 2100. In other words, barring a really huge and hugely unpredictable breakthrough, log-linear is at the reasonable edge of imagining. [Yes, I know that beyond the (dreaded) IP, we don't even need data to our hands; that's what AIs are for!] Finally, I do buy that DNA changed the world in under a decade. As did the Bomb.

These two things happened amazingly close to each other (that's memoryless random processes for you). But maybe you're still driving an old car, hoping it will hang in there another decade from 2000 at 20 MPG so you won't have to buy a hybrid at (the specious) equivalence of 50 MPG. Nothing better than a little more than linear for (much) more than 20 years now, ignoring the smog issue. An even better argument: you're still sliding windows open, hoping your old A/C doesn't keel over this summer, forcing you to buy a 50% better SEER unit, when SEER from 1960 to now has advanced less than five-fold, hardly exponential for one of the most important greenhouse gas drivers in the world. And 21 SEER is beginning to look like an absolute plateau. Anyone for 210 SEER by 2100? I'll close my window here. But where's all this 20,000 years in the next 80 going to come from? Not from blue sky.

Submitted by kkumm@ieee.org on 18 September 2020

On the heels of my comment, I must add for clarity that my discount of Viterbi is only to say which big change came first. It was the rise of convolutional above algebraic codes. Viterbi analyzed the limits, took aim at sequential decoding and found an advantage in complexity that sequential could not match: more logic versus faster logic, not to mention the trellis and the lower bound. More, of course, won the day, then as now, first from LSI to VLSI and unceasingly onward. And it can be argued that without Viterbi's algorithm, the next big change, turbo coding, would not have arrived as soon as it did, the algorithm being the virtual springboard for the latter. So hats off to you, Andy, even if your countryman Robert Fano got the earlier slap on the jump ball. I should ascribe to both of you communications theory's 1905 moment. But here I must go with the earlier break.

Submitted by kkumm@ieee.org on 18 September 2020

Hopefully last, I must add this. It is not unfair to say that we live in especially skeptical times, and so my line "reports of the death of history ..." is of course borrowed, with the common variation, as most folks who passed high school English in Regions 1-6, and maybe 7, know. Alas, when blogging needs such to avoid "that's plagiarism, so void the rest," it is sad times, as it was for Samuel Clemens as he penned the original. I just want the rest to stand.

Submitted by kkumm@ieee.org on 19 September 2020

A view from the other side may be useful here. In India, the Covid pandemic has pushed us into a Digital Trap. Take education. With schools and colleges closed, the government has been promoting online education. Two problems have surfaced. First, connectivity. Many students who went home as the pandemic broke are in areas where you can get connectivity only if you go to a nearby town, or maybe climb onto the highest structure in your village. And connectivity does not translate into bandwidth; that is another story. The second problem is affordability. Even the cheapest smartphone costs around Rs 10,000, about $100, which is a big sum for an ordinary citizen. Imagine learning on a smartphone shared between you, your elder sister and your college-going elder brother. In between will be demands from your father and mother for online booking of gas cylinders, and for receiving and making calls.
Digital payments have taken off quite well in urban areas, but for persons who are dependent on daily wages, digital payments are not acceptable. Further, trips to the ATM to withdraw cash become a task, with the added danger of Covid infection.
Digital apps to detect proximity of Covid carriers have proved to be unreliable.
My conclusion: digital is not a panacea for our ills. It is not an amulet that can be worn to avoid danger. At best, it is an enabler, provided the socio-economic context is also kept in mind while designing Digital Solutions.

Submitted by arup@ieee.org on 22 September 2020

To ieee@bob.ma:

Nick says:
Hi Bob,
You make some great points. Changing the view of networking to be a foundational and expected infrastructure would certainly make a difference to the digital divide. As Arup commented below, basic connectivity is still a challenge in many parts of the world.
What is less clear to me is how to apply the “make it infrastructure” approach to the broader Technology Divide we describe. For example, I could see a future where a basic cloud computing allowance becomes part of the infrastructure. However, can that approach be expanded to things like life-extending therapies, personalized medicine, or advanced hardware? For many years in the UK, the NHS provided prescription glasses to those who needed them (they probably still do). A hundred years ago, glasses weren't considered a basic need. Today, that view seems unreasonable. Will we, at some point, see something like an AR headset as a basic need? The idea of a Gene Roddenberry-esque egalitarian society where technology is available to all who want it is quite appealing in some ways, but it seems very unlikely as a realistic future for humanity.

Sophia says:
The way we currently view and deploy capitalism seems like a big roadblock in pursuing technology as a basic infrastructure. However, there is some hope: libraries typically have computers for public use, and some are now offering maker spaces too. From an idealistic point of view, I think it’s a great idea to have technology as free to use as a path or roadway.

Submitted by nick@xmarklabs.com on 23 September 2020

To javs@ieee.org:

Nick says:
Hi José,
We agree that more conversation is important. One thing we have noted is that clarity of language is also very important. We tried hard in our article to avoid too many novel terms. The idea of a Technology Divide isn't hard to grasp if you are familiar with the idea of a Digital Divide. However, we think terms like “CulturalSingularity” and “SystemicCivilization” can lead to a lot of confusion and should be used very sparingly. As the diversity of these comments shows, the Technology Divide is a lot to take in.

Sophia says:
I agree. We could do with a lot more conversation, especially when it comes to this issue.

Submitted by nick@xmarklabs.com on 23 September 2020

To kkumm@ieee.org:

Nick says:
Hi kkumm,
Thanks for your comment. Respectfully, I must disagree with many of your points. Please allow me to apply a reductio ad absurdum argument here: “The knife in my kitchen is essentially identical to the knives we've used since the Iron Age. Therefore, there has been no technology development in knives, materials science or related areas.”

This is clearly a nonsense argument. The knife has remained in similar form for many reasons, but its current form in no way represents a lack of technology development. For example, we now have an enormous array of materials that we could use, ranging from many different alloys to ceramics, plastics and composites. We have a wide variety of surface treatments that can be applied, such as diamond-tipped cutting surfaces. We have wholly new methods of cutting (in comparison to the Iron Age), such as lasers, water jets and plasma cutters. There are also entire new fields built up around the process of cutting, from CNCs to automated assembly lines.

The fact that a single kitchen knife fails to embody any of these advancements does not mean that no technology development has happened.

If you talk to any researcher in the field of artificial intelligence, brain-computer interfaces, or genetics (just to pick a few that I've been exposed to recently) they will tell you that developments are in fact “fast and furious”. Just this past month NASA announced Lattice Confinement Fusion, a new form of small scale fusion that is vastly different from the tokamak approach. And in fairness to the proponents of tokamak-based fusion, projects like ITER are pretty much exactly where they were expected to be 30 years ago. As an undergrad I toured the Joint European Torus (JET) and I distinctly remember the wildly futuristic dates projected for ITER's commissioning. As for Quantum Computing, IBM just showed their roadmap that reaches 1,000 qubits by 2023. Yes, the quantum computers we can use today are toys in comparison, but they do exist and they do work. Quantum Key Distribution and Quantum Communication are both deployed in the real world, and the advancements in Quantum Sensing (especially for the measurement of gravity) are astonishing.

As William Gibson once said, “The future is here, it’s just not evenly distributed”.

Furthermore, whether you buy Kurzweil’s argument of 20,000 years of development or not, our underlying points still stand.

1. As a society, we are increasingly dependent on a broad range of technologies.
2. The rate at which those technologies are being developed and deployed is increasing, not decreasing.
3. Those that are able to keep up with the latest technologies gain significant advantage over those who are not.
4. The net effect is a growing disparity where some will fall behind further, faster. The effect will compound over time, dramatically broadening the gap between the haves and the have nots.

Sophia says:
I agree with Nick. We used Kurzweil's argument not as a hard fact, but as a quote that highlights the rapid advancement of technology. As someone born post-millennium, I have personally seen the rise of the smartphone and the tablet computer, widespread use of wi-fi and Bluetooth, mass broadband adoption in many parts of the world, and the almost complete replacement of landlines and dial-up internet. And that's just digital technology. Outside of computers and communications, we've explored Mars, deciphered the human genome (and made that affordable for almost anyone), and seen countless advancements in materials. Even color has undergone advancement with the creation of Vantablack. I enjoy a very broad range of subjects, and I cannot find any that don't have some areas of rapid advancement.

Submitted by nick@xmarklabs.com on 23 September 2020

To arup@ieee.org:

Nick says:
Connectivity and affordability are prime examples of what we mean by foundational infrastructure. Without connectivity, a smartphone is useless. If it's too expensive, it might as well not exist. The challenges you highlight are very real and are a huge barrier to adoption for some. The end result, as you point out, is entire population groups being excluded from the advantages technology can provide. That, in turn, has a lasting multi-generational economic impact. As the rate of technology development increases, more people will fall behind further, faster.

However, this is not just a digital trap. We believe the issues you see with smartphones and connectivity will be replicated in many other fields. Our broader point is that this applies to all forms of technology, not just digital. Just as some groups are unable to take advantage of smartphones and the internet, many will be unable to take advantage of advances in medicine, robotics, materials etc.

Technology in any form is rarely a panacea. Based on our current trajectory as a society, our concern is that technology will increasingly become a benefit for the few, and not the many.

Sophia says:
Throughout history, we’ve seen this play out time and time again.

A new invention comes along, people who are better off take advantage of it, and those who cannot afford the new invention end up worse off. I think society as a whole can cope with some level of disparity, as in the long run it tends to fix itself. For example, factory workers are generally much better off now than they were just a few decades ago. However, our concern is that the increasing technology divide will drive disparity to a point of no return.

If we as a society intervene now, we can mitigate some of the effects of the increasing technology divide. Based on history, it seems clear that this is in society’s best interest overall.

Submitted by nick@xmarklabs.com on 23 September 2020

Any predictions based on unlimited exponential growth forever are completely non-physical, because they ignore the factors or processes that set in and eventually cause saturation effects. Collected data for a very wide variety of variables, technological and otherwise, show the prevalence of S-curves, not exponential curves. See a quarter-century-old article for numerous examples:
P. Asthana, IEEE Spectrum, vol. 32, June 1995, pp. 49-54. DOI: https://doi.org/10.1109/6.387142
For a mathematical basis for the S-curve, see
A. Bejan, Journal of Applied Physics, vol. 110, July 2011, 024901, https://doi.org/10.1063/1.3606555

Submitted by m.gupta@ieee.org on 11 October 2020
