Artificial Intelligence and Human Obsolescence

Introduction

The concept of human obsolescence, particularly in the face of rapid advancements in AI, has become a prevalent topic of discourse (Brynjolfsson & McAfee, 2014). While AI has demonstrated impressive feats in tasks such as writing, drawing, coding, and playing games (Silver et al., 2016), it is crucial to distinguish between the tasks AI can emulate and those that remain uniquely human. This post provides an analysis of these capabilities and limitations, and also addresses the concept of asymptotic limits in AI model execution.

AI vs. Human Capabilities

What AI can do

AI has showcased an exceptional ability to perform tasks traditionally thought to be the exclusive domain of humans. For example, GPT-3, a large language model developed by OpenAI, can generate coherent text that often closely mirrors human writing (Brown et al., 2020). DALL-E, another OpenAI model, uses an autoregressive transformer to create intricate images from textual descriptions (Ramesh et al., 2021). Even in the field of software development, AI has made strides; DeepCoder, a system developed by Microsoft Research and the University of Cambridge, can synthesize small programs, demonstrating a level of programming ability (Balog et al., 2017).

What AI cannot do

Despite the impressive strides AI has made, it falls short in several areas, particularly those requiring deep intuitive understanding and biological functions. For instance, AI lacks the human capability to deeply comprehend context, to possess empathy, or to make decisions based on moral or ethical considerations (Hofstadter, 2018). The AlphaGo AI, while capable of defeating a world champion Go player, does not understand the meaning or significance of the game beyond its learned patterns and strategies (Silver et al., 2016).

Furthermore, AI’s energy efficiency is far lower than that of humans. Biological systems, unlike AI, do not require a constant, high-power external electricity supply; they efficiently harness chemical energy from food (Milo, 2013). A typical AI data center can draw megawatts of power, while the human brain operates on roughly 20 watts.
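To make the gap concrete, the following back-of-the-envelope sketch compares these figures. The per-GPU wattage (roughly the rated power of one modern data-center GPU) and the cluster size are illustrative assumptions, not measurements of any specific system:

```python
# Back-of-the-envelope power comparison (assumed figures, not measurements).
BRAIN_WATTS = 20    # widely cited estimate for the human brain
GPU_WATTS = 400     # approximate rated power of one data-center GPU
NUM_GPUS = 1000     # hypothetical size of a training cluster

cluster_watts = GPU_WATTS * NUM_GPUS
ratio = cluster_watts / BRAIN_WATTS

print(f"Cluster draws {cluster_watts / 1000:.0f} kW, "
      f"about {ratio:,.0f}x the brain's {BRAIN_WATTS} W")
```

Even this modest hypothetical cluster draws four orders of magnitude more power than the brain, before counting cooling and networking overhead.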

Limitations in AI Model Execution

Asymptotic Limit

As AI models grow more complex, the cost of executing them becomes prohibitively high, and there is an asymptotic limit on how much accuracy can be gained by merely increasing model size (Hestness et al., 2017). For instance, the jump from GPT-2 to GPT-3 involved a roughly 116x increase in parameter count (from 1.5 billion to 175 billion) but did not yield a proportional improvement in performance. This limit poses a significant challenge to the development of AI systems that can truly rival human cognition.
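The diminishing returns described above can be illustrated with a toy power-law model in the spirit of Hestness et al. (2017), where loss falls roughly as a power of model size. The constant and exponent below are illustrative assumptions, not fitted values from any published scaling study:

```python
# Toy power-law scaling: loss(N) = c * N**(-alpha).
# c and alpha are illustrative assumptions, not fitted values.
def loss(n_params, c=1.0, alpha=0.07):
    """Hypothetical test loss as a function of parameter count."""
    return c * n_params ** (-alpha)

gpt2_params = 1.5e9   # GPT-2 parameter count
gpt3_params = 175e9   # GPT-3 parameter count, ~116x larger

size_ratio = gpt3_params / gpt2_params
improvement = loss(gpt2_params) / loss(gpt3_params)

print(f"{size_ratio:.0f}x more parameters -> "
      f"only {improvement:.2f}x lower loss under this toy model")
```

Under these assumed constants, a 116x increase in parameters buys well under a 2x reduction in loss, which is the essence of the asymptotic-limit argument.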

Memory Constraints and Model Limitations

AI models face a critical limitation: they cannot recall interactions that fall outside their input context (Sutskever et al., 2014). This amnesia follows from design constraints, since extending a model’s context significantly inflates computational requirements. For instance, GPT-3’s transformer architecture processes a fixed-length context window of 2,048 tokens; conversation history beyond that window is truncated and effectively forgotten. While this limitation partly reflects current technological constraints, it is also a practical engineering decision. Balancing performance against cost is crucial when deploying large-scale AI systems, and capping a model’s context length is one way to manage that trade-off.
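A minimal sketch of this truncation behavior follows; the tiny window size and whitespace tokenization are simplifications purely for illustration (real models use far larger windows and subword tokenizers):

```python
# Sketch of fixed-context "amnesia": the model only sees the most
# recent tokens. Window size and tokenization are toy simplifications.
CONTEXT_WINDOW = 8  # tiny for demonstration; GPT-3 used 2,048 tokens

def visible_context(conversation_tokens, window=CONTEXT_WINDOW):
    """Return only the tokens that still fit inside the context window."""
    return conversation_tokens[-window:]

tokens = "my name is Ada and I like chess what is my name".split()
print(visible_context(tokens))
# The earliest tokens (including "Ada") fall outside the window,
# so the model can no longer answer "what is my name".
```

The same mechanism explains why long chat sessions gradually lose track of details mentioned early on.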

Future Directions and Conclusion

The rapid evolution of AI technology presents both unprecedented opportunities and challenges. AI’s increasing encroachment into traditionally human-dominated domains, as noted by AI leader Geoffrey Hinton, has sparked fears of human obsolescence (CBC News, 2023). However, as we have examined, there are significant barriers to AI truly eclipsing human intelligence.

The asymptotic limit of model performance highlights that merely increasing model size is not a sustainable path towards achieving more advanced AI. Instead, future research should focus on improving model efficiency and exploring novel architectures. In parallel, efforts should be made to enhance AI’s capability to understand and generate context, which is a cornerstone of human cognition.

Moreover, while current AI models exhibit a form of ‘narrow intelligence’, excelling in specific tasks, they lack the ‘general intelligence’ humans possess, characterized by an ability to transfer knowledge across domains, understand complex emotions, and make ethically informed decisions. It remains an open question whether these traits can be fully realized in AI.

In conclusion, while AI has demonstrated significant capabilities, it remains fundamentally distinct from human cognition and energy efficiency. This post calls for continued interdisciplinary discourse and research to further our understanding of AI’s potential and limitations in the context of human obsolescence.

References

  1. Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
  2. Silver, D., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489.
  3. Brown, T., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33.
  4. Ramesh, A., et al. (2021). Zero-Shot Text-to-Image Generation. arXiv preprint arXiv:2102.12092.
  5. Balog, M., et al. (2017). DeepCoder: Learning to Write Programs. arXiv preprint arXiv:1611.01989.
  6. Hofstadter, D. R. (2018). The Shallowness of Google Translate. The Atlantic.
  7. Milo, R. (2013). What is the power consumption of the human brain? PLoS Biol, 11(6), e1001666.
  8. Hestness, J., et al. (2017). Deep Learning Scaling is Predictable, Empirically. arXiv preprint arXiv:1712.00409.
  9. Sutskever, I., et al. (2014). Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27, 3104-3112.
  10. CBC News. (2023). Canadian artificial intelligence leader Geoffrey Hinton piles on fears that AI could eclipse human intelligence.