Billionaires want to use technology to improve our abilities: the results could change what it means to be human


Many prominent people in the tech industry have talked about the growing convergence between humans and machines in the coming decades. For example, Elon Musk has reportedly said that he wants humans to merge with AI “to achieve a symbiosis with artificial intelligence.”

His company Neuralink aims to facilitate this convergence so that humans are not left “behind” as technology advances in the future. While people with disabilities would be the short-term recipients of these innovations, some believe that technologies like this could be used to improve everyone’s abilities.

These goals are inspired by an idea called transhumanism, the belief that we should use science and technology to radically enhance human capabilities and seek to direct our own evolutionary path. Illness, aging and death are realities that transhumanists want to end, in addition to drastically increasing our cognitive, emotional and physical capacities.

Transhumanists often advocate the three “supers”: superintelligence, superlongevity, and superhappiness, the latter referring to ways to achieve lasting happiness. There are many different views among the transhumanist community about what our continued evolution should look like.

For example, some advocate uploading the mind digitally and settling the cosmos. Others think we should remain organic beings but reconfigure or improve our biology through genetic engineering and other methods. A future of designer babies, artificial wombs and anti-aging therapies beckons these thinkers.

All of this may seem futuristic and fantastical, but rapid advances in artificial intelligence (AI) and synthetic biology have led some to argue that we are on the verge of creating such possibilities.

A divine role

Tech billionaires are among the biggest promoters of transhumanist thinking. It’s not difficult to understand why: they could be the central protagonists of the most important moment in history.

The creation of so-called artificial general intelligence (AGI), that is, an artificial intelligence system that can perform all the cognitive tasks that a human being can perform and more, is a current focus in Silicon Valley. AGI is considered vital in allowing us to take on the divine role of designing our own evolutionary future.


That’s why companies like OpenAI, DeepMind, and Anthropic are rushing toward developing AGI, even though some experts warn it could lead to human extinction.

In the short term, the promises and dangers are probably overstated. After all, these companies have a lot to gain by making us think they are about to engineer a divine power that can create a utopia or destroy the world. Meanwhile, AI has helped fuel our polarized political landscape, with disinformation and more complex forms of manipulation made more effective by generative AI.

In fact, AI systems are already causing many other forms of social and environmental harm. However, AI companies rarely want to address these harms. If they can get governments to focus on potential long-term “security” issues related to potential existential risks rather than actual social and environmental injustices, they will benefit from the resulting regulatory framework.

But if we lack the capacity and determination to address these real-world harms, it is hard to believe we could mitigate the far larger risks that AGI could hypothetically enable. And if AGI really does pose an existential risk, everyone would bear that cost, while the gains would remain largely private.

A familiar story

This dynamic within AI development can be seen as a microcosm of why the broader transhumanist imagination appeals to billionaire elites in a time of multiple crises. It reflects a refusal to engage with grounded ethics, injustices and challenges, offering instead a grandiose narrative of a bright future to distract from the present moment.

Our misuse of the planet’s resources has triggered a sixth mass extinction of species and a climate crisis. Furthermore, ongoing wars with increasingly powerful weapons remain part of our technological evolution.

There is also the pressing question of who the transhuman future will be for. We currently live in a very unequal world. Transhumanism, if developed in a context similar to the current one, will likely greatly increase inequality and may have catastrophic consequences for most humans.

Perhaps transhumanism itself is a symptom of the type of thinking that has created our unfortunate social reality. It is a narrative that encourages us to step on the accelerator, further expropriate nature, continue to grow, and not look back at the devastation in the rearview mirror.

If we are truly about to create an improved version of humanity, we should start asking some important questions about what it should mean to be human, and therefore what an improved humanity should entail.

If humans aspire to be gods, they claim dominion over nature and the body, bending everything to their desires. But if the human being is an animal immersed in complex relationships with other species and with nature more broadly, then "improvement" depends on the health and sustainability of those relationships.

If human beings are understood as an environmental threat, then improvement surely means redirecting their exploitative ways of life. Perhaps becoming more than human should mean becoming a far more responsible humanity.

One that shows compassion and awareness towards other forms of life on this rich and wonderful planet. That would be preferable to colonizing and expanding, with great arrogance, at the expense of everything and everyone else.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Alexander Thomas does not work for, consult with, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
