What it means to be human is elusive

Photograph: John Walton/PA

Intelligent machines have been serving and enslaving people in the realm of imagination for decades. The all-knowing computer – sometimes benign, usually malevolent – was a staple of the science fiction genre long before such an entity was feasible in the real world. That moment may now be approaching faster than societies can write appropriate rules. In 2023, the capabilities of artificial intelligence (AI) caught the attention of a wide audience far beyond technology circles, largely thanks to ChatGPT (which launched in November 2022) and similar products.

Given how quickly progress is moving in this field, that fascination is sure to intensify in 2024, along with alarm over the apocalyptic scenarios that could unfold if the technology is not properly regulated. The closest historical parallel is humanity’s acquisition of nuclear energy, and the challenge posed by AI is arguably greater: going from a theoretical understanding of how to split the atom to assembling a reactor or a bomb is difficult and expensive, whereas malicious code can be transmitted and replicated online with viral efficiency.

The worst-case scenario – a human civilization that accidentally programs itself into obsolescence and collapse – remains the stuff of science fiction, but even a low probability of catastrophe must be taken seriously. Meanwhile, harm on a more mundane scale is not only feasible, it is already present. The use of AI in automated systems for administering public and private services risks embedding and amplifying racial and gender biases. An “intelligent” system trained on data skewed by centuries of white men dominating culture and science will produce medical diagnoses, or evaluate job applications, according to criteria with bias built in.

This is the less glamorous end of concern about AI, which perhaps explains why it receives less political attention than lurid fantasies of robot insurrection, but it is also the most urgent task for regulators. While in the medium and long term there is a risk of underestimating what AI can do, in the short term the opposite tendency – being unnecessarily intimidated by the technology – gets in the way of acting quickly. The systems now being deployed across all kinds of spheres, producing useful scientific discoveries as well as sinister fake political propaganda, rest on concepts that are tremendously complex at the level of code, but not conceptually unfathomable.

Organic nature
Large language model technology works by absorbing and processing vast sets of data (much of it scraped from the internet without the permission of the original content producers) and generating solutions to problems at astonishing speed. The end result resembles human intelligence but is in reality a brilliantly plausible synthetic product; it has almost nothing in common with the subjective human experience of cognition and consciousness.
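To make that description concrete, the sketch below is a deliberately toy illustration in Python of the statistical principle at the heart of such models: count which word tends to follow which in the training data, then sample from those frequencies. It is not a description of any production system – the `training_text` corpus and the `generate` helper are invented for this example – but it shows how plausibility can emerge from pattern and frequency alone, with nothing that “understands” a sentence.

```python
import random
from collections import defaultdict

# A toy stand-in for a training corpus (invented for illustration).
training_text = (
    "intelligent machines have been serving people "
    "intelligent machines have been enslaving people"
)

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:  # no observed continuation; stop early
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("intelligent"))
# e.g. "intelligent machines have been enslaving people"
```

Real systems replace the frequency table with a neural network trained on billions of documents, but the output is likewise a recombination of patterns present in the training text – which is why it can sound human without sharing anything of human cognition.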

Some neuroscientists plausibly argue that the organic nature of the human mind – the way we have evolved to navigate the universe through the biochemical mediation of sensory perception – is so qualitatively different from a machine’s modeling of an external world that the two experiences will never converge.

That doesn’t stop robots from surpassing humans in performing increasingly sophisticated tasks, which is clearly happening. But it does mean that the essence of what it means to be human is not as soluble in the rising tide of AI as some gloomy forecasts imply. This is not just an abstruse philosophical distinction. To manage the social and regulatory implications of increasingly intelligent machines, it is vital to maintain a clear sense of human agency: where the balance of power lies and how it might change.

It’s easy to be impressed by the capabilities of an AI program and forget that the machine is executing instructions devised by a human mind. Processing speed is the muscle, but the animating force behind the wonders of computing power is imagination. The answers ChatGPT gives to difficult questions are impressive because the question itself is one that impresses the human mind with its infinite possibilities. The actual text is often banal, even rather stupid, compared with what a qualified human could produce. The quality will improve, but we should not lose sight of the fact that the sophistication on display is our own human intelligence reflected back at us.

Ethical impulses
This reflection is also our greatest vulnerability. We readily anthropomorphize robots, projecting onto them conscious emotions and thoughts that do not actually exist; that is also how they can be used to deceive and manipulate us. The better machines become at replicating and surpassing human technical achievements, the more important it is to study and understand the nature of the creative impulse, and the way societies define and hold themselves together through shared experiences of the imagination.

The more robotic capabilities extend into our daily lives, the more imperative it becomes to understand and teach future generations about culture, art, philosophy, and history—fields that are called the humanities for a reason. While 2024 will not be the year that robots take over the world, it will be a year of growing awareness of the ways in which AI has already been integrated into society and of demands for political action.

The two most powerful engines currently accelerating the development of technology are the commercial race for profit and the competition between states for strategic and military advantages. History teaches that such impulses are not easily constrained by ethical considerations, even when there is an explicit declaration of intent to proceed responsibly. In the case of AI, there is a particular danger that public understanding of science will not be able to keep pace with the issues facing policymakers. That can lead to apathy and lack of accountability, or moral panic and bad laws. That’s why it’s vital to distinguish between the science fiction of omnipotent robots and the reality of brilliantly sophisticated tools that are ultimately taught by people.

Most non-experts struggle to grasp the inner workings of super-powerful computers, but that is not the qualification needed to decide how the technology should be regulated. We do not need to wait and see what robots can do: we already know what it is to be human, and we know that the power for good and evil lies in the decisions we make, not in the machines we build.
