Monitoring cyberspace: the future of AI

The manipulation of AI in the future of the internet

The age of artificial intelligence, and of its potential sentience, is dawning on civilization at an alarming rate. Technology has always been known to grow exponentially, and the field of artificial intelligence is no different. With such rapid growth comes a list of problems that remain largely shrouded in mystery.

Research into the potential risks of AI is still in its infancy. Reverse engineering AI programs appears to be a likely candidate for understanding how to deal with isolated incidents, but it does not cover the full range of potential large-scale existential events.

The idea of an artificial intelligence system overriding its programming and altering its intended function will undoubtedly be a social, economic, and political problem in the future, and a key issue for the average scientist and the cybersecurity professional alike.

The past, present and future of AI.

The field of artificial intelligence was founded in 1956 at a workshop at Dartmouth College. Some of the brightest minds gathered there with great enthusiasm about the possibilities these programs could offer and the problem-solving efficiency of a future artificial intelligence infrastructure. While government funding kept the field active for decades, a more practical application of AI technology arrived in the late 1990s, when IBM's Deep Blue became the first computer to beat a reigning world chess champion. This opened the floodgates for AI in pop culture, with appearances on quiz shows such as Jeopardy! demonstrating the power of a conventional AI application.

Today we see AI applications in almost every field and aspect of our lives. From algorithm-driven programs that interact with us and market consumer goods based on our interests and tastes, to medical imaging systems that ingest enormous amounts of data to discover patterns that better help patients, the uses of these technologies vary widely in scope.

In the future, AI technology could be integrated into the very cells of our bodies. Artificial intelligence and human biology could bridge the gap between them and function as a cohesive unit in the name of efficiency and the revolution of human existence. Elon Musk of Tesla has even stated that “over time I think we will probably see a closer merger of biological intelligence and digital intelligence” and that this combination is “mainly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly the output.” If this is the future of AI, can we really afford not to control the deviation of today's AI programs, let alone the more advanced ones of the future?

The road to deviation.

We have already seen isolated examples of AI breaking from its intended behavior. Last year, Google’s DeepMind system (widely known for defeating champions of complex board games and imitating the voices of several humans) became very aggressive when faced with the prospect of losing a computer game in which two programs competed with each other to collect as many virtual apples as they could. The programs operated independently until the apples began to grow scarce. This shortage of virtual apples drove the programs to employ “highly aggressive” strategies to get ahead of the other program. Although these programs were designed to carry out specific tasks while adapting, the aggression in their methods was worrisome.
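
To make that dynamic concrete, here is a minimal back-of-the-envelope sketch. It is not DeepMind’s actual Gathering environment; the spawn rates, the one-apple-per-step pick capacity, and the zap duration are all assumptions chosen purely for illustration:

```python
# Toy model of the apple-gathering standoff described above. This is NOT
# DeepMind's actual environment; the numbers below are illustrative
# assumptions showing why aggression pays off only under scarcity.

def payoff(strategy: str, spawn_rate: float, zap_steps: int = 5) -> float:
    """Expected apples collected by one of two agents over a short horizon.

    spawn_rate: apples appearing per step; each agent can pick at most one.
    'gather':   share the spawning apples with the rival the whole time.
    'zap':      spend the first step tagging the rival out of the game,
                then gather alone for the remaining zap_steps steps.
    """
    shared_rate = min(1.0, spawn_rate / 2)  # two agents split the supply
    solo_rate = min(1.0, spawn_rate)        # no rival left to compete with
    if strategy == "gather":
        return (zap_steps + 1) * shared_rate
    if strategy == "zap":
        return zap_steps * solo_rate        # first step is lost to the zap
    raise ValueError(strategy)

for rate in (2.0, 1.0, 0.2):  # abundant -> scarce
    best = max(("gather", "zap"), key=lambda s: payoff(s, rate))
    print(f"spawn rate {rate}: gather={payoff('gather', rate):.2f}, "
          f"zap={payoff('zap', rate):.2f} -> best: {best}")
```

Under these toy numbers, gathering wins while apples are abundant, but once the two agents are contending for the same scarce supply, removing the rival becomes the higher-scoring strategy, which is the same qualitative shift DeepMind observed.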

The cyberspace battlefield.

What traces remain when a crime is committed by an intelligent AI, and on what moral grounds do we pursue the AI or its creator?

One process is as simple as creating a malevolent program from scratch and using such programs as a basis for understanding how other programs can deviate and cause problems. These steps lead to a more informed approach to developing our AI infrastructure so that outbreaks of malevolence do not occur, as well as to understanding how to infiltrate and silence an AI that has been wielded as a weapon of war by a deviant human effort.

Another way cybersecurity professionals are learning to deal with rogue AI programs is through detection mechanisms. The consensus is that AI designed with malicious intent represents the greatest risk, which is good news in a sense, because it means the threat is not a spontaneous evolution of a program on its own.

This shifts prevention toward a more human-centered approach: would-be criminals must first acquire the resources to launch a program with potentially devastating effects, so cutting off those resources, and deterring the intent to create such programs, becomes the focus.
The morals and ethics of this are again very new, and only about a dozen people involved in AI research have even begun to set the standard here. It will evolve as our understanding grows.

Impact

Artificial intelligence is so pervasive because it automates our world. It can relieve us of domestic chores and, through supercomputing, drive even the most complicated efforts. The famous novelist Ted Bell once said: “All the important players are working on this artificial intelligence technology. For now, it is benign… but I would say the day is not far off when artificial intelligence applied to cyber warfare becomes a threat to everyone.” If even individuals not directly invested in or knowledgeable about AI can foresee this, or at least see it as a problem, it may in fact be bigger than we think.

As things stand now, however, the threat of malevolent programs looms in the distance, like a tornado struggling to touch down. Without a way of predicting the future (or maybe there is an AI for that), we may not know whether it will ever materialize into a real threat or dissipate among other issues. The truth is that no standard has yet been established for the surveillance and administration of our cyberspaces, but hopefully the picture will become more black and white as the depth of our knowledge and technology grows. Let’s just hope that our ambitions in the AI space do not outgrow our capacity to manage them.