War and cyberwar without 'hackers' or soldiers

The mechanization of war, beyond firearms, began in the second half of the 19th century with the use of railways for troop transport during the American Civil War and the European conflicts of the 1860s and 1870s, especially the Franco-Prussian War. World War I represented a major leap forward in the use of self-propelled vehicles, including airplanes, armored cars, tanks, automobiles, and motorcycles. The conflicts that followed throughout the 20th century greatly refined the mechanization of warfare, adding flying bombs, missiles, and nuclear weapons. All of this allowed killing and destruction on an unprecedented scale.

In our century, armies have been equipping themselves with weapons that allow soldiers to avoid the front line and to fight from the second line or even further back. Drones have been the star innovation: they replace manned aviation at a much lower cost and without risking the life of a soldier as specialized as a pilot. These devices are remotely controlled by a pilot and have proven very effective in the wars in Ukraine and Iran.

The latest addition to conventional warfare is artificial intelligence (AI). It involves automating something that until now seemed impossible without human intervention: combat itself. There are now autonomous military drones that need no pilot to control them remotely, but can fly entirely on their own thanks to AI. They are equipped with a multitude of sensors that allow them to navigate without human intervention: inertial sensors, GPS, altimeters, air sensors, lidar, ultrasound sensors, stereoscopic cameras, electro-optical cameras for daytime operation, infrared and thermal sensors for nighttime operation, radars, and more. All the information obtained is fused and processed in real time with AI techniques, including computer vision. This gives them a reliable, real-time perception of the battlefield, which feeds other AI algorithms that decide what to do in unexpected emergency situations and choose targets in complex environments.

The Ukrainian army is also testing “ground drones”, that is, robots that replace infantry on the front line. This is an attempt to alleviate the shortage of soldiers and reduce casualties. The tasks these robots perform include evacuating the wounded and the dead, distributing food and medicine, planting mines, transporting ammunition, launching aerial drones, firing projectiles, building barricades, recovering damaged vehicles, and conducting intelligence operations. Recently, ground robots and aerial drones have even recaptured a position held by the Russians. Since moving in a ground environment is more complicated than flying, most of the robot soldiers are remotely controlled, but some autonomous capabilities are being added.

In the digital world, cyber warfare and cybersecurity were until recently very human activities. This may come as a surprise, but computer programming – and even more so cyberattack and cyberdefense – has long been an artisanal craft. A programmer had to write the code from the first instruction to the last, as well as ensure that it compiled without errors and performed the desired function. Becoming a hacker, in turn, required years of training, and finding security vulnerabilities meant spending a great deal of time experimenting. While ethical hackers reported vulnerabilities to software manufacturers so they could fix them, the not-so-ethical ones, often employed by states or companies, hoarded "zero-day" vulnerabilities (flaws not yet discovered by anyone else). All this is changing with generative AI. Increasingly, large language models are able not only to write programs but also to identify vulnerabilities. As early as 2024, GPT-4 could exploit 87% of the vulnerabilities presented to it, a great advance over the 0% achieved by GPT-3.5. Anthropic's Claude Opus 4.5 model was able to find more than 500 exploitable zero-day vulnerabilities, while Claude Mythos, the most recent model, seems to go further, although many experts say its effectiveness has been exaggerated. Perhaps as a marketing move, Anthropic has said it will only sell Mythos to a few large software companies, because it would be dangerous for everyone to have it (which, incidentally, places those companies in a position of superiority over their competitors).

In summary, we are witnessing the automation of conflict in both the physical and digital worlds. Where does the human factor fit in? In my opinion, neither attackers nor defenders can leave everything in the hands of AI: if my adversary and I have the same AI, but he also has specialist personnel, he will win. It is worth remembering, however, that an attacker only needs to find one loophole, while the defender must cover them all.