Today, every conceivable gap in our professions is scrutinized against the real or imagined potential of artificial intelligence. Our technological appetite, which leads us to spend the equivalent of a month's minimum wage on the latest Apple phone, fades when we read the obituaries of the professions concerned. It shrinks when we discover that this science can produce, in the United States, a system that instantly identifies your little one's face in a photograph. It dries up completely when we think of a Russian prototype of an autonomous tank in which humans may no longer be in charge. For our future shared with machines, the ethical framework we must impose on ourselves rests on a triptych: social justice, freedom and responsibility. With this compass in hand, could artificial intelligence allow us to find the long-lost path of trust in the field of cybersecurity?
The media uproar over major attacks on certain computer systems, carried out for criminal or geopolitical motives, was soon followed by the discovery that a local company had lost several months of activity to a ransomware attack. We went from "it only happens to others" (who are, by definition, reckless and poorly protected) to "it happened to my neighbor!"
There are three causes for this uncontrolled escalation.
First, the explosion in the volume of data and in the devices used to access it: in other words, the attack surface. In 1992, only 100 gigabytes of data were generated per day; in 2018, 50,000 gigabytes were created every second. If the figures are too abstract, imagine that in 1992, as a young student, you were renting, and therefore protecting, a magnificent 20 m² studio in Paris. A quarter of a century later, you have succeeded magnificently in your career, or the family has grown a little too much, since your new apartment approaches in surface area the equivalent of… Paris and its inner suburbs. As an attentive parent, you obviously feel the same care and responsibility to protect your property.
The second phenomenon is our defensive capability. In 1990, the most widespread protection products were antivirus programs such as Norton, McAfee and the like. Today, many companies continue to use the same products which, even if they have evolved, still rely on signatures of already-known threats, checked sequentially against each of your files. If in 1992 you chose a peephole to verify that the person visiting your studio was the one you expected, today, despite your immense fortune, you hesitate to install millions of digital peepholes in your apartment the size of a department, the most up-to-date version of your old bull's-eye window. And your myriad children remain reluctant to watch on screen, day and night, the comings and goings of your visitors. Ungrateful youth.
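The limit of this sequential, signature-based approach can be sketched in a few lines. The signatures, file names and byte patterns below are invented for illustration; real antivirus engines are far more sophisticated, but the principle of matching files against a list of already-known threats is the same:

```python
# Hypothetical sketch of classic signature-based scanning: each file is
# checked sequentially against a catalogue of already-known threat patterns.
# Signatures and files here are made up for the sake of the example.

KNOWN_SIGNATURES = {
    "EICAR-Test": b"X5O!P%@AP",        # prefix of the standard test string
    "Fake-Worm": b"\xde\xad\xbe\xef",  # invented byte pattern
}

def scan(files):
    """Return {filename: matched signature names} for known threats only."""
    report = {}
    for name, content in files.items():        # one file at a time...
        hits = [sig for sig, pattern in KNOWN_SIGNATURES.items()
                if pattern in content]         # ...against each known pattern
        if hits:
            report[name] = hits
    return report

files = {
    "invoice.pdf": b"%PDF-1.4 harmless content",
    "game.exe": b"header \xde\xad\xbe\xef payload",
    "novel_threat.bin": b"brand-new malicious bytes, no known signature",
}
print(scan(files))  # {'game.exe': ['Fake-Worm']}
```

Note what the report does not contain: the novel threat, absent from the signature list, passes unnoticed. That blind spot is precisely the gap the rest of this article addresses.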
Finally, the last subject is the democratization of the threat. My dear sir, in the good old days, the virus writer was a gentleman of code: when the itch seized him he drew his sword, and at the end of the envoi he struck home, but only with a buttoned foil. He chiseled his code in assembler, savoring each instruction that danced to the eye and sang to the ear. Today, the first clever teenager can find on the Internet the manual, complete with illustrated appendices, for ready-to-use tools that let him probe, industrially and effortlessly, every one of your potential flaws. No doubt he will find some. The glory of a Cyrano among hackers is no longer possible, since his anonymity is guaranteed. To use one last metaphor: your oversized home, protected like a medieval bastion by your admittedly zealous vigilance, is attacked every millisecond by a tireless mobile army with disparate motivations, equipped with low-cost Kalashnikovs from the nearest cellar. It arrives with lightly modified commercial drones and everything that can serve as a makeshift weapon, where you were expecting, on a dark night, the burglar's crowbar and the oxyhydrogen torch sung by Boris Vian. At this point, I can only recommend that you take your eye away from the peephole.
Artificial intelligence, born just after the Second World War, is a data-hungry science. The larger the volume of data, and the more meaningful it is, the more effective it will be. Its emergence can be plotted on a graph parallel to the growth in the surface area of our data. The principle of learning-based approaches is that, through repeated examples, data are paired with the expected result. This fits a set of parameters that traces an average path between all your data and the result you expect.
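That principle of repeating examples until the parameters trace an average path can be shown with a minimal sketch. The toy dataset, learning rate and linear model are illustrative choices, not anything prescribed by the text:

```python
# Minimal sketch of learning by repeated examples: pair each input with its
# expected result and nudge the parameters until the model traces an average
# path through the data. Dataset and learning rate are illustrative.

def train(examples, lr=0.01, epochs=2000):
    """Fit y ~ w*x + b by gradient descent on the squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:          # repeat the examples...
            err = (w * x + b) - y      # ...compare with the expected result
            w -= lr * err * x          # ...and adjust the parameters
            b -= lr * err
    return w, b

# Toy data roughly following y = 2x + 1, with noise the model averages out.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
w, b = train(data)
print(w, b)  # close to 2.0 and 1.0
```

The learned pair (w, b) is exactly the "set of parameters" described above: no single point is memorized; an average path between the data and the expected results is retained.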
With our compass in hand, let us illustrate the theory. Our dataset is everything that circulates on your computer network; we are talking about gigabytes of data. A given user received so many e-mails, at such a time. A given computer sent or received a certain amount of data. A given piece of network equipment behaved in a certain way, with certain sources using it and certain destinations receiving certain volumes. The circulation of this data across all the equipment and users follows established logics and routes that correspond to a normality. We therefore associate a snapshot of these data with an understood situation. The success of your artificial intelligence will be to detect the needle in this haystack of data: the abnormal situation. A peak in e-mail reception at a certain time of day. A computer saturated with outgoing data. A piece of equipment choked with data traveling over an unusual route.
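A hedged sketch of that needle-in-the-haystack detection, using the e-mail example: learn what a normal hourly volume looks like, then flag hours that deviate sharply from it. The counts and the three-standard-deviation threshold are illustrative assumptions, not a production detector:

```python
# Toy anomaly detector for the "peak in e-mail reception" example:
# model normality as mean +/- k standard deviations of hourly counts,
# and flag any hour that strays outside that band.
from statistics import mean, stdev

def anomalies(counts, k=3.0):
    """Return the indices of hours whose count strays k std-devs from the mean."""
    m, s = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - m) > k * s]

# 24 hourly e-mail counts: a quiet, regular day except a burst at 03:00.
hourly = [12, 10, 11, 480, 9, 13, 12, 11, 10, 12, 14, 11,
          13, 12, 10, 11, 12, 13, 11, 10, 12, 11, 13, 12]
print(anomalies(hourly))  # [3] — the 03:00 spike is the abnormal situation
```

Real systems model many such signals at once (volumes per machine, routes per equipment), but the shape is the same: a snapshot of the data is compared against learned normality, and only the deviation is surfaced.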
This intelligence will be extremely effective, yet it will not replace you, since you have never been able, and never will be able, to analyze this volume of data. It complements us. Whether for e-mails or any other type of content, it is not the content itself that is viewed or analyzed: that is of no interest. What interests us is the detection of weak signals within this mass of data. It does not infringe on our freedom. Its mission is to inform us, not to decide for us. It leaves us the responsibility for every decision.
Artificial intelligence is one way to protect ourselves from this ubiquitous threat to our companies. It requires mobilizing cybersecurity actors, our best scientists in the field, and public actors, including operators of vital importance (OVI in the State's classification), to take ownership of the resulting products. Our country may yet write a unique page on this subject: better protecting our common good, strengthening our research, and turning our security software makers into unicorns.