Artificial intelligence (AI): a fascinating (re)discovery
Artificial intelligence (AI) has attracted enormous interest in recent years. As its potential has become clear, it has turned into a tool of choice for anti-aging research.
The principle behind what we call AI is not actually new. In 1936, Alan Turing stated that everything that is computable can be computed by a logical machine[2]. This idea is the common foundation of all modern computer science.
Later, in 1943, McCulloch and Pitts described the working of neurons in terms of electrical circuits, and the idea of “neural networks” took shape. The perceptron, born in 1957, was the first system that could be described as artificial intelligence: it could learn from experience. For a long time this computational approach was neglected, due to technical limitations and skepticism.
Today, we use AI in almost every area that calls for a statistical approach. Big Data, i.e. very large collections of data, is the pantry from which AIs feed.
In short, an AI is a statistical analysis tool that works via neural networks, among other techniques. A neural network gathers data from a variety of sources, applies transformations to them, and then either outputs a result or passes it on to the next layer of neurons, in the case of a multi-layer network.
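To make this concrete, here is a minimal sketch of such a forward pass in Python with NumPy. The layer sizes, the ReLU activation and the random weights are illustrative choices only; a real network would learn its weights from data.

```python
# A minimal sketch of the forward pass described above: each layer
# takes inputs, applies a weighted transformation, and either outputs
# a result or hands it to the next layer.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Two layers of weights and biases (randomly initialized here;
# in practice they would be adjusted during training).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # inputs -> hidden layer
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden layer -> output

def forward(x):
    h = relu(x @ W1 + b1)   # first layer transforms the raw inputs
    return h @ W2 + b2      # second layer produces the final output

x = np.array([0.5, -1.2, 3.0, 0.7])  # one data point with 4 features
print(forward(x))
```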
Talking about AI “in general” doesn’t necessarily make sense. An AI is in fact an algorithm, and its capabilities depend entirely on how that algorithm is programmed. There are many methods, depending on what you are trying to achieve[3]. “Deep learning”, for instance, is a branch of machine learning that involves large quantities of data and heavy processing capability. And once the information has been assimilated, we can shape the type of neural network being programmed, and thereby change how that information is processed.
There are neural networks that learn in a supervised way, and networks that learn in an unsupervised way.
The big difference is that an unsupervised network does not need labelled answers from the programmer: the AI tries to extract a “meaning” from the data on its own (a meaning the data may not actually have). Reinforcement learning, often mentioned alongside these, is a distinct approach in which the system learns from reward signals rather than from examples.
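As an illustration of the difference, here is a small, hedged sketch using scikit-learn on synthetic data: the supervised model is handed the “right answers” (labels), while the unsupervised one must find structure by itself. The dataset and model choices here are purely illustrative.

```python
# Supervised vs. unsupervised learning on the same synthetic dataset.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised: the programmer supplies the "right answers" (labels y).
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels; the algorithm looks for structure on its own
# (structure which, as noted above, the data may not actually contain).
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("unsupervised clusters:", km.labels_[:5])
```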
We know that the amount of data generated by the Internet is huge. The Google search engine processes more than 40,000 searches per second, about 4,150,000 videos are viewed on YouTube every minute, and Instagram users post 46,740 photos per minute: all this to say that we have no shortage of data. But the data still has to be useful.
These examples underline a crucial truth: more and more people are using IT tools. As a result, more and more data is being generated and stored.
This also applies to hospitals and health centres, which are finally becoming more computerized.
Approaches such as microfluidics can also be very effective at producing biological data. This technology makes it possible to study cell responses at a microscopic scale on tiny replicas of our tissues: biotechnology devices called organs-on-chips. Organs-on-chips are a good way to take over after the data analysis, narrowing the range of possibilities that artificial intelligence proposes. They make it possible to test a candidate molecule in vitro quickly and efficiently, without the need for animal testing. The results of the data analyses are thus tested on the organs-on-chips, and at the end of this step only an amount of information remains that the human brain can actually process, with some assurance that it is relevant. After all, how could we ourselves take in the volumes of data on which we feed AIs?
These masses of data make no sense as they are: we are simply not able to take in such large amounts. Consider the list of biological interactions and mechanisms that make our bodies work: we cannot hope to predict anything just by staring at this gigantic tangle of information and interactions. The extreme complexity of living organisms demands the very large amounts of data generated during analyses. Not to mention the data we don’t even have…
Managing and organizing data for future use is a major challenge for Big Data and its integration into AI-based analysis[4]. Could artificial intelligence itself help us meet these challenges?
In the field of aging and medicine, the potential of artificial intelligence is considerable. It lets us make connections we would not otherwise have seen, identify invisible trends, propose models and generalizations, and pin down causal relationships… all of which can be extremely valuable for preventive and regenerative medicine[1, 6].
As mentioned above, the data we produce can no longer be processed by our primate brains. Our brains are conditioned by our evolutionary history and are not capable of everything. To analyze a phenomenon as complex and as multifactorial as aging, we need help processing the resulting mountains of data.
Statistics can help us extract meaning from large amounts of information. Neural networks, which originated in artificial intelligence research, are one of these statistical tools, and the most efficient to date: we need the computational and synthesis capabilities of these algorithms. What is more, they sometimes reveal our own cognitive biases.
This is the case, for example, with Word2Vec, an AI from Google whose purpose is to capture the meaning of words, their semantics. For Word2Vec, “doctor” − “man” + “woman” ≈ “nurse”. The result may seem unfair, even sexist, but artificial intelligence does not do morals, it does statistics. What the algorithm reveals is not its own bias, but ours, reflected in the statistical sample.
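For the curious, this kind of word-vector arithmetic can be reproduced with the gensim library and its published pretrained embeddings. The model name below is one of gensim’s standard downloads; the exact nearest neighbours returned depend on the training corpus, so treat this as a sketch rather than a guaranteed result.

```python
# Word-vector arithmetic with gensim's pretrained word2vec embeddings.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large one-time download

# "doctor" - "man" + "woman" ~= ?  The model answers with whatever word
# lies closest in the embedding space -- a mirror of the training corpus,
# biases included.
result = vectors.most_similar(positive=["doctor", "woman"],
                              negative=["man"], topn=3)
print(result)
```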
AIs may therefore be our best current chance of making sense of the data we collect. This is the challenge taken up by BioViva, a company founded by Elizabeth Parrish that specializes in analyzing medical data and developing custom treatments. Since 2015, data management has evolved enormously, and bioinformatics tools now allow BioViva and its doctors to offer treatments adapted to each patient, with personalized medicine as the ultimate goal. The aim is to provide fast access to effective treatments, especially gene therapies against aging.
AI has moved from irrational enthusiasm to proof of concept, and is now being embraced across many fields, the medical field in particular. The MouseAge project, for instance, photographs mice every day so that a deep neural network can build an “aging clock” for them. The algorithm is tested through a mobile application; potential aging markers are then extracted from the photos before the data is stored. The next step is to apply these markers to other model organisms and then to humans: once equivalent markers are identified in our species, proven algorithms could assess human aging from photographs.
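The general shape of such an image-based aging clock is easy to sketch. The PyTorch model below is not the MouseAge algorithm, just a minimal, hypothetical example of a convolutional network regressing age from a photo; all layer sizes are illustrative.

```python
# A minimal sketch of an image-based "aging clock": a convolutional
# network regressing a single number (age) from a photo.
import torch
import torch.nn as nn

class AgingClock(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to one value per channel
        )
        self.head = nn.Linear(32, 1)   # single output: predicted age

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

model = AgingClock()
photo = torch.randn(1, 3, 128, 128)    # stand-in for a mouse photo
predicted_age = model(photo)
print(predicted_age.shape)             # torch.Size([1, 1])
```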
The first “mainstream” deep learning algorithms were image analysis algorithms. Today, they have gathered enough data to outperform a dermatologist looking at a picture of our skin. Another example is Arterys Cardio DL, an artificial intelligence assistant for analyzing cardiac radiological images; it has been approved by the FDA (the US Food and Drug Administration) and is widely used today, limiting human error.
Many companies have developed aging biomarkers that combine multiple factors, ranging from blood tests and microbiota sequencing to voice recordings and retinal scans. Multiplying the factors used for decision-making increases the robustness of the calculations and estimates, as long as we remain able to give each element its fair weight relative to the others.
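Weighting heterogeneous factors is, in its simplest form, what a regularized linear model does. The sketch below uses synthetic data with hypothetical biomarker columns to show how such a model assigns each factor its relative weight; nothing here comes from any specific company’s method.

```python
# A hedged sketch: combining several biomarker readouts into one
# age estimate, letting a linear model learn each factor's weight.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 200

# Hypothetical stand-in columns: blood panel score, microbiome index,
# voice feature, retinal scan score -- plus noisy "true" ages.
X = rng.normal(size=(n, 4))
age = 50 + X @ np.array([5.0, 3.0, 1.5, 4.0]) + rng.normal(scale=2.0, size=n)

model = Ridge(alpha=1.0).fit(X, age)
print("learned weight per factor:", model.coef_.round(2))
```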
Another approach is to compare species with one another. The objective is for the AI to find recurring patterns across populations that modulate the aging curve in different species, in other words, to find biomarkers. This could also help identify evolutionary trends related to aging.
Beyond these applications, artificial intelligence can analyze therapeutic treatments, develop new ones and predict their effects[6]. AIs are also getting better and better at finding protein conformations, i.e. how proteins arrange themselves in space: a very complex task, one that once relied on the efforts of many volunteers on the collaborative application Foldit, for example.
The creative side of AI seems increasingly accepted, as with move 37 of AlphaGo[8], which surprised professionals before proving decisive about a hundred moves later, or the recent progress of AlphaStar[9] at StarCraft II, and of DeepMind more generally (including on protein conformation).
These developments show the creative, counter-intuitive and yet remarkably effective behaviour that machine learning systems can now be coded to exhibit.
Artificial intelligence is like every “new” discipline: it is currently the subject of debates about its use, its scope, even its capabilities. These debates are closely tied to data because, as we have seen, the learning phase requires a lot of it. The question of how personal data is used is complex and involves many web actors, as we have seen with the GDPR, Europe’s data protection regulation.
AI can be used to make predictions and to target behaviour. For example, through predictive analysis of their Internet history, some algorithms have guessed that women were pregnant before the women themselves knew[5]. Let us not forget that AIs are bound by the code we write and by the data they are exposed to; the results can therefore sometimes be surprising at first sight.
Several scientists and philosophers (Ray Solomonoff among them) have taken an interest in the “ethics of AIs” and in ways to program them so as to limit their potential negative impact[7]. But the morality of an AI is only that of its developers, according to their needs and desires.
At a time of genomic medicine, behavioural analysis and automated suggestions (videos, products, buzz), some worry about how health data will be used. In particular, they fear that insurance companies may claim the right to refuse to cover people with genetic predispositions to certain diseases.
Will recommendation algorithms such as YouTube’s continue to maximize raw viewing time without taking the videos themselves into account?
Lê, of the Science4All channel, suggests that YouTube could modify its recommendation algorithm[7]. This would allow users to be steered towards higher-quality or more critical content, for example. If the suggestion algorithm favoured content on tackling climate change, one might think the effect would be broadly positive; however, this raises problems of legitimacy, of influence, and of YouTube’s authority over its users. Recently, YouTube announced a change to its recommendation algorithm to reduce the visibility of conspiracy theories. But how will the algorithm manage to discriminate among so much content?
Artificial intelligence is therefore a field of study that can be extremely powerful and useful. As with any tool, there is a significant risk of problematic uses. It is therefore necessary to reflect on these questions in order to anticipate them, and so make optimal use of the power of artificial intelligence in research and in the fight against aging.