The human minds behind the artificial minds | Coffee and theorems
Over the last century, the scientific community has speculated about the possibility of building machines capable of performing intelligent tasks until now reserved for human beings. Although we are not yet close to anything resembling so-called artificial general intelligence, the use of what are known as weak or narrow artificial intelligences (NAIs) is very common, for example in social networks, facial recognition systems and natural language analysis. This year, the Princess of Asturias Award for Scientific and Technical Research recognizes some of the architects of the formulation and development of the algorithms that make this type of intelligence possible.
NAIs are algorithms specialized in certain tasks, in which they can achieve performance far beyond that of human beings. However, they are unable to perform tasks other than those for which they were designed and trained. Today, NAIs carry out tasks that were intractable for machines just twenty years ago, including image recognition and natural language processing.
This milestone was possible thanks to the appearance of neural networks. Originally formulated in the 1940s, it was not until the 1980s that they began to show their great potential. Today, the term neural networks designates a family of algorithms with a high capacity for adaptation and performance, owing both to their mathematical formulation and to the extraordinary increase in computational power since they were first proposed. These algorithms were born with a biological inspiration: their aim was to emulate the human brain. Their most basic building blocks are artificial neurons, which play a role analogous to that of neurons in real nervous systems. These simple units are arranged in parallel, forming processing layers. Learning in neural networks consists of building a system with several of these layers stacked on top of each other and training each neuron individually.
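As a rough illustration, a single artificial neuron can be written as a weighted sum of its inputs passed through a nonlinearity, and a layer as several of these neurons working in parallel. The Python sketch below uses invented names and toy numbers chosen only for this example; it is not the specific formulation used by any of the researchers mentioned here.

```python
import numpy as np

def sigmoid(x):
    # A classic nonlinearity ("activation function") used by early neural networks.
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of its inputs plus a bias,
    # passed through the nonlinearity.
    return sigmoid(np.dot(weights, inputs) + bias)

def layer(inputs, weight_matrix, biases):
    # A processing layer: many neurons working in parallel on the same inputs.
    return sigmoid(weight_matrix @ inputs + biases)

# Toy example: 3 input values feeding a layer of 4 neurons.
rng = np.random.default_rng(0)
x = np.array([0.2, -1.0, 0.5])
W = rng.normal(size=(4, 3))
b = np.zeros(4)
print(layer(x, W, b))  # 4 activations, one per neuron
```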
The number of layers in the system defines the depth of the network, and it is only when there are two or more of them that the term deep learning is used. These deep networks can be applied, even more accurately than single-layer networks, to many kinds of tasks and data sources: from analysing and understanding text, to finding elements in an image and describing the scene in a photograph, and even suggesting new music based on our preferences.
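Continuing the same toy sketch, depth simply amounts to chaining such layers so that the output of one becomes the input of the next; the layer sizes and values below are again arbitrary and purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    # Apply each layer in turn: the output of one layer feeds the next.
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(1)
# Three stacked layers (3 -> 5 -> 5 -> 2): the "depth" comes from the stacking.
layers = [
    (rng.normal(size=(5, 3)), np.zeros(5)),
    (rng.normal(size=(5, 5)), np.zeros(5)),
    (rng.normal(size=(2, 5)), np.zeros(2)),
]
print(forward(np.array([0.2, -1.0, 0.5]), layers))
```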
The work of Geoffrey Hinton, Yann LeCun and Yoshua Bengio, popularly considered the fathers of deep learning, as well as that of Demis Hassabis (CEO and one of the founders of DeepMind, the company behind some of the most important milestones in artificial intelligence), has been crucial both in developing the capabilities of deep neural networks and in our current understanding of them.
Hinton, LeCun and Bengio used the idea of backpropagation to extend the original mathematical formulation of neural networks and allow the training of networks with more than one layer. Hinton introduced this concept in 1986, making it possible, for the first time, for deep neural networks to properly learn a task from a set of data. These techniques let networks correct themselves, so that groups of neurons learn to recognize the relevant features of the input data, combining a design tailored to each task with a series of common training principles. This allows them to exploit intrinsic characteristics of the data and perform specific tasks with high precision. In addition, networks can be designed to pay attention to the spatial arrangement of the data (such as the pixels in an image) or to take into account their temporal succession (such as the meaning conveyed by the order of the words in a sentence).
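As a minimal sketch of this idea (with invented data, layer sizes and learning rate, and a simple squared-error loss, rather than any particular setup used by these authors), a tiny two-layer network can be trained with backpropagation: the prediction error is pushed backwards through each layer to obtain gradients, and every weight is nudged against its gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem, which a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two layers: 2 inputs -> 4 hidden neurons -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the prediction error back
    # through each layer to get the gradient of the loss for every weight.
    err = out - y                        # derivative of the squared error
    d_out = err * out * (1 - out)        # through the output nonlinearity
    d_h = (d_out @ W2.T) * h * (1 - h)   # propagated to the hidden layer

    # Gradient descent: adjust every weight against its gradient.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Predictions should drift toward [0, 1, 1, 0] as training proceeds
# (how quickly depends on the random initialization).
print(np.round(out, 2))
```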
For his part, Demis Hassabis is one of the most influential modern figures in artificial intelligence research. Among other milestones, his company, DeepMind, is the creator of AlphaGo (an artificial intelligence capable of beating human champions at Go), AlphaFold (an algorithm that, in its latest version, can predict the three-dimensional structure of a protein from its amino acid sequence) and Gato (a general agent which, thanks to the power of its formulation, is capable of performing more than 600 different tasks, including chatting with users, playing games, describing images and manipulating robotic arms). The latter, while still far from being a truly intelligent agent, is a promising step towards the holy grail of artificial general intelligence.
Although it is difficult to predict whether DeepMind, or any other company or research center, will eventually succeed in creating artificial general intelligences, the impact of all these techniques has been enormous in the last 20 years and will undoubtedly be even greater in the immediate future. Specifically, the contributions of these four scientists to the field of artificial intelligence have been and will be crucial in shaping modern industrial societies and also in confronting some of the great problems of the 21st century, such as climate change.
Simón Rodríguez is a postdoctoral researcher at the ICMAT.
Ágata A. Timón G. Longoria is coordinator of the Mathematical Culture Unit of the ICMAT.
Coffee and Theorems is a section dedicated to mathematics and the environment in which it is created, coordinated by the Institute of Mathematical Sciences (ICMAT), in which researchers and members of the centre describe the latest advances in this discipline, share meeting points between mathematics and other social and cultural expressions, and remember those who marked its development and knew how to transform coffee into theorems. The name evokes the definition given by the Hungarian mathematician Alfréd Rényi: “A mathematician is a machine that transforms coffee into theorems.”
Editing and coordination: Ágata A. Timón G. Longoria (ICMAT).