Indio San @ Envisioning
Machines have long been part of our lives, whether as tools or as something more. If you have ever thought that machines are completely different from us, it may be time to reconsider. From Alexa and Siri to industrial settings and automated farms, AI will impact, and is already impacting, everything and everyone, including you. Will AI take over our jobs? Yes. Will it become an inseparable part of ourselves? Yes. In fact, it is already doing so while you read this piece. Robots have been interacting with and easing human activity for decades.
Artificial intelligence is implanted in our imagination. The plots created by science fiction were mistakenly dismissed as far-fetched, dystopian scenarios that would never become reality; as we now see, that is not true. These very same robots, embedded with an intelligence sometimes comparable to our own, are rising and composing their own narratives, independent of human control.
In fact, AI is making us realize that the human reign over the world, a constant for thousands of years, may now be threatened by something that artificially mimics human consciousness. The human faculty of learning from mistakes, readings, and experiences, memorizing math formulas, recognizing speech and objects, and copying body movements from a dance is no longer exclusive to us. Machines will increasingly match human neural and cognitive functions. They may eventually outperform us at any task humans can do; it may be as trivial as beating us at chess, as complex as composing music and creating art, or as serious as replacing politicians to rule a whole nation.
AI, in essence, gathers the algorithmic information programmed by data scientists and continuously learns from it, much as a child learns from their parents. Even if the limits of human memory and mental capacity remain impractical to measure, humans grow old, and the space inside our heads may one day yield to disease, fatigue, and age. AI, instead, surpasses the boundaries imposed by organic bodies and can carry out all the assignments it is expected to complete without an expiry date; machines never die.
Creator ft. Creature
This raises a crucial question: whether machines are better equipped to survive in a world where the climate crisis, social inequality, and hunger remain challenges humans have endlessly struggled to solve on their own. No matter how many advances humans make in tackling cancer, predicting weather disasters, or reaching the Moon, and soon enough Mars, these advances were not achieved through human intelligence alone. AI was there, even if barely. And it has been for a while.
It is here, right now, while I write these sentences: predicting the words I am about to type, correcting my spelling, and suggesting better ways to express my ideas. However, AI has not done this autonomously; we taught it to. The intelligence of AI is an extension of ours, and it will do what we tell it to. Ontologically, we influence and modify one another as time goes on. If ideas of racism, sexism, and prejudice, as much as equality, freedom, and democracy, are implanted in these machines, they will replicate those behaviors.
Will machines control us at some point? Well, only if we keep saying they will.
What Lies Ahead
Artificial Narrow Intelligence (ANI)
ANI is goal-oriented, designed to perform a single task (e.g. facial recognition, speech recognition and voice assistants, driving a car, or searching the internet) and is very good at completing the specific task it is programmed to do. While these machines may seem intelligent, they operate under a narrow set of constraints and limitations, which is why this type is commonly referred to as weak AI. Narrow AI does not mimic or replicate human intelligence; it merely simulates human behavior within a narrow range of parameters and contexts.
Artificial General Intelligence (AGI)
AGI refers to artificially intelligent systems that perform intellectual and cognitive tasks with capabilities similar to those of human beings, and possibly with similar general proficiency. It has cross-domain abilities and can improve its own architecture through self-healing software. Combined with instant access to big data, it could be very useful for working out complex unresolved problems such as optimal energy usage in industry and new material development.
Artificial Superintelligence (ASI)
ASI does not just mimic or understand human intelligence and behavior; ASI is the point where machines become self-aware and surpass the capacity of human intelligence and ability. In addition to replicating the multi-faceted intelligence of human beings, it would, theoretically, be exceedingly better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have greater memory and a faster ability to process and analyze data and stimuli. Consequently, the decision-making and problem-solving capabilities of superintelligent beings would be far superior to those of human beings.
Natural Language Processing (NLP)
At the intersection of artificial intelligence, computer science, and linguistics, this method makes machines understand human speech in text or audio form by evaluating the meaning and significance of words while handling tasks involving syntax, morphology, semantics, and discourse. By using statistical inference algorithms, the machine can automatically learn rules through the analysis of large sets of documents. Since human speech is rarely precise and often ambiguous, NLP is key to progress in human-machine interaction applications such as virtual assistants, automatic speech recognition, machine translation, question answering, and automatic text summarization.
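The statistical learning idea above can be sketched in a few lines: a toy unigram model counts words in a handful of labeled documents and then scores a new sentence. The corpus, labels, and smoothing choice here are illustrative assumptions, not part of the original text.

```python
import math
from collections import Counter

# Toy corpus: hypothetical labeled documents (an assumption for illustration).
docs = [
    ("buy cheap meds now", "spam"),
    ("cheap offer buy now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("monday project meeting notes", "ham"),
]

# "Learn rules" statistically: per-class word counts from the documents.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in docs:
    counts[label].update(text.split())

def classify(text):
    """Pick the class with the highest log-likelihood under a
    unigram model with add-one smoothing."""
    words = text.split()
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(math.log((c[w] + 1) / (total + len(vocab))) for w in words)
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("cheap meds now"))   # expected: spam
print(classify("monday meeting"))   # expected: ham
```

Real NLP systems use far richer models, but the shape is the same: statistics gathered from many documents drive decisions about new text.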
Machine Vision
A method through which a computer digitizes an image, processes the data, and takes some type of automated action. Machine vision allows systems to understand and interpret the environment using live or recorded images, tag their content, and enable programs to perform automated tasks that previously required human supervision. Using one or more video cameras with analog-to-digital conversion and signal processing, a computer or robot controller receives the image data. Such systems can also perceive their surroundings beyond the visible spectrum, using infrared, ultraviolet, or X-ray frequencies to enhance image-processing precision.
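The digitize-process-act loop can be sketched with a hard-coded 2D grayscale "frame" standing in for a real camera feed; the pixel values, threshold, and actuator message are all illustrative assumptions.

```python
# A 4x4 grayscale frame (0-255); the bright block simulates an object.
FRAME = [
    [12,  10,  14, 11],
    [13, 240, 235, 12],
    [11, 238, 242, 10],
    [12,  11,  13, 12],
]

THRESHOLD = 128  # pixels brighter than this count as "object"

def detect_object(frame, threshold=THRESHOLD):
    """Process: threshold the image and count bright pixels."""
    return sum(1 for row in frame for px in row if px > threshold)

def act(frame):
    """Act: trigger an automated action when an object is detected."""
    if detect_object(frame) > 0:
        return "object detected: trigger actuator"
    return "no object: continue scanning"

print(act(FRAME))
```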
Deep Learning
This machine learning method uses different algorithms to learn at multiple levels of representation and abstraction, helping to make sense of complex data such as images, sound, and text. It uses artificial neural networks to progressively extract higher-level information from the raw input. Learning can be supervised, semi-supervised, or unsupervised. Deep learning is applied in computer vision, machine vision, speech recognition, machine translation, and other areas to increase machine comprehension of input data.
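The layered-representation idea can be sketched as a forward pass through a tiny two-layer network. The weights here are fixed and untrained, chosen only to illustrate how each layer transforms the previous one; real deep learning fits these weights to data.

```python
# Minimal forward pass: raw input -> hidden features -> abstract output.
def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One fully connected layer with ReLU activations."""
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

raw = [0.5, -1.0, 2.0]                    # raw input (e.g. pixel values)
W1 = [[1.0, 0.0, 0.5], [0.0, -1.0, 1.0]]  # layer 1: low-level features
b1 = [0.0, 0.1]
W2 = [[1.0, 1.0]]                         # layer 2: higher-level feature
b2 = [-0.5]

hidden = layer(raw, W1, b1)    # first level of representation
output = layer(hidden, W2, b2) # progressively more abstract
print(hidden, output)
```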
Maximum Likelihood Classification
A method of image analysis and classification based on an algorithm that assigns each pixel in a raster to the class it most likely belongs to. This method is widely used in remote sensing. When satellites collect information from Earth, the data is organized in a raster, a matrix of cells with rows and columns containing values representing information. For instance, land-use and soil data can be crossed with continuous data such as temperature, elevation, or spectral measurements. After performing a maximum likelihood classification on a set of raster bands, a classified raster is created as the output according to pre-trained data. This helps to improve multi-layer remote sensing imaging and supports spatial analysis.
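A minimal sketch of the idea, assuming a single spectral band and two hypothetical classes: training samples give each class a mean and variance, and every pixel is assigned the class with the highest Gaussian likelihood. All sample values are made up for illustration.

```python
import math

# Hypothetical training samples per class (one band).
training = {
    "water":      [10, 12, 11, 9],
    "vegetation": [60, 62, 58, 61],
}

# Fit a Gaussian (mean, variance) to each class.
stats = {}
for cls, samples in training.items():
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    stats[cls] = (mean, var)

def log_likelihood(value, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (value - mean) ** 2 / (2 * var)

def classify_pixel(value):
    """Assign the class with maximum likelihood for this pixel value."""
    return max(stats, key=lambda c: log_likelihood(value, *stats[c]))

raster = [[11, 59], [10, 62]]
classified = [[classify_pixel(px) for px in row] for row in raster]
print(classified)
```

Real tools do this over many bands at once with multivariate Gaussians, but the per-pixel argmax over class likelihoods is the same.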
Satellite Image Processing
A data processing method that efficiently turns a large collection of satellite images into actionable information for decision-making. By consolidating spatial, temporal, spectral, and radiometric resolution data gathered from open-access satellites, this method aims to support land-use management in a timely and accurate manner. This is done by applying sequential image processing, extracting meaningful statistical information from agricultural fields, and storing it in a crop spectro-temporal signature library.
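The pipeline can be sketched as a reduction: a sequence of per-date, per-field band observations is summarized into statistics and filed in a signature library. The dates, field names, and reflectance values below are illustrative assumptions.

```python
# Hypothetical observations: (acquisition date, field id, band reflectances).
observations = [
    ("2021-04", "field_a", [0.21, 0.25, 0.23]),
    ("2021-05", "field_a", [0.41, 0.44, 0.42]),
    ("2021-04", "field_b", [0.30, 0.28, 0.31]),
]

# Sequentially process each image and store the extracted statistic
# in a spectro-temporal signature library keyed by field and date.
signature_library = {}
for date, field, band_values in observations:
    mean_reflectance = sum(band_values) / len(band_values)
    signature_library.setdefault(field, {})[date] = round(mean_reflectance, 3)

print(signature_library)
```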
Building Information Modelling (BIM)
A 3D model-based process, built on AI-assisted software, that enables simulations of physical structures. The 3D model supports the design, simulation, and operation of what-if scenarios through virtual representations. Building Information Modelling draws on the physical and functional characteristics of real assets to provide near real-time simulations. The data in the model defines the design elements and establishes behaviors and relationships between model components, so every time an element is changed, the visualization is updated. This allows testing different variables such as lighting, energy consumption, and structural integrity, and simulating changes to the design before it is actually built. It also supports greater cost predictability, reduces errors, improves timelines, and gives a better understanding of future operations and maintenance.
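The "change an element, and dependent behavior updates" idea can be sketched with a single model element: a wall whose derived quantities (area, a crude heat-loss figure) recompute whenever its parameters change. The dimensions and U-value are illustrative, not a real engineering model.

```python
class Wall:
    """A hypothetical BIM element: parameters in, derived behavior out."""
    def __init__(self, width_m, height_m, u_value):
        self.width_m = width_m
        self.height_m = height_m
        self.u_value = u_value  # thermal transmittance, W/(m^2*K)

    @property
    def area_m2(self):
        return self.width_m * self.height_m

    def heat_loss_w(self, delta_t_k):
        """What-if simulation: heat loss for a temperature difference."""
        return self.area_m2 * self.u_value * delta_t_k

wall = Wall(width_m=4.0, height_m=2.5, u_value=0.3)
print(wall.heat_loss_w(delta_t_k=20))  # baseline scenario

wall.u_value = 0.18                    # what-if: better insulation
print(wall.heat_loss_w(delta_t_k=20))  # derived result updates automatically
```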
Adversarial Machine Learning
This method attempts to fool machine learning models by supplying misleading inputs, with the main goal of inducing a malfunction. For this reason, it is also deliberately used during training to make models more robust and secure. Known strategies include evasion (samples are modified to evade detection), poisoning (adversarial contamination of training data), and model stealing (a black-box machine learning system is probed to reconstruct the model or extract the data it was trained on).
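The evasion strategy can be sketched against a tiny linear classifier: each feature is nudged against the sign of its weight (a fast-gradient-style step) until the decision flips. The weights, sample, and step size are illustrative assumptions.

```python
# A hypothetical linear "detector": score > 0 means malicious.
weights = [1.5, -2.0, 0.5]
bias = -0.2

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def predict(x):
    return "malicious" if score(x) > 0 else "benign"

def evade(x, epsilon=0.5):
    """Evasion: perturb each feature in the direction that lowers the score."""
    sign = lambda w: (w > 0) - (w < 0)
    return [xi - epsilon * sign(w) for xi, w in zip(x, weights)]

sample = [1.0, 0.2, 0.4]
print(predict(sample))        # original sample is flagged
adversarial = evade(sample)
print(predict(adversarial))   # small perturbation evades detection
```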
Yield Mapping
One of the most widely used ways to define, quantify, and characterize within-field variability in crop production. It is accomplished by combining geospatial data with real-time information from yield monitors mounted on combine harvesters. Variables such as mass, volume, and moisture are collected, making it possible to observe both spatial and temporal yield variation within a field and helping farmers decide which parts of the area need more water or fertilizer.
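A minimal sketch of the mapping step: georeferenced yield-monitor readings are binned into grid cells, averaged, and low-yield cells are flagged for attention. Coordinates, yields, cell size, and the threshold are all illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical readings: (x metres, y metres, yield in t/ha).
readings = [
    (3, 4, 8.2), (7, 2, 7.8),    # falls in cell (0, 0)
    (14, 6, 3.1), (12, 8, 2.9),  # falls in cell (1, 0)
    (4, 16, 8.5),                # falls in cell (0, 1)
]

CELL = 10  # grid cell size in metres
cells = defaultdict(list)
for x, y, t_per_ha in readings:
    cells[(x // CELL, y // CELL)].append(t_per_ha)

# Average per cell, then flag cells below an illustrative 5 t/ha threshold.
yield_map = {cell: sum(v) / len(v) for cell, v in cells.items()}
low = [cell for cell, avg in yield_map.items() if avg < 5.0]
print(yield_map)
print("needs attention:", low)
```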
Machine Learning
A branch of artificial intelligence focused on developing applications or models that learn from data and improve their performance and accuracy over time without being explicitly programmed to do so. Algorithms are trained on large amounts of data with the goal of making decisions and predictions about new data. The training data is labeled to bring attention to the features and classifications a given model will need to identify. Then, unlabeled data is presented to the model for autonomous classification, which helps to improve its accuracy. Machine learning is widely used in digital assistants, personalized recommendations, autonomous vehicles, and robots.
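The train-then-predict loop can be sketched with one of the simplest learners, a nearest-neighbour classifier: labeled examples are "learned" by storing them, and an unseen point takes the label of its closest neighbour. The features and class names are illustrative.

```python
# Hypothetical labeled training data: ((feature_1, feature_2), class).
labeled = [
    ((1.0, 1.2), "cat"),
    ((0.8, 1.0), "cat"),
    ((5.0, 5.5), "dog"),
    ((5.2, 4.8), "dog"),
]

def predict(point):
    """Classify by the label of the closest training example."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(labeled, key=lambda ex: dist2(ex[0], point))
    return label

print(predict((0.9, 1.1)))  # near the "cat" cluster
print(predict((5.1, 5.0)))  # near the "dog" cluster
```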
Machine Reasoning
A complementary layer of machine learning that aims to achieve abstract thinking as a computational system. Machine reasoning systems are composed of a knowledge base, which contains declarative and procedural knowledge, and a reasoning engine, which employs logical techniques such as deduction and induction to generate conclusions. The process starts with sensory and measured inputs, which gradually transform through different abstraction levels: from perceptual unstructured data, such as sensor measurements, to semi-structured and linked information, amounting to contextualized categorical descriptions of the data. This information is then transformed and fused with declarative and procedural knowledge.
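The knowledge base plus reasoning engine pairing can be sketched with forward chaining: facts and if-then rules are applied repeatedly until no new conclusion follows. The facts and rules below are illustrative, not from the original text.

```python
# Knowledge base: known facts and if-then rules (premises -> conclusion).
facts = {"sensor_temp_high", "door_open"}
rules = [
    ({"sensor_temp_high"}, "overheating_risk"),
    ({"overheating_risk", "door_open"}, "ventilation_active"),
]

def forward_chain(facts, rules):
    """Deduction by forward chaining: fire rules until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

Note how the second rule can only fire after the first has added its conclusion; that chaining from raw facts to higher-level conclusions mirrors the abstraction levels described above.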
Spatial Computing
A computing approach that allows machines to make sense of the real environment, translating concrete spatial data into the digital realm in real time. By using machine learning-powered sensors, cameras, machine vision, GPS, and other elements, it is possible to digitize objects and spaces that connect via the cloud, allowing sensors and motors to react to one another and accurately representing the real world digitally. These capabilities are then combined with high-fidelity spatial mapping, allowing a computer “coordinator” to control and track objects' movements and interactions as humans navigate the digital or physical world. Spatial computing currently enables seamless augmented, mixed, and virtual experiences and is being used in medicine, mobility, logistics, mining, architecture, design, training, and so forth.
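The "coordinator" role can be sketched as a class that keeps digital twins of physical objects up to date from sensor readings and answers spatial queries against that shared model. The object names, coordinates, and proximity query are illustrative assumptions.

```python
class Coordinator:
    """Hypothetical spatial coordinator: sensed positions in, queries out."""
    def __init__(self):
        self.objects = {}  # name -> (x, y, z) position in metres

    def sensor_update(self, name, position):
        """A sensor reading refreshes the object's digital twin."""
        self.objects[name] = position

    def nearby(self, name, radius_m):
        """All other tracked objects within radius_m of the named object."""
        x, y, z = self.objects[name]
        found = []
        for other, (ox, oy, oz) in self.objects.items():
            if other == name:
                continue
            if ((x - ox) ** 2 + (y - oy) ** 2 + (z - oz) ** 2) ** 0.5 <= radius_m:
                found.append(other)
        return found

coord = Coordinator()
coord.sensor_update("forklift", (0.0, 0.0, 0.0))
coord.sensor_update("worker", (1.0, 1.0, 0.0))
coord.sensor_update("pallet", (20.0, 0.0, 0.0))
print(coord.nearby("forklift", radius_m=2.0))  # who is close to the forklift?
```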
Neural Machine Translation
A machine translation method that applies a large artificial neural network to improve the speed and accuracy of probability predictions for a sequence of words, often in the form of sentences. Unlike statistical machine translation, neural machine translation trains its parts on an end-to-end basis, performing the analysis in two stages: encoding and decoding. In the encoding stage, source-language text is fed into the machine and transformed into a series of linguistic vectors. The decoding stage transfers these vectors into the target language.
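The two-stage structure can be caricatured without any neural network: "encoding" looks source words up in a toy embedding table, and "decoding" picks the closest target-language embedding for each vector. A real NMT system learns these vectors end to end over whole sentences; every word and vector here is hand-made and illustrative.

```python
# Toy embedding tables (hand-made; real systems learn these end to end).
source_emb = {"cat": (0.9, 0.1), "dog": (0.1, 0.9)}
target_emb = {"gato": (0.88, 0.12), "perro": (0.15, 0.85)}

def encode(sentence):
    """Encoding stage: source words -> a series of vectors."""
    return [source_emb[w] for w in sentence.split()]

def decode(vectors):
    """Decoding stage: each vector -> the nearest target-language word."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return " ".join(
        min(target_emb, key=lambda w: dist2(target_emb[w], v)) for v in vectors
    )

print(decode(encode("cat dog")))  # expected: gato perro
```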
3D Modeling
The process of creating a three-dimensional representation of any surface or object (inanimate or living) via specialized software. 3D modeling is achieved manually with specialized 3D production software that allows new objects to be created by manipulating and deforming polygons, edges, and vertices, or by scanning real-world subjects into a set of data points used for a digital representation. This process is widely used in industries like film, animation, and gaming, as well as interior design, architecture, and city planning.
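The vertices-and-polygons representation can be sketched directly: a mesh is a list of vertex coordinates plus polygons given as vertex indices, and a deformation is just a transformation of the vertex list. The unit square and the z-translation are illustrative.

```python
# A unit square as two triangles: vertices plus polygons (index triples).
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
polygons = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing an edge

def translate_z(verts, dz):
    """Deform the mesh by moving every vertex along the z axis."""
    return [(x, y, z + dz) for x, y, z in verts]

raised = translate_z(vertices, 0.5)
print(raised[0])  # first vertex, now lifted to z = 0.5
```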