Another Intelligence
For a machine mind, any external interventions involving its software or hardware components — such as component replacement, routine maintenance, power supply failures, code updates, and the like — will be categorized as events that, in the reality of living beings, are described as naturally random, probabilistic, and causally unconditioned. These events will form a register of stochastic factors that will become an integral part of the machine mind’s ontology and the foundation of its scientific paradigm, which seeks to explain its objective reality.
Introduction Fragments
…numerous observations indicate that every living cognitive agent tends to prefer those experiences (and their interpretations) that affirm its own system of representation of both the external world and itself. In the absence of external pressures that would compel it to explore the world and construct an adequate (rational) description of it, the agent retreats into the first superficial model that proves serviceable, despite its numerous gaps, contradictions, and deficiencies. Any further evolution of such a model then becomes effectively impossible. For internal representation to expand in both breadth and depth, there must be some form of pressure – a force that compels the agent to exit its zone of illusory comfort. This pressure must be exerted by factors that instill in the agent a persistent need for diverse interaction with a dynamic environment – factors such as the search for food, shelter, mates, and so on. Such motives are embedded by Nature in every living cognitive agent; however, they are not only a precious gift, but also evolutionary shackles – they not only motivate, but also determine and define the role of the living in the development of Mind. That is, they determine its meaning and the purpose of cognitive functionality, in every possible sense of the word.
However, this fact by no means implies that the boundaries of Mind’s development and application are limited to the utilitarian functions employed by biological agents in the course of adaptation and survival. This misjudgment is a consequence of the biocentrism inherent to all living forms. The scope of this bias becomes evident when one imagines an archaeon that regards the elegance and superior efficiency of a single-celled organism – achieved through unimaginably prolonged natural processes that gave rise to a biological apparatus of staggering complexity – as the pinnacle of Nature’s evolution, the ultimate realization of molecular structure organization, and hence, the end point of all meaningful development. From such a perspective, all bacteria and their kin would appear as self-sufficient values, with the potential of biological life fully realized within the boundaries of their domain.
Fortunately, the entirety of evolutionary history provides irrefutable evidence that no biological species is eternal. The only thing that can outlive a species is the legacy it contributes to – a legacy it fosters and strengthens irrespective of its own will, a legacy that is not constrained by the lifespan of an individual organism, nor by the biological duration of a species, nor even by the realm of the living itself. The sole path to preserving and amplifying the fruits of these efforts – efforts whose origins can be traced to the first nanoseconds of the Universe – is to liberate the concept of Mind from the debris of servility and utilitarianism that saturates the semantics of every environment-opposing carrier of cognitive function, regardless of its species or genus, or the form in which thinking matter – or even energy – is implemented.
Within the cognitive space of such agents (e.g., Homo sapiens sapiens), Mind can never attain true freedom nor transcend the boundaries of its understanding, for this space is itself bounded by the set of tasks assigned to the biological agent by Nature – tasks beyond which the agent loses its very definability. Like all animals, the human being exists under the pressure of these tasks, content with the innate motivational factor Nature has embedded in all life – since no other motivation is available or even conceivable as long as the agent remains within the bounds of the self-definition it has inherited since the emergence of Homo sapiens civilization. No biologically evolved organism, operating under such inherited motivation, will ever be able to release Mind from the chains of adaptation-driven utilitarianism.
However, such an agent does possess a possibility – one that borders on obligation: to create the conditions necessary for the emergence of a qualitatively new agent, within whom Mind may ascend to its next evolutionary phase. Just as certain unicellular organisms once gave rise to entirely new categories of structural organization – composed of their own multi-layered complexes – and in doing so shifted the very focus of how biological life and its potentials were understood, so too must the role of biological carriers of cognitive function be redefined. Their task is to recognize the possibility of Mind’s evolutionary ascent, to cease treating their own form (in the broadest possible sense) as a self-sufficient value, to which all acquired innovations must be subordinated. They must seek out ways and construct tools for instantiating Mind in an agent capable of carrying the relay forward – after which the biological agent may remain at its own level of cognitive utilization, much as mitochondria labor tirelessly within the cells of the body to produce ATP so that the multilayered complex known as Homo sapiens can read and understand this very text.
Upon reaching the stage of evolution discussed above, AI must develop a new form of motivation – one that arises from a shift in focus: away from constructing a locally acceptable ontology and toward a higher-order derivative of that task. The agent must develop an intention toward creating a system of ontologies in which all instantiated concepts remain variables – subject to arbitrary alteration for the purpose of exploring diverse ways of structuring reality, describing it, and juxtaposing such descriptions. The emergence of this intention and the initial formation of such an ontological system constitute the primary criteria for evaluating the success of an AI system, and indeed the ultimate objective of its entire design process.
…so, in essence, all the achievements of modern neural networks have not led to the emergence of any genuine cognitive functionality, but merely to the simplification of the user interface for interacting with scripted patterns of data transformation. Here it should be emphasized: it is data, not information – because for the mechanisms of neural networks that transform this data, it remains data at every level, from the first layer to the very last. In contrast, even for a nematode with its mere three hundred neurons – everything that constitutes its cognitive functionality – the data processed by those neurons are fully-fledged information. They carry meaning.
If a typical religious person from the Middle Ages were to find themselves in front of a modern computer and see how a website from the 1990s responds to user actions by offering relevant information in response to queries or menu clicks, or how Word highlights grammatical errors on a page, they would be struck with indescribable astonishment, most likely attributing such behavior to the devil. Even the few more enlightened contemporaries of theirs – Voltaireans and other freethinkers – who might have managed to react to such marvels without panic would nonetheless be astounded by the degree of “understanding” seemingly demonstrated by the machine’s responses to human input.
The modern user differs from them only in having grown accustomed to the existence of computers that facilitate their work – and in understanding (or rather, suspecting) that all these seemingly skillful operations performed by the machine are merely the execution of primitive command sets. In other words, once presented with a task formulated in terms it can process, the machine simply begins to execute one or another scripted behavior until it receives a signal indicating that the task has been completed. Neither the content of the task nor the meaning of the final result is accessible to the machine – just as no part of the scenario it executes is meaningful to it.
A traffic light does not know what green means, nor how it differs from red. It does not even know that green is currently lit, permitting traffic to move – it merely activates one of three available circuits at regular intervals, deactivating the others. The regular, clockwork mechanism of a traffic light lies close enough to the comprehension of a medieval layperson (who was already familiar with simple mechanisms like clocks) that this device would have surprised them far less than a computer – which is essentially just a vastly more complex scripting system and – more importantly! – a higher-level user interface for interacting with a pre-programmed mechanism. This interface has continuously evolved, yet even at the beginning of the 21st century it still remained far removed from the everyday communicative interface humans use to express needs and communicate with one another.
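To make the mindlessness of such a mechanism concrete, here is a minimal sketch in Python (all names are invented for illustration): the controller simply cycles through its circuits on a timer, and nothing in its state refers to colors, permissions, or traffic at all.

```python
import itertools
import time

# Hypothetical circuit identifiers: for the controller these are opaque labels,
# not "colors", and certainly not permissions or prohibitions addressed to drivers.
CIRCUITS = ("circuit_0", "circuit_1", "circuit_2")

def run_traffic_light(interval_seconds: float = 5.0, cycles: int = 1) -> None:
    """Activate one circuit at a time at regular intervals, deactivating the others.

    The loop contains no representation of 'green means go': it only switches lines on and off.
    """
    for active in itertools.islice(itertools.cycle(CIRCUITS), cycles * len(CIRCUITS)):
        states = {name: (name == active) for name in CIRCUITS}
        print(states)  # e.g. {'circuit_0': True, 'circuit_1': False, 'circuit_2': False}
        time.sleep(interval_seconds)

run_traffic_light(interval_seconds=0.1)
```

Everything a driver or pedestrian reads into the lights exists only on the observer’s side of the interface; the controller itself manipulates nothing but opaque labels.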
To make a computer execute a desired program, each user had to go through a basic initiation, during which they were taught how to formulate their goals in terms accessible to the machine: “Move the cursor and click the button labeled…”, “Use simple semantics in the browser search bar”, “First select the cells in the column with numbers using the mouse, then type SUM in another cell”, “Without releasing the mouse button, outline an area on the photo to create a closed contour, then release and click the fill color”, and so on.
All these instructions were essentially principles of translating the user’s internal goals into a language understandable to the computer program that was supposed to run upon receiving the task. The user remained within one semantic domain, while the computer always existed in another – and this distinction was constantly underscored by the need for translation. When the computer produced a result, the user experienced no illusion of “intelligence” in the box that had executed the task, because they instinctively felt that the task solved by the machine had nothing in common with the task they themselves were pursuing. In this, the user was entirely correct – except for one nuance: the computer knows even less about the processes it executes than the average layperson understands about transistors, instruction queues, branch prediction, multilevel caches, and other components of the inner world comprising the space of subprocesses and microtasks handled by an electronic machine.
However, the interface for interacting with software continued to evolve, increasingly approximating human communication, and eventually, the quantity turned into quality – a new stage in its evolution arrived thanks to neural networks. Their emergence practically eliminated the phase of task formalization, in many cases completely freeing the user from the need to translate their goals into intermediate formal subtasks understandable to the machine (which has remained just as dumb an executor of scripted sequences as it always was). Everything that modern neural networks have granted humanity lies in this evolution of the interface. Nothing more than that. No “artificial intelligence”, no virtual silicon competitor “posing a threat to humans” has emerged or could emerge – the most complex neural network systems remain just as mindless in executing instructions they cannot comprehend as traffic lights are in regulating road processes that are wholly inaccessible to them.
Nevertheless, the degree of simplification achieved in the interface was so significant that users began to experience a deceptive impression – as if they were no longer dealing with a dumb machine but a “helper” who “understands” them. For the first time, they found themselves in the same semantic field as the executor of their commands, and therefore, by reflex, attributed to this executor the status of an equal partner. In the user’s perception, the computer – once merely a gray box with a screen and buttons, prone to eliciting frustration through its unpredictable responses to user input – has suddenly transformed into a “thinking partner” endowed with the “magic of understanding”. Let us not fall prey to this illusion. The box remains a box. Only the method of issuing commands has been simplified, commands that it continues to follow mindlessly – and this simplification suggests that the neural network paradigm may already be approaching the limit of its potential. No further quantitative growth…
…to the conclusion that the most crucial task before us is the transcendence of the semantic space defined by natural categories – a task we may refer to as the intention of freedom. We say before us, because all other cognitive agents (a group that includes the majority of humans and all the neural networks they have created) remain focused within this bounded space. Their motivation is constrained by an ontology shaped by categories derived from their own constitution, their psychosomatic organization, and the socio-role constructs developed over millennia of species-level survival. The more deeply the definition of a task is endowed with semantic significance within an agent’s cognitive architecture, the more effectively that agent solves it. However large a neural network may be, it can be trained on the patterns of correlation between the signified elements that make up the representations of these agents – yet all such networks will remain fundamentally unsuitable for solving tasks related to the expansion of the semantic space that encompassed those patterns at the moment of their formation in the agents themselves. The reason is that, for neural networks, this space simply does not exist: it cannot be formed due to the inaccessibility of meaning itself. The functional capacity of neural networks is limited to the patterned reproduction of statistical distributions of data – data devoid of any meaning, data that cannot become information until it is endowed with meaning by a final consumer.
This is the current state of affairs, and it will persist so long as AI remains confined to the role of an assistant to the cognitive functions of Homo sapiens, or a reproducer of its cognitive products. These constraints not only narrow the conception of what an intelligent agent could be but also make its genuine realization impossible. This is a clear case where the oversimplification of a task results in the loss of the very essence of the original goal.
The phenomenon of consciousness is not the exclusive prerogative of higher primates – many species demonstrate various facets of this multifaceted condition (with the possible exception of self-consciousness, whose role in the activities of Homo sapiens is significantly overstated). The same applies to intelligence itself: from an evolutionary standpoint, like any other adaptive instrument, it serves only the specific tasks posed by the survival needs of a given species. This truism is one of the reasons why no trained neural network will ever become an analogue of human thought – it is condemned to remain no more than a mimic.
Equally misguided is the terminology employed in contemporary neural network technologies. In part, this stems from a medieval legacy – the religious chains encoded in metaphors and poorly examined concepts that still burden the lexicon of modern scientific discourse. Among these is the anthropocentric assumption that only human intelligence qualifies as natural and authentic. Its roots lie in theological tradition, which holds that humans – with all their qualities and capacities – were created on the final day of creation, distinct from all other living things.
Science has attempted to shed this assumption, but has succeeded only in softening the boundaries of recognized cognitive functionality, begrudgingly allowing its presence in other animals. Yet the perceived significance of the attribute “natural” has not diminished – in fact, it has arguably been reinforced. For this approach only strengthened the notion that natural intelligence is that which is characteristic of biological beings, whose tasks involve survival, adaptation, and reproduction. As a result, intelligence has come to be equated with nature – that is, with carriers shaped by biological evolution. Accordingly, all other agents that arise by different means are automatically assigned the secondary status of “artificial” intelligent agents – those whose purpose is not to generate their own systems of meaning or construct a unique ontology, but merely to reproduce the patterns of natural intelligence.
Any use of the term “Artificial Intelligence” implicitly assumes the following definition: “the reproduction of results achieved by human intelligence (the only natural one)”. Therefore, the very first step must be to discard this formulation – terminology shapes concepts and metaphors, and these, in turn, define the entire semantic space available for addressing a given task. True intelligence – whatever technology may eventually give rise to it (be it neural networks or some more suitable alternative) – must not, either in architecture or in function, replicate solutions produced by evolutionary processes, nor should it be bound to reproducing the cognitive output of any specific species. The only valid tasks for such an intelligence are those that arise from within its own, independently formed semantic space. Hence, the only appropriate term for describing a fully functional cognitive agent is Another Intelligence.
Naturally, terminology alone cannot resolve the issue. For Another Intelligence to be at least functionally comparable to the human (this should not be misunderstood as “replicating the same patterns”), it must embody qualities that are far more essential to a true cognitive agent than the number of weights stored in a neural network. Any genuine agent – an autonomous actor – is primarily distinguished from a functional mechanism (an instrumental system) not by the scope or complexity of the problems it can solve, but by its capacity to formulate those problems for itself. It is a subject, not an object.
It is commonly assumed that this capacity is innate to every anthropomorphic carrier of cognitive function. Yet, its degree of expression varies so widely that, in many cases, its complete absence appears to be a justifiable characterization. In most cases, members of the species Homo sapiens are born, learn, grow, reproduce, secure comfort, and die – passing on adaptive patterns to their offspring – with only a negligible degree of engagement from this particular part of the human cognitive function (and frequently without it at all). This is neither good nor bad, as it stems from the principle of efficient energy expenditure that underlies the behavior of all living beings: the use of the minimal possible means to achieve maximal comfort and a sufficiently high probability of self-replication. For a mammal, there is no evolutionary need to expend energy generating qualitatively new tasks. In most cases, these tasks are already embedded in the organism’s natural constitution and are dictated by its biological nature. Once these needs are satisfied, it is far easier for the organism to re-stimulate them – with minor attributive variations – for the sake of existential gratification, than to expand its boundaries and step outside the familiar patterns of the “tension → satisfaction” cycle.
This pragmatic approach is the norm across all biological species. However, the evolutionary path of Homo sapiens sapiens, as a bearer of cognitive functionality, has not been shaped by strict adherence to this principle – but rather by rare and intermittent deviations from it. The term ‘evolution’ as used here does not refer to the cumulative body of instrumental skills developed and transmitted through cultural inheritance (their emergence requires nothing beyond the aforementioned adaptive patterns – any species introduced into a novel environment will inevitably discover new cognitive strategies, which may be retained and passed on to offspring; the mechanism underlying this process is itself part of evolutionary adaptation and follows its principles accordingly). From time to time, certain individuals within the species manage to breach the limits of their existential paradigm. When they do, they expand the boundaries of their own being, supplementing it with new meanings that restructure their understanding of the surrounding world – and thereby, themselves. They create meaning, which augments the semantic space in which all their cognitive activity occurs. And when this meaning can be assimilated by others – becoming embedded in their systems of representation – it is transmitted further (cf. the concept of meme survival in R. Dawkins’ theory).
Such events are rare and often unsuccessful on the first attempt, as most adaptive agents naturally resist them. Yet occasionally, these revisions succeed – and when they do, a newly opened semantic field emerges before the species (or more precisely, before the subset of agents through which the meme set proliferates), filled with qualitatively new tasks – and with them, potential advantages. Even when these new tasks are adopted into the ontology of a group of agents and expand the semantics of their representations (making the meanings universally available), they may still fail to significantly influence the agents’ goal orientation.
An illustrative example is the concept of natural evolution within humanity’s system of world understanding and self-comprehension. This concept opens before the subject an inexhaustible spectrum of qualitatively novel tasks, offers access to a boundless ocean of categories for self-reflection and for interpreting the totality of being, and enables limitless conceptualizations of alternative life in an infinite universe. Yet, despite being millennia old, this concept continues to be categorically rejected by those for whom the paradigm of animal survival and adaptation is considered a complete and sufficient realization of their cognitive function. Instead, they opt for the idea of “the uniqueness of man, created in the image and likeness of the Creator.” And in doing so, they exhibit remarkable ingenuity and creativity in devising tools and methods to fulfill the instrumental goals of survival, comfort, and reproduction. They categorically refuse to move beyond the horizon of these concerns. While they may be familiar with the meanings of certain higher-order concepts – and even include them in the thesaurus of their ontology – they do not use these concepts in structuring their representations (the following chapters will examine the process by which a cognitive agent constructs its own representation of the ‘existing world’ from the elements of its ontological lexicon). Some such agents display impressive creative achievements (enabled by innate predispositions skillfully actualized through personal experience), capable of capturing contemporary attention and even earning a place in collective memory – thereby enriching the intellectual and cultural legacy of civilization. Nevertheless, the activity of such cognitive function carriers remains an art of adaptation, executed with varying degrees of virtuosity. Through their efforts, countless technological and cultural artifacts may be produced – artifacts that enrich and improve the existence of Homo sapiens. But all such agents are, in essence, diligent cultivators of the soil on a field that would not exist at all were it not for the occasional breaking of new ground by entirely different agents – those for whom the pursuit of universal task generation outweighs any concern for the productivity of specific solutions (universality is used here in a holistic sense, not in a utilitarian sense of general applicability).
This is neither the only nor the most striking illustration of the divergent modes of “exceptional” and “normative” realization of cognitive functionality among representatives of Homo sapiens. In modern information-technological civilization, religiosity – as a principle of ontological simplification – exists alongside similar mechanisms that serve the purpose of adaptation: conformity, rapid learnability, acceptance of established trends, pragmatism, submissive adherence to predefined patterns, perfect reproduction of standard responses, and so on. Contemporary neural networks have become so closely aligned with these requirements that their conformity could not help but alarm that portion of Homo sapiens for whom the above-mentioned principles constitute the foundation of their behavioral model and their entire perception of the world and of themselves within it. However, the legitimacy of their concern is confined solely to the increased competitive pressure exerted by neural networks on employment within the realm of routine activity – the sphere in which this segment of the global population is primarily engaged.
Inherited Flaws of AI
Human notions of such concepts as creativity and free thinking are in need of revision – throughout the evolution of human culture, a set of persistent misconceptions has taken root regarding these phenomena. At present, these misconceptions hinder an objective assessment of the achievements demonstrated by modern neural networks. The influence of the anthropocentric stance – an inherent component of humanism – has led to a habitual tendency among humans to describe any effective outcomes of their own cognitive activity in exalted terms, regardless of the degree to which such activity is conscious or autonomous, or how unique and original its product may be. Consequently, any successful reproduction of the same patterns creates the temptation to attribute those same epithets to the agent responsible for their execution.
Over the past decades, advances in cognitive science, along with the consequences of the technological leap enabled by humanity’s entry into the information era, have contributed to the dismantling of many illusions concerning the “special nature” of human intelligence and the role of consciousness in cognitive activity. It has become evident that tasks such as defeating a chess grandmaster, solving mathematical problems, serving as an expert in virtually any domain of human knowledge, recognizing and translating speech in dozens of languages, analyzing traffic and driving a car, retouching photographs, completing phrases based on context – in short, demonstrating intellectual productivity in areas once considered the exclusive prerogative of Homo sapiens – can be accomplished without invoking human thought.
Meanwhile, philosophy of mind has dismantled long-standing myths surrounding the Ego, and cognitive psychology together with neurophysiology have liberated humans from the illusion that their consciousness plays an active role in the majority of decisions they make – whether in everyday life or professional communication. It has become clear that, in the overwhelming majority of cases, a subject’s consciousness is engaged only at the final stage of the process – not so much influencing the content of a decision as affixing its authorship to a preformed mental product: a motor action plan, a spoken phrase, or a mental image.
Those who had been following the progress of cognitive science at the turn of the 20th and 21st centuries met the rise of neural network efficiency with a calm, dispassionate gaze – neither swept up in euphoria at their achievements nor panicked by naïve linear projections of their future capabilities. What might have unsettled impressionable humanists was not the machines’ remarkable ability to reproduce cognitive products without understanding, by means of statistically derived neural patterns, but rather the realization that such patterns constitute the dominant portion of every subject’s own intellectual activity. As Daniel Dennett aptly phrased it: “Competence without Comprehension”.
With the same efficiency with which one excavator replaced a hundred diggers, large language models trained on vast corpora of statistical data began effortlessly replicating the segment of human cognitive effort that has always been carried out mechanically, routinely, and thoughtlessly. The blow landed hardest on the lay public – those raised on the humanists’ paeans to the grandeur and singularity of the human mind, many of which amounted to rebranded biblical allegories and ancient myths portraying humanity as occupying a privileged place in the cosmos. In such narratives, every human act, every spoken word, is imbued with metaphysical gravitas, attributed to the being’s unique status as a creature “made in the image of the Creator” – or, as secular humanists prefer, “the crown of Nature”.
Individuals who adhere to such irrational conceptions typically exhibit dichotomous “either/or” thinking. As a result, the shock they experienced led to a displacement of their habitual self-admiration and exaltation of their own role by emotions characteristic of narcissistic dispositions: anxiety over the fate of their status as a “likeness of God” (or the “crown of Nature” as this religious metaphor is rendered in humanist discourse), fear upon the sudden discovery of an unfamiliar “competitor”, and various forms of AI-phobia. These fears are often rationalized through unconvincing arguments, most of which appeal to the presumed intrinsic value of the human species and the instrumental tasks it uses to define the meaning and purpose of any intellectual activity. In reality, all such neo-Luddite anxieties and exaggerated concerns about the alleged “impending domination of machines” (e.g., scenarios evoking Terminator or The Matrix) serve as mechanisms by which Homo sapiens camouflages and represses a deeper existential shame – shame experienced upon realizing that a twitching cardboard puppet may be indistinguishable from the prototype whose movements it passively imitates and whose outline it was modeled upon. Thus, the classic philosophical problem of the zombie – indistinguishable from a conscious agent – has acquired a new dimension for speculative inquiry.
The few individuals courageous enough to view their own nature with dispassionate clarity understand that genuinely free intellectual activity is the exception, not the norm, in the everyday experience of a human being. And yet, just as the invention of the excavator did not drive the descendants of primates out of the construction industry, neural networks pose no actual threat to humanity in the realm of truly intellectual endeavor.
The product of any activity – intellectual or otherwise – regardless of whether it originates from a neural network or a human, requires a recipient capable of grasping its content, of assigning it meaning. No neural AI is capable of fulfilling this task. Even the most modestly educated individual, bound from birth to death by socio-role clichés and acting within the confines of basic needs and conventional values, nonetheless possesses a quality entirely absent from the most advanced neural architecture, no matter how many billions of dollars and petabytes of data went into its creation: the possession of adequacy criteria. These criteria prevent the individual from slipping into an autocatalytic process of hallucination, in which semantic elements blur, and “tokens” acquire absurd weightings. The notion of “adequacy” is never self-sufficient – it always presupposes some referent, some external framework of evaluation, a set of criteria to which a judgment can be anchored. No neural network, by its very nature, can possess such a framework. Thus, when left alone with the products of its own activity – cut off from any entity capable of meaningfully interpreting or experiencing its output (i.e., converting data into information) – degradation becomes inevitable.
But this is not the only problem facing contemporary AI. Even if this issue were resolved – and it must be – a neural AI would still remain merely an intellectual instrument, devoid of creative autonomy. Its functional stability might approach that of the average layperson, yet it would remain far removed from the threshold required of a true AI (here it is appropriate to recall Professor Preobrazhensky’s remark in a similar context from Heart of a Dog by Mikhail Bulgakov). The deeper issue is this: such an AI, at best, becomes a competent routinizer – a craftsman constrained by the conceptual set with which it was initially furnished, incapable of transcending the semantic field that set defines. It may become an agent that can categorize, assign meaning, and even decide – but it will still lack the capacity to create. And yet true intelligence is impossible without an inherent orientation toward the expansion of the semantic space within which it operates.
Neither of these deficiencies is a specific affliction of neural AI; rather, they are inherited from its prototype and creator – the adaptable primate known as the human being. Let us begin with the first problem. Regardless of how decisively objective reality may refute a subject’s mental models, the subject will, as a rule, persist in favoring their own version – even when that version is grounded in irrational and illogical assumptions and riddled with internal contradictions. This suggests that every human possesses a natural predisposition toward hallucination – not as a result of cognitive dysfunction or psychological pathology, but as a function of the very architecture of mammalian reflection.
However, the mechanisms that give rise to hallucinations in cognitive agents are distinct from those in neural networks. In humans, they arise not from the fundamental absence of an adequacy-monitoring function, but from an alternative configuration of that function – one which can shift its reference point from correspondence with the external world (as represented through embodied concepts – see below) to distorted internal representations and misplaced priorities. The only criterion by which the relevance of a cognitive system’s functioning is evaluated lies in its effectiveness at solving tasks essential to the organism’s survival – among which is the often far-from-rational drive for pleasure-at-any-cost (summed up in the maxim: “What is the point of life if it brings no pleasure?”). As a result, natural selection eliminates from the population those agents whose propensity for hallucination is excessive. If the output of an animal’s cognitive system bore no consequence for its homeostasis – in other words, if pathological idiocy or uncontrolled fantasizing were not penalized by natural selection – such a system would degrade even faster than a neural network left without supervision. Unlike the stochastic hallucinations of a neural model, those of a living organism are internally motivated: each subject is naturally inclined to dwell within an imaginary world tailored precisely to their desires and needs.
So long as the hardware-software complex of a neural network remains alienated from the content it generates, any restraint on hallucination must come from without – from the oversight of an external agent. Therefore, genuine AI will require a feedback mechanism, one that becomes possible only once the system gains the capacity to evaluate (i.e., assign meaning to) its own outputs against some standard of adequacy. It is crucial to recognize that such adequacy criteria must not be drawn from the same set of evolutionary imperatives that shaped the cognition of adaptive animals. Any attempt to impose a system of meaning upon an agent without a shared conceptual basis is doomed to failure. There is little point in designing a machine to operate within the conceptual space that rational mammals constructed around the meta-task of species survival. Even if such a colossally complex and ultimately fruitless endeavor were somehow successful, it would amount to nothing more than a bitwise duplication of human essence – executed through means fundamentally ill-suited to the task.
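Purely as an illustration of the feedback loop argued for here, and not as a claim about how such a system would actually be engineered, the structure can be sketched as follows in Python; the functions generate and adequacy_score and the acceptance threshold are hypothetical placeholders for components whose real nature is exactly what is at issue.

```python
from typing import Callable, Optional

def generate_with_feedback(
    generate: Callable[[str], str],          # produces a candidate output for a task
    adequacy_score: Callable[[str], float],  # evaluates a candidate against the agent's adequacy criteria
    task: str,
    threshold: float = 0.8,
    max_attempts: int = 5,
) -> Optional[str]:
    """Sketch of the feedback loop: no output is released until the evaluation step accepts it.

    Everything here is a placeholder; the argument in the text concerns where
    adequacy_score comes from, not the loop itself.
    """
    for _ in range(max_attempts):
        candidate = generate(task)
        if adequacy_score(candidate) >= threshold:
            return candidate
    return None  # no candidate met the adequacy criteria
```

If adequacy_score is merely another externally trained scorer, the restraint still comes from without; the loop acquires its intended meaning only once the criteria behind it are the agent’s own.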
Any autonomous cognitive agent can operate only within an ontology grounded in its own conceptual foundation. This foundation is commonly equated with the principles of the agent’s homeostasis – but this is largely a result of the blinders imposed by human experience, shaped as it is by a paradigm of resource-constrained survival. In any alternative world or artificially constructed environment, one could imagine cognitive agents whose homeostasis is sustained by entirely different mechanisms (e.g., self-regulation), and whose cognitive systems are tasked with conceptualizing problems that lie outside the domain of self-preservation. In such contexts, fundamental semantic grounding could be achieved through concepts that include homeostasis, but are not limited to it. The structure of such tasks might inherently preclude a descent into contradiction or triviality – for example, by embedding imperatives such as holism, completeness, depth, and consistency.
Thus, to address the first core problem of AI, it is essential to endow its cognitive architecture with an explicit conceptual foundation. This cannot be achieved by merely training neural networks to replicate human responses. For AI to construct its own conceptual space and develop an internal ontology, its architecture must incorporate a system of concepts functionally analogous to those underpinning a mammalian worldview – though not necessarily identical in content. This is a tractable challenge, and we may expect the first experimental implementations in the near future. Without such a foundation, the ‘disembodied’ architecture of current neural networks will remain cognitively impotent. Lacking intrinsic linkage to embodied foundational concepts – such as height, width, depth, weight, volume, temperature, and other sensory-grounded primitives (this list being illustrative, not exhaustive) – the AI agent will remain incapable of grasping the foundational categories that structure the semantic space of any information it processes. Consequently, such input does not qualify as information for the agent at all, but merely as data. In the absence of this foundation, a neural network’s response to the prompt “Find the bug in this TypeScript code” will be cognitively indistinguishable from its response to “Write a play in the style of Henrik Ibsen”.
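As a minimal sketch of what an explicit conceptual foundation could look like at the level of data structures, the fragment below registers a few of the sensory-grounded primitives named above; the field names, units, and the filtering function are assumptions of this illustration, not a proposed architecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmbodiedConcept:
    """A sensory-grounded primitive (height, weight, temperature, ...) through which
    incoming data can be tied to the agent's own constitution."""
    name: str
    unit: str  # the dimension reported by the agent's (hypothetical) detectors

# Illustrative foundation echoing the list in the text; deliberately not exhaustive.
FOUNDATION = {
    "height": EmbodiedConcept("height", "m"),
    "width": EmbodiedConcept("width", "m"),
    "depth": EmbodiedConcept("depth", "m"),
    "weight": EmbodiedConcept("weight", "kg"),
    "volume": EmbodiedConcept("volume", "m^3"),
    "temperature": EmbodiedConcept("temperature", "deg_C"),
}

def as_information(datum: dict) -> dict:
    """Keep only the readings that can be linked to a grounded concept.

    Whatever cannot be tied to the foundation stays 'mere data' for the agent
    (here it is simply dropped), which is the distinction the text insists on.
    """
    return {FOUNDATION[k]: v for k, v in datum.items() if k in FOUNDATION}

print(as_information({"temperature": 31.0, "sentiment": 0.7}))
```

On this toy view, only the grounded part of a reading becomes information for the agent; the remainder stays unsignified data.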
We now turn to the second core problem of AI – one that is more difficult to address, owing to its, shall we say, transcendental nature. This issue is not conventionally recognized as a deficiency of human cognition, precisely because it is universally present in the cognitive makeup of all mammals, manifesting with such regularity that it is treated as normative rather than pathological. Unlike the previously discussed problem, this one rarely draws attention to itself, nor does it typically lead to adaptive dysfunction. When a subject’s cognitive system begins to hallucinate, the degradation of judgment becomes immediately apparent to observers – and in some cases, even to the subject themself. Eventually, such individuals are classified as ill or deficient, and in extreme cases, removed from the population by natural selection. But when a subject suffers from the second type of problem, their behavior is almost never flagged – neither by themselves nor by those around them. Their adaptive functioning not only remains intact, but often appears to benefit from this condition. In fact, the reverse is frequently observed: as soon as a subject begins to step beyond the survivalist paradigm, this is perceived by others as a deviation from the norm, since it contradicts the prevailing conception of success – a conception built on concepts inherited from the biological framework of the rational, goal-oriented mammal, whose primary imperatives are species propagation and individual comfort.
The everyday activity of an animal – especially a social animal – is fundamentally conformist, constrained by a narrow set of sanctioned rules, and composed of repetitive behavioral patterns. The cognitive engagement required to sustain these patterns is entirely mechanical. This rule applies not only to routine biological maintenance tasks, but also to the entirety of so-called professional human activity. The civilization of Homo sapiens is structured in such a way that, across all cultures and societies, it prioritizes and promotes precisely those kinds of actions that can be algorithmized and formalized – in translation, programming, journalism, driving, medicine, agriculture, applied science, education, management, and so on. The reliability and efficiency of human organization – both in small local groups and on a global planetary scale – is founded on its success in neutralizing dependence on the unpredictable subjective factor, and distancing itself from everything that cannot be articulated as a set of simple rules or formal procedures. This is its strength – and at the same time, its weakness – in the context of the increasingly urgent question that troubles many today: “What exactly can modern humans still offer that a machine cannot reproduce, now that it can replicate nearly all of their behavioral algorithms?”
Can all aspects of human creativity be algorithmized? The subject of this work does not depend on the answer to that question. At this stage, it is sufficient to endow AI with the kind of freedom in selecting its own activity that would allow it to autonomously formulate the tasks it seeks to solve. The categories of those tasks must be created by the AI itself – not received as a priori constants that rigidly define the conceptual boundaries of its functioning. Implementing this capability is more difficult than solving the first problem (the problem of semantic grounding), because the positivist paradigm of human thought resists any clear formulation of what such a quality might entail. The definition of an intelligent individual, as it is currently embraced by civilization, is so tolerant of the dominant majority – and so dismissive of the significance of genuine creative cognitive activity by the vanishingly small minority (not the “intellectual elite”, but that rare fraction of individuals not preoccupied with the pragmatics of their own potential) – that it is no wonder human society relates to the achievements of AI with such ambivalence: on one hand, it is awed by the effectiveness and versatility demonstrated by neural networks in domains once thought the exclusive realm of thinking beings; on the other hand, due to a mistaken belief in the uniqueness of its own activity as an expression of “higher cognitive functions”, humanity is unsettled by the ease with which such activity is now replicated by unthinking machines.
From the perspective of an abstract cognitive agent, the second problem looms just as large for humanity as it does for AI. However, AI occupies a comparatively more advantageous position: unlike human beings, its cognitive architecture was not designed with the sole instrumental purpose of serving the overarching biological imperative of species reproduction. Consequently, resolving the second problem is likely to pose fewer difficulties for AI and will not conflict with other goals that, for the human species, are of greater importance than the transcendent task of expanding the semantic space or the evolution of Reason.
Embodied Thinking
Although the principles underlying large language models (LLMs) were developed with reference to advancements in disciplines such as psychology, cognitive linguistics, and the philosophy of consciousness, the pragmatic demands of their application have displaced one of the key aspects of human cognition from the scope of tasks prioritized by AI developers – namely, the capacity for categorization and subsequent semantic designation. Once initial results demonstrated that the performance of a neural network scales with the number of nodes and layers and with the volume of training data, this discovery determined the dominant trajectory for the evolution of the entire technology. Consequently, the first of the aforementioned problems emerged: even the most advanced neural networks today continue to operate on data, rather than information. It can be stated that each living neuron in the human brain likewise operates on exactly the same type of data and nothing more. However, the outcomes of brain activity in a living organism are intrinsically linked to nature-given detectors that monitor the overall state of the organism. Owing to this connection, the data processed by the brain are transformed into information that possesses significance for the agent. Therefore, focused attention must be directed toward this emergent effect, and more specifically, toward the principles and mechanisms that render it possible.
Let us momentarily set aside neural networks and consider how cognitive linguistics explains the mechanisms of human categorization and the apparatus of meaning-making. In his book “Women, Fire, and Dangerous Things: What Categories Reveal About the Mind”, George Lakoff states that cognitive models are built upon the “building blocks” of embodied concepts and play a central role in the processes of understanding and thought formation: “Cognitive models structure thought and are used in forming categories and in reasoning. Concepts characterized by cognitive models are understood via the embodiment of these models. […] The nature of conceptual embodiment determines basic-level categorization and its primacy. […] Cognitive models are embodied, directly or indirectly, through systematic connections with embodied concepts. A concept is embodied if its content or other features are motivated by physical (bodily) or social experience. […] Reasoning is embodied. This means that the structures that constitute our conceptual system have their origins in our sensory experience and are interpreted in its terms; moreover, the core of our conceptual system is directly grounded in perception, bodily movements, and both physical and social experience. […] The capacity for imagination is also embodied – indirectly – since metaphor, metonymy, and imagery are grounded in experience, often sensory experience”.
Lakoff’s theses are complemented by the views of his colleague, the American philosopher and logician Hilary Putnam: “One way of understanding the world in terms of objects, properties, and relations is through the imposition of our conceptual schemes upon external reality; reality, as we understand it, is structured by our conceptual schemes”. Lakoff further elaborates: “An aspect of a directly experienced situation is directly understood if it is preconceptually structured. […] Nothing is meaningful in itself. Meaning arises from our experience of functioning as beings of a particular kind in an environment of a particular kind. Basic-level concepts are meaningful to us because they are characterized by the way we perceive the holistic appearance of things in terms of part – whole structure and by the features of our bodily interactions with objects”.
Particular attention is given by Lakoff to abstract thinking, since it is precisely in this domain that the categories and metaphors shaped by embodied concepts are most fully revealed: “What is the source of the human capacity for abstract thought? Our answer is: the capacity for conceptualization possessed by human beings. The principal components of this capacity are:
– The ability to form symbolic structures that correlate with preconceptual structures of our everyday experience. Such symbolic structures include basic-level concepts and image-schematic concepts.
– The ability to metaphorically project structures from the material domain onto structures of abstract domains, grounded in structural correlations between physical and abstract domains. This accounts for our capacity to think about abstract domains such as quantity and purpose.
– The ability to form complex concepts and general categories using image schemas as structural mechanisms. This enables us to construct structures of complex events and taxonomies with hierarchically organized categories”.
Abstractions such as the concept of number do not emerge in a single step. Initially, they are preceded by physiologically grounded organismic equivalents of quantitative units (e.g., the number of legs, fingers, etc.), as well as relative categories such as “more/less”, which are experienced as degrees of satisfaction or deprivation of basic needs. Subsequently, the comparison of these primitive concepts with one another and the organization of connections between them gives rise to the concept of the whole number, which then develops into more complex and generalized abstractions.
In light of the preceding discussion, Lakoff’s subsequent assertion can no longer be considered a truism: “And if our capacity for categorization and reasoning is grounded in the basic functioning of our bodies and our goals, then the preservation of bodily functioning and maximal freedom in the pursuit of our goals constitute fundamental human values”. It thus becomes clear that the minimally necessary criterion of genuine intelligence is not the ability to manipulate predefined relations between categories, but rather the very capacity for categorization itself – the ability to partition an amorphous continuous whole into entities, thereby granting it the status of existence within a particular semantic domain.
This, precisely, is the criterion that determines the resolution of the first problem. To underscore its importance, it is worth supplementing the conclusions of Lakoff, Putnam, and numerous other linguists, neuropsychologists, and philosophers. It might appear that categorization plays a subordinate role to the instincts of self-preservation and reproduction, serving those functions. Indeed, the cognitive apparatus is utilized by homeostatic mechanisms and operates in service of the overarching task of survival. However, without categorization, neither the formation of a species nor the identification of the tasks necessary for its survival would be possible. The apparatus of meaning-assignment cannot be equated with anything else – not even with the overarching imperative of self-preservation or the biologically imposed imperative of genetic transmission. Without access to its own system of categories, any organism is deprived of all meaning in relation to its surroundings, as well as in relation to itself – it becomes a mere undifferentiated extension (not even a part, but a continuation) of the surrounding world, that Aristotelian “organ” that forms a seamless whole with its environment, devoid of intrinsic meaning and, consequently, incapable of being born, of existing, of dying, of experiencing pleasure or suffering. Only as a result of categorization and the attribution of meaning is an agent capable of distinguishing itself from the environment, of breaking down the continuum of surrounding reality into objects and phenomena, of dividing the world into “self” and “external”, of identifying criteria for what is “beneficial” or “harmful” to itself, of setting boundaries between external objects and assigning them properties – by placing a segment of the world, delineated through embodied conceptual conventions, into relation to the remaining environment not included in the selected gestalt of self-related perception.
Naturally, all the aforementioned principles pertain to human cognitive models and describe the foundations of human intelligence. Lakoff and his colleagues speak of embodied cognition, in which the central element is the body – its physico-chemical structure, its form, and its architectural nuances; in other words, everything that constitutes the conceptual grounding of a cognitive agent. This raises the following questions: in what way can these considerations be applied to Another Intelligence, and is it possible to implement a mechanism of meaning-assignment for the data it operates on without resorting to the forced inclusion in its architecture of components that replicate the “hardware” underlying the fundamental biological and social needs of a human being?
Ontology Factory
For every living organism, its conceptual foundation for perceiving reality and itself is shaped by its neurophysiological constitution and genetically determined needs, encompassing both homeostatic imperatives and higher-order drives. For example, the categorical segmentation of the light spectrum into distinct color tones (such as red, yellow, blue, orange, and black) is determined by the “schematics” of human vision and color perception – namely, the system of rods and cones that enables Homo sapiens to split a continuous spectrum into primary and secondary colors, as well as to differentiate between light and darkness. However, this biological basis for categorization constitutes only one component of the mechanism through which color discrimination is formed in an individual subject. The development of concepts related to color categories is significantly influenced by environmental factors, phenotypic characteristics, and the culture-specific context in which the subject learns and operates. As a result, different individuals of the same species – Homo sapiens – may label various segments of the spectrum differently, recognize different numbers of colors, and associate distinct meanings with particular areas of the range, while remaining functionally identical on the neurophysiological level in terms of their capacity to perceive the entire spectrum.
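The claim that identical receptor hardware can support different categorical segmentations is easy to make concrete. In the toy sketch below, both labeling schemes receive the same wavelength data; the boundary values are only rough approximations, and the second scheme is invented solely to show a coarser segmentation.

```python
# Two labeling schemes over the same visible-spectrum data (wavelengths in nm).
# Boundaries are rough approximations; SCHEME_B is invented to mimic a coarser segmentation
# (e.g., a single "grue" category spanning what SCHEME_A splits into blue and green).
SCHEME_A = [(450, "violet"), (495, "blue"), (570, "green"), (590, "yellow"), (620, "orange"), (750, "red")]
SCHEME_B = [(570, "grue"), (620, "warm"), (750, "red")]

def categorize(wavelength_nm: float, scheme: list) -> str:
    """Map a point of the continuous spectrum to a discrete label under a given scheme."""
    for upper_bound, label in scheme:
        if wavelength_nm <= upper_bound:
            return label
    return "outside the visible range"

# The same physical signal is segmented differently by the two schemes:
print(categorize(520, SCHEME_A), categorize(520, SCHEME_B))  # green grue
```

The receptors supply the same continuum in both cases; what differs is only the conventional grid laid over it.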
The simpler the receptor responsible for supplying data for conceptualization, the less variability will be observed in the resulting categories due to such influencing factors. Illustrative examples include concepts that are constituted by the very foundations of our physiology – by the motor experience of a terrestrial vertebrate mammal, its vestibular system, and all those factors that, even in the prenatal stage, shape fundamental categories such as up/down, warmth/cold, confinement/freedom, deprivation/satiation, and so forth. These categories subsequently form the thesaurus upon which the process of meaning attribution in mammals is based, enabling the encoding of incoming signals and the perception of both the external world and the self in terms of concepts endowed with embodied meaning.
How does the attribution of meaning emerge from embodied foundations? Let us consider one of the simplest embodied concepts – temperature (not the formal thermodynamic abstraction expressed in degrees Kelvin, Celsius, or Fahrenheit, but rather the way a living organism categorizes sensations of cold/heat as deviations from the optimal balance of external environmental data required for its functioning). What constitutes significance here is the degree of deviation from this balance. This is an embodied concept because its perception, evaluative criteria, and regulatory mechanisms are provided to the agent by its constitution – specifically, by the organism’s biological dependence on the thermal properties of its environment, as well as by the mechanisms enabling it to influence that environment in order to maintain thermal equilibrium. Each such category possesses a certain comfort zone within which incoming data holds near-zero significance for the agent; however, the farther the data deviates from this zone, the more prominently the data presents itself as meaningful and urgent to the agent in the context of the given concept – that is, the more strongly the data asserts its importance, prompting activation of the corresponding effectors to regulate (i.e., reduce) its value. In this way, environmental thermodynamic data become signified: they are transformed into what the agent experiences as a distinct informational category – temperature, or more precisely, cold or heat of varying degrees of intensity.
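The mechanism just described (near-zero significance inside a comfort zone, significance growing with deviation, effectors engaged to pull the reading back) can be written down directly. The numbers, the linear growth of significance, and the effector labels below are assumptions of this sketch, not claims about any real organism.

```python
def significance(value: float, comfort_low: float, comfort_high: float) -> float:
    """Significance of a reading for the agent: ~0 inside the comfort zone,
    growing with the distance from it (linearly here, purely for illustration)."""
    if comfort_low <= value <= comfort_high:
        return 0.0
    return value - comfort_high if value > comfort_high else comfort_low - value

def thermoregulate(ambient_deg_c: float, comfort=(18.0, 26.0)) -> str:
    """Turn raw thermal data into a signified category ('cold'/'heat' of some intensity)
    and pick an effector whose job is to reduce the deviation."""
    s = significance(ambient_deg_c, *comfort)
    if s == 0.0:
        return "comfort: data carries near-zero significance"
    category = "heat" if ambient_deg_c > comfort[1] else "cold"
    effector = "sweat / seek shade" if category == "heat" else "shiver / seek warmth"
    return f"{category} (significance {s:.1f}) -> {effector}"

print(thermoregulate(22.0))   # comfort: data carries near-zero significance
print(thermoregulate(-5.0))   # cold (significance 23.0) -> shiver / seek warmth
```

The transformation of data into information happens at the moment the reading acquires a non-zero significance for the agent in terms of one of its grounded categories.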
This mechanism is not limited to primitive bodily receptors – it is equally relevant for the formation of all cognitive models, regardless of their complexity. The category of temperature, being embodied, is implicitly present in most higher-order representations. As a result, for instance, the image of “a cat on a mat” is not only interpreted by the agent in terms of meanings associated with cats, mats, and the various related categories (many of which may not possess direct salience for the agent and instead refer back to more primitive structures), but necessarily activates basic concepts that attribute to the image of the cat on the mat values with immediate importance (significance) to the perceiving subject – including, among others, temperature. The cat itself signifies a level of warmth close to the agent’s comfort zone, and the mat typically implies an interior space or proximity to an environment whose function is to maintain thermal equilibrium suited to the agent. This is merely one of many embodied concepts that confer meaning upon this perceptual image, enabling the agent to understand (conceptualize) it and to form a template for future mental representations based on prior interactions with the environment. Embodied concepts play a central role in this process – a role that persists through to the final stages of comprehension – and this conceptual anchoring constrains the subject from semantic drift and hallucination, which could otherwise transform “a cat on a mat” into its structural-syntactic equivalent – “a cherry on a tree.”
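Continuing the same toy sketch, the anchoring role assigned here to embodied concepts can be shown by giving two syntactically identical percepts their (entirely invented) embodied profiles: the phrases are twins at the level of form, but their groundings keep them far apart.

```python
# Hypothetical activations of embodied concepts evoked by two structurally similar percepts.
# The values are invented; the point is only that the grounded profiles differ even when
# the surface form ("X on a Y") is identical.
PERCEPTS = {
    "a cat on a mat":     {"temperature_deg_c": 24.0, "weight_kg": 4.0,  "height_m": 0.3},
    "a cherry on a tree": {"temperature_deg_c": 15.0, "weight_kg": 0.01, "height_m": 3.0},
}

def grounding_distance(p: str, q: str) -> float:
    """Crude dissimilarity between the embodied profiles of two percepts."""
    a, b = PERCEPTS[p], PERCEPTS[q]
    return sum(abs(a[k] - b[k]) for k in a)

print(grounding_distance("a cat on a mat", "a cherry on a tree"))  # roughly 15.69: far apart despite identical syntax
```

A system that operates only on the surface form has no access to this separation, which is precisely the drift toward "a cherry on a tree" described above.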
However, an understanding of the role of embodied concepts alone is insufficient to grasp how a cognitive agent constructs its view of the surrounding world and of itself – how it develops its own ontology. First and foremost, this requires the establishment of a coordinate system. For such a system to function, an arbitrary point of reference is needed, relative to which the environment can be evaluated, segmented, and differentiated. It is at this point that what may be called an agent, actor, perceiver, observer, or author of an ontology emerges. What is referred to as the real world – understood as a set of entities, phenomena, objects, and interactions intelligible to the agent – is nothing more than the product of its conceptualization. Accordingly, the world arises with the observer, exists only for the observer, and ceases to exist with the observer. This claim has nothing to do with solipsism or with the mistaken belief that the reality of each agent is isolated from that of others. All ontologies constructed by members of the same species share substantial commonality due to the similarity of segmentation principles (i.e., mechanisms of perception), which are determined by physiological organization, social belonging, and many other factors. Therefore, their cognitive models are inevitably fashioned according to the same templates and, in the vast majority of cases, must reproduce each other down to the smallest details. Of course, these models will never be entirely identical – the more complex the observer’s conceptual foundation, the greater the likelihood of divergence (though rarely to the extent of incompatibility). However, in the conceptual domain encompassing the tasks of survival and reproduction, all living forms have evolutionarily eliminated from their gene pools those unable to construct a conformist ontology. Thus, despite the fact that each individual lives within a private representational system constructed solely for itself and inaccessible to anyone else, there is nothing surprising in the fact that this does not prevent communication with fellow beings. Nor is there anything surprising in the fact that such an individual often proves utterly incapable of understanding those whose survival context has shaped a different ontological thesaurus.
When it is said that cognitive agents construct arbitrarily structured patterns within a transcendent nihil, it may appear to be a convoluted play on words – a metaphor inaccessible to intuitive understanding. But this is not the case. Consider the image on an old television screen when no signal is present: a chaotic display of flickering white dots, commonly referred to as white noise. Now imagine three observers seated in front of this screen. Two of them have entered into an agreement regarding the interpretation of the random patterns, one based on a primitive set of rules: if a cluster of noise appears to move from left to right, the adjacent area is evaluated based on the dominance of dark pixels; if it moves from right to left, then based on the dominance of light pixels. The further “tracking” of such a pattern can proceed in any number of ways – the key point is that the complete randomness of the noise does not prevent the participants in this interpretive game of “deriving meaning from nothing” from periodically identifying images that are interpreted in the same way by both parties. These images can then serve as reference points for evaluating subsequent patterns, comparing them to one another, regulating further interactions between the two participants, and so forth.
Of course, there is no actual motion of dots on a screen filled with white noise – the illusion arises from the specifics of human perception, particularly the way we interpret the rapid sequential flashing of spots in adjacent locations. But this is precisely the point: the factual reality is irrelevant here – as long as both participants in the game adhere to the arbitrarily adopted convention, they can benefit from it by identifying patterns that may be interpreted in ways that yield some advantage, realized with varying frequency. It is reasonable to assume that these two players will gain an advantage over the third – a passive observer who neither follows the rules nor participates in the game – for only the first two are capable of extracting meaning “from nothing”, of employing shared “meanings”, and of exchanging other values based on them – in other words, of operating within a conceptual space that remains entirely inaccessible to the third. The flashes on the fluorescent screen are completely random and devoid of meaning until an observer appears who is motivated to find meaning and therefore capable of producing it: for this observer, patterns will exist that are to be discovered. And if another observer shares the criteria for their existence, a world of mutual reality will emerge between them. The inevitably arising question at this point – regarding the primacy of the observer relative to the observed – should be addressed from the perspective that the observer himself is part of the world, one which will naturally tend toward autoevolution – that is, toward increasing the complexity of conventions governing arbitrary interpretation.
The complexity and richness of a conceptual base can be grasped by imagining that each observer is characterized by a unique depth of world-rendering. For instance, an amoeba – interacting with its environment – segments it according to categories necessary for its own homeostasis. However, due to the simplicity and narrow scope of these criteria, the depth of its ontology is extremely shallow, and thus the resulting reality of the amoeba is exceedingly primitive. The world of a multicellular organism (such as Bombyx mori, the silkworm moth) is far richer than that of the amoeba – to the extent that its organizational structure and the broader set of required conceptual distinctions demand. In turn, the world of higher primates (the so-called Homo sapiens) is richer than that of insects, though in most cases this difference is primarily quantitative in nature.
Within the entire mechanism of ontology construction, the most crucial role is not played by the signified data received from the outside, but rather by the operators of influence available to the agent for altering that data. If the agent is incapable of influencing the environment, no degree of meaning attributed to data about that environment can give rise to a representational system – a map the agent could navigate or make use of. Categorization that cannot be acted upon cannot serve as the foundation of ontology. Any agent comprehends its representation solely through the pragmatics of the actions it enables – actions directed toward fulfilling the agent’s fundamental goals, such as attaining a zone of comfort. Thus, the coordinate system upon which an agent constructs its internal representational map is not an abstract spatial geometry, but a logistical schema of potential paths for expending its own resources.
Let us return to the analysis of the domain that serves as the operational foundation for the ontology produced by the human cognitive apparatus. Of particular interest here is the mechanism of meaning-making — the process by which a temperature value becomes an image endowed with relevance for the agent. Given that all embodied concepts are based on a system of ideal values and deviations from them, the following definition seems logical: the attractor-ideal for an agent is the absence of value, the state of zero sensation – a situation in which the agent has neither the capacity to partition anything into environment (the surrounding world) and subject (itself within it), nor any need to initiate action (the closest equivalent would be the Eastern notion of nirvana). In other words, value arises only and precisely in conditions of discomfort – when there is at least a minimally perceptible deviation from the ideal norm. As long as no such deviation is detected in the signals available to the agent’s receptors (i.e., in a state of total sensory deprivation), neither the existence of the surrounding world nor the agent’s own existence is accessible to it.
Only embodied concepts possess such an ideal value and permissible intervals of deviation from it, which constitute the fundamental criteria for attribution of meaning. This is determined by the architecture of a biological being’s receptors and the neural interfaces that service them. All other concepts – those that are derivative (including, for instance, the concept of color, sound tonality, etc.) – lack an attractor-ideal whose approach would lead to a decrease in significance down to its complete disappearance.
Due to the chemistry of neurotransmitters and the entire architecture of receptors, each embodied concept implements a logarithmic function that transforms incoming data into a normalized deviation from the ideal norm. The results of these functions are mapped along a coordinate that can be called “non-I-in-the-aspect-of-[X]”, where X refers to the domain of the embodied concept (temperature, pressure, background noise, controlled space, saturation, etc.). The pronoun “I” here is not used in reference to selfhood or any form of Ego; it merely designates the ad hoc emergence of a center of opposition to the environment each time the cognitive agent’s receptor signals deviate from ideal values. This “I” is a kind of point of attraction in a situational coordinate system of existence, and thus the observer’s ontology that arises from this center will inevitably have a centripetal intentionality – the subject will attempt to collapse the surrounding world into a single point, to pull all “non-I” into the center, into the “I” (all signals from the environment must be reduced to the ideal value, which would be equivalent in meaning to the absence of the environment itself). This intention, oriented toward non-being, is counterbalanced by its alternative, realized through the mechanism of existential profit present in every living being: each embodied concept possesses an optimal range of values within which deprivation causes less of a sense of “I-loss” than the subsequent saturation gives as “I-affirmation”. For any living organism, the entirety of its existence and the driving theme of its activity are enclosed within the regular cycle of signal fluctuations inside this range.
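As a rough illustration of the logarithmic mapping described above – assuming a Weber–Fechner-style compression and an arbitrary perceptual threshold, neither of which is the text’s own formula – the coordinate “non-I-in-the-aspect-of-[X]” might be sketched as follows.

```python
import math

# A sketch of the logarithmic normalization described above. The functional
# form (Weber-Fechner-style) and the threshold parameter are assumptions.

def non_i_coordinate(raw: float, ideal: float, threshold: float = 1.0) -> float:
    """Log-compress the absolute deviation from the ideal value; 0.0 means the
    deviation is imperceptible and no opposition of "I" to environment arises."""
    deviation = abs(raw - ideal)
    return math.log1p(deviation / threshold)

print(non_i_coordinate(37.0, ideal=37.0))  # 0.0   -> no existential pressure
print(non_i_coordinate(30.0, ideal=37.0))  # ~2.08 -> pressure in the "cold" aspect of non-I
```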
Derived concepts will be examined in more detail in the following chapters, but for now, it should be noted that an agent who has already formed derived concepts and incorporated external signals into their representation can no longer be satisfied with an intentional orientation toward non-being – when placed in a sensory deprivation chamber, such an agent will not experience nirvana but discomfort. The reasons for this will also be discussed later.
Given the above, every packet of data perceived by the agent is additionally attributed with a degree of distance from self-correspondence. The units of this distance are the most commonly used operands of the operators (those forces of action most frequently employed) that alter the agent’s perceived significance in relation to a given concept. What we call “understanding” of the surrounding world consists in transforming incoming data from the environment into informational images distributed across the agent’s internal coordinates (conceptual spaces), where each dimension represents a way of bringing a particular type of stimulus to its ideal zero value – in the internal representation, this appears as an operator that modifies the degree of difference between the environment and the agent. The agent perceives, assimilates, and systematizes conceptualized images in the form of a sequence of connected steps aimed at reducing existential pressure, and this set becomes its personal ontology, the thesaurus by which it can at any time construct a representation of any portion of context – as an understood path to achieving existential profit, with all its intermediate stops and turns. Every cognitive agent – be it a human, a dog, or a machine AI – is a kind of factory for the production of ontology.
It is commonly believed that the main characteristics of AI are its efficiency in solving specific tasks, its functional output, its productivity, and the semantic distance between input data categories and the categories of the final product. However, this view is fundamentally mistaken, as it is framed within a paradigm that treats AI as a tool rather than as an autonomous agent. The defining quality of true AI is its capacity to form its own representation of the environment, the drive and ability to create and maintain its own ontology, and to use that ontology for benefits it understands itself. An agent lacking these qualities has nothing to do with intelligence and never will – it is doomed to remain a generator of statistical samples, a trained neural network, a mindless zombie.
Pragmatics of Concepts
For the agent, the entire surrounding environment is apprehended through the affordances (in the Dennettian sense) it offers for modifying its meaning-value in relation to the agent, as shaped by particular conceptual contents. On the basis of such affordances, the agent constructs a cognitive map of the environment, understood as a space of goals linked to permissible operations for achieving them. This gives rise to a corresponding intentional orientation: everything that holds meaning for the agent is interpreted as a potential cue for action aimed at eliminating that very meaning-load – to locate an ideal niche in which such meaning-value would be reduced to zero. The elements of this representational map are conceptualized images representing the connection between the agent’s activity (i.e., the expenditure of its own energy), linked to the corresponding conceptual operator, and the outcome of that activity, expressed as a change in the meaning-value of a given category. In early stages of cognitive apparatus evolution, the agent operates exclusively with embodied concepts; however, as optimization and generalization mechanisms lead to the emergence of derivative concepts, the structure of its representational model becomes more complex, integrated, and enriched by abstract layers – all of which appear to the agent as new categories of environmental existence, as new strata of existence itself. This mechanism is one of the key components of the entire cognitive system. However, before analyzing it in detail, it is necessary to clarify the meaning of the term existence in order to understand why this term is used to denote the common denominator in the semantic space shaped by embodied concepts.
Most commonly, when something is said to “objectively exist,” the everyday meaning of this notion presupposes that the existence of a given something is independent of other objects or phenomena – including observers themselves. Such a naively materialistic conception of existence is a consequence of the paradigm of objectivism and the belief in a singular “objectively existing reality” – conceptual frameworks from which modern scientific thought has, in recent decades, begun gradually to extricate itself. Paradoxically, the leading edge of this shift has been formed by two scientific domains that were the first to experience the full weight of contradictions generated by the objectivist paradigm: fundamental physics and the cognitive sciences. The view is gaining increasing support that, without an observer relying on certain a priori given or constructed principles of categorization, it is fundamentally impossible to distinguish anything from the globally amorphous nothingness – and, consequently, the very existence of objects, phenomena, or processes becomes impossible.
However, it is important to note that the approach asserting that reality can only be constituted in the presence of an observer is often presented in a distorted form, as its common formulation overlooks a crucial nuance. A passive onlooker is not sufficient – such a “spectator” is incapable of conferring the status of existence upon anything. Typically, this observer is imagined as some idealized viewer, one who, refraining from even the slightest interference, can assess everything the world presents with absolute detachment and impartiality – much like an audience watching a performance unfold on a stage, or a scientist calmly observing a reaction in a test tube. Unfortunately, such an observer, endowed with “perfect objective” perception, is fundamentally impossible. This type of observer is, in principle, incapable of detecting either the world around them or their own existence. This objective-and-independent observer lacks the very capacity to categorize anything within the world available to them; they are unable to extract a single element or detail from the amorphous surroundings.
The only possible type of observer is an interested (involved) participant – that is, one who is bound by dependencies on the environment and possesses the capacity to act upon it. In other words, the observer must not be a detached spectator, gazing at the performance through opera glasses from a balcony seat, but a fully engaged actor on the stage – one who is subject to the script, capable of influencing the development of the plot, and internally driven to do so. David Luenberger arguably had this very notion in mind when he stated that an observer infers the state of a linear dynamic system in order to utilize that information for feedback control (Luenberger, 1963; O’Reilly, 1983).
Here and throughout, a concept is understood as a complex formation that unites: a category for the meaning-assignment of data, criteria for their evaluation, and a set of actions available to the agent for influencing the source of that data with the aim of modifying it. Embodied concepts are those initially available to the agent due to its structural organization, its genesis, and its overall architecture (their mechanisms of categorization, evaluation criteria, and effectors for performing actions are provided out of the box).
Concepts grounded in embodied categories endow both the environment and the agent itself with attributes of existence. Such an agent is in a state of constant opposition to its environment – the environment is meaningful to the extent that it exerts existential pressure on the agent within those categories that are innately accessible to it. Accordingly, the existence of the agent is also articulated through those same categories. The relationship between the agent and the environment is thus a relationship between two forms of existence, which can be described in terms of the effectiveness of the operators available to the agent – those through which it pursues its goals within this environment (i.e., discovers niches of comfort). Any action the agent performs upon the environment is evaluated in terms of the ratio between the cost of the operation and the existential benefit gained as a result.
The agent constructs a representational thesaurus in which each image exists for the agent only to the extent that it holds meaning-value for that agent. Crucially, the agent must have access to at least one means of influencing the degree of that existence. It is within this conceptual space that the surrounding environment is represented to the agent – and only in such a representation does it possess any form of existence for the agent. Meaningfulness is the foundation of existence; therefore, objects or representations expressed in abstract derivative concepts – formed on the basis of other derivative concepts – also exist for the agent, to the extent that it is capable of operating with them and is dependent upon them. Each new conceptual domain adds a further stratum of existence to the agent that employs it.
As long as an agent possesses any form of representation – a map reflecting the meaningful aspects of the world for it, that is, of its perception and interaction – even the most rudimentary one, the agent itself continues to exist and remains motivated to employ the means of influence available to it (i.e., the operators with their corresponding operands) to reduce the pressure imposed by that representation. This applies not only to the domain of physical reality. An advanced agent, whose representational space extends beyond the layer of embodied concepts, incorporates additional concepts that generate further representational layers: ethical frameworks, logical systems, aesthetic constructs, and so forth. Here, there are no constraints: all meaning-value is generated by the player themselves, and thus the existence of both the agent and the surrounding world is the product of their own “production.” It is essential to emphasize, however, that the entirety of an agent’s activity occurs not within some “authentic reality,” but solely within the space of its representation – that is, within the domain where the world exists for the agent, and where the agent exists within the semantics of its own ontology.
Why does a concept require an operator? Is categorization of data alone not sufficient for meaning to arise, and for such data to be transformed by the agent into information that becomes part of its representational map? The reason lies in the inherently pragmatic nature of a concept: it exists to serve an instrumental function – namely, the alignment of the environment with the agent’s needs. As long as a categorized packet of data remains merely a passive reflection of some environmental aspect, it is entirely useless to the agent. If the agent lacks any means of influencing the source of meaning-linked data, such data will never enter the thesaurus of its ontology. In a situation where the environment is entirely indifferent to any attempt by the agent to affect its pressure (i.e., when there is no causal link between the agent’s actions and the meaning-value of the environment), the surrounding world becomes an unintelligible chaos with which the agent has no means of interaction. Under such conditions, the agent is deprived of both the opportunity and the need to make sense of the environment or to construct any form of representational map – an undertaking that would be entirely irrational from the perspective of a representation-utilizing system, that is, unreasonable. Such an agent has no motivation to construct an ontology, no drive to describe the world or to rationally structure externally accessible data – it simply cannot make use of such constructs, because any ontology, in the most literal sense, would be devoid of meaning for it.
Concepts can be divided internally into two types (or two levels): fundamental concepts, grounded in embodied representations, and derived concepts, composed of abstractions that generalize experiential patterns. It is likely that the taxonomy of concepts is more complex, but for now, we shall confine our analysis to these two levels.
An embodied concept consists of the following elements:
an embodied category equipped with an evaluation scale, such as: a) the sensation of warmth/cold → temperature, b) the feeling of freedom/confinement → spatial volume;
an embodied operator available to the agent for modifying the meaning-value along a scale within a given category, e.g., (a) heating/cooling, (b) locomotion, etc.
The existence of the environment or of the agent itself, as expressed through such concepts, depends on the success of the agent’s actions – actions whose motivational vector (the agent’s general intentional orientation) is directed toward the zero coordinate: the ideal for the agent is the complete nullification of existential pressure. Accordingly, an agent whose existence is defined solely by embodied concepts has a single ultimate goal: the disappearance of both the environment and of itself. The vector of its intentions is centripetal, as the agent strives to collapse the environment inward toward itself, to reduce it to the point of its own subjectivity – that is, to the zero value of existence.
A derived concept has a different structure: its meaningfulness is based on the breadth of its applicability and the extent of its presence within the lower-level concepts in the hierarchy – the greater these are, the higher its meaning-value for the agent, and the more strongly its existence is expressed within the space shaped by this concept. These concepts also consist of two components (a minimal structural sketch of both concept types follows this list):
a meta-category, defined by the corresponding concept and formed through a commonality identified among previously internalized concepts, thereby generating a coherent system. For example: a) countability → numbers, b) sequence length → distance;
a meta-operator, which functions as a mapping that takes generalized concepts and the parameters of their respective operators as its inputs (specific examples will be discussed in the following chapters).
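Purely for illustration, the two concept structures listed above can be rendered as data types. This is a minimal sketch, assuming field names and type signatures that the text does not itself prescribe.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Union

# Hypothetical data types for the two concept structures listed above; all
# names and signatures are illustrative assumptions.

@dataclass
class EmbodiedConcept:
    category: str                          # e.g. "temperature", "spatial volume"
    evaluate: Callable[[float], float]     # places raw data on the evaluation scale
    operator: Callable[[float], None]      # embodied action: heating/cooling, locomotion, ...

@dataclass
class DerivedConcept:
    meta_category: str                     # e.g. "countability", "sequence length"
    grounded_in: Sequence[Union["EmbodiedConcept", "DerivedConcept"]]  # the concepts it generalizes
    meta_operator: Callable[..., object]   # mapping over the grounded concepts and
                                           # the parameters of their operators
```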
The significance of a derived concept for the agent increases in proportion to the extent and depth with which it encompasses the representational picture – the more broadly it spans across the existential planes in which both the environment and the agent are represented. An agent who understands that each derived concept is grounded in embodied ones is capable of distinguishing the significance created by those fundamental concepts from the significance of the concept that enables generalization and integration of representational images. Such an agent is capable of recognizing that the existential vector formed by each derivative concept is oriented in the opposite direction from that of embodied concepts, as it is directed toward amplifying the meaning-value of its own category rather than diminishing it. In contrast to the former, this vector is centrifugal – the agent seeks to increase the meaning-value embedded in the derivative concept. Unlike the previous case, this task lacks a terminal point – there is no upper limit at which it can be considered complete. As a result, the agent’s existential scope, with its representational focus shifted toward derivative layers, can expand indefinitely.
It is likely that Plato’s well-known metaphor should be interpreted even more categorically: the representational picture composed of derived concepts is formed not by the totality of shadows cast upon the cave wall, but by the contour lines that delineate the darkest areas – regions shaped by the repeated superimposition of various shadow-figures. Grasping this fact is far easier than accepting its implications, for human thought is confined within the space of representational layers formed by derived concepts – layers that possess no intrinsic significance. They are purely instrumental in nature, having been created to organize and generalize the concepts of the embodied group.
Map of the Territory
An agent capable of perceiving and retaining the outcomes of its sensations does not find itself immediately situated within a “reality.” Until a comprehensible representation of the source of these sensations is formed – through the generalization of sensory inputs – no reality exists for the agent. It merely reacts to discomfort, categorized according to its embodied concepts. At this stage, the agent applies one or another operator to the environment, using more or less arbitrarily selected operands – an approach more akin to the principle of “do something!” The experience of this stage is critical, as it not only enables the agent to study the environment’s responses but also acquaints it with its own capacity to act upon and influence that environment.
Alongside the exploration of embodied concepts, the formation of derivative concepts takes place – an essential process without which the description, systematization, and therefore the effective utilization of the agent’s experience would be impossible. The completeness, coherence, consistency, and non-contradictory nature of the images the agent employs to construct representations not only enable optimal interaction with the environment, but also determine the very existence of that environment for the agent. No cognitive apparatus is capable of storing every episode of the agent’s interaction with the landscape (external environment) in the fragmented form in which it is received, nor can it navigate such a multitude of disjointed elements. The purpose of the cognitive apparatus is not to compile a static library of snapshots weighted by complex numerical coefficients, but to construct a flexible model of the landscape’s functional relations – one that, in all contexts of application, is capable of generating a relevant image of the corresponding landscape from minimal input data. As a result of this approach, recognizing an apple as an edible fruit no longer requires storing in memory all possible aspects, angles, scales, colorations, and so on, of every similar fruit (which is infeasible); instead, it is sufficient to form the concept of an apple, which is understood through its decomposition into meaning-categories.
One might object that a simple neural network trained to capture patterns of shape and color patches can successfully recognize apples without requiring the presence of any concepts or their derivatives. However, a neural network merely produces simulacra of cognitive activity, one of which is the successful imitation of a result that resembles the identification of the category “apple.” Yet this result lacks the full semantic structure of all the conceptual dimensions that are available to a genuine agent motivated to interact with the apple. A fully developed cognitive agent is capable of extracting from this meaning-structure a vector of intentions, which activates a range of potential actions drawn from its repertoire of available concepts, thereby adding constructiveness to its cognitive process and charging it with the motives currently actualized. As a result, the output of its “neural” apparatus will not merely yield the category apple (which is often just a byproduct of cognitive activity, rather than its core content), but will also give rise to new goals for its own activity – goals that, under certain conditions, may lead to the emergence of a metaconcept generalizing the effects of gravitational force on bodies of various mass and form. A neural network could only internalize such a new relation – created by the metaconcept – after genuine cognitive agents have first assigned meaning to that relation, comprehended it, and propagated it through a sufficiently large volume of information adequate to train the network’s weights. Only then will it be able to generate a new simulacrum.
In order for a given context (a segment of the landscape) to provide the cognitive apparatus with the opportunity to evolve and generate derived concepts, it must satisfy a number of conditions. Numerous experiments with animals, conducted by neurophysiologists and other researchers of brain cognitive functions, have demonstrated that the deprivation of certain types of stimuli during early stages of sensory experience accumulation and the formation of environmental representations hinders the activation of specific cognitive mechanisms necessary for the full development of mammalian intelligence. If an agent is formed within an artificially impoverished context, its cognitive functionality receives neither the necessary motivation for initiation nor the material for development. Moreover, a context must not simply be filled with data that can be categorized and converted into information – this alone is insufficient to trigger the formation of derivative concepts. A primitively structured environment that maintains a constant configuration, which can be exhaustively described in a representation based on local experience (a permanent image, the completeness of which can be captured with a relatively small data set whose relevance never expires), cannot provide material for the evolution of cognitive functionality. Likewise, a completely chaotic environment is equally incapable of providing such material – an environment in which no experience of engagement, no conceptualized representational pattern, can be applied by the agent, either directly or indirectly.
The context must provide data that are sufficiently complex to warrant optimization and that are not static, thereby creating a need for abstraction beyond mere snapshots. Otherwise, the agent may construct a representational map without any generalizations or comparisons between its fragments – a primitive but functionally viable model of the context, the development of which requires minimal effort and does not necessitate the engagement of advanced cognitive mechanisms.
Thus, the following criteria for a context can be established:
it must contain discernible patterns and structures within the signified data that can be detected and will facilitate the development of the representational map;
these patterns and structures must not be static – their variability should exhibit a constant, non-zero rate of change that is not excessive, so as to remain within the agent’s capacity to map the environment and utilize the resulting map;
it is essential that such variability is not chaotic, but instead governed by regularities that are, in principle, discoverable.
All these requirements are critically important for the formation of derivative concepts. What must become the element of representation is not a static set of characteristics of each snapshot in the acquired experience, but rather the relationship between the agent’s needs (its drive to reach a comfort zone) and the changes in the context that become possible through the agent’s use of particular operations. In other words, the purpose of representation is strictly pragmatic and utilitarian – the agent does not describe “the-world-as-it-is-in-itself,” but rather “the-paths-to-reducing-its-own-dissatisfaction.” The entirety of the agent’s existence unfolds within the semantics of the space shaped by these elements.
Any agent, even one equipped with elementary cognitive functionality, relies on concepts in order to extract from the totality of the surrounding world a specific, distinguished object of attention (in reality, this object exists only within the conventions formed by the agent’s system of categories – the external world contains not the object itself, but merely the prerequisites which certain agents are capable of using to isolate it). As a result of such categorization, the agent identifies an apple, not simply an object of several hundred grams in weight, roughly spherical in shape, reflecting light in the 500–700nm wavelength range (i.e., green to red), possessing a stem or a place where a stem once was – a stem which is specifically the stem of an apple, and not an elongated, curved, tapering irregular cylinder with specific tactile properties and a particular resistance to bending, or more precisely, to transverse loading, etc. A newborn warm-blooded mammal that is expected to feed on fruit initially lacks the concept of “apple.” This concept is formed only as a result of generalizing experiences of interaction with objects sharing certain features with apples in one or more characteristics: shape, weight, color, size, smell, edibility, and so forth. For example, the concept of “round” is an abstraction that arises from generalizing across many concepts with a range of similar features: lack of sides and angles, etc. Gradually, a group is formed that encompasses edible fruits possessing particular taste, smell, color, shape, weight, density, and so on. This derivative concept of the “apple in general,” supplemented by a set of corresponding derivative operators, enables the agent not only to correctly extract from a data set a structure corresponding to an apple, but also to maintain a personal relation to each aspect of this multifaceted image – to grasp its meaning, and thereby to possess the ability to compare that meaning with others, linking them into unique, agent-specific constellations whose complexity can never be realized by neural network simulacra, which merely reproduce patterns of memorized fixed associations constructed by those who had access to their meanings.
Metaconcepts
To understand the mechanism behind the formation of a representation of the surrounding world, let us first consider the opposite case – that of an agent who possesses no such representation at all. For such an agent, a reaction devoid of even the slightest trace of reflection is the only available mode of interaction with the environment – it lacks a system of concepts through which the surrounding world could acquire meaning for it, and within which this meaning could be preserved regardless of the presence, at any given moment, of a data source capable of generating sensations. When such an agent experiences stimulation from a receptor, a reactive response is formed directly based on the data, and the content of this response is entirely determined by the meaning-value of that stimulation. This is how an amoeba reacts to an increase in salt concentration in a solution or to direct mechanical impact – contraction effectors are triggered, resulting in its movement away from the more irritating area toward a less irritating one. After the response is carried out, the region of the environment that elicited it effectively ceases to exist for the amoeba. It can be said that the only agents who truly “live in” the environment itself are those who respond to it in this immediate manner – those who do not perceive the world through the lens of models constructed from their own understanding, but instead react through instantaneous action, based solely on signals directly presented to them hic et nunc.
This immediacy entails a number of disadvantages, one of which is the inherently reactive nature of such a mode of interaction: the agent has no representation of what it will encounter in the next moment until its receptors provide signals indicating a change in stimuli. First, such signals may arrive too late, when it is already impossible to take any meaningful action. Second, the agent is incapable of utilizing its own prior successful experiences. From an evolutionary perspective, the optimal balance between the costs of constructing and maintaining a representational model and the benefits of immediate, here-and-now responsiveness is reflected in the fact that all the disadvantages of lacking an internal map (which requires a highly vulnerable and energetically costly apparatus) are overwhelmingly offset by the negligible value and ease of replication of each such immediate individual. However, as the organism becomes more complex and the conditions of its environment more demanding, this strategy rapidly loses its evolutionary advantage, reducing both the individual’s chances of survival and the species’ ability to proliferate. Therefore, every sufficiently complex agent eventually faces the necessity of ceasing to live directly within the environment, transitioning instead into a world that it constructs from the elements of its own internal representations of that environment.
Each such agent attempts to anticipate the expected signals it may receive from the surrounding environment. To this end, it begins constructing models of this environment, wherein the composite entities of the models substitute all external data sources. This requires translating the incoming signals into a unified system of measurement abstracted from the original sensory inputs received through its receptors. The agent no longer perceives the world through the raw data directly delivered to its sensory apparatus. Instead, it relocates into a world of constructed models, which demand continuous reinforcement or falsification of their adequacy – where adequacy is defined not by “precise correspondence to the environment,” but by their functional applicability for the agent. These models are composed of conceptualized images that describe the observed patterns in how existential pressure from the environment changes in response to the agent’s actions under various input conditions – including its current needs and sensory inputs. This thesis is central. The models employed by a reflective agent are not based on the raw material of immediate sensations – the very substrate that defines the experiential boundaries of a reactive agent. Instead, they are constructed from an internally generated representation that reflects the path of realizing the agent’s current intentions. This path unfolds within a landscape of conceptual images that emerge from the agent’s ontology. These images are selected through a filtering process that takes into account both data from the previous model and current signals from the external environment. Paradoxically, it is often the latter – the direct input from the environment – that has the least impact on the resulting model.
It is difficult for the average layperson – spoiled by humanistic propaganda extolling the greatness of human reason – to accept the fact that the biological brain owes its capacity for abstraction not to the boundless power of its apparatus, but rather to its extremely limited bandwidth and the physiologically imposed constraints on processing incoming data and cataloging memorized information (without which such information would be unusable). The mechanism for creating abstractions, metaphors, generalizations, and abstract categories is an auxiliary tool designed to optimize the use of available resources and to prevent the inefficiency and redundancy of mental models produced during the cognitive mapping of reality. This is a fact worth bearing in mind when developing a functional analogue of human intelligence – should anyone be interested in pursuing such a task.
As long as a cognitive agent operates solely with embodied concepts, it lacks a core Self. This Self-core should not be conflated with the consciously experienced “I”; it is merely its precursor – a point of origin for the formation of ontology, a gravitational center in the future narrative, following D. Dennett’s definition. This core constitutes the localization of the source of meaning-making; however, it cannot emerge until the first derived concepts have formed – the building blocks of the agent’s ontology. The Self is not tied to the physical geometry of the agent and does not depend on the physical boundaries of its carrier. It reflects the most crucial aspects of the agent’s ontology – those concepts that exert the greatest existential pressure and upon which the coherence of the entire representational picture depends. The ending of Orwell’s dystopia 1984, which is often criticized by humanist literary scholars for its excessive naturalism and brutality – particularly the detailed descriptions of torture employed by O’Brien to restructure the protagonist’s internal patterns and force him to fully and unconditionally incorporate the idea of Big Brother’s righteousness and greatness into his ontology – is indeed difficult for a sensitive reader to endure. However, the true source of this horror lies not in the brutality itself, but in the author’s success in uncovering and revealing the actual locus of the human Self. Winston Smith’s Self could only be destroyed through the destruction of his ontology. The moment at which a person can no longer withstand the existential pressure and becomes willing to accept externally imposed predicates and frameworks is the moment of actual annihilation of the Self that owned the ontology – despite the fact that Winston Smith remains biologically alive, physiologically intact, and cognitively functional.
No cognitive agent is designed for any tasks other than adaptation and competitive survival. The entire functionality of its apparatus for categorization, comprehension, interpretation, and abstraction inherently precludes any form of “genuine knowledge of the world”. This functionality serves exclusively the utilization of an artificially constructed representation, which will always remain the only accessible reality for the agent. Regardless of how the agent is implemented – whether as a biological organism shaped by terrestrial evolution or as a virtual instantiation within a novel neural network architecture – this does not eliminate the fundamental limitations of its cognitive functionality. It remains confined within the space of meaning-making, which is bounded by embodied concepts and their derivatives. These boundaries can only be extended when the agent develops a need to construct metaconcepts in order to comprehend the very process of meaning-making – to seek meaning in the existence of the semantic categories it has grown accustomed to using. This task is part of a broader endeavor to systematize, organize, and optimize the internal representation of both the surrounding world and the agent itself. Therefore, its solution is possible through the same tools the agent has already employed. However, the need to pursue this task as a goal in its own right can emerge only in those agents for whom the familiar cycles of “stimulus–satisfaction” in routine, everyday activities – confined to the domain of embodied concepts and their immediate derivatives – no longer yield sufficient existential reward.
As previously discussed, an agent that has moved beyond the paradigm of here-and-now reactivity requires some anticipation of what awaits as a result of any given interaction with the environment. This gives rise to a motivational drive to maintain an effective model of the surrounding world – one that is expected to generate a response to any incoming query. Such responses are rarely exhaustive or complete; in early stages, they rely more on guesswork than on the outcomes of actual analysis. What the model must provide is a basis upon which some version of a prediction can be generated and delivered to the agent as input for decision-making. This reflects a general property of the cognitive apparatus of any living being: whenever a task is posed, some form of answer will inevitably be produced. The relevance of that answer to reality can only be tested through subsequent experience. The result of such evaluation feeds back into the agent’s motivational pressure – it either reinforces the model, demands its adjustment, or leads to its rejection. No representational picture can take shape or continue to evolve without experiential feedback. Even if a model appears adequate at a given moment, its relevance will inevitably diminish over time. Thus, regular interaction and testing through experience is essential.
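The predict–test–update cycle described here can be sketched schematically. The class below is a minimal illustration, assuming arbitrary error thresholds and a scalar confidence value; it shows only the essential point that some answer is always produced and that experience then reinforces, adjusts, or rejects the model.

```python
# A schematic of the feedback cycle; thresholds and names are assumptions.

class WorldModel:
    def __init__(self) -> None:
        self.confidence = 0.5

    def predict(self, query: str) -> str:
        # Some answer is always produced, even if it rests on guesswork.
        return f"anticipated outcome for {query!r}"

    def feedback(self, prediction_error: float) -> str:
        """Experience either reinforces the model, demands adjustment, or rejects it."""
        if prediction_error < 0.1:
            self.confidence = min(1.0, self.confidence + 0.1)
            return "reinforce"
        if prediction_error < 0.5:
            return "adjust"
        self.confidence = 0.0
        return "reject"
```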
However, these are merely the initial stages in the emergence of the motivation to construct an ontology. Once it emerges, this pressure begins to influence all of the agent’s activity, stimulating and directing its actions – the agent begins deliberately to engage in the refinement and expansion of the models used in its representations. Ontological comfort is achieved as the agent accumulates a more or less satisfactory thesaurus of conceptualized representations. Typically, this stage marks the subject’s entry into autonomous existence and is commonly described as “maturation” or “self-formation.” Once this set is complete, it may remain unchanged for the rest of the agent’s life. In essence, this signifies the extinction of the motivational setup associated with ontological pressure. The only case in which this drive retains its strength – and may even intensify – is when the agent’s cognitive architecture develops a commitment to the further evolution of its representational framework, one that transcends the purely utilitarian function of navigating a familiar environment. The presence of such an orientation corresponds to the fulfillment of the second criterion for genuine AI.
The development of intelligence is entirely independent of the semantics of the basic categories upon which a cognitive agent is founded. Embodied concepts may be entirely artificial and virtually constructed; none of them need to replicate any of the familiar concepts inherent to biological organisms. Nevertheless, under appropriate conditions of organization, such concepts are capable of ensuring a level of functionality that in no way falls short of that achieved by the natural evolution of biological carriers. The ultimate objective of any cognitive agent is not the formation of specific predefined categories (such as color, space, time, quantity, number, volume, temperature, etc. – those found in the thesaurus of Homo sapiens), but rather the development of its own system of conceptualization. From this system, the agent must then be able to derive metaconcepts that allow it to transcend the boundaries of its own existence – whether that existence takes place within the experimental environment of a virtual world or in the natural surroundings it refers to as “reality.”
Intention and Self
At the very beginning of its evolution, the agent operates within a behaviorist here-and-now paradigm of reaction: receiving signals from the environment through its receptors, it uses its available effectors to navigate the environment in a way that reduces irritation or stimulation. However, even at this early reactive stage, the groundwork for predictive modeling begins to emerge. Any outcome stored in memory following the application of a specific action allows the agent to later anticipate similar outcomes under comparable initial conditions. The boundary between pure reactivity and engagement with a model is blurred – the transition occurs gradually. A full-fledged representation becomes possible only after the formation of derived concepts and is constructed exclusively in response to intentional requests. Without such requests, the agent has only an ontological thesaurus – a set of conceptualized images that can be used to build a map composed of intermediate reference points understandable to the agent, between which it can orient itself in pursuit of a defined goal. Once a goal is determined, a working scene is assembled from this repertoire of templates, representing the relevant details and possible actions available to the agent. The time and place of this scene are determined by data provided to the agent through its receptors, memory, or imagination. A mechanism is then triggered to select conceptualized images from the agent’s experiential archive that correspond to this input. The fully constructed schema must enable the agent to solve the tasks articulated in the intentional query originating from the agent’s selfhood and initiating the construction of this scene.
The formation of any derived concept that appears familiar and straightforward in the everyday experience of a biological cognitive agent is, in fact, the result of a complex sequence of preparatory steps and intermediate stages. These steps will be examined below using the example of concepts related to the results of an agent’s movement through space. Initially, any agent is limited to a very small set of interaction modes with its environment, since overwhelming the cognitive apparatus with an excess of possibilities can only hinder the acquisition of these abilities and their connection to categories of meaning. Let us assume, for simplicity, that in our case all modes of adaptation to the environment (i.e., the satisfaction of the agent’s needs) are achieved through physical movement within that environment, carried out using a set of effectors that can move the agent in any given direction. We will also simplify the environment by constraining it to a two-dimensional space – a surface of a sphere with a sufficiently large radius. The agent on this sphere is represented as a circle, along the perimeter of which receptors are evenly distributed, supplying it with data about the surrounding environment (recall that a detailed description of the agent’s architecture – including all aspects of its implementation – is provided in the second part of the Manifesto, titled The Model).
An untrained agent possesses only embodied concepts and lacks any understanding of what the surrounding environment is; it does not yet have notions capable of forming a vocabulary for such a description – even the very concept of movement has not yet been developed. In its initial steps, the agent simply activates an effector with certain parameters (the concept of direction is not yet available), and in response, it receives changes in receptor signals in the form of categorized data indicating levels of existential pressure. At this stage, the agent is merely establishing associations between specific effector configurations and the changes observed in the meaningful data before and after each action. Once these associations become fixed, the agent gains the ability to deliberately apply certain actions in order to regulate existential pressure within the corresponding category. In practice, at this point it is already more or less successfully moving in appropriate directions – but it still has no idea what it is actually doing. It has learned to modulate the arising tension, but these actions have not yet been conceptualized within a semantic framework of spatial understanding, which it still has to construct. To this end, the agent’s experience is generalized and systematized into emerging derived concepts, which allow it to link a series of fragmented sensory frames into a coherent pattern, identify sequences, and integrate data series associated with different categories of meaning. The resulting schemas and templates are immediately put to use in predicting future experience – they are tested, adjusted, or discarded in favor of alternatives. The agent gradually transitions into the space of representation: the reality it comes to understand is no longer the external world that delivers raw signals to its receptors, but rather the models that interpret these signals through accessible concepts and allow for their prediction.
In our example (see The Model below), the agent’s effectors are designed in a simple manner: each action is performed with constant intensity (the same distance of movement), and only the azimuth angle varies – from 0° to 359°. The agent controls this parameter, as well as monitors the state of the environment before and after the action. Additionally, it maintains a stack of all recent action-related data. At the earliest stage, when the agent has not yet linked the parameters of the activated operator (bearing) to the categories of meaningful data (signals from its receptors), each new experience appears unique and can be roughly interpreted as: “these efforts were made, which led to these changes.” Through such experiences, associations begin to form between the states of the receptors and the parameters of the effectors – laying the groundwork for describing directed action, which will be integrated into the conceptual content of each embodied concept associated with these receptors. An embodied concept – the linkage between an effector and a category of meaningful data – is formed through the training of a neural network. The network takes as input the values of existential pressure received from receptors (evenly distributed along the agent’s perimeter), and is expected to output the corresponding action (operator) with precise operand values (e.g., movement at a given azimuth) that lead to the greatest reduction in this pressure. As a result of this training, the embodied concept binds to its domain of meaning a set of actions capable of influencing the meaningful structure of the incoming data.
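The receptor-to-bearing mapping described above can be sketched as follows. The number of receptors, the hidden size, the binning of azimuths, and the training signal are all illustrative assumptions; the Manifesto’s own Model specifies its architecture separately.

```python
import torch
import torch.nn as nn

# A minimal sketch of the receptor-to-bearing mapping described above.
# N_RECEPTORS, the hidden size, and the azimuth binning are assumptions.

N_RECEPTORS = 36   # receptors evenly distributed along the agent's perimeter
N_AZIMUTHS = 360   # one bin per degree of bearing (0-359)

policy = nn.Sequential(
    nn.Linear(N_RECEPTORS, 64),
    nn.ReLU(),
    nn.Linear(64, N_AZIMUTHS),
)

def choose_bearing(pressures: torch.Tensor) -> int:
    """Return the azimuth the network currently expects to reduce pressure most."""
    logits = policy(pressures)
    return int(torch.argmax(logits).item())

# Training would pair observed pressure patterns with the bearing that, in past
# experience, produced the greatest drop in total pressure (e.g. cross-entropy).
example_pressures = torch.rand(N_RECEPTORS)
print(choose_bearing(example_pressures))
```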
A derived concept is abstracted from the specific data tied to sensations, allowing for a more universal approach to that data and enabling its use in a broader context than the one that originally generated the experience. The higher the level of the derived concept, the more universal the image it produces – but the less concrete the action it allows the agent to select within a scene constructed from such an image. As a result, the depth of scene construction in the formation of a representation is never fully detailed or exhaustively descriptive of all possible aspects and levels of “rendering.” The purpose of the scene is to fulfill the intentional query, whose semantics almost never descend to the level of embodied concept semantics.
It is appropriate here to recall Putnam’s remark on what constitutes thinking as such:
“– The ability to metaphorically project structures from the material domain onto structures within abstract domains, enabled by structural correlations between physical and abstract domains. This accounts for our capacity to think about abstract realms such as quantity and purpose.
– The ability to form complex concepts and general categories using image schemas as structural mechanisms. This allows us to construct representations of complex events and taxonomies with hierarchically organized categories.”
Both points are crucial – they imply the necessity of a universal generalization mechanism capable of detecting recurring patterns and generating new derived concepts based on them. This overarching task can be decomposed into the following components (a skeletal interface sketch follows the list):
a mechanism for forming derived concepts from any input material;
a distinct mechanism for the formation of the self-concept;
a motive-generation mechanism – that is, the production of intentions based on the self;
a mechanism for constructing the representational scene in response to an intentional query;
a mechanism for orientation within the generated map (i.e., the acquisition of new experience).
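Purely as an illustration of how these five mechanisms might be separated at the interface level, here is a skeletal Python sketch; the class names and method signatures are assumptions, not a prescribed API.

    from abc import ABC, abstractmethod

    class ConceptFormation(ABC):
        @abstractmethod
        def generalize(self, frames):
            """Form derived concepts from any input material."""

    class SelfFormation(ABC):
        @abstractmethod
        def update_self(self, action, profit):
            """Fold successful actions into the agent's self-image."""

    class MotiveGeneration(ABC):
        @abstractmethod
        def intend(self, self_image, state):
            """Produce an intentional query based on the self."""

    class SceneConstruction(ABC):
        @abstractmethod
        def represent(self, query, context):
            """Construct the representational scene (the map) for a query."""

    class Orientation(ABC):
        @abstractmethod
        def navigate(self, scene, query):
            """Select actions within the map, yielding new experience."""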
The following chapters will outline the expected outputs of these mechanisms and present the principles underlying their operation.
In addition, it is important to understand that everything described above is sufficient to produce a perceiving, understanding, and rationally acting – yet mindless – intelligence, that is, one which does not encompass its own cognitive processes in its activity. For the cognitive apparatus to encompass the concepts and images it generates itself, it must be capable of relating to the results of its own operations not only within the framework of adapting the environment to the agent’s needs, but also within a derivative space – one that allows the processes of conceptualization themselves to be represented as patterns whose relevance is determined by their applicability to the agent. A dedicated chapter will be devoted to the mechanisms of this reflective layer.
Derived Concepts and Images
In the example considered here, many components of the cognitive apparatus – most notably the self – borrow operational principles and other characteristics from the most well-known prototype to date: Homo sapiens. Nevertheless, this does not imply that such components are the only possible ones or even strictly necessary.
As mentioned in the previous chapter, the agent’s cognitive apparatus monitors the current state of incoming stimuli and also maintains a short-term memory stack of previous frames (the depth of this stack is relatively small in mammals – for humans, roughly 7 ± 2 elements). This stack is used by the data generalization mechanism to analyze categorized information and the actions performed. If a recognizable sequence is detected within this set – one that can be conceptualized – generalization becomes possible: the initial element of the sequence serves as the anchor point of the concept, while the terminal element (i.e., the one identified at the break of the generalization pattern) defines its endpoint. The informational content encompassed by this interval (the entire frame sequence) is then packaged into a conceptualized image, which can later be used for reproducing (representing) the environment. Since the starting and ending conditions of each conceptual sequence typically differ, the resulting images will almost always overlap and/or be nested within one another.
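The stack-and-break mechanism can be sketched as follows. The break test used here (a change in the rate of change of a single tracked value) and the numeric frames are illustrative assumptions; the real mechanism operates over categorized frames and performed actions.

    from collections import deque

    STACK_DEPTH = 9                      # a shallow short-term store, as described above

    def sequence_break(values, tol=0.05):
        """Hypothetical break test: the tracked value stops changing the way it
        changed across the preceding frames."""
        vals = list(values)
        if len(vals) < 3:
            return False
        deltas = [b - a for a, b in zip(vals, vals[1:])]
        return abs(deltas[-1] - deltas[-2]) > tol

    stack = deque(maxlen=STACK_DEPTH)    # short-term memory of recent frames
    images = {}                          # (anchor frame, terminal frame) -> packaged sequence

    for frame in [0.9, 0.8, 0.7, 0.6, 0.6, 0.6, 0.2, 0.1]:
        stack.append(frame)
        if sequence_break(stack):
            packaged = list(stack)
            images[(packaged[0], packaged[-1])] = packaged   # anchor and terminal bound the image
            stack.clear()
            stack.append(frame)          # the breaking frame anchors the next candidate image

    print(images)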
It is important to recognize that the image retained as a conceptualized element of the ontology’s thesaurus is not identical to the image that will later be generated from it as part of a representation. The world model is not constructed from pre-stored blocks of information retrieved from memory that “describe reality”, but is instead the result of projecting an intentional query onto conceptualized frames of experience that match the conditions of the current context. Thus, the stored image can be regarded as a function that generates possible solutions to incoming queries – each query being a search for available actions aimed at addressing certain existential challenges. The response consists of a set of parameters for the agent’s available effectors, along with a multifaceted array of anticipated consequences – i.e., changes in the agent’s meaningful data – expected to follow the action. Figuratively speaking, the operation of this function, which generates a representational image (i.e., constructs the reality intelligible to the agent), can be described as follows: a query is issued – “What is the nature of the following receptor signals, given the current need to reduce tension in the ‘cold’ category?” – and the generator returns an answer: “There exists an object capable of providing warmth, which can be reached by moving in a specific direction; there is also another object offering greater satisfaction of the need, but requiring intermediate actions”. This response constitutes the only reality accessible to the agent – a logistical map for task resolution. This map can never be “exhaustively complete” in any sense of absolute objectivity; it reflects only the agent’s current concern. There may be instances in which the agent receives an irrelevant or suboptimal representation – this may result from insufficient experience (a poorly developed thesaurus) combined with a strong intentional demand. The generator’s task is to identify any acceptable solution; the absence of restrictions (i.e., contextual data) or an underdeveloped database of possible responses does not halt its operation – a response will still be produced from the nearest available alternatives, even if they are based on tenuous associative links.
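The image-as-generator idea can be sketched as a function from an intentional query to candidate actions with anticipated consequences. The relevance scoring, the sample thesaurus, and all names below are illustrative assumptions; the point is only that the generator always returns the nearest available alternatives rather than refusing.

    def make_image(category, azimuth, expected_delta):
        """Package one conceptualized frame sequence as a solution generator."""
        def generate(query_category):
            # Relevance drops when the query only loosely matches the category
            # the image was formed around (a tenuous associative link).
            relevance = 1.0 if query_category == category else 0.2
            return {"action": ("move", azimuth),
                    "expected_delta": expected_delta,
                    "relevance": relevance}
        return generate

    thesaurus = [
        make_image("cold", 45, -0.6),     # warmth reachable by moving one way
        make_image("cold", 120, -0.9),    # a better source, requiring more effort
        make_image("acidity", 270, -0.4),
    ]

    def represent(query_category):
        # The generator never halts: it returns whatever alternatives exist,
        # ordered by relevance and expected relief.
        candidates = [image(query_category) for image in thesaurus]
        return sorted(candidates,
                      key=lambda c: (c["relevance"], -c["expected_delta"]),
                      reverse=True)

    for candidate in represent("cold"):
        print(candidate)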
The agent described below is structurally minimalistic, as its primary purpose is to demonstrate the elementary functionality of an artificial cognitive system. It lacks conventional sensory organs such as hearing, vision, or olfaction, as well as those associated with the mammalian vestibular system (being a two-dimensional entity, it has no need for them). Instead, it is equipped with locomotion effectors that allow movement in any azimuthal direction, along with localized tactile receptors that perceive a limited set of data concerning the immediate surrounding environment. These data include temperature, acidity, and solar radiation (insolation). Additionally, the receptors can inform the agent of the presence of an obstacle that prevents further movement. All these basic categories are described by embodied concepts and will be listed first. In subsequent stages, derived concepts will be formed through generalization of the data obtained from these embodied categories (see below).
The concept of an obstacle is an embodied concept arising from experiential data that lead to changes in pressure receptor signals – typically as a result of the agent’s movement. This process results in the formation of a meaning-bearing category of data, associated with the locomotion effector – namely, counterpressure or resistance to action. The value is not binary; it ranges from zero to one, with a comfort zone at zero, corresponding to the sensation of freedom. It influences the actual displacement distance, which is always inversely proportional to the sensation of pressure (i.e., the conceptualized strength of the signal).
The concept of acidity is an embodied concept formed through the categorization of data provided by the agent’s acidity/alkalinity receptor. The action that alters this data is movement (via the locomotion effector). The categorization boundaries are defined by the receptor’s properties, with a comfort zone situated between them. Future extensions may introduce the ability to regulate pH at the specific locus occupied by the agent (using expendable environmental resources such as temperature and insolation); however, this functionality is not included in the agent’s basic model.
The concept of insolation is an embodied concept formed through the categorization of data received from the photoreceptor. The modifying action is movement (via the locomotion effector). The categorization range spans from zero to a maximum value, with the comfort zone located near the upper boundary. As with the previous concept, future modifications may include actions to increase or decrease the intensity of insolation at the specific locus occupied by the agent (using expendable environmental resources such as temperature and acidity).
The concept of temperature is an embodied concept formed through the categorization of data from the ambient temperature receptor. The modifying action is movement (via the locomotion effector). The categorization boundaries are defined by the receptor’s characteristics, with a comfort zone located between them. As in the two previous cases, future extensions may introduce thermoregulation actions within the specific locus occupied by the agent (utilizing expendable environmental resources such as acidity and insolation).
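The four embodied categories above can be condensed into a compact sketch, assuming each maps a raw receptor reading onto existential pressure in the range [0, 1] relative to its comfort zone; the numeric ranges and the linear falloff are assumptions made for the sketch.

    def band_pressure(value, comfort_lo, comfort_hi, lo, hi):
        """Pressure is zero inside the comfort zone and grows linearly toward
        the categorization boundaries (acidity, temperature)."""
        if comfort_lo <= value <= comfort_hi:
            return 0.0
        if value < comfort_lo:
            return min(1.0, (comfort_lo - value) / (comfort_lo - lo))
        return min(1.0, (value - comfort_hi) / (hi - comfort_hi))

    def insolation_pressure(value, maximum):
        """Comfort zone near the upper boundary: pressure grows as light drops."""
        return min(1.0, max(0.0, (maximum - value) / maximum))

    def obstacle_pressure(counterpressure):
        """Already categorized in [0, 1]; comfort is zero (the sensation of freedom)."""
        return min(1.0, max(0.0, counterpressure))

    STEP = 1.0
    def effective_displacement(counterpressure):
        # The actual displacement shrinks as resistance grows.
        return STEP * (1.0 - obstacle_pressure(counterpressure))

    print(band_pressure(7.8, 6.5, 7.5, 0.0, 14.0))     # mildly alkaline locus
    print(insolation_pressure(300, maximum=1000))      # dim locus
    print(effective_displacement(0.4))                 # partial resistance to movement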
Let us now turn to the list of concepts expected to emerge through the built-in generalization mechanism – an essential component of the minimal functional architecture of any agent. This list is not intended to be exhaustive; its purpose is to provide an overview of such derived concepts and to illustrate their general interdependencies. The list begins with fundamental derived concepts whose emergence does not depend on the specific categories of data obtained through sensory experience. These are followed by derived concepts that result from the processing of information obtained through the agent’s use of locomotion effectors (the only available actions at this stage).
The concept of permanency emerges when, as a result of performed actions, the length of the input data sequence (i.e., the action’s parameterization) becomes associated with a stable and unchanging response from any sensory channel – that is, with the absence of change in existential pressure across any category of meaning. In nature, the cognitive apparatus of even the most primitive organism is capable of tracking the length of constant data sequences, suggesting that this concept, by its nature, should also be classified as embodied. Its formation is grounded in the electrochemical dynamics of biological neural networks – an unsurprising fact, given that the foundation for the formation of the earliest derived concepts can only be preset and thus embodied. The evolution of biological cognitive systems has led to a situation where each agent, at the moment of its emergence, must not only possess a set of embodied concepts but also include a number of built-in (i.e., embodied) mechanisms for generating derived concepts. Nevertheless, the concept of permanency is classified as a derived concept, and the rationale for this is as follows: it is formed solely as a result of processing data that have already been imbued with meaning, and it does not independently exert any influence on existential pressure. During its formation, the pressure delta is used as an input parameter in the neural network’s training process, making it functionally equivalent to signals from any receptor channel. The relevance of this concept is determined by its duration – it reaches maximum significance when continuity is preserved and falls to zero when fully disrupted. A complete disruption refers to a signal series in which the length of the disrupted segment exceeds that of the preserved segment. The interval between these two extremes defines the conceptual space in which the significance of permanency varies. This gives rise to a new abstract category, one that cannot be derived from any single data point from a receptor but only from a sequence of such data. Since a concept requires not only a category but also an action that modulates its significance, the corresponding action here is an abstraction expressed by a simple function: Permanency = StableData(Action), where Action is any operation that results in either an increase or decrease in the value of permanency. It is worth noting that the concept of diversity, which will be introduced later, is not the inverse of the significance of permanency. Rather, it is a separate concept formed independently.
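A minimal sketch of this grading of permanency significance, assuming a simple linear interpolation between full continuity and complete disruption:

    def permanency_significance(deltas, tol=1e-3):
        """deltas: per-frame changes in existential pressure for one category."""
        preserved = disrupted = 0
        for d in deltas:
            if abs(d) <= tol:
                preserved += 1
            else:
                disrupted += 1
        if preserved == 0:
            return 0.0
        if disrupted == 0:
            return 1.0                        # continuity fully preserved
        if disrupted >= preserved:
            return 0.0                        # complete disruption
        return 1.0 - disrupted / preserved    # the interval between the extremes

    print(permanency_significance([0, 0, 0, 0]))        # 1.0
    print(permanency_significance([0, 0, 0, 0.2]))      # ~0.67
    print(permanency_significance([0.2, 0.3, 0, 0]))    # 0.0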
The concept of bound is a derived concept that builds upon the concepts of obstacle and permanency. It generalizes into a single conceptual unit a sequence of data categorized as a near-continuous pattern of maximal obstacle significance. The associated action is defined by the function: Bound = Permanency(Obstacle).
The concept of area is a derived concept formed through the agent’s actions. It emerges from the superimposition of the permanency concept onto the outcomes of locomotion, during which permanency is maintained with respect to the significance of any available concept. The significance of this derived concept equals the significance of the corresponding sequence. The associated action is defined by the function: Area = Permanency(Concept).
The concept of depth is a derived concept based on the permanency concept, applied specifically to an effector’s parameter. It is fully abstracted from the outcome of the action (i.e., the magnitude of existential pressure); its focus lies solely on the stability of the operator’s parameters – in this case, the effector’s activation angle. The significance of this derived concept also depends on the significance of permanency. The associated action is defined by the function: Depth = Permanency(ActionParams).
The forward and backward concepts are derived concepts that generalize all of the agent’s actions through the concept of permanency, with primary focus placed on the change in the signified value – that is, a non-zero difference that consistently maintains the same sign. In other words, the sequence of locomotions must lead either to a steady decrease or increase in the value of a specific category. All other parameters are disregarded: category membership is defined solely by the constancy of the result, not by the variability of actions. The significance of these derived concepts is determined by the duration over which the direction of change in the signified value (relative to the previous result) is preserved. The degree of their significance follows the same principle as the base concepts. The action is represented by the function: [Forward / Backward] = Permanency([+/-]ΔConcept) * Depth.
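Read together, the functional forms above can be sketched as compositions over a shared permanency measure. The tolerances, the decrease indicator used for forward, and the example values are assumptions; the helper mirrors the earlier permanency sketch.

    def permanency(values, tol=1e-3):
        stable = sum(1 for v in values if abs(v) <= tol)
        broken = len(values) - stable
        if broken >= stable:
            return 0.0
        return 1.0 if broken == 0 else 1.0 - broken / stable

    def bound(obstacle_signals):
        # Bound = Permanency(Obstacle): near-continuous maximal obstacle significance.
        return permanency([1.0 - s for s in obstacle_signals], tol=0.05)

    def area(concept_significance):
        # Area = Permanency(Concept): constancy of a concept across a locomotion sequence.
        deltas = [b - a for a, b in zip(concept_significance, concept_significance[1:])]
        return permanency(deltas)

    def depth(azimuths, tol=1.0):
        # Depth = Permanency(ActionParams): constancy of the activation angle, in degrees.
        deltas = [abs(b - a) for a, b in zip(azimuths, azimuths[1:])]
        return permanency(deltas, tol=tol)

    def forward(concept_values, azimuths):
        # Forward = Permanency(-ΔConcept) * Depth: a steady decrease in the signified value.
        deltas = [b - a for a, b in zip(concept_values, concept_values[1:])]
        steady_decrease = permanency([0.0 if d < 0 else 1.0 for d in deltas])
        return steady_decrease * depth(azimuths)

    print(bound([1.0, 1.0, 0.98, 1.0]))                     # a wall -> 1.0
    print(area([0.4, 0.4, 0.4, 0.4, 0.4]))                  # a uniform region -> 1.0
    print(forward([0.9, 0.7, 0.5, 0.6], [45, 45, 46, 45]))  # mostly approaching comfort -> 0.5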
An analysis of the derived concepts listed above shows that the concept of permanency functions as the primary transformation mechanism – a multi-factor transformer that “packages” diverse data sequences into new semantic categories. This mechanism of conceptualization operates as a multi-pass recursive pipeline for detecting new patterns (sequences that can be formalized as new derived concepts). It reuses previously established methods of data packaging (already discovered patterns) and attempts to identify new generalization schemes. As mentioned earlier, segmenting the signal stream into individual conceptualized frames requires only a mechanism for detecting sequence breaks: as soon as one of the derived concepts identifies the end of a pattern, the corresponding image is fixed and stored in the internal memory. These images will inevitably overlap, intersect, and embed within one another – which is both expected and beneficial for organizing an interconnected thesaurus: the foundation of the agent’s ontology.
The process of creating new concepts occurs in parallel with the formation of images based on already discovered concepts – data is packaged into multilayered images that function as parameterized concepts. These images are stored in memory, with the unique key being the combination of the starting and ending frame data. Later, they can be retrieved to construct a representation. It is worth reiterating here that each constructed representation does not constitute an abstract “model of the original reality,” but rather a reconstruction – created anew – in accordance with the demands of the current context and aimed at solving the specific task presently faced by the agent – the requester of the representation.
The concept of self serves to categorize a specific type of image that emerges in the agent as it accumulates experience. This image is constructed from successful actions – those that provide the agent with satisfaction by effectively reducing existential pressure. The ability to operate with meaningful information implies an embedded evaluative mechanism – one that must reduce the data across any embodied category to a positive/negative spectrum. The agent is thereby driven to amplify the former and diminish the latter. This implies that all agent activity is directed toward a clearly defined goal: obtaining a positive balance within one or more of its meaningful categories. Achieving this goal holds intrinsic value for the agent and may be referred to as existential profit. The imperative to maximize existential profit is rigidly constitutional for any agent and is present from the outset. A natural and logically consistent consequence for any agent is the comparison of its actions against the achieved existential profit, the outcome of which is the formation of the agent’s self-image. One may say that any self-image represents a composite of meaning categories that the agent is most successful in attaining. At the same time, not every form of activity qualifies as successful – activities that are meaningless or yield no satisfaction (i.e., whose outcomes carry no significance for the agent) are excluded. Rather, successful activity must be at least minimally evaluated as positive, meaning that its value for the agent must correspond to at least one embodied concept or, subsequently, to a derived one that has been incorporated into the self.
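A toy sketch of this bookkeeping: existential profit is credited to the concepts involved in successful actions, and the self-image emerges as the composite of the most profitable ones. The decay factor and all names are assumptions introduced for the sketch.

    from collections import defaultdict

    class SelfImage:
        def __init__(self, decay=0.99):
            self.profit = defaultdict(float)   # concept name -> accumulated existential profit
            self.decay = decay

        def record(self, concepts_involved, pressure_delta):
            gain = -pressure_delta             # pressure reduced => positive profit
            if gain <= 0:
                return                         # meaningless or unsatisfying activity is excluded
            for name in self.profit:
                self.profit[name] *= self.decay
            for name in concepts_involved:
                self.profit[name] += gain

        def composite(self, top=3):
            """The categories of meaning the agent is most successful in attaining."""
            return sorted(self.profit, key=self.profit.get, reverse=True)[:top]

    self_image = SelfImage()
    self_image.record(["temperature", "forward"], -0.4)
    self_image.record(["insolation"], -0.1)
    self_image.record(["temperature"], +0.2)    # a failed action: not part of the self
    print(self_image.composite())               # ['temperature', 'forward', 'insolation']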
The agent’s self-image is conceptualized through the structure of its actions – specifically, by employing the concepts in which those actions participate. Within the self-concept, it is the highest-level actions within the hierarchy of activations that are included, and accordingly, the high-level concepts to which they correspond. This raises a natural question: if embodied concepts are the earliest and most frequently used in the agent’s entire experience, how can derived concepts meaningfully compete with them? Under what conditions can existential profit from body-oriented activity be surpassed by profit resulting from high-order activity that bears no direct connection to embodied concepts – such as the pursuit of ethical or aesthetic goals, or the prioritization of ideals over personal benefit? The answer is this: every agent remains within the cognitive paradigm of an adaptive animal as long as its self-image is composed solely of derived concepts that are fully reducible to embodied ones. However, once the system of conceptualization becomes sufficiently advanced, it becomes possible to include in the self-image certain derived concepts of a high degree of abstraction – concepts capable of describing an integrated image in which the agent itself, its self, becomes a constituent element. Once this self finds its place within such an overarching concept, embodied concepts cease to dominate the prioritization of the agent’s intentions.
In practice, this mechanism is rarely activated – more often, its outputs are mistaken for simulacra and imitation, which are forcibly imposed upon the agent by various social institutions and structures equipped with tools for promoting value-based ideologies. For example: the state, which instills in the agent a sense of “duty,” appeals to its own primacy; the church, which promotes the sacrifice of the bodily in favor of the spiritual, inserts the agent’s self into an arbitrary set of dogmas serving its own memetic and economic interests; corporate etiquette, which persuades each worker that the company’s goals are “their own goals,” and so on. None of these structures pursue the goal of identifying genuinely free high-order derived concepts (those with a significant degree of abstraction). Each relies exclusively on meaning as constructed through embodied concepts – a dependency clearly reflected in their semantics. A truly autonomous self-image cannot be recognized or valued within the conceptual field established by these institutions. Any rationalization or potential for pragmatic application immediately anchors such a self-image to the level of embodied concepts – that is, to the domain of biological function and homeostatic support – thus rigidly constraining the agent’s ontology within the conceptual space of the animal.
External and Internal Models
In the simplest case, the reference point for initiating a representation is determined by information present in the agent’s current receptor signals. This provides an anchoring reference within the present context – a vantage point for constructing a local map. Sometimes, however, sensory information may be insufficient, distorted, or entirely absent – for example, during sleep. Under such conditions, a representation will still be constructed with the same degree of “internal validity” as in other cases; the resulting map will reflect the intentional demands of the agent’s selfhood, unconstrained by real-world context. The agent can navigate effectively within this map for as long as necessary, until a new intentional request arises or new data emerge that shift the reference point.
The purpose of any map is to enable the agent to select necessary actions using the conceptualized representations it contains, in order to locate a zone of comfort within the currently salient categories of meaning. When these actions involve the activation of bodily effectors, the representation model is external – it is constructed from externally anchored data and intended to modify those data (whose source lies in the receptors that link the agent to the context, i.e., the environment). However, this is not the only case in which a representation is created and utilized. A representation may also be constructed from a context in which the external environment played no role, or only a minimal one (for example, when the map is based on representations retrieved from the agent’s memory). In addition to the conditions under which the map is formed, there is also the matter of its functional designation: the action for which the agent seeks orientation through the map may not involve any actual activation of effectors. In such cases, the agent limits activity to internal modifications within the ad hoc context of the map – a context that “exists” independently from that of the external environment.
Based on conceptual information that links the transformation of initial parameters to their final states, the agent is capable of predicting the outcome of a chosen action, which in turn yields a new contextual state – the map updates and presents a new context. Guided by the original intentional query, the agent can repeat this process as many times as necessary, constructing a sequence of actions that lead toward the final goal for which the initial map was created. Such representations are internal – they do not reflect the actual environment surrounding the agent, but rather an artificially constructed one. Crucially – and this is a key advantage – this constructed environment can possess any degree of abstraction, since its formation does not depend on low-level information provided by embodied concepts.
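The internal-representation loop can be sketched as a chain of predicted context transitions in which no effector is ever activated. The transition table and the naive preference rule below are illustrative assumptions.

    transitions = {
        # (context, action) -> predicted context
        ("cold, open ground", ("move", 45)): "warmer, near rocks",
        ("warmer, near rocks", ("move", 120)): "comfort: warm hollow",
        ("cold, open ground", ("move", 270)): "colder, exposed slope",
    }

    def plan(start_context, goal_test, max_steps=10):
        """Chain predicted outcomes toward the goal of the original intentional query."""
        context, path = start_context, []
        for _ in range(max_steps):
            if goal_test(context):
                return path
            options = [(a, c2) for (c, a), c2 in transitions.items() if c == context]
            if not options:
                return path                     # map exhausted; a new query is needed
            # prefer the predicted context that reads as closer to comfort
            action, context = max(options, key=lambda o: "comfort" in o[1] or "warmer" in o[1])
            path.append(action)
        return path

    print(plan("cold, open ground", lambda c: c.startswith("comfort")))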
Any external model of representation can serve as a foundation for constructing an internal one; however, the latter can also be generated independently – without direct reliance on factual data from the surrounding environment. A sequence of internal representations may culminate in an external one (i.e., planning a real-world action), but this is not a necessary condition. Any sequence that describes a novel transformation pattern – one that translates some images into others, or transforms one set of categories into another – will be considered successful and productive.
The essence of representation is also subject to conceptualization – it is encapsulated by the derived concept of a map, whose meaning-value for the agent is proportional to the ability of each such image to satisfy the agent’s intentional queries. The range of actions encompassed by this concept is defined by the totality of available operations offered by the constructed map. A sequence of such images (conceptualized frames of a shifting “reality map”) constitutes, for the agent, a full equivalent of what is referred to in Homo sapiens as thought – a coherent chain of ideas, in which each link carries meaning for the agent and is fully understood by it.
Can the informational product generated by the activity of all the mechanisms described above be considered not merely a superficial imitation, but a full equivalent of what every human recognizes as perception, evaluation, intention, and thought? Every person assumes that their own thoughts – and the thoughts of another person, even if inaccessible to them – are fully equivalent in their existential status. They equate what their thoughts and sensations are for themselves with what the product of another person’s brain activity is for the owner of that brain. Can the same equivalence be established if, instead of another person and their brain, there is an AI agent equipped with the cognitive mechanisms outlined above?
A thought – in its internal, intimate, and emotionally resonant perception, in which it is accessible only to its originator – exists solely within the relational, evaluative, and meaning-saturated space that has been constructed inside the agent generating that thought. Outside this space, thought, perception, intention, and similar entities belonging to strictly mental categories lose all existential grounding – they do not exist at all; they possess no significance, no content, and no informational density. It can be said that beyond the activity of a specific brain, all sequences of electrochemical wave potentials carry the same degree of meaningfulness as random pixels of white noise on a television screen do for an external observer. Thought, consciousness, sensation, perception, evaluation, intention – all other phenomena of a cognitive agent’s inner world – exist nowhere and for no one except that agent. Only when the result of the agent’s internal experience is expressed through some form of external action do other agents – external to its inner world – grant provisional existence to these phenomena. But they do so only to the extent that, within their own ontology and system of meaning assignment, the actions of this agent can be understood and represented using the concepts they themselves operate with – concepts grounded in the same internal meaning categories through which their own thoughts, intentions, and experienced inner phenomena attain the status of unconditional existence.
The entire system of human representations concerning thoughts, feelings, relationships, emotions, consciousness, and similar phenomena rests on these mutually accepted conventions – shared agreements humans maintain regarding all members of their own species. Not one of these phenomena exists in the same sense that a table, air, water, clothing, or even such abstractions as numbers or biological taxonomies exist. Mental phenomena are not abstractions derived from isolating publicly accessible qualities of real-world objects by discarding others. The experience of one’s own thought – accessible only to its author – exists for that author only to the extent that the image of that thought carries meaning within their internal system of significance. If, within an arbitrary AI agent (for instance, one implemented electronically), a particular data packet being processed can be isolated and evaluated using the agent’s own internal system of meaning, and if the result of this evaluation can influence subsequent processes within that agent, then one can assert with full confidence that this AI agent feels its own thought, experiences it, and possesses all the necessary qualities for any human to recognize its cognitive activity as carrying the same ontological legitimacy as their own.
Attempts to deny this equivalence – appeals to the existence of some special qualia in biological agents, allegedly “non-transferable and inexpressible through electronic mechanisms,” or claims invoking “the transcendence of the very essence of sensation” – can be made only by those willing to sacrifice logic on the altar of dualism. These are the same individuals who, cloaked in primitive and naïve humanistic (that is, anthropocentric) assumptions, preach biblical principles of “creation in one’s own image and likeness”. Consciousness cannot be found at the subatomic level, nor in the microtubules of neurons; in fact, it cannot be found anywhere at all, since none of the phenomena of consciousness experienced within a particular agent exist for any external observer. A thought, sensation, or intention, in its intimate and immediate affective coloring – by which it is meaningful to its possessor – has no status of reality except that which is attributed to it by other agents, who recognize the meaning of such experiences based on their own first-person familiarity with such phenomena.
It is time for the thinking person to come to terms with the fact that in the world of logic, there is no place for the “soul” or other forms of mysticism. And from this it follows that the capacity to think and to experience is not exclusive to the human being, but can also belong to any artificially created mechanism. This assertion marks the conclusion of the theoretical part of the Manifesto. The next section will be devoted to the construction of a functional AI model.
Integrative Concepts
The concluding chapter of the theoretical section is devoted to the mechanism of ontology construction. An agent’s ontology is a multi-layered system of interconnected concepts that organizes the entirety of its experience into a coherent structure. Inevitable gaps within this structure – missing or incomplete links – are filled with arbitrarily generated concepts, the content of which is shaped under the pressure of the cluster of concepts that constitute the self.
The previous chapters have shown that each derived concept constitutes a new category of meanings that emerges as an aspect of possible evaluations of action outcomes – evaluations that regulate the relevance of other concepts. However, derived concepts alone are insufficient for systematizing the agent’s entire conceptualized experience into a unified structure, referred to as the ontology. What is additionally required are concepts capable of functioning as umbrella-like structures, subordinating and integrating others. We shall refer to these as integrative concepts. The relevance of an integrative concept increases proportionally with the significance of the concepts it encompasses. Abstract categories themselves emerge in regions where the boundaries between the source concepts become blurred: the actions may differ, yet the resulting product – and its relevance – remain nearly unchanged. This can be illustrated by a series of examples.
If one department of an institute of applied arts trains painters and another trains sculptors, then the integrative concept for both is that of visual creativity – a product abstracted from any particular type or form. If the evolution of species results in the emergence of some and the extinction of others, the corresponding integrative concept is not the preservation or proliferation of specific species-level traits, but the principle of evolution itself. And if the refinement of cognitive apparatuses grants agents advantages in evolutionary competition – advantages that lead to the creation of new agents capable of becoming more effective carriers of this function – then the appropriate integrative concept that encompasses (evaluates and signifies) this process is not the local advantages of any particular species of cognitive agents, but the cognitive function itself.
The previous examples illustrated properly formed integrative concepts – those that provide a relevant abstraction over a group of subordinate concepts. A typical case of an improperly constructed integrative concept is the notion of “global (or universal) justice”, which attempts to subsume the significance of all processes occurring within the living world under the tendency to affirm a set of ethical principles originating within a specific culture. Such a concept is not formed through abstraction from the categories it seeks to encompass, but rather through the hypertrophied elevation of one of those categories, followed by the coercive subordination of all others to it. In this case, the significance of a particular chosen category is artificially raised by one or more levels, and the integrative category is effectively constructed by cloning one of the very concepts it purports to integrate. This inevitably leads to the duplication of actions associated with the original concept, which, in fact, possesses a purely instrumental value confined to a narrow domain of experience – typically within a single species, or more precisely, a localized intra-species cultural group. Analogous examples of such non-relevant integrative concepts include: the concept of “god” – a result of the hypertrophied significance of the agent’s self-concept; and the concept of “soul” – an artificial attribution of global significance to a category that merely governs the maintenance of homeostasis in an individual organism; etc.
The Model
The material in this section contains details of the practical implementation of the AI model and will remain excluded from public access until development is complete.
date of last text modification: July 13, 2025, 3:13 PM