Engineering speak: prolegomena to ideal technical discourses

May 17, 2025

The Common Intelligence Runtime project explores the future of software engineering in the post-AI era, blending philosophical inquiry with technical design to develop a shared runtime for intelligence, biological and artificial.
  1. From physical toil to cognitive burnout: modern labor and the ethics of software design
  2. Internal states, external worlds: concurrency, consensus, and sociotechnical challenges in computing
  3. Engineering speak: prolegomena to ideal technical discourses
  4. Cognitive mirror: how LLMs compress language and challenge our sense of self

2b or not 2b, which engineering

Merriam-Webster
engineering
noun
en·​gi·​neer·​ing ˌen-jə-ˈnir-iŋ
  1. the activities or function of an engineer
    1. the application of science and mathematics by which the properties of matter and the sources of energy in nature are made useful to people
    2. the design and manufacture of complex products
      software engineering
  2. calculated manipulation or direction (as of behavior)
    social engineering

Once we set aside the dictionary’s superficial entries (the tautological “what engineers do” and the reductive notion of engineering as mere manipulation or behavioral control), two fundamentally opposing views of engineering emerge. The first, more rigorous conception frames engineering as applied science, focused primarily on problem-solving. Here, the resulting artifact is incidental—simply a means to achieve an end. Its true emphasis is on leveraging mathematical and scientific invariants to produce predictable, efficient outcomes. Engineering, in this sense, is the disciplined extension of reason into the material world.

The second, more superficial conception fixates primarily on the construct itself, emphasizing form, aesthetics, or complexity rather than the problems it solves. This approach risks turning engineering into a stylized practice, absorbed by complexity for complexity’s sake, detached from clarity or necessity. The sharpness of purpose—the vital clarity of intention—becomes obscured, transforming engineering into a spectacle of elaborate systems lacking meaningful rationale.

Why do words sometimes mean their own opposites? Why can sanction mean both to permit and to punish, or oversight both careful supervision and a failure to notice? Linguists refer to these as “contronyms” or Janus words—terms that have drifted across contexts and acquired contradictory meanings. This isn’t just a curiosity of the dictionary; it reveals something deeper about how language works. Modern linguistics has long acknowledged this. Words do not carry fixed payloads of meaning. Instead, their sense arises relationally—through contrast, association, and usage. A word only means what it does because of how it differs from other words, how it has been used, and in what context. But this still leaves open a deeper, more unsettling question: if the basic units of language can split, how do we ever know we are speaking about the same thing?

Both senses of engineering—the rigorous scientific discipline and the aesthetically oriented construct-driven approach—are prevalent and distinct enough to be individually definitive. Rather than collapsing one conception into the other, perhaps we must acknowledge that both views coexist clearly and meaningfully. Engineering, therefore, is not a singular concept, but inherently dualistic, spanning rigor and creativity, discipline and style, simplicity and complexity, each orientation valid in its proper context.

This divide manifests in how engineering teams approach tools, syntax, and architecture. One orientation chases what is easy—what feels intuitive, familiar, or close at hand—often leading to constructs that are tangled, brittle, and opaque in their runtime form. The other seeks what is simple—structures that are objectively disentangled and conceptually clear, even if they are less familiar or more difficult to achieve upfront. In the long run, it is this commitment to simplicity—not ease—that enables systems to be comprehensible, testable, and robust under pressure.

The difference is not just aesthetic. It is philosophical. One mode assumes that complexity is inevitable and aims to manage it with tooling. The other believes that complexity is a design failure—avoidable through discipline, restraint, and clarity of thought. Engineering, then, is not just about building things that work. It is about choosing which difficulties to accept: the up-front difficulty of doing things simply, or the downstream chaos of having done them easily.

Rich Hickey emphasizes the distinction between simple (from “sim-plex,” meaning one fold or braid; objectively unentangled) and easy (from “adjacent,” meaning near or familiar; relative). He argues we often mistakenly pursue easy constructs (familiar tools/syntax, programmer convenience) which lead to complex artifacts (the running software). Hickey contends we must prioritize constructs that yield simple artifacts, as objective simplicity is key to understandability, reliability, and long-term maintainability, even if achieving it isn’t initially easy (familiar).
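
To make the distinction concrete, here is a minimal, hypothetical Python sketch (the function names and report format are invented for illustration, not taken from Hickey). The “easy” version braids parsing, validation, and I/O together because that is the nearest-at-hand way to write it; the “simple” version separates the concerns into unentangled pieces that can be read and tested independently.

```python
# "Easy": parsing, validation, formatting, and I/O braided into one function,
# because that was the most convenient way to write it.
def save_user_report_easy(raw_line, db):
    name, age = raw_line.split(",")
    if int(age) < 0:
        raise ValueError("negative age")
    db.write(f"{name.strip()}:{int(age)}")


# "Simple": the same behavior, decomposed into unentangled, independently
# testable pieces. More code up front, less entanglement at runtime.
def parse_user(raw_line):
    name, age = raw_line.split(",")
    return name.strip(), int(age.strip())


def validate_user(name, age):
    if age < 0:
        raise ValueError("negative age")
    return name, age


def format_report(name, age):
    return f"{name}:{age}"


def save_user_report_simple(raw_line, db):
    name, age = validate_user(*parse_user(raw_line))
    db.write(format_report(name, age))
```

The second version is more code up front, which is exactly the trade described above: simplicity is an objective property of the artifact, not a measure of how little effort it took to produce.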

Before debating which conception of engineering is superior, we must first confront a more fundamental question: When we speak about “engineering,” how can we be certain we’re even referring to the same thing? If we cannot agree on the very definition—the core objective—of engineering itself, how can we ever align on the methods, tools, and priorities required to achieve anything meaningful within it?


§95. Therefore, sensuous-certainty itself is to be asked: What is the This? If we take it in the twofold shape of its being, as the now and the here, then the dialectic which it has in itself will take on a form as intelligible as the This itself. To the question: “What is the Now?”, we answer, for example, “The ‘now’ is the night.” In order to put the truth of this sensuous-certainty to the test, a simple experiment will suffice. We write down this truth. A truth cannot be lost by being written down any more than it can be lost by our preserving it, and if now, this midday, we look at this truth which has been written down, we will have to say that it has become rather stale.

§96. The Now, which is the night, is preserved, i.e., it is treated as what it was passed off as being, namely, as an existent. However, it instead proves itself to be a non-existent. To be sure, the Now itself maintains itself but as what is not the night; likewise, it maintains itself vis-à-vis the day, which it now is, as what is also not the day, or it maintains itself as a negative as such. This self-maintaining Now is thus not an immediate Now but a mediated Now, for it is determined as an enduring and self-maintaining Now as a result of an other not existing, namely, the day or the night. Thereby it is just as simply as what it was before, Now, and in this simplicity, it is indifferent to what is still in play alongside it. As little as night and day are its being, it is just as much night and day. It is not affected at all by this, its otherness. Such a simple is through negation; it is neither this nor that, it is both a not-this and is just as indifferent to being this or that, and such a simple is what we call a universal. The universal is thus in fact the truth of sensuous-certainty.

§97. We also express the sensuous as a universal, but what we say is: This, i.e., the universal this, or we say: it is, i.e., being as such. We thereby of course do not represent to ourselves the universal This or being as such, but we express the universal; or, in this sensuous-certainty we do not at all say what we mean. However, as we see, language is the more truthful. In language, we immediately refute what we mean to say, and since the universal is the truth of sensuous-certainty, and language only expresses this truth, it is, in that way, not possible at all that we could say what we mean about sensuous being.

But the impulse toward precise definition inevitably contains its own contradiction—not because the dictionary itself is flawed, but because the very act of definition seeks to freeze what is fundamentally fluid. In demanding fixed meanings, we confront the dialectical paradox that meaning is inherently dynamic, relational, and historically situated. Ironically, the more rigorously we strive for clarity through static definition, the more deeply we entangle ourselves in confusion. True understanding emerges not by arresting concepts into rigid snapshots, but through an ongoing dialectical movement—reflecting, questioning, and continually negotiating the tensions inherent in meaning itself.

In the first conception, engineering begins as Sense-Certainty: the application of fixed laws to observable problems. It treats reality as external, knowable, and subject to manipulation through invariant truths. The engineer here is a figure of confidence—equipped with formulas, predictive models, and the conviction that nature yields to reason. This is a form of certainty without reflection, where the truth of engineering lies in its immediate alignment with empirical constraints and logical necessity.

But this view, Hegel would argue, is insufficient. It collapses under the weight of its own unexamined presuppositions. The moment engineering confronts problems that are ill-posed, that resist clean quantification, or that implicate human and social systems—ethics, aesthetics, ambiguity—it encounters contradiction. The object is no longer passive; the problem space is no longer neutral. Engineering must now recognize that it is not a pure application of knowledge, but a constructive act situated in historical, economic, and cultural conditions.

This leads to the second conception, which aligns with Hegel’s stage of Self-Consciousness. Engineering here becomes aware of its own role in shaping reality, not merely representing it. It sees that the artifact is not incidental—it is expressive, communicative, political. The engineer becomes not just a solver of problems, but a constructor of systems that have form, affect, and consequence. But in this shift, a new contradiction emerges: the risk of losing contact with necessity. Of drifting into a world where engineering serves spectacle, optimization, or control, untethered from the real.

But at this ethical crossroads, self-awareness poses a distinct challenge precisely because synthesis rarely emerges naturally or effortlessly. Becoming aware that engineering shapes not just inert matter but also social realities, meanings, and power relationships brings with it an intoxicating sense of possibility. Freed from naïve allegiance to necessity or fixed empirical constraints, engineers might deliberately—or unconsciously—drift away from genuine problem-solving and instead indulge in pure expression or spectacle. How does this happen? Once engineering is explicitly viewed as communicative, expressive, or political, the criteria by which success is judged become less objective and more negotiable. Solutions give way to symbolic gestures, concrete utility fades in favor of performative value, and artifacts become tokens in cultural or political games—declarations of identity, status, or ideology.

The dialectical tension, then, is not simply a binary between applied science and aesthetic system design. It is a movement—from naive functionalism to reflexive construction, and potentially toward a higher synthesis. Hegel would call this Absolute Knowing: a standpoint that does not resolve contradiction by choosing sides, but by sublating (Aufheben) both—preserving what is vital in each while overcoming their limits.

In this synthesis, engineering is neither pure problem-solving nor pure form-making. It becomes a historically situated practice of world-making, aware of its entanglement with both nature and meaning. It recognizes the artifact as both solution and symbol, and understands its own methods as conditioned, provisional, and ethically charged. To speak dialectically of engineering is to accept that its truth does not lie in static definition, but in the unfolding of its contradictions. It is to see the engineer not just as a technician or designer, but as a participant in Spirit’s unfolding—mediating between what is, what can be known, and what ought to be built. Every dialectical synthesis is, above all, a moral victory.

Are dialectics merely a crutch for those who can’t compute? If anything, it’s computational overdrive: a way of holding competing truths, navigating ambiguity, and synthesizing contradictions into new forms of understanding. It requires the ability to tolerate uncertainty, suspend premature closure, reflect on assumptions, and entertain multiple perspectives without collapsing them into false equivalence. It’s true that dialectical reasoning does not, in the strict sense, discover scientific facts. It doesn’t measure the mass of a particle or determine the half-life of a radioactive isotope. That work belongs to empirical science, with its methods of observation, experimentation, and quantification. But to dismiss dialectics on that basis is to profoundly misunderstand its role. Dialectics does not compete with science—it enables it, critiques it, and gives it meaning.

Science depends on more than data collection. Before any experiment is run, a question must be posed, a contradiction noticed, an assumption challenged. This is where dialectical reasoning begins: not in the lab, but in the moment of conceptual rupture—when the existing model no longer explains the phenomena before us. The great scientific revolutions have all followed this dialectical arc. A dominant framework—Newtonian mechanics, classical genetics, deterministic thermodynamics—encounters a mounting series of anomalies. These contradictions accumulate until they can no longer be ignored, and a new synthesis emerges: relativity, quantum theory, molecular biology. As Thomas Kuhn argued, science moves not only through cumulative knowledge but through paradigm shifts—dialectical breaks where what was once seen as natural becomes untenable, and a new order of understanding takes its place.

Even within normal science, the reasoning process is not linear. Scientific inquiry constantly negotiates between opposites: theory and observation, general law and specific exception, ideal model and messy data. This back-and-forth, this oscillation between abstraction and concrete case, is dialectical in structure. It’s what allows science to refine itself. The imagination that gave us Einstein’s thought experiments or Feynman’s paradoxes wasn’t merely logical—it was dialectical. It held conflicting concepts in tension until a deeper, often more paradoxical truth emerged.

Without dialectical reflection, science becomes brittle. It risks mistaking its models for reality, its instruments for objectivity. This is the danger of positivism—a worldview that insists only observable, measurable things matter and everything else is metaphysical fluff. But history shows us that the most profound advances in science often emerge from the exact opposite impulse: the willingness to question foundations, to dwell in contradiction, to admit that the existing terms of knowledge no longer suffice.

And that is where dialectics proves essential. It does not replace science, but it surrounds it. It asks: what questions are we not asking? What assumptions underlie our models? Who benefits from this technology, and who is harmed? It reminds us that no fact exists outside of a framework, and no framework is free from conflict. This is not the work of street smarts or emotional intuition—it is a philosophical intelligence, a form of cognitive maturity that is able to tolerate ambiguity without collapsing into confusion, and to synthesize opposing truths without erasing their difference.

So while dialectics may not compute the answer, it compels us to ask better questions. It is not a fallback for those who can’t “do the math.” It is what you need when the math is no longer enough—when knowledge becomes contradiction, and progress means holding fire in both hands without flinching.


Yet, this dialectical tension permeates language itself, the very medium through which engineering articulates and communicates its goals, methods, and values. Wittgenstein approaches language not as a container for meaning, but as a social activity. Words do not carry meanings like parcels; they take meaning from use. Language, he argues, is a series of “language games,” embedded in forms of life. Meaning lives in context, in shared practices, in the web of expectation and behavior. To understand a word is not to decode it, but to know how it functions within a particular life-world. Communication, then, is not about transmitting propositions, but participating in a form of life that gives words their force.

For Lacan, this very structure of mediation introduces a fundamental instability. The signifier—the word, the symbol—is never stable. It does not fix meaning, but drifts along a chain of other signifiers. Meaning is never directly accessible; it is always deferred, haunted by what is absent or repressed. The subject, too, is not whole but split—because our very being is stitched together through this shifting network of language. There is no master signifier, no outside from which meaning can be secured. “There is no metalanguage,” Lacan insists. The unconscious speaks, but it does so in riddles, slips, and gaps that resist straightforward translation.

Foucault deepens this suspicion by emphasizing the institutional and historical conditions that shape what can be said. Language is not a neutral tool but a vehicle of power. Discourses do not merely express thoughts—they construct what counts as truth, who may speak, and under what conditions. Meaning is not produced in a vacuum; it is shaped by regimes of knowledge, surveillance, and discipline. To speak is to enter a field already structured by exclusion and control. Even sincerity cannot escape the gravitational pull of authority embedded in discourse.

Derrida radicalizes the instability by showing that even the most basic unit of language is contaminated by delay and difference. In his notion of différance—a term that both defers and differs—he reveals that every sign carries traces of others. Meaning is never fully present. It is always postponed, always shadowed by what it is not. Language does not clarify; it disrupts. Its power lies in its failure to deliver fixed meaning, in its ceaseless unraveling.

Habermas responds not by denying this instability, but by seeking its rational potential. While language is fragile, he argues, it remains a space for mutual understanding. He distinguishes between strategic action, where language is used to manipulate, and communicative action, where the goal is intersubjective agreement. For Habermas, discourse is not merely a contest of meanings—it is a shared practice governed by norms of sincerity, justification, and mutual recognition. The dialectic occurs not between fixed positions, but within language itself, through the possibility of repair and rational exchange.

So how do we communicate when the atoms of language are prone to split? The answer is: we don’t communicate despite this instability—we communicate through it. Every utterance is a wager across a gap. Meaning is not a substance to be transmitted; it is an act of interpretation, of risk, of provisional repair. It is always negotiated, never settled.

Before we can even argue about what engineering is, we must ask if we are speaking of the same thing. But language never guarantees that we are. It only gives us the capacity to test, to revise, to listen for resonance and contradiction. The split is not a flaw in communication. It is the very condition that makes communication—however imperfect—possible at all.

Call for dialectical reasoning?

Students who are academically high-achieving—the classic “book smart” individuals—can be paradoxically among those who struggle most with dialectical reasoning. These students often thrive in traditional academic settings that reward memorization, formulaic problem solving, and clear writing following rubrics. They tend to internalize the rules of schooling and excel at working within established paradigms. However, this very success can become a double-edged sword. Dialectical reasoning requires moving beyond existing paradigms, questioning them, and tolerating uncertainty and conflict. For many “A students” accustomed to praise for getting things right, it can be disorienting to discover that some questions have no single right answer or that every answer may have a counterpoint.

Several factors contribute to this phenomenon. First, academically talented students are often adept at formal logical thinking—what Piaget would call formal operations—but they may not naturally progress to what some psychologists term “postformal” thinking without further challenge. Postformal thought (a stage of adult cognitive development beyond Piaget’s highest stage) is characterized by recognizing multiple valid perspectives, integrating contradictions, and reflecting on one’s own thinking. It has been described as more flexible, tolerant of ambiguity, and explicitly dialectical compared to earlier stages of cognition. However, conventional education does little to nurture this stage. A student can sail through advanced calculus, ace engineering exams, and memorize scientific facts (all markers of high IQ and academic success) while remaining at a cognitive level that prefers clear rules and truths. In fact, excelling in the educational system as it stands may reinforce black-and-white thinking: if you are constantly rewarded for being correct, you have little incentive to seek out opposing viewpoints or admit uncertainties.

Piaget’s theory of cognitive development outlines four main stages: Sensorimotor (birth–2 years); Preoperational (2–7 years); Concrete operational (7–11 years); Formal operational (12+ years). The formal operational stage—the highest in Piaget’s model—emerges around adolescence. It is marked by the ability to think abstractly, logically, and systematically. Individuals in this stage can use hypothetical-deductive reasoning, formulate abstract concepts and test hypotheses, and think about possibilities, not just actualities. This is the stage most education systems are built to develop and reward—especially in math, science, and engineering. It’s where “book-smart” students often excel.

However, researchers in adult cognitive development—notably Jan Sinnott, Michael Commons, Robert Kegan, and others—have argued that cognitive growth does not stop with formal operations. They propose a further, more advanced mode of thinking known as postformal thought. This mode is characterized by dialectical reasoning (the ability to integrate contradictory ideas into a coherent whole), relativistic thinking (understanding that truth may be contextual rather than absolute), metacognition (the ability to reflect on one’s own thinking processes), integration of emotion and logic (recognizing that human decisions are rarely purely rational), and tolerance of ambiguity and paradox (being comfortable with uncertainty and complexity). This kind of thinking is dialectical by nature—it doesn’t reject logic but understands that real-world problems often involve contradictions that cannot be resolved by linear reasoning alone.

Personality and social factors play a role too. Many book-smart students have been praised for their intelligence from a young age, leading them to identify with being “smart.” Admitting ignorance or entertaining seemingly “wrong” ideas in a dialectical debate can feel threatening to that identity. Such students might also have less experience with intellectual humility if they have rarely been seriously challenged. In STEM fields specifically, students often socialize in environments that valorize expertise and problem-solving prowess. This can create an intellectual echo chamber where they mainly interact with others who think similarly and value the same approaches. Without exposure to radically different perspectives—say, through robust liberal arts requirements or interdisciplinary projects—technically minded students may not realize the limits of their approach. They might dismiss philosophical questions as “mumbo jumbo” or ethical quandaries as irrelevant to “real” work, not out of malice but due to a lack of training in handling such issues. As a result, these brilliant students can develop a form of rigidity in thought: highly competent within their domain, but uncomfortable when problems don’t yield to straightforward analysis.


Philosophers, educators, and social critics have long noticed and lamented the undervaluation of dialectical thinking in modern society. The Frankfurt School philosophers (Adorno, Horkheimer, Marcuse, etc.) were among those highlighting how the Enlightenment’s overconfidence in instrumental reason could become a new form of domination, blinding us to alternative ways of thinking. Adorno famously critiqued the “fetishization of rationality”—observing that an unreflective worship of scientific rationality can itself become irrational. In education, this is seen when schools treat data, testing, and “objective” methodologies as beyond question, thereby stifling critical, dialectical inquiry about education’s aims and values. As cited earlier, important qualities like moral judgment or relational understanding get sidelined because they are hard to measure. Adorno warned of what he called the “jargon of authenticity”—a hollow use of reason’s language without its substance. One might say many highly educated people today speak the jargon of logic and evidence, yet struggle to engage in authentic dialectical reasoning about what those facts mean in a broader human context.

Educators like Paulo Freire have championed a dialectical approach to teaching, emphasizing dialogue (the “word” as a unity of reflection and action) and the co-creation of knowledge between teacher and student. Freire’s Pedagogy of the Oppressed argues that education should be a practice of freedom, where teachers and students learn from each other through dialogue and challenge. This method is explicitly dialectical, as it involves posing problems and considering multiple answers arising from learners’ lived experiences. In contrast, Freire criticized the “banking model” of education—where teachers deposit information into passive students—as anti-dialogical and oppressive. The influence of Freire and other progressive educators can be seen in pockets of project-based learning and critical pedagogy movements, but these remain alternative currents rather than the mainstream. The mainstream, as noted, trends toward treating students as consumers or receptacles of knowledge rather than co-investigators in a dialectical process.

Even within the sciences, some voices call for a return of dialectical thinking. Physicist David Bohm, for instance, advocated “dialogue” groups to explore thought collectively and examine assumptions. Philosophers of science like Thomas Kuhn (with his idea of paradigm shifts through crises) implicitly acknowledge a dialectical pattern in scientific progress—thesis, antithesis, synthesis, one might say. Yet, science education seldom teaches students to think about science itself in that way. There is a disconnect between how knowledge actually evolves (often through conflicts, debates, the overturning of orthodoxies) and how we train scientists to think (often as if everything is settled and just to be mastered).

In recent years, critics of neoliberalism have added their voices, noting that the dominance of market logic in education not only marginalizes dialectical thought but also any kind of critical thought that might question the system. The emphasis on individual success and adaptability under neoliberal discourse subtly discourages students from questioning social structures (why things are the way they are). Instead, any failure or conflict is framed as a personal learning opportunity or a need for self-improvement. This brings us to another trend: when individuals do show deficiencies in reflexivity or ethical reasoning, the solution society reaches for is often individual therapy or training—not a collective re-examination via education.


A striking trend in modern society is the tendency to treat cognitive or ethical shortcomings as individual pathologies to be remedied by therapy, rather than as gaps in education to be remedied by curriculum reform. If a person is rigid in their thinking or lacks ethical sensitivity, we are more likely to say they need counseling, coaching, or perhaps a “mindset” workshop—rarely do we question whether their schooling should have fostered those capacities in the first place. Social critic Mark Fisher observed that under today’s neoliberal “common sense,” even widespread problems are individualized. For example, rising rates of stress, depression, or an inability to cope are rarely linked to flaws in our social or educational systems; instead, individuals are told to build resilience or seek therapy. Fisher argues that we’ve seen a “privatization of stress”—people are left to solve their own psychological distress while the systemic causes are depoliticized. By analogy, one could say there has been a privatization of dialectical ignorance: if someone can’t handle complex thinking, that’s their problem—perhaps they should read self-help books or see a cognitive-behavioral therapist—rather than a failing of our educational priorities.

Indeed, the rise of Dialectical Behavior Therapy (DBT) in clinical psychology is an intriguing case. DBT, developed by psychologist Marsha Linehan for treating conditions like Borderline Personality Disorder, explicitly teaches skills of dialectical thinking—clients learn to hold and integrate opposite perspectives (e.g., accepting oneself and seeking to change). The very existence of DBT suggests that a portion of the population so lacks dialectical habits of thought that they require formal therapy to develop them. While DBT is a specialized therapy, its core idea (embracing contradiction and finding synthesis) is a cognitive skill arguably beneficial to everyone. One could ask: if our general education from an early age taught dialectical reasoning as a basic skill—much like we teach arithmetic or writing—would we see fewer people struggling with all-or-nothing thinking and emotional dysregulation later on? In modern practice, however, such reflective capacities are not widely taught; instead, they are outsourced to the therapy industry after problems have already manifested.

The resort to therapy also reflects how ethical deficiencies are handled. Consider professionals in technical fields who make ethically questionable decisions (e.g., an engineer who disregards safety, or a programmer who creates an algorithm with harmful biases). Rarely will the response be to scrutinize that person’s educational background in ethics or critical thinking. More often, organizations respond with individual-centric solutions: perhaps send the person to an ethics seminar (a quick training) or, if the issue is severe, treat it as a personal failing (even a psychological issue) of a “bad apple.” There is a reluctance to admit that maybe our STEM curricula should integrate ethical dialectics (debating value conflicts, societal impacts) as rigorously as they do calculus. As a society, we lean on after-the-fact fixes—counseling, compliance training, PR damage control—rather than preventative education in dialectical and moral reasoning.


Engineering suffers particularly from the absence of dialectical reasoning because it operates in a paradox: it is taught as the most grounded and precise of disciplines, yet it is in practice one of the most entangled with complexity, contradiction, and ethical ambiguity. The way engineers are trained—through years of problem sets, deterministic models, and optimization routines—promotes a mode of thinking that seeks clarity, closure, and control. Students are taught to frame problems with defined parameters, to search for efficient solutions, and to deliver predictable results. This training develops technical competence, but at the cost of cultivating the intellectual flexibility needed to confront the uncertainty and contradiction that define the real world.

At its core, engineering education valorizes instrumental reason. It teaches students how to achieve predefined goals with technical precision, but rarely asks them to interrogate the goals themselves. The problems engineers face in the real world, however, are rarely clean or contained. Infrastructure design involves balancing sustainability, cost, and public welfare. Artificial intelligence raises questions of bias, surveillance, and social manipulation. Biomedical engineering wrestles with tradeoffs between accessibility, profit, and human dignity. These are not merely technical problems; they are profoundly dialectical, requiring the synthesis of conflicting imperatives. Yet engineers are often unprepared for this kind of reasoning because their training treats contradictions not as spaces of reflection, but as inefficiencies to be optimized away.

The cultural ethos of engineering compounds the issue. Engineers are taught to be apolitical, to see themselves as neutral technicians operating outside of ideology or moral ambiguity. This belief in neutrality is itself a powerful ideological stance—one that discourages engagement with the social, historical, and political contexts in which engineering unfolds. It is anti-dialectical at its root: it denies the entanglement of the artifact with the world. The design of a bridge, a power grid, or an algorithm is never just a technical act; it is a decision about who benefits, who bears the risk, and whose values are embedded in the final product. Dialectical reasoning makes these tensions visible. Without it, engineers are more likely to reproduce the status quo under the guise of technical efficiency.

This rigidity of thought becomes particularly consequential when engineers ascend into leadership roles. Increasingly, engineers and technologists are not just building systems; they are running companies, shaping public discourse, and influencing policy. But when the same mindset that excels at deterministic problem-solving is applied to social and political domains, it falters. Leadership requires the ability to navigate ambiguity, to weigh competing values, to engage with dissent, and to synthesize diverse perspectives into coherent action. These are not algorithmic tasks. They are dialectical. And without training in dialectical reasoning, even the most intellectually gifted engineers may find themselves brittle in the face of uncertainty—capable of building sophisticated systems, yet blind to the human consequences those systems unleash.

In this light, the underemphasis on dialectical reasoning is not a philosophical oversight; it is a structural vulnerability. It is what allows brilliant minds to dismiss dissenting voices, ignore social warnings, and double down on flawed designs. It is how systems built with technical rigor can still collapse under ethical weight. To repair this, engineering must be reimagined not only as a science of precision but as a discipline of reflection. Only then can engineers build not just what works—but what ought to be.

Towards ethical technical discourses

No need to argue

While Lacan and Derrida emphasize the slipperiness of meaning in structural terms, Wittgenstein tackles meaning from a pragmatic angle: meaning is determined by how words are used in specific forms of life. In his later work (Philosophical Investigations), Wittgenstein famously stated, “the meaning of a word is its use in the language.” Meaning is not an inherent property of a word, but rather what we do with the word in particular social activities or language games. A language game refers to a routine or context in which language is employed with certain rules and purposes (for example, giving orders, telling a story, coding a program, explaining a math proof—each can be seen as a distinct language game). Words can have stable meaning within a given game, but if you take them into another game, their meaning can shift dramatically. Wittgenstein underscored that words do not have fixed, intrinsic meanings; they acquire meaning from their usage in a given life context.

A key illustration is the concept of family resemblance. Instead of assuming every instance of a concept shares a single defining feature, Wittgenstein noted that many concepts are like a family: the “members” (uses or instances) resemble each other in overlapping ways, without any one trait common to all. For example, the term “game” cannot be pinned down by one essence—some games involve competition, others don’t; some have boards, some have teams, etc. No single definition fits all uses of the word “game,” yet we understand the word through a network of similarities. This means that our categories are often fuzzy, and a word’s meaning can stretch or contract to cover new cases as needed. The implication is that meaning is inherently a bit flexible and context-sensitive. Words have clusters of uses, and new contexts might activate a different subset of the meaning.

Wittgenstein also introduced the notion of forms of life: the broader cultural-social backdrop that gives a language game its sense. Two groups with different forms of life (say, an artist collective and a group of engineers) might use the same word but “live” it differently. For instance, the word “design” might conjure aesthetic decisions for the artists, but structural or functional plans for the engineers. Each group’s form of life includes different practices, so the language game of “design discussion” has different unspoken rules and expectations. From Wittgenstein’s perspective, ambiguity and misunderstanding arise not because words mysteriously lose meaning, but often because people are unwittingly playing different language games or assuming different family resemblances for the terms. If two engineers communicate using specialized jargon, they succeed because they share a form of life (common training, goals, and experiences) where those terms have a specific use. But if an engineer speaks to a layperson with the same terms, meaning can break down—not due to the words alone, but due to a lack of shared usage context.

Wittgenstein’s insights encourage defining terms by example and practice rather than by abstract definitions alone. In an engineering team, when onboarding new members, it’s often effective to show them how certain terms are used: e.g., what exactly counts as a “critical bug” by walking through bug triage examples. This aligns with Wittgenstein’s view that meaning is public and rooted in shared practice. We can’t just privately decide a meaning and assume others know it; we must develop a common use. This is why coding style guidelines, project glossaries, and design patterns exist—they create a shared language game for the team. A term like “pattern” itself could mean a sewing template in one game, but in software “design pattern” refers to a known solution schema to a design problem. Only by participating in the software engineering form of life does one pick up that meaning.

One practical upshot of Wittgenstein’s view is the importance of documentation and training that focus on usage. Instead of just giving definitions, good documentation often includes examples of how a term or command is used in context. For instance, an API document might not only describe a function’s parameters abstractly, but also show a snippet of code (a mini language game) demonstrating typical use. This way, the reader learns the meaning as the creator intended by seeing it in action. Another upshot is the value of domain-specific languages or notations in engineering—essentially creating a constrained language game where each term has a single, agreed meaning (as much as possible). For example, electrical engineers use circuit diagrams—symbols like a resistor or capacitor have a fixed interpretation within the practice of circuit design. It’s a highly regimented language game, which significantly reduces ambiguity (one wouldn’t confuse a resistor symbol with a transistor symbol, similar to how in chess the knight’s move is clearly defined within that game).
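
A small, hypothetical Python sketch of this practice (the function and its behavior are invented for illustration): the docstring does not merely enumerate parameters, it embeds a tiny usage example, a miniature language game showing how the word “succeeds” and the call itself are actually used.

```python
import time


def retry(operation, attempts=3, backoff_seconds=1.0):
    """Call `operation` until it succeeds or `attempts` is exhausted.

    Here "succeeds" means: returns without raising an exception.

    Typical use (a miniature language game showing the intended meaning):

        >>> def flaky():
        ...     return "ok"
        >>> retry(flaky, attempts=2)
        'ok'
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return operation()
        except Exception as error:
            last_error = error
            time.sleep(backoff_seconds * attempt)  # wait progressively longer between retries
    raise last_error
```

The embedded example carries meaning the parameter list alone cannot: a newcomer learns the term the way one learns any word, by watching it in play.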

Finally, Wittgenstein would likely remind us that many disagreements in technical projects are actually about language—about not using words the same way—rather than purely about facts. When a software developer says “This feature is done,” and a tester disagrees saying “It’s not done, it has bugs,” they might be using “done” in different language games. The developer might mean “code completed,” the tester means “meets all requirements and passes all tests.” Both are legitimate uses in different subcontexts. Resolving the conflict might involve creating shared criteria for “done” (e.g., a Definition of Done in agile methodology). That is essentially forging a new language game for the project so that “done” has an agreed use. Wittgenstein’s approach is very pragmatic: get everyone playing the same game with the same rules. If you do that, you minimize miscommunication. If you don’t, words will wander and sprout unintended meanings.
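
One way of forging that shared language game is to write the agreed use down as something executable. A minimal, hypothetical sketch (the criteria and names are invented; a real Definition of Done would be negotiated by the team):

```python
from dataclasses import dataclass


@dataclass
class FeatureStatus:
    code_merged: bool
    tests_passing: bool
    requirements_reviewed: bool
    docs_updated: bool


def is_done(status: FeatureStatus) -> bool:
    # The team's agreed use of "done", written as a predicate rather than
    # left to each speaker's private language game.
    return (
        status.code_merged
        and status.tests_passing
        and status.requirements_reviewed
        and status.docs_updated
    )


# The developer's "done" (code merged) and the tester's "done" (everything
# verified) now resolve against the same shared criteria.
print(is_done(FeatureStatus(True, False, False, False)))  # False
```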

Wittgenstein offers a reassuring counterpoint: while meaning may not be absolute, it is at least negotiable through shared use. Unlike Lacan or Derrida who stress our helplessness before language’s slippages, Wittgenstein suggests we can achieve effective communication by carefully cultivating shared practices and contexts. For engineers and developers, that means immersing in the team’s jargon, clarifying through use-cases, and remembering that words are tools—their “precise” meaning is just how we all agree to use them in our specific work. The precision, therefore, comes not from the word alone but from the community of users. Technical communication improves as the community tightens its language games (through standards, style guides, examples) and as individuals become bi- or multi-lingual across different games (able to translate tech jargon to lay terms, for instance). Wittgenstein’s legacy in linguistics and AI indeed has been to highlight context-dependence and use—exactly the lessons we apply when we say “explain it in plain English” or “let’s define our terms before we proceed.”

Talk, but not too much

Lacan’s psychoanalytic theory portrays language as a dynamic chain of signifiers (words or symbols) in which meaning is never fixed but perpetually “slides” along the chain. For Lacan, meaning is not found in any single signifier; it emerges in the play between signifiers, and is therefore unstable. Each word only points to another word (another signifier), in an endless metonymic movement. This implies that we never arrive at a final, stable signified concept—instead, meaning is continuously deferred. Lacan writes that “the signifier, by its very nature, always anticipates meaning by unfolding its dimension before it”, highlighting that understanding is always one step behind the words. And because one can always add another link in the chain (“ad infinitum”), “a signifying chain can never be complete… meaning ‘insists’ in the movement from one signifier to another”. In other words, meaning is not a fixed content but a momentary effect of signifiers in sequence, forever in flux.

This view goes hand in hand with Lacan’s notion of the split subject—the idea that the speaking subject is internally divided by language itself. Lacan famously said that “the signifier represents the subject for another signifier,” implying that when we speak, the “I” that is speaking is actually being defined by language, not fully controlling it. The very act of speaking splits the subject: there is the conscious intention (what one wants to say) and the effect of the words used (which may carry other or incomplete meanings). As the “speaking being,” the subject is split by the fact that speech divides the subject of the enunciation from the subject of the statement. We cannot express a fully present self in language—part of us (our intent or exact meaning) is always barred or left out, because we must use imperfect signifiers to represent ourselves. This is why Lacan uses the symbol of the barred S to denote the subject struck through by language. The result is a fundamental alienation: we can never say exactly what we mean, since language mediates and often distorts our intended meaning.

Moreover, Lacan asserts the “impossibility of metalanguage,” the idea that there is no language outside of language from which we can objectively define our terms. Any attempt to pin down or clarify meaning (for instance, giving a definition) is itself done in language, which is subject to the same slippages. “Since every attempt to fix the meaning of language must be done in language, there can be no escape from language, no ‘outside’” from which to get a perfectly stable viewpoint. In Lacan’s words, “no metalanguage can be spoken,” meaning we cannot step into a neutral, context-free language to state “what we really mean” once and for all. Even a technical glossary or a formal specification—attempts at a metalanguage in engineering—remain within language and inherit its ambiguities. Lacan aligns here with the structuralist insight that there is “no transcendental signified”—no ultimate reference point that anchors meaning firmly. There is always a remainder of uncertainty because language only refers to itself in an endless play.

An engineer writing a specification or a programmer writing documentation might believe they are conveying a clear, fixed meaning, but Lacan’s perspective suggests otherwise. The terms used in a software requirements document, for example, gain meaning only through their relation to other terms and assumptions. If a requirement says “the database shall support heavy loads,” the word “support” is a signifier that could link to various signified notions (handle without crashing? maintain performance? simply allow?). Its precise meaning “slides” along the chain of context: “support” what?—heavy loads of what kind?—under what conditions? One needs further signifiers to pin it down, and even those invite further questions. In Lacanian terms, engineers often attempt to create quilting points (points de capiton)—definitions or diagrams that momentarily stitch a signifier to a specific signified to stabilize meaning. For instance, a technical standard might define “heavy load = 1000 concurrent users” to anchor the term. This can act as a temporary stop in the chain of signification, giving the illusion of fixed meaning. However, Lacan would remind us that this fix is provisional and “illusory”. The definition itself consists of more signifiers (“concurrent users”) whose interpretation could be debated (what counts as a user in this context? what if they are idle?). Thus, even in engineering, meaning cannot be wholly locked down—there is always an open margin where interpretation can differ.
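
A hypothetical sketch of such a quilting point in test form (the 1000-user figure, the 200 ms budget, and the stand-in request handler are all invented for illustration). Note how the stitch holds only provisionally: the test pins “heavy load” and “support” to something measurable, while introducing new signifiers that invite their own questions.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request():
    """Stand-in for the system under test."""
    time.sleep(0.001)
    return True


def timed_request(_):
    started = time.perf_counter()
    ok = handle_request()
    return ok, time.perf_counter() - started


def test_supports_heavy_load():
    # Quilting point: "heavy load" is pinned to 1000 concurrent "users", each
    # issuing one request, and "support" to every request finishing within
    # 200 ms. The new signifiers ("user", "request", "within") invite their
    # own questions: are the users idle? is one request representative?
    with ThreadPoolExecutor(max_workers=1000) as pool:
        results = list(pool.map(timed_request, range(1000)))
    assert all(ok for ok, _ in results)
    assert max(latency for _, latency in results) < 0.2


test_supports_heavy_load()
```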

Lacan’s split subject also resonates in technical teamwork. An engineer (the subject) may think they have communicated exactly what the client needs, only to discover misalignments—part of their intent never made it into the words, and part of what the words did convey was unintended. For example, a software architect might describe a design as “efficient and simple,” intending a specific algorithmic elegance. But a developer reading it might interpret “simple” as simple to implement (whereas the architect meant simple for the end-user). The architect’s speaking subject was “split”—what was meant versus what was said. The unconscious connotations of words (like “simple”) can speak through the person without them realizing, leading to unexpected interpretations. Lacan’s insight that “the subject will never know himself completely, always cut off from his own knowledge” is a caution: in technical communication, we should remain humble about our ability to fully control our message.

In practice, acknowledging Lacan’s view means embracing redundancy and dialogue. Engineers often iterate on requirements, adding clarifying signifiers (e.g., examples, prototypes) to close gaps in understanding—essentially extending the signifying chain until all parties feel the meaning is sufficient. We also see an appreciation that no specification can be truly final or self-contained (no perfect meta-specification exists outside language). This is one reason agile software methodologies favor continuous communication over assuming the spec will speak for itself. Lacan helps us see why absolute precision is so elusive: language’s structure guarantees a bit of drift, where meaning can always split or slip. Recognizing this, technical teams can compensate by constantly negotiating meaning—reviewing terminology, confirming interpretations, and being aware that some ambiguity might always remain. In summary, Lacan teaches that miscommunication is not just a failure of effort but is structurally embedded in language. Technical communicators must therefore work with and through this slipperiness, rather than assuming it can be entirely eliminated.

Linguistic mandala

Derrida’s philosophy takes the instability of meaning even further, arguing that meaning is always deferred and dependent on context—never fully present or self-contained. His famous term différance (a play on the French différer, meaning both “to differ” and “to defer”) captures how words produce meaning through differences from other words and through an endless postponement of definitive interpretation. In Derrida’s view, a word’s meaning arises not from some positive essence, but from its distinction from what it is not. For example, the concept “engineer” might partly be defined by what an engineer is not (not a technician, not a layperson, not a scientist—each difference contributing to its meaning). Likewise, in language we often know what something means by contrasting it with other terms. As one commentator puts it, “to Derrida, trace and not ‘being-there’, difference and not identity, create meaning inside language”. Every term carries traces of other, absent terms—echoes of what has been excluded or contrasted—and these traces are what actually shape meaning. We do not access some pure, present meaning directly; instead meaning is like a pattern formed by the gaps and contrasts between words, a bit like how a silhouette is defined by the empty space around it.

Crucially, Derrida argues that meaning is never finalized; it is “disseminated” in a play of possible interpretations. Any text or utterance, if examined closely, contains internal tensions or assumptions that can yield multiple readings. In fact, “deconstructive criticism aims to show that any text inevitably undermines its own claims to have a determinate meaning”, inviting the reader into an activity of semantic “freeplay”. This does not mean that communication is impossible, but rather that no single, ultimate interpretation can exhaust the meaning of a given piece of language. There is always some excess or ambiguity that can spawn new interpretations. Derrida famously noted “there is… no proper context to provide proof of a final meaning”, because context itself can never be perfectly fixed. We can narrow down meaning by specifying context, but we can never eliminate all uncertainty—a different context or a slight shift in conditions can defer the meaning again. In short, meaning is contextual but context is boundless; you can always find another layer of context that changes how a sign is understood.

Another key concept is the trace—the notion that every word bears an absent element of meaning, a leftover of other words not actually spoken. For Derrida, whenever a word is used, it implicitly references what it excludes. For example, saying “This design is secure” conjures a trace of “insecure” (the possibility or fear of insecurity gives “secure” its meaning by contrast). Thus meaning is haunted by what’s not explicitly said. “Through the act of différance, a sign leaves behind a trace, which is whatever is absent but still affects meaning”. This is why Derrida insists “there is nothing outside the text,” by which he means no meaning exists in a pure form outside of this interplay of differences and traces—even reality is interpreted through language, so we only encounter it via these textual processes. Any attempt to say “I literally mean X” is still subject to différance, since literal meaning itself relies on context and contrasts that are not fully present in the utterance.

Deconstruction is the method Derrida employs to reveal these instabilities. To deconstruct a piece of language (a paragraph, a definition, a requirement) is to show how it paradoxically depends on what it tries to exclude or how it contains contradictions that prevent a single coherent interpretation. Instead of finding a secure foundation, deconstruction uncovers “the absence of presence, freeplay of meanings” within texts. One might deconstruct a software specification by showing that a term like “optimal” in one requirement conflicts with “low-cost” in another—revealing an undecidable tension (is the system to prioritize speed or cost? The text doesn’t stably decide). Derrida’s point is not to foster confusion, but to make us aware of the inherent multiplicity of meaning, so we do not fall into the trap of logocentrism—the belief in a single, centered truth of meaning. Traditional approaches (especially in technical fields) try to assume words have fixed referents, a “metaphysics of presence” where meaning is fully present in the word. Derrida challenges this, showing instead that meaning is a fluid effect of differences and can always be reconsidered from a new angle.

In engineering and software development, Derrida’s ideas might at first seem abstract, but they directly relate to the persistent problem of requirements ambiguity and misinterpretation. Consider a software requirement specification (SRS)—often, enormous effort is spent to phrase requirements in unambiguous, “literal” terms. Yet teams still encounter differing interpretations. From a Derridean perspective, this happens because the language of the spec cannot cover all contexts or eliminate all traces of alternative meaning. For example, a requirement stating “The system shall be flexible to user needs” sounds straightforward. But flexible carries multiple potential meanings: does it refer to configurability, scalability, a user-friendly interface, or adaptability to future changes? Each reader may supply a different context to this word. The spec might try to clarify by adding sub-requirements (much as one might add footnotes or parenthetical definitions—analogous to Lacan’s quilting points). However, those clarifications too are made of words and thus introduce new differences. No matter how many clauses you add, the process of deferral continues—final meaning is always “deferred” to further interpretation or future clarification. This is evident when teams realize during implementation that their understandings diverged and they must go back to the text for reinterpretation. In Derrida’s terms, the “supplement” (extra clarification) both adds meaning and reveals that meaning was never fully present to begin with.
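
A hypothetical sketch of that deferral (requirement numbers and wording invented for illustration): each sub-requirement pins one reading of “flexible” and, in doing so, introduces new terms whose meaning is itself deferred.

```python
# REQ-042: "The system shall be flexible to user needs."
# Each sub-requirement pins one reading of "flexible" and, in doing so,
# introduces new signifiers that defer the meaning one step further.
SUB_REQUIREMENTS = {
    "REQ-042.1": "Administrators can change workflow rules without redeployment.",
    "REQ-042.2": "The UI adapts to all supported locales.",
    "REQ-042.3": "New data sources can be added through configuration alone.",
}

OPEN_QUESTIONS = {
    "REQ-042.1": "Does 'without redeployment' permit a service restart?",
    "REQ-042.2": "Which locales count as 'supported', and who decides?",
    "REQ-042.3": "Where is the line between 'configuration' and code?",
}

for req_id, text in SUB_REQUIREMENTS.items():
    print(f"{req_id}: {text}\n  still open: {OPEN_QUESTIONS[req_id]}")
```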

Derrida would also encourage looking at the implicit contrasts and assumptions in technical documents. If a design document says “We chose solution A for its simplicity,” it implicitly contrasts A with a more complex solution B (perhaps not mentioned)—the meaning of “simplicity” is partly given by the trace of “complexity” that is absent. This might lead to miscommunication if, say, a team member later proposes adding a feature that reintroduces complexity; they might argue “It’s necessary,” while others resist, citing the commitment to simplicity. Each side is invoking the trace of what they think “simple” excludes (maybe one thinks it excludes any additional configuration, another thinks it excludes only unnecessary features). Deconstructing the discussion can reveal that “simplicity” in the original decision was never a fully stable criterion—its meaning evolved as the context changed (initial architecture vs. added features). The lesson is that technical terms carry layers of meaning that evolve and sometimes conflict, and these layers often come from contexts not explicitly in the text (historical decisions, unstated priorities, industry jargon, etc.).

We can also see différance at work in how code comments or documentation are understood over time. A comment in code might have made sense to the original author (within the context of that moment), but later maintainers struggle because the context has shifted—perhaps an external library changed (so what “optimal” meant in 2019 is not what it means after a 2025 update). There is “no proper context” that guarantees the final meaning across all time. The meaning “differed”—it was deferred until someone encountered it in a new situation, whereupon they had to interpret it anew. This is why in software engineering we often emphasize updating documentation and writing tests (to lock down expected behavior): these are attempts to rein in freeplay by concretizing context. But even tests are written in a formal or natural language that can be interpreted; edge cases may still emerge that force reconsideration of what the requirement really means.
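As a minimal sketch of what “concretizing context” can look like, consider pinning a claim of optimality to a test rather than a comment. The function name, payload cap, and dates below are hypothetical; the point is only that the test fails loudly when the context shifts, forcing reinterpretation in the open rather than in a maintainer’s head.

```python
# Hypothetical example: instead of a comment asserting the batch size is
# "optimal", a test records the concrete expectation behind that judgement.

def choose_batch_size(record_count: int) -> int:
    # "Optimal" in the 2019 context meant: as large as possible while staying
    # under the upstream API's payload limit, hence the cap of 500 records.
    return min(record_count, 500)

def test_batch_size_stays_under_payload_cap():
    # If the upstream limit changes (as in the 2025 update imagined above),
    # this assertion breaks and the team must renegotiate what "optimal" means.
    assert choose_batch_size(10_000) == 500
    assert choose_batch_size(42) == 42
```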

Derrida’s idea that texts “undermine their own claims to determinate meaning” finds concrete form in the way a spec can generate unintended behavior. For instance, a safety requirement might inadvertently conflict with a performance requirement—the document, taken as a whole, cannot be satisfied as written because its pieces deconstruct each other (ensuring maximum safety might mean violating the performance spec and vice versa). In practice, engineers perform a kind of deconstruction when they conduct requirements reviews or walk through use cases: they try to find internal contradictions or ambiguous terms, effectively exposing the text’s instability so it can be resolved. Each resolution, however, is just an interpretation agreed upon by the team (for now). Derrida would note that this agreement is a sort of temporary truce, not an eternal truth: future scenarios might resurrect alternative meanings.
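Once the conflicting requirements are quantified, the kind of contradiction described above can even be surfaced mechanically. The requirement IDs, timing figures, and function below are invented for illustration; the deconstruction happens the moment the numbers are written down next to each other.

```python
# Invented example: a latency requirement and a safety requirement that
# cannot both be satisfied as written, checked with simple arithmetic.

# REQ-PERF-1: "The system shall respond within 50 ms."
LATENCY_BUDGET_MS = 50

# REQ-SAFE-3: "Every request shall pass all mandatory validation stages."
VALIDATION_STAGE_COSTS_MS = [20, 15, 25]   # measured cost of each stage

def requirements_consistent(budget_ms, stage_costs_ms):
    """True only if the mandatory safety work fits inside the latency budget."""
    return sum(stage_costs_ms) <= budget_ms

if not requirements_consistent(LATENCY_BUDGET_MS, VALIDATION_STAGE_COSTS_MS):
    print("REQ-PERF-1 and REQ-SAFE-3 deconstruct each other as written.")
```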

In sum, Derrida’s perspective urges technical communicators to embrace iterative clarification and humility about meaning. One cannot assume that just because a requirement is written down, it has one truth. Instead, communication is an ongoing process of context-setting, re-reading, and revising. It helps to provide multiple representations (text, diagrams, prototypes)—each medium highlights different aspects and traces—and to remain open to questions. Recognizing différance also means being aware that some meaning always escapes complete capture. As a result, wise teams leave margins for interpretation and explicitly discuss them: for example, writing a glossary of terms (acknowledging that terms need context) or listing out-of-scope interpretations (“by secure we mean resistant to known attacks, and we do not mean formally proven unbreakable”). These practices, essentially, are about managing the play of difference by narrowing it, without ever believing one has eliminated it entirely. Derrida teaches that meaning in engineering is not immune to the play of language—by deconstructing our communications before errors happen, we can anticipate misunderstandings and design around them.
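One lightweight way to practice this “narrowing without eliminating” is to keep the glossary as data the build can see, pairing what a term means with what it explicitly does not mean. The entries and the watchlist below are illustrative assumptions, not a prescribed vocabulary.

```python
# Sketch of a project glossary kept as data rather than prose (all entries
# invented): each term records both the agreed meaning and the readings the
# team has explicitly placed out of scope.

GLOSSARY = {
    "secure": {
        "means": "resistant to the attacks listed in our threat model",
        "does_not_mean": "formally proven unbreakable",
    },
    "flexible": {
        "means": "configurable through documented settings",
        "does_not_mean": "arbitrarily extensible by third-party plugins",
    },
}

def undefined_terms(requirement_text: str, glossary=GLOSSARY):
    """Flag glossary-worthy words used in a requirement but not yet defined."""
    watchlist = {"secure", "flexible", "optimal", "simple", "scalable"}
    words = {w.strip(".,").lower() for w in requirement_text.split()}
    return sorted(words & watchlist - glossary.keys())

print(undefined_terms("The system shall be secure and optimal."))  # ['optimal']
```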

Ethics of technical discourses

Habermas’s Theory of Communicative Action posits that language is not merely a game of shifting signifiers or a tool of manipulation, but has a proper use oriented toward mutual understanding. In other words, the very structure of language use carries an “inbuilt aim of understanding”. When we engage in conversation, we implicitly strive to reach an agreement about what our words mean and what is true or right in the context. Habermas calls this orientation communicative rationality, emphasizing that ordinary language is geared towards cooperation and shared meaning, not endless play.

Wittgenstein showed that meaning depends on use and context (“games” governed by rules). Habermas accepts that context matters, but argues there is more at work: even across different language-games, speakers are usually aiming to understand each other and coordinate. For Habermas, meaning isn’t only a contextual game; it’s also guided by a commitment to truth and clarity. Language in its normal use is dialogical and cooperative, not arbitrary—it “has an inbuilt aim of understanding”. This means that while Wittgenstein’s engineers might each follow their own jargon rules, Habermas would note that they can still talk through their differences and establish a common ground.

Lacan suggested that words lead only to other words in a perpetual chain, with our desires and misunderstandings ever in play. Habermas, by contrast, focuses on intersubjective recognition: speakers can acknowledge each other’s intentions and work to repair misunderstandings through dialogue. The chain of signifiers need not drift indefinitely; through rational discussion, participants clarify references and resolve ambiguities. In Habermas’s view, when miscommunication occurs (as Lacan would predict), the remedy is not to abandon hope of meaning, but to engage more—asking for clarification, giving reasons, and coming to a shared interpretation of the signifiers. Meaning may be polysemic, but it can be stabilized when people cooperate in good faith.

Derrida’s notion of différance holds that meaning is always deferred—every definition slides into further interpretations, never landing on a final truth. Habermas acknowledges the moment of deferral but insists it can be overcome through what he calls rationally motivated consensus. In practice, this means that speakers jointly test and confirm meanings until ambiguity is reduced. Whereas Derrida sees an infinite play of signification, Habermas sees a process of discussion that checks that play: we raise questions like “Do we mean the same thing by this term?” and offer justifications until we agree. He argues that language’s “original mode” is reaching agreement—using reasons to convince one another—and that manipulative or idiosyncratic uses of language are actually “parasitic” on this primary, meaning-sharing function. In short, Habermas counters différance by saying: yes, interpretation is open-ended, but human beings have the capacity (and desire) to close the gap through discourse, arriving at interpretations we mutually accept (even if provisional).

At the heart of Habermas’s theory is the idea that everyday communication raises what he calls validity claims—claims to truth, to normative rightness, and to sincerity. For example, a software architect explaining a design is (implicitly) claiming: “The design works (truth), it follows agreed standards (rightness), and I genuinely believe in this approach (sincerity).” Crucially, these claims can be questioned and discussed. If a teammate doubts the design works, they can ask for evidence (challenging the truth claim); if a practice violates a standard, they can point it out (challenging rightness); if they suspect ulterior motives, they can probe sincerity. Through such communicative discourse—a reasoned dialogue about the claims we put forward—participants can resolve uncertainties or disagreements. In Habermas’s words, “achieving understanding in language is… introduced as a mechanism for coordinating action”. Rather than relying on a higher authority or a perfect formal language to secure meaning, Habermas entrusts the process of conversation itself to do this work.

Habermas’s approach suggests that we overcome ambiguity not by imposing a strict meta-language or final definitions, but through open-ended dialogue. If a term is unclear or a specification contested, the solution is to have the stakeholders talk to each other—ask “What do you mean by this?”, “Is this what we all intend?”—and work out a common understanding in situ. This is an ongoing, interactive process, not a one-time fix. Importantly, it requires a certain ethical stance from participants: a willingness to listen, to consider others’ viewpoints, and to let the “best argument” win. Habermas calls this the ideal speech situation—a situation of dialogue “free from any kind of distortion, any form of coercion… except the force of the better argument”. In practical terms, it means everyone with a stake has an equal right to participate and to question any assertion, introducing new ideas or corrections without fear. This norm of intersubjective recognition ensures that conversation is not a zero-sum debate to be “won,” but a collaborative search for truth. Participants approach the discussion with what we might call ontological openness: they are open to revising their assumptions about the issue at hand. For instance, an engineer and a requirement author might initially attach different meanings to a term; rather than each rigidly defending their own definition, a Habermasian dialogue invites them to acknowledge the difference and refine the definition together, perhaps even discovering a better concept that integrates both perspectives. Through such rational discourse, meaning becomes more stable—not because it was fixed from the start, but because it is continually re-affirmed and adjusted by the group until a workable consensus is reached. Notably, this consensus is “rationally motivated”—it’s achieved by giving reasons that all find acceptable, not by one side overpowering the other. In sum, Habermas bolsters our framework with the idea that language can overcome its own indeterminacies: by its nature, it provides us the tools (questions, reasons, replies) to negotiate and solidify meaning through communal effort.

Habermas’s model of communicative action has powerful implications for engineering teams and software development, where miscommunication is a notorious source of project failure. Technical communication is often plagued by ambiguity: requirements that mean different things to different people, design documents interpreted in conflicting ways, or stakeholders and developers talking past each other. Earlier, we considered how “language games” or “différance” might explain these instabilities—a requirement spec can indeed become a playground for multiple interpretations or even a battleground of agendas. Habermas, however, offers a path beyond misinterpretation or game-playing, toward authentic understanding and consensus. In many projects, one can observe a dysfunctional pattern: team members “play to the requirements” as if following rules in a game, sometimes executing the letter of a specification while knowing it might not be what the users really need. This is akin to strategic action in Habermas’s terms—using language (the requirement text) instrumentally, with an attitude of “we’ll do exactly what it says to cover ourselves,” or “we’ll twist the wording to suit our department’s goals.” Similarly, project communication can devolve into power games: e.g. a senior architect pushing through their interpretation of a design without truly addressing colleagues’ doubts, or a client writing deliberately vague requirements to retain leverage over the vendor. Another failure mode is deferral: nobody resolves the ambiguity now; instead, each party proceeds with their private interpretation, deferring the reconciliation until a crisis (e.g., when delivered software doesn’t meet the real need—a classic “misunderstanding discovered too late”).

Adopting a Habermasian lens, teams would instead commit to communicative action—treating every interaction (meetings, reviews, feedback) as an opportunity to reach mutual understanding. This means replacing the mentality of “spec negotiation” with one of collaboration. In fact, modern agile methodologies implicitly endorse this: the Agile Manifesto famously values “Individuals and interactions over processes and tools” and “Customer collaboration over contract negotiation”. These principles echo Habermas’s point that genuine dialogue should trump rigid procedure or one-sided contracts. For example, Agile teams strive to maintain a “shared understanding” of goals and requirements through continuous communication—doing so “eliminates the potential for misunderstandings” and keeps everyone on the same page. Rather than writing exhaustive specs (a meta-language of sorts) to pin down every term, agile teams hold frequent discussions (daily stand-ups, sprint planning, reviews) to clarify intent and adapt meaning as the project evolves. Communication is fluid and happens when the need arises, not only at pre-set checkpoints. This fluid, ongoing discourse aligns with Habermas’s idea that understanding is an active process, not a one-time transfer of info.

A Habermasian approach in engineering practice would entail concrete shifts in how teams communicate:

- Requirements and design reviews become genuine dialogues in which any validity claim can be questioned: does the design actually work (truth), does it follow the standards we agreed on (rightness), and is the proposal made in good faith (sincerity)?
- Decisions are carried by reasons rather than rank; the force of the better argument settles disputes, and those reasons are recorded so the consensus can be revisited when the context changes.
- Every stakeholder has an equal opportunity to ask “What do you mean by this?”, to introduce alternatives, and to voice doubts without fear of reprisal.
- Ambiguities are surfaced and talked through as soon as they are noticed, rather than left to private interpretation and deferred until delivery exposes the divergence.

By treating technical language as part of a discourse ethic rather than a static code or a battlefield, engineering teams can escape the trap of unstable meaning. The very things that Lacan or Derrida would warn about—the slippage of meaning, the play of signifiers—are kept in check by the team’s communicative practices. Ambiguity is not eliminated upfront (an impossible task), but continually managed and reduced through dialogical efforts. Habermas shows us that stability of meaning is an achievement—one that arises from communicative cooperation. In a software project, this might manifest as a well-aligned team where requirements, user stories, and design rationale all carry a meaning that is richly understood by everyone involved (not just superficially “agreed” on paper). Misinterpretations, when they occur, are caught early in conversation and become opportunities to refine the group’s understanding, rather than landmines. The overall effect is a shift from seeing language as a source of endless bugs (“requirements are always misinterpreted!”) to seeing it as a tool for building shared knowledge. In Habermas’s terms, the team develops a communicative rationality of its own: a rational, open discourse where technical terms and decisions are continually vetted for mutual understanding, and where consensus is not static but dynamically maintained as the project evolves.

By adding Habermas to our framework, we introduce a vision of overcoming the instability of meaning through the very act of communication. Where Wittgenstein, Lacan, and Derrida alert us to the pitfalls—contextual limits, subjective slippages, endless deferrals—Habermas responds with a pragmatic faith in discourse. He insists that when engineers and stakeholders engage one another with sincerity and reason, language can indeed function as a bridge rather than a barrier. Technical communication, under this lens, becomes less about wrestling with an unruly “game of code, power, or différance,” and more about participating in a collaborative dialogue aimed at truth and consensus. This doesn’t magically erase all ambiguity, but it provides a robust method to handle ambiguity: through iterative, inclusive conversation. In practical engineering terms, Habermas’s theory supports methodologies that prioritize ongoing clarification (daily stand-ups, pair programming discussions, user feedback loops) over those that rely on exhaustive upfront specification. It suggests that even in the most technical realms, ethics and understanding go hand in hand: a team committed to open, respectful discourse will likely produce clearer, more agreed-upon technical documents and systems. In sum, Habermas helps us envision technical language not as a fixed reference code nor as an infinite play of terms, but as a living, social process of meaning-making—one that, when guided by communicative rationality, can greatly stabilize and enrich the meaning of our words and the success of our collaborative projects.
