Internal states, external worlds: concurrency, consensus, and sociotechnical challenges in computing
May 24, 2025
The Common Intelligence Runtime project explores the future of software engineering in the post-AI era, blending philosophical inquiry with technical design to develop a shared runtime for intelligence, biological and artificial.
- From physical toil to cognitive burnout: modern labor and the ethics of software design
- Internal states, external worlds: concurrency, consensus, and sociotechnical challenges in computing
- Engineering speak: prolegomena to ideal technical discourses
- Cognitive mirror: how LLMs compress language and challenge our sense of self
Compute in vivo
From the dawn of computer science, computation has been understood as a kind of state machine—an automaton that transitions through well-defined internal states according to logical rules. Alan Turing’s abstract machine epitomized this model: a read/write head scanning symbols on an unbounded tape, its behavior determined wholly by its current state and input symbol. In such a formalism, consistency is an internal affair—the machine’s next state is entirely specified by its current configuration and program. Early computing pioneers like Edsger Dijkstra embraced this rigorous internal consistency. Dijkstra’s quest for program correctness led him to emphasize deterministic design and proof, avoiding unbridled complexity. He famously introduced the mutual exclusion problem in 1965, essentially launching the science of concurrency by asking how N processes could share resources without conflict. His solutions, together with the synchronization primitives he introduced soon after (most famously the semaphore), aimed to remove ambiguity from concurrent execution. Indeed, Leslie Lamport wryly notes that many concurrency mechanisms effectively “eliminate concurrency” so we can pretend everything happens sequentially. In other words, the traditional approach to making parallel processes consistent was to constrain them until they resembled a single, orderly state machine.
This classical paradigm—exemplified by Dijkstra’s structured programming and Turing’s theoretical machines—ensured that, within the computer, each step followed logically from the last. The machine’s internal memory and logic were to remain consistent and free of contradictions. However, this paradigm implicitly assumed a closed world. The environment was either absent or treated as a pre-given input that did not disturb the computation once it began. As long as a program’s assumptions held, its internal state transitions could be trusted. Yet real computing systems soon outgrew the closed-world assumption, venturing out into a messy physical reality of sensors, networks, and users. When that happened, the neat state machine model ran into trouble.
As computers interfaced with an unpredictable outside world, ensuring consistency became a far more complex challenge. The most difficult programming and systems problems tend to arise precisely from the mismatch between a computer’s internal logic and the dynamics of its environment. A few notorious examples illustrate this tension:
Race Conditions: In a race condition, the system’s behavior depends on the timing of events or interleaved operations that are not synchronized. Classic races occur in concurrent programs when two processes access a shared resource without proper coordination—the outcome “races” depending on which process gets there first. The internal state machine alone cannot predict the result, because it hinges on external timing. Dijkstra’s work on mutual exclusion was a direct response to this problem: by enforcing that only one process at a time enters a critical section, he imposed a consistent order on events that were otherwise asynchronous. Without such discipline, slight timing differences (nanoseconds of drift) can lead to inconsistent states or failures. The difficulty is that the state machine’s internal consistency must hold across all possible timings of external events—a combinatorial explosion of cases that is infamously hard to get right. Many a subtle bug has arisen because the code was correct in every sequential step, yet when two sequences interwove in the real world, an unexpected interleaving produced an error. In effect, the physical world’s simultaneity undermines the program’s logical guarantees.
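To make this concrete, here is a minimal sketch in Python (not drawn from any particular system discussed above): two threads increment a shared counter, once without synchronization and once guarded by a lock in the spirit of Dijkstra’s mutual exclusion. The names and iteration counts are arbitrary.

```python
# Minimal race-condition sketch: two threads perform a read-modify-write on a shared
# counter. Unsynchronized, their steps can interleave and updates get lost; with a
# lock, the critical section is mutually exclusive and the result is deterministic.
import threading

def run(iterations=100_000, use_lock=True):
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(iterations):
            if use_lock:
                with lock:              # only one thread at a time in the critical section
                    counter += 1
            else:
                tmp = counter           # read ...
                counter = tmp + 1       # ... write: another thread may have run in between

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(use_lock=True))   # always 200000
print(run(use_lock=False))  # typically less than 200000; the result varies with scheduling
```

The locked version trades away some concurrency for a guarantee, which is precisely the “constrain it until it looks sequential” move described above.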
Distributed Consensus: Even harder is keeping a consistent state across multiple machines in a network. Leslie Lamport, among others, showed that in a distributed system there may be no single global time—no authoritative ordering of events—because each node only has a partial, local view. His famous quip captures the predicament: “A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” In distributed consensus (e.g. agreeing on a database commit or a blockchain ledger), the internal state of each machine must eventually line up to reflect the same reality, but messages can be delayed, lost, or corrupted. The system as a whole has to tolerate contradictory partial states (one node hasn’t yet seen an update that another node has applied) and still converge on one truth. Algorithms like Lamport’s Paxos, or the raft of consensus protocols that followed it (Raft among them), are essentially attempts to impose logical consistency on a fundamentally asynchronous, unreliable environment. The infamous FLP result (Fischer–Lynch–Paterson) even proves that in an asynchronous network where even a single process may fail, deterministic consensus is impossible without further assumptions; a formal way of saying that a long delay is indistinguishable from a failure. In practice, engineers must balance consistency with availability and partition tolerance (as per the CAP theorem)—a stark reminder that the physical realities of networking (latency, faults) place hard limits on the guarantees a formal model can provide. The end result is often eventual consistency: a looser condition acknowledging that, at any given moment, different parts of the system may not agree. This is a significant departure from the crisp, instantaneous state coherence imagined in a simple state machine.
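One of Lamport’s own responses to the missing global clock is the logical clock, which orders events by causality rather than wall time. The sketch below (a toy `Node` class invented for illustration, not an excerpt from Paxos or Raft) shows the two rules: tick on every local event or send, and on receipt advance past the sender’s timestamp.

```python
# Minimal sketch of Lamport logical clocks: each node keeps a counter, increments it on
# local events and sends, and on receiving a message jumps past the sender's timestamp.
# The result is a consistent partial ordering of events without any shared wall clock.
class Node:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock                         # the timestamp travels with the message

    def receive(self, msg_timestamp):
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

a, b = Node("A"), Node("B")
a.local_event()                                   # A's clock: 1
t = a.send()                                      # A's clock: 2, message stamped 2
b.local_event()                                   # B's clock: 1
b.receive(t)                                      # B's clock: max(1, 2) + 1 = 3
print(a.clock, b.clock)                           # 2 3: the receive is ordered after the send
```

This gives ordering, not agreement; real consensus protocols layer quorums, leader election, and failure handling on top of ideas like this.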
Real-Time Concurrency: Computers increasingly control and sense the physical world in real time—from embedded systems in cars and airplanes to sensor networks and robotic actuators. These cyber-physical systems must react to external events within strict time bounds, or else risk disaster. The challenge is that the physical processes they monitor (engine dynamics, flight aerodynamics, human inputs, etc.) continue unfolding whether or not the software is ready. A program proven correct in abstract logic may still fail if, say, a sensor reading arrives too late or an actuator command is issued too early. For example, a medical infusion pump’s software might be perfectly verified to administer doses at specified intervals, yet if its clock drifts or the nurse adjusts a control at an inopportune moment, an overdose can occur—a temporal race condition with real-world consequences. Ensuring consistency here means aligning the machine’s state with real time. The notorious Therac-25 radiation therapy accidents in the 1980s were a tragic illustration: a race condition between the software’s concurrent tasks led to the machine delivering massive overdoses of radiation. Each task looked correct in isolation, but the interplay between software timing and operator input (a physical interaction) was fatally inconsistent. Real-time scheduling, interrupt handling, and sensor fusion are all about bridging the gap between the discrete steps of computation and the continuous flow of time. These problems show how traditional formalisms struggle when time and ordering become part of the correctness criteria.
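One defensive pattern in such systems is to check, at runtime, that the data being acted on is still fresh and that the control step has not blown its deadline. The sketch below is illustrative only; `read_sensor`, `actuate`, and `enter_safe_state` are hypothetical callbacks, and the timing budgets are made up.

```python
# Minimal sketch of a freshness/deadline guard in a control loop: the controller refuses
# to act on stale sensor data or after its deadline, falling back to a safe state rather
# than letting its internal model run ahead of (or behind) the physical process.
import time

MAX_SENSOR_AGE = 0.050   # seconds: readings older than this are treated as stale
LOOP_DEADLINE  = 0.010   # seconds: time budget for one control step

def compute_command(value):
    return -0.5 * value                            # trivial stand-in for a real control law

def control_step(read_sensor, actuate, enter_safe_state):
    start = time.monotonic()
    value, reading_time = read_sensor()            # hypothetical: returns (value, timestamp)

    if start - reading_time > MAX_SENSOR_AGE:
        enter_safe_state("stale sensor data")      # do not act on an outdated view of the world
        return

    command = compute_command(value)

    if time.monotonic() - start > LOOP_DEADLINE:
        enter_safe_state("deadline overrun")       # too late: the physical process has moved on
        return

    actuate(command)
```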
Each of these challenges—race conditions, distributed consensus, real-time concurrency—arises from the computer’s attempt to maintain a coherent logical state while engaging with an environment that does not share the same predictability. The difficulty is not simply a matter of “more complex code”; it is fundamentally about the mismatch of domains: the closed, unambiguous realm of symbolic logic versus the open, ambiguous realm of physical processes. As systems theorist Gregory Bateson might put it, the “difference” between these domains is critical—and it “makes a difference” in whether our systems succeed or fail.
“The science of computation has systematically abstracted away the physical world. Embedded software systems, however, engage the physical world. Time, concurrency, liveness, robustness, continuums, reactivity, and resource management must be remarried to computation. Prevailing abstractions of computational systems leave out these non-functional aspects.” —Edward A. Lee, Embedded Software (2002)
“Software must bear responsibility not merely for what it is, but for what it does—in the world, among people, over time.” — Brian Cantwell Smith, On the Origin of Objects (1996)
“To treat meaning as an internal, formal affair is to miss the point of computation as world-involving action.” — Brian Cantwell Smith, paraphrased from his critiques of formalism
“The hardest part of building software is deciding what to say, not saying it.” — David Parnas. While not specific to embedded systems, this quote resonates especially in embedded design, where specifying real-world constraints like timing, failure, and reactivity is the real challenge.
“The computer is not just a tool. It is a medium for human expression and a participant in the structure of activity.” — Terry Winograd, Bringing Design to Software (1996)
“Programming is not about writing programs. It’s about understanding and describing the world.” — Peter Naur, Programming as Theory Building (1985)
“The computer is an interaction machine. It is most powerful when it vanishes, letting the user think and act in the world.” — Paraphrased from David Gelernter, Mirror Worlds (1991)
Traditional computer science formalism has tended to abstract away the physical world. In early computational theory, one could ignore messy details like timing or sensor noise—a Turing machine doesn’t get interrupted by outside events, nor does it operate on a wall-clock schedule. This abstraction made it possible to reason about algorithms in a vacuum, ensuring internal logical consistency in splendid isolation. But when software is embedded in an actual environment, those “non-functional” aspects (timing, concurrency, real-world resources) suddenly become very functional and very central. The classical formal methods (proofs of correctness, model checking, etc.) struggle at this interface. One issue is state space explosion: when you try to model all possible interactions between a program and an unpredictable environment (multiple concurrent threads, asynchronous inputs, hardware failures), the number of states to consider becomes astronomically large. For example, verifying even a simple multi-threaded program requires considering all possible interleavings of instructions—a number that grows factorially with the length of the execution. When you add in environmental inputs that can occur at arbitrary times, the space of possibilities often defies exhaustive analysis. Traditional formalism copes by severely simplifying the environment (e.g. assuming a fixed upper bound on communication delays, or that at most one fault happens at a time). But these assumptions can break in reality, leading to proofs that don’t actually guarantee safety in the real world. The Ariane 5 rocket explosion (1996) is a classic example: inertial-reference software reused from Ariane 4 had been validated under the environmental assumptions of the older rocket, but when Ariane 5’s different flight profile produced values outside the expected range, a conversion overflowed and the system failed catastrophically.

Another limitation is that formal models often assume a clear demarcation between system and environment—a boundary where inputs are fed in and outputs observed. In practice, this boundary is porous and dynamic. Modern systems are interactive and continuous in their operation, not batch processors that take an input and later produce an output. Theoretical computer science has started extending models to address this (with concepts like reactive systems, actor models, or hybrid automata that include both discrete and continuous behavior). But even these advanced models only go so far. They still require that we enumerate the ways environment and system interact, and unexpected interactions can always surprise us.

The upshot is a kind of formalist anxiety: as Dijkstra admitted, he approached concurrency with “fear of the nondeterminism” it introduced, striving to tame it so his programs would behave mathematically. We can tame it in simple cases, but as systems scale up—a web of services in the cloud, or an autonomous vehicle navigating a busy street—complete formal taming may be impossible. We then face a choice: restrict our systems and environments until they fit the formal mold (which may be impractical), or seek a different conceptual approach that can handle a degree of disorder. It is here that perspectives from cybernetics and philosophy of technology become illuminating. They invite us to rethink the sharp separation between the logical internal state and the unruly outside world.
Instead of viewing the interface as a troublesome gap to be patched with ad-hoc solutions, these thinkers suggest reframing the entire problem: perhaps the machine and its environment are one system, and consistency must be considered at that broader level.
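Before turning to those perspectives, the state-space explosion mentioned above is easy to make concrete: the number of distinct interleavings of k independent threads of n atomic steps each is (k·n)! / (n!)^k. The short sketch below simply evaluates that formula; it is a back-of-the-envelope illustration, not part of any verification tool.

```python
# Back-of-the-envelope count of thread interleavings: k threads of n atomic steps each
# can interleave in (k*n)! / (n!)^k distinct ways, which quickly defeats naive enumeration.
from math import factorial

def interleavings(steps_per_thread, threads):
    total = factorial(steps_per_thread * threads)
    return total // factorial(steps_per_thread) ** threads

print(interleavings(2, 2))    # 6
print(interleavings(10, 2))   # 184756
print(interleavings(10, 3))   # 5550996791340, roughly 5.5e12
```

Environmental inputs that can arrive at arbitrary times only multiply this further.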
Cybernetic insights
Long before today’s software engineers grappled with race conditions, the pioneers of cybernetics were already studying how systems could maintain order in the face of external disturbances. Norbert Wiener, in developing cybernetics in the 1940s, was concerned with anti-aircraft systems and guided missiles—literally machines trying to hit moving targets in the physical world. He understood that such a system cannot be consistent by internal design alone; it must continually adjust based on feedback from the environment. As Wiener explained, “This control of a machine on the basis of its actual performance rather than its expected performance is known as feedback.” A feedback loop “re-inserts into [the system] the results of its past performance,” allowing the machine to correct any drift between its internal state and reality. In essence, the machine measures the environment (via sensors), compares it to its goal or expected state, and adjusts its behavior (via actuators or control inputs) to reduce any discrepancy. This was a radical departure from the open-loop, pre-programmed notion of a machine. It made the interaction itself—the continuous exchange between system and world—a central feature of system design.
Wiener’s view was deeply interdisciplinary. He saw analogies between machines and living organisms: “the physical functioning of the living individual and the operation of some of the newer communication machines are precisely parallel in their attempts to control entropy through feedback. Both of them have sensory receptors… for collecting information from the outer world.” Here “entropy” can be understood as the tendency toward disorder. A steam engine with a governor, a thermostat regulating a furnace, or even a homeostatic process in the human body (like sweating to cool down) are all examples of feedback keeping the system’s state within acceptable bounds despite external fluctuations. The implication is that consistency is not a one-off guarantee but an ongoing achievement: a dance of adjustments between system and environment. Real-time computing systems echo this insight—consider a PID controller in an embedded system continually adjusting a valve to keep pressure constant. Without feedback loops, any inconsistency between model and reality would accumulate error and eventually cause failure. Cybernetics thus formalized control as the principle of maintaining stability in open systems, complementing the computer science focus on internal logical coherence.
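As a concrete miniature of such a loop, here is a sketch of a discrete PID controller driving a toy plant. The gains, the setpoint, and the leaky “pressure” model are made-up illustrative values, not a tuned design.

```python
# Minimal sketch of a discrete PID feedback loop: each tick, the controller measures the
# error between setpoint and actual reading, then adjusts its output from that error's
# present value (P), accumulated history (I), and rate of change (D).
class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement          # feedback: compare goal to actual performance
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: a "pressure" that leaks downward unless the valve command pushes it up.
pid = PID(kp=0.8, ki=0.2, kd=0.05, setpoint=5.0, dt=0.1)
pressure = 2.0
for _ in range(200):
    command = pid.update(pressure)
    pressure += (command - 0.3) * pid.dt             # crude plant model with a constant leak
print(round(pressure, 2))                            # settles near the 5.0 setpoint
```

Without the feedback term the constant leak would pull the pressure steadily away from the goal; with it, the discrepancy between expectation and measurement is continually corrected, which is exactly Wiener’s point.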
Gregory Bateson, a thinker in the cybernetic tradition, pushed these ideas even further into the realm of mind and society. Bateson argued that we should not delineate the boundaries of a mind (or a system) too narrowly. In his famous example, when a blind man walks with the aid of a cane, the man plus the cane plus the ground form one cognitive system for navigating the world. It makes no sense to say the “thinking” ends at the skull—the information and control loop extends into the environment. Bateson coined the phrase “the unit of survival is organism plus environment,” meaning an organism cannot be understood—or survive—independently of the surroundings it interacts with. In a direct analogy to computing, one could say the unit of reliable operation is computer plus environment. A computer system that “destroys its environment destroys itself”—for example, a network service that overwhelms the network with traffic (destroying its communication environment) will cease to function correctly. Bateson’s insight prefigures modern notions of distributed cognition and sociotechnical systems: the idea that human cognition and technological operation often form an integrated whole. Notably, when asked whether a computer could truly think, Bateson answered that “the thinking system is always the man and the technology and the environment in which they are situated”, with knowledge not residing in any single component alone.
The lesson cybernetics offers to our problem is a change in perspective: rather than obsess solely over the internal state consistency, pay equal attention to the loops that connect the system to the world. Robust systems must continually check and adjust—effectively learning about their environment and adapting. This could mean simple feedback control or more elaborate adaptive algorithms, but in all cases the boundary between inside and outside is crossed repeatedly with information. In technical practice, this is reflected in the shift from static verification to runtime monitoring, self-stabilizing algorithms (which converge to a correct state if perturbed), and systems that degrade gracefully rather than simply crashing on unexpected input. The cybernetic lens teaches that inconsistency isn’t always a bug to be eliminated once and for all—sometimes it’s a condition to be managed over time through dynamic interaction.
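In code, that shift from one-off proof to ongoing management can be as simple as a monitor loop that checks an invariant while the system runs and repairs deviations rather than crashing. The sketch below is generic; the replica-lag invariant and the `resync` repair are made-up placeholders.

```python
# Minimal sketch of runtime monitoring with graceful degradation: instead of assuming an
# invariant is guaranteed by construction, the system checks it during operation and
# steers itself back toward an acceptable state when reality disagrees.
import time

class Monitor:
    def __init__(self, check, repair, interval=1.0):
        self.check = check            # returns True if the invariant currently holds
        self.repair = repair          # corrective action when it does not
        self.interval = interval

    def run_once(self):
        if not self.check():
            self.repair()             # manage the inconsistency instead of crashing

    def run_forever(self):
        while True:
            self.run_once()
            time.sleep(self.interval)

# Example wiring (placeholders): keep two replicas' counters within a tolerated lag.
state = {"primary": 100, "replica": 92}

def check_replicas():
    return state["primary"] - state["replica"] <= 5

def resync():
    state["replica"] = state["primary"]          # converge back to a correct state

Monitor(check_replicas, resync).run_once()
print(state)                                     # {'primary': 100, 'replica': 100}
```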
The system’s correctness depends on how it lives within its broader loop—of user, tools, time, and environment. To abstract these away is to design for failure. In this view, user manuals, training procedures, and real-world feedback loops are not peripheral—they are core system components. They close the cognitive and operational loop. You’re not just programming machines. You’re designing sociotechnical ecologies.
First-class engineers must also be first-class community leaders—not because of prestige or title, but because the systems they build reshape the world others must live in. Engineering is not just technical—it is ecological, ethical, and political. When software mediates how people communicate, decide, or survive, engineers become architects of social infrastructure. They are no longer just problem-solvers—they are problem framers, norm enforcers, and stewards of shared reality. Leadership, in this sense, means recognizing that code lives in a community. That protocols affect people. That abstractions have consequences. A first-class engineer sees beyond the pull request and into the civic, human, and environmental systems their work inhabits. They lead not just with skill, but with responsibility.
Gilbert Simondon, writing in the 1950s, introduced the concept of the “open machine.” He critiqued the drive toward completely automatic, self-enclosed machines. As Simondon noted, “A purely automatic machine completely closed in on itself in a predetermined way of operating would only be capable of yielding perfunctory results. The machine with superior technicality is an open machine.” In Simondon’s view, truly advanced machines are those that maintain a “margin of indeterminacy” in their functioning—a flexibility that allows them to respond to outside information. Rather than predefining every action, an open machine leaves room to be steered by real-time inputs, much like Wiener’s feedback loops. Simondon went further to say that an ensemble of open machines requires human involvement as a coordinator or interpreter: “the ensemble of open machines assumes man as permanent organizer… Far from being the supervisor of a squad of slaves, man is the permanent organizer of a society of technical objects which need him.” This is a striking reversal of the idea that automation should eliminate human intervention. Instead, the human operator (or user) is an integral part of the system’s operation—analogous to a conductor guiding musicians, as Simondon’s metaphor goes. The conductor doesn’t micromanage each instrument’s every note, but provides a unifying tempo and interprets the overall performance, responding to how the music actually sounds. Likewise, human-in-the-loop can provide the contextual judgment and flexibility that purely formal systems lack. Simondon thus anticipated the modern emphasis on human-centered automation and the design of systems that cooperate with human intelligence rather than replace it. For maintaining consistency, this means sometimes the solution is not a more elaborate algorithm, but a better integration of human judgment at critical junctures (for example, an automatic pilot handing control back to a human when it detects conditions it doesn’t understand).
Bruno Latour, a leading figure in science and technology studies, offers another perspective with his Actor–Network Theory (ANT). Latour and his colleagues argue that the social and the technical are not separate domains but form a single entwined network of relationships. In ANT, humans and nonhumans (“actors” or “actants”) are on the same footing, each influencing the state of the network. Rather than seeing a computer system as purely technical and the user or society as an external influence, ANT would have us see a sociotechnical network in which hardware, software, people, organizations, and even laws or standards are all nodes that shape system behavior. From this standpoint, ensuring consistency and correctness is not just a matter of internal code logic—it’s also about the network of practices and affordances around the technology. For example, a distributed consensus algorithm’s reliability may depend on social processes like operators noticing and rebooting failed nodes, or on institutional trust in the authorities who administer the network. Latour famously stated that “technology is society made durable,” meaning that what we build into machines are often social rules or norms in hardened form. A simple illustration is a door closer: it “embeds” the social expectation that a door should shut itself after someone passes—a task a human doorman might have performed in the past. In computing, one could say that a lock in a concurrent program is society made durable: it’s an enforced convention that only one process “speaks” at a time, analogous to polite turn-taking in conversation. Actor–Network Theory thus encourages us to design for sociotechnical consistency—recognizing that procedures, user training, error reporting systems, and cultural acceptance of a system are as much part of its consistency as the code’s formal properties. A trivial example is how a UI design can prevent inconsistent states by guiding user behavior (e.g. disabling a button after it’s clicked to avoid duplicate requests). The boundary between “user mistake” and “software bug” becomes blurrier; good design treats them together, ensuring the overall system (human + computer) stays in a valid state.
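On the engineering side of that blurred boundary, the duplicate-request example can be enforced in code as well as in the interface. The sketch below uses a hypothetical idempotency key and an in-memory store; `process_payment` and the key names are placeholders, not any real API.

```python
# Minimal sketch of server-side duplicate suppression, complementing the UI example above:
# even if a button is clicked twice, a request carrying the same idempotency key is
# processed only once, keeping the overall human-plus-computer system in a valid state.
seen_results = {}

def handle_request(idempotency_key, amount):
    if idempotency_key in seen_results:
        return seen_results[idempotency_key]     # replay the earlier result; do not charge twice
    result = process_payment(amount)
    seen_results[idempotency_key] = result
    return result

def process_payment(amount):
    return {"status": "charged", "amount": amount}   # placeholder for the real side effect

print(handle_request("req-42", 10))   # performs the charge
print(handle_request("req-42", 10))   # duplicate click: same result, no second charge
```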
Bernard Stiegler provides a deep philosophical meditation on how technology and humanity co-evolve, which sheds light on why the interface between internal and external consistency is so challenging. Stiegler argued that “technics [technology] forms the horizon of human existence”, coextensive with our being. In his view, it’s impossible to separate what is human from the technical prostheses and artifacts that humans create—from the earliest flint axe to the latest computer. Crucially, Stiegler claims that the genesis of the technical and the genesis of the human are intertwined, and even our experience of time (our temporality) is bound up with technology. This has profound implications for our topic: the struggle to align machine state with physical world state can be seen as part of a larger historical struggle to align our human intentions with the tools we use. Stiegler points out that Western philosophy often suppressed this insight by treating technology as a mere exteriority or instrument (the old episteme vs. techne split). But in reality, the “who” (human) and the “what” (technology) are in an undecidable relation—each constitutes the other. Applying this to computing, the formal logical systems we devise are not apart from us; they reflect our own cognitive habits and also shape them. When formal methods hit a wall trying to model the world, it may indicate that we have to change how we think about the problem, not just engineer a patch. Stiegler would likely urge a pharmacological approach—recognizing that technology is a pharmakon (poison and cure) that disrupts even as it orders. The very issues of race conditions and unpredictability in complex systems are a kind of poison introduced by the power of computing extending into every corner of reality; the cure, in turn, might be more technology (better tools, monitors, AI diagnostics) but also might be social (education, protocols of use) or philosophical (rethinking what robustness means in an age of algorithms). In Stiegler’s terms, we need an ethos for our technical era—a way of taking care of the human-technology relation so that our systems remain coherent and serve human ends.
Integrated coherence
The evolution of computing from isolated state machines to pervasive, real-time, networked systems has forced us to confront the tension between internal consistency and external reality. The most challenging bugs and system failures often arise exactly at this intersection, where the neat determinism of software meets the parallel, unpredictable, and time-bound nature of the real world. We have seen how classical thinkers like Turing and Dijkstra established the importance of internal logical coherence, only for practitioners like Lamport to discover the “chaos” of distributed computing that lies beyond that cozy garden. The response has been twofold: one path refines the engineering, inventing new algorithms and proofs to extend coherence a bit further outwards (as Lamport did with timestamps, consistency models, and model checking for distributed systems). The other path, illuminated by cyberneticians and philosophers, reconceives the problem by embracing the entanglement of computation with its environment.
Ultimately, ensuring consistency in modern computing systems is not a one-dimensional technical problem—it is sociotechnical. It requires thinking in terms of systems of systems, of feedback loops that include people as well as programs, of ongoing processes rather than final end-states. The internal logical clarity we strive for must be matched by an external contextual awareness. In practical terms, this means hybrid solutions: formal verification and runtime monitoring; rigorous algorithms and thoughtful user experience design; fault-tolerant protocols and organizational policies for crisis management. It means, as Simondon advised, keeping the machine open to reality and understanding that true technical perfection lies not in autonomous infallibility, but in adaptive resilience.
The state machine model of computing is not obsolete—it remains a powerful way to reason about complex logic. But it must be complemented by models of interaction, time, and uncertainty. By expanding our view to encompass the whole milieu in which computers operate, we acknowledge that consistency is a shared responsibility between the program and its world. The payoff for this broader view is systems that better survive the unexpected. As we build ever more advanced technologies—self-driving cars, smart infrastructures, autonomous agents—the lesson becomes ever more clear: the consistency that truly matters is that of the combined human-machine-environment system. Achieving that consistency is as much a matter of insight and design (informed by thinkers like Wiener, Bateson, Simondon, Latour, and Stiegler) as it is a matter of code. It calls on us to be, in a sense, both programmers and philosophers, designing not just algorithms, but whole ecologies of computing that can maintain their balance amid the turbulence of the real world.