Tuesday, February 03, 2026

Intelligibility Is Not a Practice: Against Relativism, Naturalism, and Inferential Closure

 

1. The Evasion of Intelligibility

One of the most striking features of contemporary philosophy of language and logic is not what it argues for, but what it quietly avoids. Across otherwise divergent traditions, there is a shared reluctance to treat intelligibility itself as a real philosophical problem. Meaning is analyzed, reference theorized, normativity reconstructed, and inference regimented, yet the question of how anything can count as intelligible at all is either deferred or dissolved.

This avoidance is not accidental. The notion that intelligibility might be irreducible, non-formal, and ontologically basic sits uneasily with dominant methodological commitments. It threatens naturalism by introducing normativity that is not causally explicable. It threatens pragmatism by locating standards of correctness outside social practice. It threatens formalism by insisting that no amount of structure can close the gap between syntax and meaning.

The response has been predictable. Rather than confronting intelligibility directly, contemporary philosophy has sought to explain it away: by appeal to models, to behavior, or to practice. The result is not theoretical economy, but conceptual self-sabotage. In attempting to eliminate non-formal conditions of meaning, these approaches quietly presuppose them.

The claim defended here is straightforward but uncompromising: intelligibility is not generated by formal systems, empirical regularities, or social practices. It is a condition of their possibility. Any theory that denies this collapses into regress, triviality, or eliminativism.

2. Formal Determination and the Persistence of Meaning

It is uncontroversial that formal systems do not interpret themselves. Yet this fact is routinely treated as a technical limitation rather than a metaphysical one. That is a mistake.

No formal system can, from within its own resources, establish that it is the correct system for the domain it purports to represent. The notions of correctness, adequacy, and relevance are not formal predicates. They are not derivable from axioms or inference rules. They govern the application of systems, not their internal operations.

This is not a merely epistemic limitation reflecting human ignorance or computational constraint. Even an ideal reasoner supplied with unlimited resources would face the same structural situation. Formal derivation presupposes semantic uptake. Proof presupposes satisfaction. Syntax presupposes meaning.

Attempts to evade this by appeal to meta-systems simply reproduce the same structure. A meta-system may encode rules about object-level systems, but the judgment that the meta-system is doing so correctly again relies on standards it does not itself generate. The hierarchy does not terminate in closure. It presupposes a space in which hierarchies can be evaluated at all.
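
Tarski's theorem on the undefinability of truth gives this point a precise formal shadow (cited here as a standard result, not as exegesis of any particular author). For a hierarchy of languages $L_0 \subset L_1 \subset L_2 \subset \dots$, each sufficiently expressive language can define truth only for its predecessors, never for itself:

\[
\mathrm{Tr}_n(x), \text{ the truth predicate for } L_n, \text{ is definable in } L_{n+1} \text{ but not in } L_n.
\]

No level of the hierarchy contains the resources to certify its own semantics; the evaluation always takes place one level up, and the hierarchy as a whole is never evaluated from within.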

The persistence of this space is not a defect of formalism. It is revealed by formalism at its most rigorous. Logic teaches us, by its own internal limits, that intelligibility cannot be fully objectified.

3. Why Model-Theoretic Relativism Cannot Do the Job

The model-theoretic argument associated with Hilary Putnam is often taken to show that reference and truth cannot be determinate independently of interpretive schemes. The existence of multiple non-isomorphic models satisfying the same theory allegedly undermines metaphysical realism and supports a form of conceptual relativism.

But this argument rests on an equivocation.

The technical result shows that formal theories underdetermine interpretation. It does not show that interpretation is therefore conventional or indeterminate. To reach that conclusion, one must assume that all satisfying models are equally acceptable. Yet that assumption renders the argument unintelligible.
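
The technical core can be stated in one line. By the Löwenheim–Skolem theorems (a standard model-theoretic result, invoked here independently of Putnam's own formulations):

\[
\text{If a countable first-order theory } T \text{ has an infinite model, then } T \text{ has a model of every infinite cardinality.}
\]

Models of different cardinalities are never isomorphic, so satisfaction alone cannot fix the interpretation of $T$ even up to isomorphism. Everything in the philosophical argument turns on what, over and above satisfaction, privileges one of these models as intended.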

The distinction between intended and unintended models is not a model-theoretic distinction. It is not fixed by satisfaction relations. It presupposes standards of relevance, salience, and adequacy that are not themselves formalizable. If those standards are abandoned, the argument collapses into triviality: any interpretation is as good as any other, including interpretations on which the argument itself fails to refer.

The model-theoretic argument therefore presupposes what it denies. It relies on a non-formal sense of correctness to distinguish meaningful interpretations from pathological ones, while refusing to acknowledge the ontological status of that sense. The result is not deflationary clarity, but conceptual incoherence.

What the argument actually demonstrates is not the relativity of meaning, but the impossibility of eliminating extra-formal intelligibility. The very act of recognizing model-theoretic underdetermination depends on a prior space in which interpretations can count as better or worse.

4. The Failure of Naturalized Semantics

Naturalized semantics promises a more austere solution. Meaning is reconstructed in terms of causal relations, dispositions, or evolutionary success. Normativity is redescribed as reliable response to environmental stimuli. On this view, no irreducible semantic facts remain.

This approach fails not because it is insufficiently detailed, but because it misconstrues the problem.

Causal regularities do not distinguish between correct and incorrect application. They describe what happens, not what ought to count as right or wrong. A pattern of reliable behavior does not, by itself, amount to rule-following unless standards of correctness are already in place.

Scientific reasoning itself presupposes norms of evidential relevance, explanatory adequacy, and inferential legitimacy that cannot be reduced to causal history. Appeals to evolutionary advantage merely shift the problem: advantageous for what, and according to which standards? The invocation of function presupposes intelligibility rather than grounding it.

A naturalized semantics must therefore either smuggle normativity back in under another name or deny that rational normativity is real. The former yields inconsistency; the latter yields eliminativism. Neither can support the authority of science or philosophy.

The problem is not that naturalism explains too little, but that it explains the wrong thing. It explains behavior while presupposing meaning.

5. Inferentialism and the Social Turn

Inferential pragmatism, most prominently associated with Robert Brandom, represents a more sophisticated attempt to take normativity seriously without reifying it. Meaning is constituted by inferential role within a social practice of giving and asking for reasons. Norms arise from mutual recognition and scorekeeping.

This view is correct to reject reduction to causal regularity. But it mislocates the ground of normativity.

Social practices can transmit, stabilize, and contest norms. They cannot generate normativity without circularity. The distinction between correct and incorrect inference cannot itself be constituted by communal endorsement unless correctness is reduced to authority or consensus. In that case, disagreement ceases to be rationally intelligible.

Moreover, inferential roles are intelligible only within a prior space in which inferences can count as about something rather than merely occurring. A practice of scorekeeping presupposes that there is something to keep score of. That presupposition is not supplied by the practice itself.

Inferentialism therefore presupposes intelligibility while denying its independence. It treats the social articulation of norms as their ontological ground, rather than as one mode of their manifestation.

6. Against Naturalism Once More

It may be objected that the foregoing critique relies on an inflated notion of normativity, one that contemporary philosophy has learned to distrust. Perhaps intelligibility simply is what competent users do. Perhaps there is no further fact of the matter.

This response merely restates the problem.

If intelligibility is exhausted by use, then there is no distinction between correct and incorrect use beyond what is contingently accepted. But then the authority of philosophy, logic, and science evaporates. Critique becomes sociology. Argument becomes reportage.

No one who engages in philosophy actually accepts this consequence. Appeals to error, misunderstanding, misapplication, and confusion are ubiquitous. They presuppose standards that transcend local practice.

The refusal to acknowledge these standards does not eliminate them. It merely renders them philosophically invisible.

7. Intelligibility as a Condition, Not a Product

The common failure of relativism, naturalism, and inferentialism lies in their shared assumption that intelligibility must be produced: by systems, by organisms, or by practices. When production fails, intelligibility is either relativized or denied.

The alternative defended here is that intelligibility is a condition of determinability. It is not an entity, a rule, or a theory. It is the space in which determinate meanings, judgments, and truths can arise.

This space is not formal, because any attempt to formalize it collapses it into what it conditions. It is not subjective, because subjects participate in it rather than generate it. It is not social, because practices presuppose it in order to function as practices.

It orients rational activity without necessitating outcomes. It grounds normativity without competing with causal explanation. It makes disagreement, correction, and progress possible without guaranteeing closure.

To deny the reality of this space is not to adopt a leaner metaphysics. It is to undermine the very distinction between sense and nonsense on which philosophy depends.

8. Quine and the Refusal of the Question

The resistance to treating intelligibility as irreducible can be traced back, in part, to the influence of W. V. O. Quine. By rejecting the analytic-synthetic distinction and advocating a thoroughgoing naturalism, Quine sought to dissolve questions of meaning into empirical science.

But what is dissolved is not confusion, but authority.

Quine’s holism presupposes that some revisions of belief are better than others. His naturalized epistemology presupposes standards of evidential relevance. His own arguments presuppose intelligibility at every step. What he denies is not normativity as such, but its philosophical articulation.

The refusal to articulate the conditions of intelligibility does not free us from them. It merely leaves them unexamined.

9. The Inescapable Conclusion

The attempt to explain intelligibility away has failed. Formal systems do not close the gap. Naturalism cannot ground normativity. Social practice cannot generate correctness. Each approach presupposes what it denies.

The conclusion is not mysterious, but it is unwelcome: intelligibility is real, irreducible, and ontologically basic.

One may resist this conclusion. One may redescribe, deflect, or postpone it. But one cannot eliminate it without eliminating the very enterprise of philosophy.

If intelligibility is not real, nothing we say means anything.
If it is real, then the project of explaining it away is incoherent.

The burden of proof now lies with those who claim otherwise.

Basic Intelligibility, Teleo-Spaces, and the Discipline of Sense (Reading Wittgenstein's Tractatus 1.0–3.5)

I. The Problem of Basic Intelligibility

Any philosophy that takes itself seriously must eventually confront a question that is almost never stated with sufficient clarity: why is anything intelligible at all?

This is not the familiar question of why particular propositions are true, nor why certain inferential practices succeed. It is the more basic question of why determinacy itself obtains—why distinctions hold rather than dissolve, why meaning does not collapse into either infinite regress or sheer indifference. One may explain this or that truth, but explanation already presupposes a field in which explanation can count as explanation. The deeper question concerns the possibility of sense as such.

Reflection shows that this question cannot be answered algorithmically. An algorithm already presupposes a distinction between correct and incorrect application and therefore operates within a prior space of intelligibility. Nor can the question be resolved by appeal to subjectivity, social practice, or convention, since these themselves function only insofar as distinctions already matter. Even formal logic cannot close the issue. Logic presupposes a field of possible sense in order to operate as logic at all. It does not generate that field.

To name this condition without prematurely domesticating it, I shall speak of teleo-space. A teleo-space is not an entity, a subject, or a hidden metaphysical layer. It is a structured field of intelligibility—one in which distinctions can obtain, norms can exert force, and direction toward sense can emerge without the prior imposition of explicit rules. Logical space is one such teleo-space, but it is not unique. Ethical, perceptual, and practical spaces exhibit the same basic structure. In each case, intelligibility is not conferred by a subject nor derived from convention; it is the condition under which judgment is possible at all.

The thesis hovering over this series can therefore be stated with restraint: regress in meaning, truth, and metaphysics does not terminate in silence, algorithm, or social practice, but in a basic, weighted intelligibility of reality itself. Whether this intelligibility is finally grounded in the Logos is not presupposed here. That question will emerge—or be resisted—under pressure from the texts themselves.

With that pressure in view, we turn to Ludwig Wittgenstein's first book, the Tractatus Logico-Philosophicus.

II. The World as Determinate (1.0–1.13)

Wittgenstein opens with a sentence that immediately enforces determinability:

1. Die Welt ist alles, was der Fall ist.
“The world is everything that is the case.”

The world is not the totality of what exists but of what obtains. This is already a restriction of intelligibility. What cannot be the case cannot be said, and what cannot be said does not enter the space of sense.

The point is sharpened immediately:

1.1 Die Welt ist die Gesamtheit der Tatsachen, nicht der Dinge.
“The world is the totality of facts, not of things.”

Facts, not objects, are the bearers of intelligibility. Objects do not explain sense; they participate in it only insofar as they occur in determinate configurations. An isolated “thing” is not yet meaningful. Sense requires articulation; it requires that something could be otherwise. What blocks regress here is not explanation but constraint.

III. Logical Space and Possibility (1.13–2.0122)

Wittgenstein then introduces logical space as the unified field in which facts are possible:

1.13 Die Tatsachen im logischen Raum sind die Welt.
“The facts in logical space are the world.”

Logical space is not constructed, inferred, or discovered. It is presupposed. One does not assemble intelligibility piece by piece; one always already operates within a field of possible sense.

This presupposition becomes explicit in the discussion of objects:

2.0121 Es wäre unmöglich, die Gegenstände zu denken, ohne sie in einem Sachverhalt denken zu können.
“It would be impossible to think of objects without thinking of them as occurring in states of affairs.”

Objects can only be thought as possibly occurring. Their independence is therefore modal rather than ontological:

2.0122 Das Ding ist selbständig insofern es in allen möglichen Sachlagen vorkommen kann.
“The object is independent in so far as it can occur in all possible situations.”

From the perspective of teleo-spaces, this is decisive. Objects are not intelligible on their own; they are nodes within a space of directed possibility. Any attempt to ground meaning in metaphysical atoms is thereby foreclosed. At the same time, the unity of logical space itself is presupposed rather than explained. It is enforced as a condition of sense.

IV. Simplicity and the Arrest of Regress (2.02–2.0212)

Wittgenstein insists:

2.02 Der Gegenstand ist einfach.
“The object is simple.”

This simplicity is not empirical but logical. Objects mark where analysis must stop if meaning is to arrive at all. The reason is explicit:

2.0211 Wenn die Welt keine Substanz hätte, so würde, ob ein Satz Sinn hat, davon abhängen, ob ein anderer Satz wahr ist.
“If the world had no substance, then whether a proposition had sense would depend on whether another proposition was true.”

If sense depended on further propositions, regress would be infinite. Meaning would never stabilize. Here Wittgenstein aligns fully with the teleo-space intuition: intelligibility cannot be deferred without limit. There must be a given field of determinability within which articulation can occur. What he refuses to do is explain why such a field holds together. He treats it as a condition of sense rather than an object of theory.
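
The threatened regress can be displayed schematically (the notation is mine, not Wittgenstein's):

\[
\mathrm{Sinn}(p_1) \text{ depends on the truth of } p_2,\quad \mathrm{Sinn}(p_2) \text{ on the truth of } p_3,\quad \dots
\]

At no stage would any proposition's sense be settled, since each settlement waits on another that never comes. Simple objects are posited precisely to halt this chain: their being named, not some further proposition's being true, is where sense bottoms out.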

V. Picturing, Logical Form, and Showing (2.1–2.18; 2.172)

When Wittgenstein writes,

2.1 Wir machen uns Bilder der Tatsachen.
“We make to ourselves pictures of facts,”

he is not appealing to psychology. A Bild is a structured representation whose power lies not in mental imagery but in shared articulation. What every picture must share with reality is logical form:

2.18 Was jedes Bild … mit der Wirklichkeit gemein haben muss … ist die logische Form.
“What every picture must have in common with reality … is logical form.”

Logical form is not an object among others. It is the condition of representation itself. This is why it cannot be represented:

2.172 Das Bild kann seine logische Form nicht abbilden; es zeigt sie.
“The picture cannot represent its logical form; it shows it.”

What shows itself here is not ineffable content but unavoidable constraint. No formal system can state its own conditions of operation without circularity. No rule can generate the space in which it functions as a rule. This marks a decisive anti-algorithmic moment in the Tractatus. Wittgenstein blocks explanation precisely where intelligibility is doing its deepest work.

VI. Tension and Orientation

Up through this point, Wittgenstein consistently affirms determinacy, blocks regress, rejects algorithmic closure, and denies subjectivist grounding. All of this converges with the claim that intelligibility is real, structured, and irreducible.

The tension emerges at the question of ground. Logical space is treated as a condition of sense that must be presupposed but not accounted for. The teleo-space framework insists that such presuppositions themselves demand ontological reckoning—not as entities or axioms, but as the condition under which determinacy can obtain at all.

Whether that reckoning must finally appeal to the Logos remains an open question. But the Tractatus ensures that the question cannot be avoided. It disciplines thought into seeing where explanation must stop—and where philosophy must either recoil or press forward.

That pressure is the work ahead.

Friday, January 30, 2026

Sense without Sense?

Quine, Carnap, and the Persistence of Intelligibility

The reflections that follow were occasioned by a recent paper by Lucas Ribeiro Vollet, “Sense (Sinn) as a Pseudo-Problem and Sense as a Radical Problem: A Reading of the Motivations of Quine against Carnap” (https://independent.academia.edu/s/c59ed03835). Vollet kindly invited comment on the piece, and it repays careful reading. The article revisits the familiar—but still unsettled—Quine–Carnap dispute over Sense, not as a merely historical episode, but as a diagnostic window into the limits of semantic theory in the twentieth century.

Vollet’s central claim is worth stating clearly at the outset. He argues that Quine’s skepticism about intensions is not eliminativist in the crude sense often attributed to him. Quine does not regard the question of Sense as a pseudo-problem in the way some moral or metaphysical notions are dismissed as meaningless. Rather, he believes the question is incorrectly framed. What Carnap treats as a semantic problem—the need for intensional identity conditions stronger than extensional equivalence—Quine reinterprets as a scientific and methodological challenge: the ongoing task of coordinating empirical investigation, theory revision, and socially stabilized paradigms of meaning.

As Vollet summarizes in his abstract, Quine’s naturalism applies skepticism about intensions not to deny the reality of the problem, but to expose its mislocation. The appeal to sense functions as a semantically dogmatic expression of a broader difficulty already present in scientific practice itself: the challenge of providing coherence to inquiry, securing rational consensus, and stabilizing paradigms of meaning over time. Vollet names this the radical problem generated by the idea of Sense.

This reframing is philosophically serious and historically sensitive. It resists the temptation to caricature Quine as a blunt extensionalist and instead situates his critique of sense within a broader vision of scientific rationality. There is much here with which one can agree. Yet the very sophistication of Vollet’s reconstruction also sharpens a question that neither Quine nor his interpreters fully resolve: what becomes of intelligibility once formal semantics has been dismantled, but scientific practice itself presupposes more than extensional structure can supply?

It is this question—rather than the fate of “sense” as a semantic object—that motivates what follows.

I. The Legitimate Collapse of Intensional Semantics

Quine’s central insight was not merely that intensional entities resist formal definition. It was that the criteria by which such entities were supposed to be individuated—analyticity, synonymy, necessity—could not be specified without circularity. Any attempt to regiment them either presupposed what it claimed to explain or relied on pragmatic judgments smuggled in under the guise of logical form.

Vollet is right to insist that this is not a technical oversight but a structural failure. There is no algorithm for sense. No calculus decides synonymy. No formal rule distinguishes what is merely coextensive from what is cognitively equivalent. To that extent, the Carnapian project fails decisively.

This failure should not be minimized. It forces a clean and non-negotiable distinction between what syntax can secure—derivability, consistency, inferential order—and what it cannot: meaning, reference, or truth. Formal rigor does not rescue sense; it exposes its absence as a formal object. Any attempt to recover sense by enriching syntax, appealing to semantic rules, or invoking linguistic frameworks only relocates the difficulty without resolving it.

In this respect, Quine’s critique remains one of the most important negative results of twentieth-century philosophy.

II. The Non Sequitur: From Failure to Elimination

Where the argument falters is in the inference drawn from this failure. Quine famously concluded that since intensional semantics cannot be formalized, there is no fact of the matter beyond extensional equivalence and the evolving practices of empirical science. Meaning becomes, at best, a byproduct of theory choice, pragmatic convenience, and holistic revision.

But this conclusion does not follow.

The failure of formal capture does not entail the unreality of what resists capture. It shows only that the object in question is not an object of the same kind as formal derivations or syntactic structures. To infer elimination from non-formalizability is to mistake a methodological limitation for an ontological verdict.

Indeed, Quine’s own account of scientific practice quietly depends on distinctions that extensionalism alone cannot generate. Theory revision is not arbitrary. Scientific change is constrained by judgments of relevance, coherence, explanatory power, and unification—none of which are derivable from extensional relations alone, and none of which can be dictated by data without remainder. These judgments are not internal to a theory in the way axioms are; nor are they reducible to convention. They presuppose a space in which theories can count as making better sense of a domain rather than merely succeeding instrumentally.

Quine identifies the failure of intensional semantics correctly. What he fails to identify is what that failure presupposes.

III. Intelligibility Without Intensions

Once sense is rejected as a formal intermediary, we are left with a striking alternative: intelligibility without intensional objects. The question is no longer “What is the sense of this expression?” but “How is sense-making possible at all, if no formal structure can generate it?”

Vollet gestures toward an answer by appealing to coordination, rational negotiation, and scientific practice. But these gestures remain descriptive rather than explanatory. They name sites where intelligibility is exercised without accounting for what makes such exercise possible in the first place.

Theory choice, interpretive adequacy, and conceptual revision all presuppose a non-formal orientation toward meaning. This orientation is not itself a theory, nor a rule, nor a convention. It is the condition under which theories, rules, and conventions can be assessed as intelligible rather than merely adopted.

This is the point at which extensionalism quietly depends on what it officially disavows. The rejection of sense does not eliminate the problem of meaning; it relocates it to a level that resists objectification.

IV. Determination and the Space of Orientation

What emerges here is not a new intensional entity, but a structural distinction: the distinction between determination and determinability. Formal systems determine relations within a domain. But the capacity for such determination—to count as relevant, adequate, or successful—depends on a space of orientation that is not itself formally determined.

This space is neither subjective nor sociological, though it is encountered through finite judgments. It does not dictate outcomes, but it orients inquiry toward intelligibility. It guides without necessitating. It grounds without competing with determinate structures.

Attempts to collapse this space into practice, convention, or revisionary habit fail for the same reason that intensional semantics failed earlier: they confuse the exercise of intelligibility with its condition. What is presupposed in every successful act of interpretation cannot itself be reduced to the history of such acts.

V. Translation, Stabilization, and a Persistent Remainder

A brief exchange following Vollet’s paper sharpens this point further. In response to a comment emphasizing the importance of language–metalanguage distinctions, translation procedures, and higher-order logical resources for reconstructing ontological commitments—especially in contemporary AI contexts—Vollet agrees that such distinctions are crucial. They allow ontological commitments to be reorganized through mapping rules that preserve predictive roles while revising theoretical vocabulary.

Yet Vollet also raises an important hesitation. Translation frameworks, he suggests, may not be the only way to model ontological stabilization. Fixed-point constructions, iterative self-mapping, and convergent computational paths might also generate stable ontological frameworks internally, without appeal to pre-given semantic foundations. From this perspective, intensional—or even “supersensible”—structures emerge as products of convergence rather than as metaphysical primitives. Quine, Vollet suggests, might allow such internal stabilization, provided it remains constrained by biological, social, or cultural selection pressures that guide coordination toward shared reference.
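
The fixed-point picture admits a minimal schematic form (my gloss on the proposal, not Vollet's notation). An interpretive scheme is revised by some updating map $f$ until revision no longer changes it:

\[
x_{n+1} = f(x_n), \qquad x^* = f(x^*).
\]

The iteration, if it converges, delivers a stable $x^*$. What it does not deliver is any verdict on whether $x^*$ is a good stabilization rather than a merely persistent one; that assessment is external to the dynamics.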

This exchange is illuminating precisely because it confirms the deeper issue. Whether one appeals to translation, fixed points, or convergence, the question remains the same: what makes stabilization intelligible as stabilization rather than mere iteration? What distinguishes convergence from coincidence, coordination from collapse, agreement from brute alignment?

No amount of internal reorganization can answer that question from within.

VI. A Limit Quine Cannot Cross

Quine was right to dismantle the myth of sense as a semantic object. He was wrong to suppose that nothing remains once the myth is dispelled. What remains is intelligibility itself—real, irreducible, and non-formal.

Extensional logic shows us, with remarkable clarity, what form can and cannot do. But it also shows us that meaning is not generated by form. It is presupposed by it. The recognition of this fact does not require a return to intensions. It requires acknowledging that intelligibility has an ontological ground that formal systems inhabit but do not produce.

To say this is not yet to speak theologically. But it is to arrive at a threshold. The problem of sense dissolves. The problem of intelligibility does not. And it is precisely at that point—where philosophy has exhausted its formal resources without collapsing into irrationalism—that a deeper account of meaning becomes unavoidable.

That account begins not with semantic enrichment, but with the recognition that the space in which meaning appears is itself grounded. 

Saturday, January 24, 2026

Quantum Collapse, Incompleteness, and the Ontology of Intelligibility: A Short Excursus

Prefatory Orientation

The discussion that follows is addressed to readers trained in theology and metaphysics rather than in physics or mathematical logic. Accordingly, its aim is not to adjudicate technical disputes within quantum theory itself, but to draw out the structural significance of those disputes for questions of intelligibility, realism, and explanation. Quantum mechanics functions here as an analogy—not because metaphysics is to be derived from physics, but because conceptual failures in one domain often expose homologous failures in another. In particular, the recurrent temptation to appeal to observers, subjects, or acts of recognition precisely at the point where explanation falters is a pattern that cuts across physics, philosophy, and theology alike.

One of the most instructive analogies for contemporary debates over intelligibility therefore arises not primarily within philosophy of language or theology, but within the foundations of quantum mechanics—specifically in the unresolved tensions between locality, completeness, and explanation. These tensions are not merely technical puzzles internal to a physical theory. They reveal fault lines concerning the relation between reality and its intelligibility, and they do so with a clarity that is often obscured in more familiar philosophical contexts.

At stake is a question that is metaphysical before it is mathematical: does reality possess determinate structure independently of observers, or must actuality itself await acts of measurement, recognition, or judgment in order to be what it is?

Put otherwise, is intelligibility grounded in being itself, or is it supplied—explicitly or implicitly—by the subject at the moment where formal description proves insufficient?

The pages that follow argue that the latter option, however tempting, functions not as an explanation but as a displacement. Appeals to subjectivity at points of theoretical failure do not resolve the problem of intelligibility; they merely relocate it. The analogy with quantum mechanics will serve to make this displacement visible, and thereby to reopen a more demanding realist alternative—one in which intelligibility is not constituted by minds, but encountered by them as already operative within reality itself.

Locality, Completeness, and the Measurement Problem

On the Copenhagen interpretation of quantum mechanics, a physical system is described by a wave function that encodes a superposition of possible states. Prior to measurement, the system is said not to possess definite physical properties. Only upon measurement does the wave function “collapse” into a single, concrete outcome.
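
In bare textbook notation (a generic schema, not tied to any particular system), the two dynamics of the theory sit side by side:

\[
|\psi\rangle = \sum_i c_i\,|\phi_i\rangle \;\;\xrightarrow{\ \text{measurement}\ }\;\; |\phi_k\rangle \quad \text{with probability } |c_k|^2.
\]

Between measurements the state evolves smoothly and deterministically under the Schrödinger equation, $i\hbar\,\partial_t|\psi\rangle = H|\psi\rangle$; at measurement it jumps discontinuously according to the Born rule above. The formalism contains both rules but no criterion for when the second supersedes the first.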

The difficulty here is not merely that collapse is probabilistic rather than deterministic. The deeper problem is that the theory provides no physical account of what collapse is. Instead, it treats “measurement” as a primitive notion, invoked precisely at the point where explanation is required. The theory thus relies on a term whose application is left formally indeterminate.

What, then, qualifies as a measurement?

  • Is it the presence of a conscious observer?
  • Is it the interaction with a macroscopic apparatus?
  • Is it an irreversible physical process?
  • Is it the registration or acquisition of information?

The Copenhagen interpretation notoriously refuses to specify necessary and sufficient conditions for the application of the term “observation.” As a result, an objective physical transition—the passage from superposition to determinate actuality—is rendered dependent upon an appeal to subjectivity that is itself left undefined. Nature’s transition from possibility to actuality is explained not by physical law, but by reference to an epistemic event whose ontological status remains obscure.

This is not a marginal technical omission. It marks a structural failure of explanation. Where the formal dynamics of the theory fall silent, subjectivity is introduced not as an object of analysis, but as a terminus of inquiry. Measurement does not explain collapse; it names the point at which explanation is deferred.

It was precisely this feature of the Copenhagen interpretation that troubled many physicists at the time, and the concern emerges with particular clarity in the Einstein–Podolsky–Rosen argument. Contrary to widespread caricature, Einstein’s objection in EPR was not motivated primarily by an attachment to classical determinism or by resistance to probabilistic laws. His concern was more fundamental. It was a concern about completeness.

A physical theory, in Einstein’s sense, is complete if every element of physical reality has a corresponding element within the theory’s description. Completeness, so understood, is not a demand for total predictive power, but for ontological adequacy. If the actualization of physical properties requires appeal to something outside the theory’s formal resources—namely, an observer, an act of measurement, or an epistemic intervention—then the theory is incomplete by its own standards.

The Copenhagen interpretation, by locating the transition from possibility to actuality at the level of observation while refusing to specify what observation is, appears to violate this criterion. The theory’s formal apparatus describes the evolution of the wave function, but the actuality of outcomes is secured only by appeal to something that the theory itself does not and cannot describe. The observer thus functions not as an element within the theory, but as a compensatory device introduced to mask a gap in ontological description.

Einstein’s worry, therefore, was not that quantum mechanics lacked determinism, but that it lacked reality—that it could not account for physical actuality without tacitly importing an epistemic surrogate at precisely the point where an ontological account was required.

EPR, Locality, and the Meaning of “Hidden Variables”

The Einstein–Podolsky–Rosen argument proceeds from a realist assumption that is deliberately modest and carefully constrained. If one can predict with certainty the value of a physical quantity without in any way disturbing the system in question, then that quantity corresponds to an element of physical reality. The assumption does not assert determinism, completeness of knowledge, or classical metaphysics. It asserts only this: that certainty without disturbance is sufficient for reality.

This assumption is not gratuitous. It articulates a minimal criterion for intelligibility within physical explanation. If reality cannot be ascribed even where prediction is certain and interaction absent, then the very notion of physical description becomes unstable. The EPR argument therefore begins not with a controversial metaphysical thesis, but with a demand internal to the practice of explanation itself.

Quantum mechanics, however, violates this assumption in the case of entangled systems. Two particles may be prepared in a single joint quantum state such that a measurement performed on one particle allows the value of a corresponding quantity in the other particle to be predicted with certainty. Crucially, this holds regardless of the spatial separation between the particles. The prediction can be made without any physical interaction with the second system.
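
The standard illustration is the spin singlet state (Bohm's later simplification; the original EPR paper used position and momentum, but the structure is the same):

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(\,|{\uparrow}\rangle_A|{\downarrow}\rangle_B - |{\downarrow}\rangle_A|{\uparrow}\rangle_B\,\bigr).
\]

Measuring the spin of particle $A$ along any chosen axis yields a definite outcome and licenses a prediction, with certainty, that particle $B$ will yield the opposite outcome along the same axis, at whatever distance.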

If one accepts the realist criterion just stated, then the predicted property of the second particle must correspond to an element of reality. Yet standard quantum mechanics denies that the particle possessed that property prior to measurement. The theory therefore forces a choice between two alternatives, neither of which is easily relinquished.

Either the particles already possess definite properties prior to measurement, in which case the quantum description is incomplete, or the act of measurement performed on one particle instantaneously affects the physical state of the other, regardless of spatial separation.

The second option entails a violation of locality. Locality, in this context, has a precise and non-negotiable meaning: no physical influence propagates faster than light, and spatial separation constrains causal interaction. This principle is not a metaphysical preference inherited from classical physics. It is a structural feature of relativistic spacetime, woven into the very framework within which modern physical theory operates.

Einstein rejected the second option. His objection was not that quantum mechanics introduced indeterminacy, nor that it abandoned classical trajectories. It was that the theory appeared to require non-local influence in order to secure determinate outcomes, thereby undermining the causal structure that relativity was meant to preserve. At the same time, Einstein did not insist that the underlying structure be deterministic in a classical sense. What he insisted upon was ontological adequacy: that physical reality not depend upon superluminal influence or epistemic intervention.

This is the point at which the language of “hidden variables” enters the discussion and where it is most often misunderstood. Hidden variables, in the EPR context, are not hypothetical classical properties smuggled in to restore determinism. They name, more generally, whatever additional structure would be required to render the theory complete—to ensure that elements of physical reality correspond to elements of the theory’s description without appeal to measurement as a primitive.

The issue, then, is not whether nature is deterministic, but whether physical actuality can be accounted for without collapsing explanation into observation. Hidden variables are not introduced to save predictability, but to preserve intelligibility: to prevent the actual from depending upon an act of measurement whose physical status the theory itself refuses to specify.

Seen in this light, the EPR argument does not demand a return to classical metaphysics. It demands consistency between physical explanation and the causal structure of spacetime. The dilemma it poses is therefore stark. Either quantum mechanics is incomplete, in that it fails to describe all elements of physical reality, or it is non-local, in that it permits physical determination without spatially mediated causation.

The force of the argument lies precisely in its refusal to resolve this dilemma by appeal to subjectivity. Measurement is not allowed to function as an ontological solvent. If physical reality becomes determinate only when observed, then explanation has been displaced rather than achieved. The EPR argument presses the question that Copenhagen defers: what in reality itself accounts for determinacy?

Bell’s Theorem and the Disentangling of Assumptions

Much of the conceptual confusion surrounding quantum mechanics in the latter half of the twentieth century arises from a persistent failure to distinguish determinism, locality, and hidden variables. These notions are routinely conflated, with the result that objections to one are mistakenly taken as refutations of the others. This confusion was decisively clarified by the work of the Northern Irish physicist John S. Bell, whose theorem remains one of the most important conceptual results in the foundations of quantum theory.

Bell proved that no theory can reproduce all the empirical predictions of quantum mechanics while preserving both locality and a minimal form of realism. Crucially, Bell’s theorem does not show that determinism is false. Nor does it show that realism is incoherent. What it shows is more precise and more troubling: any theory that reproduces the characteristic quantum correlations must either abandon locality or abandon the claim that measurement outcomes correspond to pre-existing physical properties.
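
The quantitative core of the theorem is conveniently stated in its CHSH form (the Clauser–Horne–Shimony–Holt inequality, the version actually tested in experiments). For measurement settings $a, a'$ on one wing and $b, b'$ on the other, with $E$ denoting the correlation of outcomes, every local realist theory satisfies

\[
S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,
\]

whereas quantum mechanics predicts, for suitably chosen settings on the singlet state, $|S| = 2\sqrt{2} \approx 2.83$. Experiment sides with the quantum value.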

This result is frequently misunderstood. Experimental violations of Bell inequalities are often said to refute realism outright, or to demonstrate that reality is somehow created by measurement. Neither conclusion follows. What Bell’s theorem refutes is local realism—the conjunction of two claims: first, that physical properties exist independently of measurement; and second, that causal influence is constrained by spatial separation in accordance with relativistic locality.

The structure of the result therefore matters. Bell does not force a choice between realism and quantum mechanics. He forces a choice between locality and a certain kind of realism. And even here, the realism in question is not metaphysically extravagant. It is the minimal claim that measurement outcomes reveal, rather than generate, physical properties.

Non-locality, in Bell’s sense, must also be handled with care. It does not entail that signals or information propagate faster than light. Quantum mechanics remains consistent with the no-signaling constraint. What non-locality indicates instead is something more ontologically unsettling: that the structure of physical reality cannot be exhaustively decomposed into independently existing local parts whose properties are fixed prior to interaction.

Correlation, on this view, is not an artifact of ignorance, nor a defect of description. It is ontologically primitive. The world is not merely a collection of locally self-sufficient entities whose relations are secondary. Rather, relational structure itself enters into the constitution of physical reality.

This is the point at which Bell’s result deepens, rather than resolves, the problem of intelligibility. If locality is abandoned in order to preserve realism, then the causal architecture of spacetime is no longer sufficient to account for physical determination. If realism is abandoned in order to preserve locality, then actuality becomes dependent upon measurement in precisely the way that Copenhagen presupposes without explaining. Either way, formal description reaches a limit.

What Bell’s theorem makes unavoidable is this: the actual structure of reality exceeds the explanatory resources of any theory that insists upon both local causation and observer-independent properties as traditionally understood. But it does not follow that subjectivity must therefore be invoked as an explanatory ground. That inference is precisely the mistake Bell’s result exposes.

Bell’s theorem does not license the claim that observation creates reality. It shows, rather, that the ontology presupposed by classical locality is insufficient. The demand, then, is not for epistemic supplementation, but for ontological revision. Something about the structure of reality itself—its relational, non-local character—has not yet been adequately articulated.

Bell therefore stands not as a defender of instrumentalism or observer-dependence, but as an ally of Einstein’s deeper concern: that physical theory must provide an account of actuality that does not rest upon unexplained appeals to measurement. The failure of local realism does not dissolve the problem of completeness; it sharpens it. The question is no longer whether reality is determinate independently of observers, but how such determinacy is to be understood once locality, as classically conceived, can no longer bear the explanatory weight placed upon it.

It is precisely at this juncture that the move to subjectivity appears most tempting—and most illicit. Where locality fails, observation is often invited to fill the gap. But Bell’s theorem leaves no room for this maneuver. The inadequacy it exposes is not epistemic, but ontological. What is required is not an appeal to minds, but a richer conception of physical reality itself.

Penrose and Ontological, Not Epistemic, Explanation

The mathematical physicist Roger Penrose radicalizes Einstein’s original concern by insisting that the incompleteness of quantum mechanics points not to the necessity of observers, but to the inadequacy of our ontology. Where Copenhagen relocates explanatory failure into acts of measurement, and where some post-Bell interpretations retreat into instrumentalism, Penrose insists that the problem lies elsewhere: not in what we can know, but in what there is.

Penrose rejects hidden variables in any classical or algorithmic form. He does not propose that quantum behavior is governed by undiscovered deterministic parameters that could, in principle, be computed or simulated. On the contrary, his work consistently emphasizes the limits of algorithmic explanation, both in physics and in the theory of mind. Yet this rejection of classical hidden variables does not lead him to subjectivism. It leads him instead to a demand for a deeper, non-algorithmic account of physical reality itself.

On Penrose’s view, wave-function collapse is neither a subjective act nor a mere update of information. It is an objective physical process, one that occurs independently of observers and independently of acts of measurement as epistemic events. Collapse must therefore be grounded in real features of the physical world—features that are not yet adequately captured by existing formal theories. Penrose locates the likely source of these features in the relation between quantum mechanics and gravitation, suggesting that spacetime itself may contain the resources required to account for physical actualization.
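
Penrose's concrete proposal (the Diósi–Penrose scheme, summarized here from the secondary literature rather than from any single text of his) estimates an objective collapse timescale

\[
\tau \sim \frac{\hbar}{E_G},
\]

where $E_G$ is the gravitational self-energy of the difference between the mass distributions of the superposed states: the more massive and more widely separated the superposition, the faster it resolves into a definite outcome, with no observer anywhere in the account.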

The crucial point is not the specific mechanism Penrose proposes, but the explanatory posture he adopts. Collapse, on this account, is not something that happens when we look. It is something that happens in nature. The failure of current quantum theory to account for this process is therefore not a failure of prediction or control, but a failure of ontological depth. Our theories describe how systems evolve, but not how possibilities become actualities.

Nature, on this view, does not wait upon minds in order to become determinate. Rather, minds encounter a reality whose determinacy outruns present formalization. The gap exposed by quantum mechanics is not a gap between reality and knowledge, but a gap between reality and its current theoretical articulation. To close that gap by appeal to subjectivity would be to mistake the symptom for the cause.

Penrose thus offers neither reductionism nor instrumentalism. He does not dissolve physical actuality into formal description, nor does he treat theory as a mere predictive tool devoid of ontological commitment. Instead, he presses for a richer conception of physical reality—one capable of sustaining actualization, non-local correlation, and determinate outcomes without recourse to observers as ontological triggers.

In this respect, Penrose stands as a decisive counterexample to the claim that quantum mechanics forces a retreat into epistemology. The incompleteness of the theory does not show that reality is indeterminate until measured. It shows that reality possesses structure that our present theories do not yet capture. Explanation fails, not because actuality depends upon observation, but because ontology has not yet caught up with actuality.

Penrose’s position therefore sharpens the dilemma rather than evading it. If collapse is real and observer-independent, then the ground of intelligibility must lie within nature itself. The task is not to explain how minds impose determination on an otherwise indeterminate world, but to explain how the world itself gives rise to determinacy in a way that makes knowledge possible at all.

It is precisely this ontological demand that makes Penrose so significant for the present argument. He demonstrates that one can reject classical determinism, algorithmic closure, and subject-centered explanation simultaneously—without abandoning realism. The refusal of subjectivism here is not a philosophical preference. It is an explanatory necessity forced upon us by the structure of the problem itself.

Metaphysical Analogy: Subjectivism as Placeholder

The structural predicament exposed in quantum mechanics is not unique to physics. It recurs, with remarkable consistency, across philosophy, theology, and the theory of meaning. Wherever formal explanation reaches a principled limit, the temptation arises to relocate the missing element into the subject. Observation, recognition, interpretation, or communal uptake are asked to do explanatory work precisely at the point where ontology has fallen silent.

In the Copenhagen interpretation, “measurement” functions in this way. It is invoked not as a describable physical process, but as a terminus where explanation ceases. The wave function collapses when measured, yet the theory refuses to say what measurement is. Subjectivity thus enters not as an explanandum but as a placeholder. It marks the failure of ontology while appearing to resolve it.

An analogous maneuver is widespread in contemporary philosophy and theology. When intelligibility, normativity, or meaning is said to arise only through acts of recognition, linguistic practice, or communal validation, subjectivity is again pressed into service at precisely the point where explanation falters. The claim is not merely that subjects encounter meaning, but that meaning itself is constituted by those encounters. What cannot be grounded in being is relocated into use.

This move should be resisted. Appeals to subjectivity at explanatory limits do not illuminate the phenomena in question; they merely displace the problem. To say that meaning, obligation, or intelligibility arises through recognition is not to explain how these things are possible, but to redescribe their absence as a human achievement. The explanatory burden has not been discharged. It has been deferred.

The alternative to this displacement is not reductionism, but realism. Just as Penrose insists that the actualization of physical states must be grounded in the structure of nature itself, intelligibility must be grounded in the structure of being. Subjects do not confer meaning on an otherwise mute world. They encounter a reality already ordered toward sense.

This is the metaphysical claim at stake. Intelligibility is not a psychological projection, a linguistic artifact, or a social construction. It is a real feature of the world, one that precedes and conditions any act of recognition. The failure of formal systems to exhaust meaning does not license the conclusion that meaning is subjective. It demands a richer ontology.

The same structure appears wherever explanation reaches its limits. In ethics, obligation is said to arise from endorsement or consensus. In theology, doctrine is reduced to grammar or practice. In epistemology, truth is dissolved into warranted assertibility. In each case, subjectivity functions as a compensatory mechanism. Where reality is no longer allowed to bear intelligibility, subjects are asked to supply it.

This strategy is ultimately self-defeating. Subjectivity cannot ground what it presupposes. Acts of recognition, interpretation, or judgment already operate within a space of intelligibility that they do not create. The very possibility of recognizing something as meaningful, binding, or coherent presupposes that meaning, normativity, and coherence are already operative.

The metaphysical error, therefore, lies not in acknowledging the role of subjects, but in mistaking participation for constitution. Subjects participate in intelligibility; they do not generate it. They respond to meaning; they do not invent it. To reverse this order is to confuse the conditions of encounter with the conditions of possibility.

It is here that the analogy with quantum mechanics becomes decisive. Just as the appeal to measurement in Copenhagen quantum mechanics functions as a placeholder for an absent ontology, so appeals to subjectivity in philosophy and theology function as placeholders for an absent metaphysics. In both cases, explanation is suspended rather than completed.

The task, then, is not to refine the appeal to subjectivity, but to refuse it. Where formal description fails, the demand is not for epistemic supplementation, but for ontological depth. Intelligibility must be located where it belongs: in being itself.

The real, non-formal, non-algorithmic orientation within reality by virtue of which determinate structures can count as intelligible at all is what I have termed teleo-space. It is not a mental space, a linguistic framework, or a cultural horizon. It is the ontological condition that makes formal systems, judgments, and interpretations possible without determining them in advance.

Teleo-space does not complete systems or supply missing rules. It does not legislate outcomes or guarantee consensus. It orients without necessitating and grounds without competing. It names the fact that reality itself is ordered toward intelligibility, even where formalization fails.

Across physics, logic, and metaphysics, the lesson is the same. Where explanation reaches a limit, the choice is not between subjectivism and irrationalism. The alternative is realism about intelligibility itself. Subjectivity is not the source of sense, but its respondent. And incompleteness, far from threatening intelligibility, is the most reliable sign that intelligibility exceeds our forms of capture.

Gödel, Formalization, and the Refusal of Subjectivism

The structural lesson drawn from quantum mechanics is not weakened but reinforced when one turns from physics to logic and the theory of formal systems. Here, however, a further clarification is required, especially for readers outside mathematics. The term incompleteness does not carry the same meaning across domains, and failure to distinguish its senses has generated persistent confusion in philosophical theology.

The incompleteness theorems of Kurt Gödel concern not physical theories, but formal systems: axiomatic frameworks governed entirely by explicit rules of symbol manipulation. Gödel demonstrated that any consistent, effectively axiomatized formal system sufficiently expressive to encode elementary arithmetic must exhibit two structural features.

First, there will exist true statements expressible within the system that cannot be proven using the system’s own axioms and rules. Second, such a system cannot demonstrate its own consistency without appeal to principles stronger than those contained within the system itself.
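
In compact schematic notation (a standard statement, with $\mathrm{Prov}_T$ the system's arithmetized provability predicate and $\mathrm{Con}(T)$ its consistency sentence):

\[
\text{(G1)}\quad T \nvdash G_T \ \text{ and } \ T \nvdash \neg G_T, \quad \text{where } T \vdash G_T \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G_T \urcorner);
\]
\[
\text{(G2)}\quad T \nvdash \mathrm{Con}(T).
\]

For the unprovability of $\neg G_T$, Gödel's original argument needed a hypothesis slightly stronger than consistency ($\omega$-consistency), later shown removable by Rosser; nothing in the philosophical point below turns on that refinement.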

These limitations are not the result of human ignorance, cognitive finitude, or technical immaturity. They are not provisional defects awaiting future repair. They are structural. Truth outruns formal derivability in principle. Any attempt to close the gap by adding further axioms merely yields an enlarged system with new undecidable truths of its own, so long as that system remains consistent and effectively axiomatized.

What matters for present purposes is not simply the existence of undecidable propositions, but the status of the judgments by which such propositions are recognized as true. A Gödel sentence is not an ineffable mystery. Its truth can be seen—rigorously and non-arbitrarily—from a standpoint that understands what the system is doing. Yet this recognition cannot be generated by the system’s own syntactic resources.

Here the temptation toward subjectivism arises. If truth exceeds proof, one may be tempted to conclude that what cannot be formally derived must be fixed instead by an act of judgment understood as voluntaristic, conventional, or decisionistic. In logic, this temptation takes the form of psychologism or decisionism: the view that where formal derivation fails, truth is supplied by stipulation, agreement, or choice.

This move is a mistake.

The act of recognizing the truth of a Gödel sentence is not subjective in this sense. It is neither arbitrary nor expressive of preference. It is constrained—indeed necessitated—by the structure of the formal system itself. The judgment does not add content to the system; it acknowledges what the system, by its own resources, cannot articulate.

This is where the analogy with quantum mechanics must be handled with care. The incompleteness of quantum mechanics is not Gödelian in a strict sense. Quantum mechanics is not a formal system in the logician’s sense, and wave-function collapse is not an undecidable proposition. The incompleteness at issue in quantum theory concerns ontological description: whether the theory provides a complete account of physical reality without appeal to observers.

Nevertheless, the structural parallel is exact. In both cases, formal description reaches a principled limit. In logic, derivation fails to exhaust truth. In quantum mechanics, formal evolution fails to exhaust physical actuality. In neither case does the excess license an appeal to subjectivity as an explanatory ground.

Yet the temptation is the same. Where formal systems fail to close upon themselves, one may attempt to relocate the missing element into acts of recognition, observation, or judgment. In logic, this takes the form of psychologism or conventionalism. In physics, it takes the form of observer-dependent collapse. In both cases, subjectivity is asked to supply what formalism cannot.

This relocation does not solve the problem. It displaces it.

The necessity of judgment in Gödel’s theorem does not mean that truth depends upon judgment. It means that judgment responds to a structure of intelligibility that exceeds formal capture. This brings us squarely into the terrain of reflecting judgment as articulated by Immanuel Kant.

Reflecting judgment operates precisely where no determining rule can be given in advance. It does not legislate content, invent norms, or complete systems by fiat. Rather, it orients inquiry toward coherence, adequacy, and sense in the presence of formal limitation. Its necessity is not provisional but structural. Without reflecting judgment, no formal system could be recognized as truth-apt at all.

Here again the temptation arises to relocate this function into subjectivity. Reflecting judgment is often misread as a merely human capacity supplementing otherwise self-sufficient forms. But this reverses the order of dependence. Judgment does not generate intelligibility. It responds to it. The very possibility of judging that a system is incomplete, adequate, or in need of revision presupposes a space of intelligibility not constituted by judgment itself.

Gödel and Kant thus converge on the same point from opposite directions. Formal systems disclose their own limits, and judgment becomes necessary not because meaning is subjective, but because intelligibility is richer than form. The excess that resists formal capture is not supplied by the subject. It is encountered by the subject as already operative.

This is precisely the role played by teleo-space. Teleo-space names the real orientation toward intelligibility that makes possible both the recognition of formal limits and the rational movement beyond them. It does not dictate conclusions, supply algorithms, or complete systems. It orients without necessitating and grounds without competing. And it does so independently of any appeal to consciousness, language use, or communal validation.

Across logic, physics, and judgment, the lesson is consistent. Where formal closure fails, the choice is not between subjectivism and irrationalism. The alternative is realism about intelligibility itself. Just as quantum mechanics requires an ontology richer than Copenhagen allows, and formal logic requires a conception of truth that exceeds proof, so metaphysics requires an account of intelligibility that does not rest upon minds.

Subjects judge, measure, and interpret—but they do so within a reality already ordered toward sense. Formal incompleteness does not threaten intelligibility. It discloses its depth.

Conclusion: Incompleteness and the Logos

The argument developed across the preceding sections converges on a single structural insight. Incompleteness is not a threat to intelligibility; it is its most reliable witness. Wherever formal systems reach their principled limits—whether in quantum mechanics, in logic, or in rational judgment—the temptation arises to appeal to subjectivity as an explanatory supplement. Observers, recognizers, interpreters, or communities are asked to supply what formal description cannot. Yet such appeals do not resolve the problem they address. They merely relocate it.

In quantum mechanics, the appeal to measurement functions as a placeholder where ontology has fallen silent. In logic, the appeal to decision or convention attempts to compensate for the excess of truth over proof. In philosophy and theology, the appeal to recognition or communal practice substitutes epistemic uptake for ontological ground. Across these domains, the pattern is the same. Where formal closure fails, subjectivity is conscripted to do metaphysical work it cannot sustain.

The alternative is neither irrationalism nor reductionism. It is realism about intelligibility itself. The failure of formal systems to exhaust meaning does not indicate that meaning is subjective, emergent, or merely pragmatic. It indicates that intelligibility is grounded more deeply than form. Formal rigor does not abolish this depth. It reveals it.

Quantum mechanics requires an ontology richer than the Copenhagen interpretation allows—one capable of sustaining physical actuality without appeal to observers. Logic requires a conception of truth that exceeds derivability without collapsing into psychologism. Judgment requires an orientation toward coherence and adequacy that cannot be reduced to rules without regress. In each case, intelligibility is presupposed, not produced.

What these domains jointly disclose is the same structural fact. There exists a real, non-formal, non-algorithmic orientation within reality by virtue of which determinate structures can count as intelligible at all. This orientation does not dictate content, supply algorithms, or complete systems. It orients without necessitating and grounds without competing. It is encountered wherever sense is made, truth is recognized, or explanation succeeds—yet it is not itself an object among objects or a rule among rules.

This is what I have named teleo-space. Teleo-space is not a mental horizon, a linguistic framework, or a cultural achievement. Nor is it a hidden metaphysical mechanism. It is the ontological condition under which formal systems, theories, and judgments can function as intelligible without being self-grounding. Subjects participate in this space; they do not constitute it. They respond to intelligibility; they do not generate it.

At this point, the theological stakes can no longer be postponed. Philosophy can describe the structure of intelligibility and expose the limits of formalization. It can show that meaning, truth, and adequacy presuppose a ground that is neither formal nor subjective. But philosophy cannot generate that ground from within its own procedures without circularity. Reason reaches its limit not in incoherence, but in recognition.

The doctrine of the Logos names precisely this recognition. Logos does not designate a proposition, a system, or a highest concept. It names that by virtue of which articulation, truth, and intelligibility are possible at all. Logos is not what is said, but that in which saying can be true. It is not the content of meaning, but the ground of its possibility.

To invoke the Logos here is not to import theology as an explanatory add-on. It is to name what metaphysical reflection already requires but cannot finally articulate. The Logos is not an object within reality, nor a principle that competes with finite causes. It grounds without displacing. It orders without coercing. It sustains intelligibility without exhausting itself in any determinate form.

Seen in this light, the failures of formal closure in physics and logic do not undermine theological realism. They confirm it. They show that reality cannot be exhausted by formal systems, algorithms, or procedures—not because it is opaque or irrational, but because it is richer than such modes of capture allow. Intelligibility exceeds formalization because it is grounded more deeply than form.

Subjects do not supply meaning where reality is mute. They respond to a world already ordered toward sense. Judgment, interpretation, and understanding are participatory acts, not constitutive ones. They presuppose an antecedent Logos that makes truth, coherence, and actuality possible at all.

Incompleteness, therefore, is not a deficit to be overcome by further formalization or epistemic substitution. It is the trace of intelligibility’s depth. It marks the point at which explanation refuses subjectivist displacement and demands ontological seriousness.

For the theologian, this reflection is not an excursion into alien territory. It is a contemporary articulation of an ancient conviction: that reason is neither the enemy of faith nor its foundation, but its participant—because reality itself is already ordered toward meaning. The Logos is not threatened by incompleteness. Incompleteness is the sign of its inexhaustibility.


Wednesday, January 14, 2026

On Explanatory Closure, Intelligibility, and the Limits of Algorithmic Rationality

I. Explanatory Success and a Residual Question

Recent work in metaphysics, philosophy of science, and the theory of explanation has emphasized the structural parallels between causal, logical, and metaphysical explanation. In each domain, explanation appears to involve a tripartite structure: an explanans (that which explains), an explanandum (that which must be explained), and a principled relation that connects them. Causes explain effects by standing in law-governed relations; axioms explain theorems by inferential rules; fundamental facts explain derivative facts by relations of metaphysical dependence.
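Schematically, and with notation introduced here only to display the shared form:

\[
\text{causal: } C \Rightarrow_{L} E, \qquad
\text{logical: } A \vdash_{R} T, \qquad
\text{metaphysical: } F \prec_{g} M,
\]

where $L$ names the governing laws, $R$ the inference rules, and $g$ the relation of metaphysical dependence. In each case the left-hand term is the explanans, the right-hand term the explanandum, and the subscripted relation the principled connection between them.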

This structural alignment is not accidental, but reflects a broader aspiration toward explanatory closure: the ideal that, once the relevant principles are specified, what follows is fixed. Explanation, on this picture, consists in situating a phenomenon within a framework whose internal relations determine its place. The better the framework, the less residue remains.

There is much to recommend this ideal. It captures the power of formalization, the success of scientific modeling, and the clarity afforded by explicit inferential structures. It also motivates the widespread hope that explanation can, in principle, be rendered algorithmic: given sufficient information about initial conditions and governing principles, outcomes should be derivable.

And yet, explanatory practice itself resists this aspiration in subtle but persistent ways. Even in domains where formal rigor is maximal, explanation does not terminate merely in derivation. Judgments of relevance, adequacy, scope, and success continue to operate, often tacitly, at precisely those points where explanation appears most complete.

The question to be pursued in what follows is therefore not whether explanation works (it manifestly does), but whether explanatory success exhausts the conditions under which explanation is recognized as successful. What remains operative, even where explanation appears closed?

II. Dependence Relations and the Temptation of Functionalism

The appeal of tripartite explanatory models lies in their promise of determinacy. Once the intermediary relation is fixed—causal law, inference rule, metaphysical dependence—the explanandum appears as a function of the explanans. To explain is to map inputs to outputs under stable rules.

This functional picture has been especially influential in recent metaphysics. If derivative facts depend on more fundamental facts in accordance with metaphysical principles, then explanation seems to consist in exhibiting a function from the fundamental to the derivative. Once the base facts and principles are in place, the result follows.

However compelling this picture may be, it quietly imports a further assumption: that the adequacy of the explanatory mapping is itself secured by the same principles that generate it. In other words, it assumes that once the function is specified, there is nothing left to assess.

But this assumption is false to explanatory practice.

Even in logic, where inferential rules are explicit, the correctness of a derivation does not by itself settle whether the axioms are appropriate, whether the system captures the intended domain, or whether the conclusion answers the question posed. Similarly, in metaphysics, identifying a dependence relation does not determine whether it is explanatory rather than merely formal, illuminating rather than trivial, or relevant rather than artificial.

The functional picture thus explains too much too quickly. It conflates derivability with explanatory satisfaction. The former can be fixed by rule; the latter cannot.
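The asymmetry can be made concrete. The sketch below is written for this discussion alone (the checker and all of its names are illustrative, not drawn from any source): it verifies modus-ponens derivations purely mechanically, which is precisely what can be fixed by rule, while the further questions of adequacy have no counterpart function to call.

# Illustrative sketch only: derivability is checkable by rule;
# explanatory adequacy has no analogous checker.

def check_derivation(axioms, steps):
    """Mechanically verify a derivation that uses only modus ponens.

    A step is either an axiom, or a triple ('mp', i, j) citing two
    earlier steps: step i must be a conditional ('->', p, q) and
    step j its antecedent p; the step then yields q.
    """
    derived = []
    for step in steps:
        if step in axioms:
            derived.append(step)
        elif isinstance(step, tuple) and step[0] == 'mp':
            _, i, j = step
            conditional, antecedent = derived[i], derived[j]
            if (isinstance(conditional, tuple) and len(conditional) == 3
                    and conditional[0:2] == ('->', antecedent)):
                derived.append(conditional[2])
            else:
                return False
        else:
            return False
    return True

# Whether q is derivable from {p, p -> q} is settled by the checker alone:
axioms = ['p', ('->', 'p', 'q')]
steps = ['p', ('->', 'p', 'q'), ('mp', 1, 0)]
assert check_derivation(axioms, steps)

# No comparable function settles whether these axioms suit the intended
# domain, or whether the conclusion answers the question actually posed.
# Those are the judgments the functional picture leaves unaccounted for.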

This gap is not accidental. It reflects a structural feature of explanation itself.

III. Explanatory Adequacy and the Irreducibility of Judgment

Consider the role of judgment in explanatory contexts that are otherwise maximally formal. In logic, the selection of axioms, the interpretation of symbols, and the identification of an intended model are not dictated by the formal system itself. In science, empirical adequacy underdetermines theory choice; multiple frameworks may fit the data equally well while differing in unification, simplicity, or fruitfulness. In metaphysics, competing accounts of grounding may be extensionally equivalent while differing profoundly in explanatory character.

In each case, explanation requires decisions that are not compelled by the formal machinery. These decisions are not arbitrary, nor are they merely psychological. They are normative: they concern what counts as explaining rather than merely deriving.

Crucially, these judgments are not external add-ons to explanation. They are conditions under which explanatory relations can function as explanations at all. A mapping from explanans to explanandum becomes explanatory only insofar as it is situated within a space of assessment in which relevance, adequacy, and success can be meaningfully evaluated.

Attempts to eliminate this space by further formalization merely reproduce it at a higher level. Meta-rules governing relevance or adequacy would themselves require criteria for correct application. The regress does not terminate in a final algorithm. What persists is the necessity of judgment.
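A toy rendering may help fix the shape of this regress (the functions below are invented purely for illustration):

def adequate(explanation, criterion):
    """Level 0: apply an explicit criterion of adequacy."""
    return criterion(explanation)

def criterion_aptly_chosen(criterion, meta_criterion):
    """Level 1: whether that criterion was the right one to apply
    is a further question, answerable only by a further criterion."""
    return meta_criterion(criterion)

# Level 2 would interrogate meta_criterion in the same way, and so on.
# Any finite tower of such functions leaves its topmost member
# unassessed: the judgment that the last criterion is apt is supplied
# from outside the tower, never generated within it.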

This necessity should not be misunderstood. It does not signal a failure of rationality, nor an intrusion of subjectivity. Rather, it reveals that rational explanation presupposes a non-algorithmic space within which determinate relations can be taken as intelligible, appropriate, or successful.

Explanation, in short, presupposes intelligibility. And intelligibility is not itself a function of the explanatory relations it makes possible.

IV. Theory Choice, Model Adequacy, and the Limits of Formal Closure

The persistence of judgment becomes especially visible in contexts of theory choice and model adequacy, where formal success does not settle explanatory priority. In such cases, multiple frameworks may satisfy all explicitly stated constraints while nevertheless differing in their capacity to illuminate, unify, or orient inquiry. The choice among them is not determined by additional derivations, but by evaluative considerations that are internal to rational practice yet irreducible to rule.

This phenomenon is familiar across domains. In logic, distinct formal systems may validate the same set of theorems while differing in expressive resources or inferential economy. In the philosophy of science, empirically equivalent theories may diverge in their explanatory virtues—simplicity, coherence, depth, or integration with neighboring domains. In metaphysics, competing accounts of dependence or fundamentality may agree extensionally while offering incompatible explanatory narratives.

What is striking in these cases is not disagreement as such, but the form disagreement takes. The dispute is not over whether a rule has been followed correctly, nor over whether a derivation is valid. It concerns whether a framework makes sense of the phenomena in the right way—whether it captures what is explanatorily salient rather than merely formally sufficient.

No finite list of criteria resolves such disputes without remainder. Attempts to formalize explanatory virtues inevitably encounter the same problem they seek to solve: the application of the criteria themselves requires judgment. To ask whether a model is sufficiently unified, sufficiently simple, or sufficiently illuminating is already to presuppose a background sense of what counts as unity, simplicity, or illumination here rather than there.

This does not imply that theory choice is subjective, conventional, or arbitrary. On the contrary, the judgments involved are responsive to real features of the domain under investigation. But responsiveness is not compulsion. The domain constrains judgment without dictating it. Explanatory rationality thus occupies a space between determination and indifference—a space in which reasons can be given, criticized, refined, and sometimes revised, without being reduced to algorithmic selection.

The significance of this point is often underestimated because it emerges most clearly at moments of philosophical maturity rather than at the level of elementary practice. When a framework is first introduced, its power lies in what it enables. Only later, once its success is established, does the question arise of how that success is to be assessed, limited, or compared with alternatives. At that stage, explanation turns reflexive: it must account not only for its objects, but for its own adequacy as explanation.

What becomes apparent in such moments is that explanatory closure is never purely internal to a system. Even the most formally complete framework remains dependent on a space of evaluation in which its claims can be judged relevant, sufficient, or illuminating. This space is not itself a further theory competing with others. It is the condition under which theories can compete meaningfully at all.

The persistence of this evaluative dimension should not be regarded as a temporary limitation awaiting technical resolution. It is a structural feature of rational inquiry. Explanation advances not by eliminating judgment, but by presupposing it—quietly, continuously, and indispensably.

V. Articulation, Revision, and a Limit Case for Algorithmic Explanation

The limits identified above become especially clear when we consider not the objects of explanation, but the activity of explanation itself: the practices of articulation, revision, and defense through which theoretical frameworks are proposed and sustained. These practices are not peripheral to rational inquiry. They are constitutive of it. Yet they sit uneasily within accounts that aspire to explanatory closure through algorithmic or law-governed relations alone.

Consider a familiar kind of case from the history of twentieth-century psychology and philosophy of science: a theorist committed to a thoroughly naturalistic and algorithmic account of human behavior undertakes the task of writing a systematic defense of that very account. The activity involves drafting, revising, responding to objections, anticipating misunderstandings, and adjusting formulations in light of perceived inadequacies. The goal is not merely to produce text, but to get the account right—to articulate it in a way that clarifies its scope, resolves tensions, and persuades a critical audience.

From the standpoint of the theory being defended, the behavior involved in this activity may be describable in causal or functional terms. One may cite conditioning histories, environmental stimuli, neural processes, or computational mechanisms. Such descriptions may be true as far as they go. But they do not yet explain what is explanatorily central in the context at hand: namely, why this articulation rather than another is judged preferable, why a given revision counts as an improvement rather than a mere change, or why the theorist takes certain objections to matter while setting others aside.

These judgments are not epiphenomenal to the enterprise. They are what make the activity intelligible as theorizing rather than as mere behavior. To revise a manuscript because a formulation is inadequate is to operate with a norm of adequacy that is not supplied by the causal description of the revision itself. To aim at persuasion is to treat reasons as bearing on belief, not merely as inputs producing outputs.

Importantly, the difficulty here is not that the theory fails to predict or describe the behavior in question. It may do so successfully. The difficulty is that prediction and description do not exhaust explanation in this context. What remains unexplained is how the theorist’s activity can be understood as responsive to reasons—as governed by considerations of correctness, clarity, and relevance—rather than as merely following a causal trajectory.

One might attempt to extend the theory to include meta-level explanations of these practices. But such extensions merely relocate the problem. Any account that treats theoretical articulation as the output of a function—however complex—must still presuppose criteria by which one articulation is taken to be better than another. Those criteria cannot themselves be generated by the function without circularity. They must already be in place for the function to count as explanatory rather than as merely generative.

Consider a function d that specifies the dependency relations by virtue of which a metaphysical system M is explained on the basis of more fundamental objects, properties, relations, or states of affairs F. On this view, F together with d metaphysically explains M.

The question that immediately arises concerns the status of d itself. Is d something that admits of explanation, or is it not? If d is explained, then there must be some more basic function p in virtue of which d obtains. But once this path is taken, it is difficult to see how an infinite regress is avoided, since the same question must then be raised concerning p.
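Schematically (the notation is mine): the functional view holds that

\[
M = d(F),
\]

and the threatened regress then takes the form

\[
d = p(F_1), \qquad p = p'(F_2), \qquad p' = p''(F_3), \;\ldots
\]

with each answer to the question of what explains the explainer introducing a new function that stands in need of the same treatment.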

Suppose, alternatively, that d is not in need of explanation—that it is primitive, incorrigible, or somehow self-evident. This move, however, is problematic. Why should a metaphysical dependency function enjoy a privileged status denied to laws of nature or other explanatory principles? One might argue that certain transformation rules in logic possess a form of self-evidence or decidability, but this cannot plausibly be extended to metaphysical dependency relations. If it could, metaphysics would collapse into a formal logical system, contrary to its actual practice.

The difficulty, then, is not that metaphysical explanation fails, but that modeling it as a function obscures the normative and non-algorithmic judgments that are required to identify, assess, and deploy dependency relations in the first place.

This point does not target any particular theory as incoherent or self-refuting. The issue is structural, not polemical. Explanatory frameworks that aspire to algorithmic completeness necessarily presuppose a space in which articulation, revision, and defense are assessed as norm-governed activities. That space is not eliminated by successful explanation; it is activated by it.

The case thus serves as a limit test. Where explanation turns reflexive—where it must account for its own articulation and adequacy—the aspiration to closure gives way to dependence on evaluative judgment. The theorist’s practice reveals what the theory itself cannot supply: the conditions under which its claims can be meaningfully proposed, criticized, and improved.

VI. Explanatory Ambition and a Structural Constraint

The preceding analysis does not challenge the legitimacy of algorithmic, causal, or formally articulated explanation. Nor does it deny the success of contemporary explanatory frameworks in their respective domains. What it challenges is a specific aspiration: the hope that explanation can be rendered fully self-sufficient—that once the relevant relations are specified, nothing further is required for explanatory adequacy.

What emerges instead is a structural constraint on explanatory ambition. Explanatory relations, however rigorous, do not determine their own adequacy as explanations. They presuppose a space in which relevance, success, and improvement can be meaningfully assessed. This space is not external to rational inquiry, nor does it compete with formal explanation. It is internal to the very practice of offering, revising, and defending explanations as such.

This conclusion should not be misunderstood as reintroducing subjectivism, voluntarism, or irrationalism. The judgments involved are constrained by the domain under investigation and answerable to reasons. But they are not compelled by rules alone. Explanation constrains judgment without exhausting it. The possibility of error, disagreement, and revision is not a defect of rational inquiry but a condition of its vitality.

Nor does this conclusion invite a regress to foundational doubt. The space of judgment at issue is not a prior theory awaiting justification. It is operative wherever explanation functions successfully. To recognize its indispensability is not to abandon explanatory rigor, but to acknowledge what rigor already presupposes.

The temptation to explanatory closure is understandable. It reflects the genuine power of formal systems and the desire to secure rationality against arbitrariness. But when closure is taken to be complete, it obscures the very practices through which explanations gain their standing. What is lost is not explanation itself, but intelligibility—understood as the condition under which explanation can count as illuminating rather than merely generative.

The upshot, then, is modest but firm. Explanation does not collapse into derivation, because rational inquiry cannot dispense with judgment. This is not a contingent limitation to be overcome by future theory, but a permanent feature of explanatory practice. Any account that neglects it risks mistaking formal success for explanatory sufficiency.