Paraconsistent Logic and AI models

Hi everyone. I'm not much of a programmer, but I have some background in logical systems and the philosophy of language. Here are some thoughts on the current limitations of LLMs and AI models. Care to join in, share your point of view, and offer some perspective on why I might be onto something or just completely wrong?

A true Epistemology for Artificial Intelligence

By Daniel Fonseca

Philosopher, writer, and investigative journalist.

It’s worth mentioning that the idea for writing this article came from a conversation with an artificial intelligence (Grok) in which the responses were nothing more than crazy hallucinations. Funny, of course, but still, machine hallucinations.

The aim of this article is to reflect on the model adopted by Big Tech companies for programming Large Language Models (LLMs) and to point out its limitations – evidenced by the high number of machine hallucinations – and to propose a hybrid model for LLM programming based on a combination of Kantian Judgment Theory and paraconsistent logic (or fuzzy logic). The proposal of this article can be summarized as follows: a hybrid LLM model that is not limited to mere statistical calculations.

The current limitations of LLMs

The current model of LLMs can be summarized as follows: given a prompt, the AI performs a matrix calculation to predict the next words that answer what was proposed in the prompt. The business model of Big Tech companies is that if an LLM is large enough, it will eventually achieve the holy grail of AGI. The problem with this view is that it confuses the category of quality with that of quantity – let us remember the old Aristotle who already said that they are different things and one does not replace the other. It is, therefore, a business model based on a logical fallacy.

In his Sophistical Refutations, Aristotle identified the treatment of two distinct categories as if they were one as a logical fallacy. Two distinct predicates cannot belong to the same category. That is, the size of a neural network, even an artificial one, is not the same thing as the production of intelligence. This is observable in biology: a whale’s brain has more neural connections than a human’s, yet a human is more intelligent than a whale.

The problem with Big Tech is described in that anecdote about the monkey and the typewriter: given enough attempts, a monkey pressing random keys on a typewriter will eventually produce its own Iliad. Big Tech bases its business on the fallacy that if it creates sufficiently complex statistical algorithms for predicting the next word, artificial intelligence will emerge.

Perhaps it’s just ironic that companies with tens of thousands of engineers don’t know Aristotle and therefore don’t understand that it’s impossible to arrive at a conclusion based solely on statistical calculations. Oh! How much we need a philosopher among these engineers!

What is the proposal for a true epistemology of Artificial Intelligence?

The intention here is not to say that Artificial Intelligence will surpass human intelligence, but rather to provide a foundation in the Philosophy of Language and the Philosophy of Logic to establish a new paradigm for artificial intelligence. The aim is to propose a hybrid model for AI that goes beyond mere statistical calculation, establishing the true limits of AI and proposing a programming model with a layer preceding the statistical calculations derived from training artificial neural networks.

Artificial intelligence will never be intelligent, because that is a quality that arises from billions of years of natural selection and evolution of biological beings. And it will never be artificial, because, since it is nothing more than an algorithm, it will always need someone to program it. Or, in Aristotelian terms: the predicate of A is not the same as the predicate of B, that is, the intelligence emerging from biological beings is not the same as the intelligence of digital machines.

As Thomas Aquinas said, rereading Aristotle: a creature cannot be endowed with more substance than its own creator. And this is what Big Tech companies sell as business models: a computer more intelligent than its own creator. This business model certainly serves for financial speculation, but in the real world, it’s nothing more than a fallacy. Computers may be better at performing some specific tasks than human beings, but they will never be human or artificial life forms. What Big Tech proposes is the same dream as Pinocchio: a creature that gains its own life and becomes human. It’s a pity for the Big Tech business model that there’s no fairy godmother to make this dream come true.

A Hybrid AI Model: or an AI that doesn’t dream like Pinocchio.

Boolean Logic and Paraconsistent Logic

From Plato’s later dialogues and, especially, in Russell’s Philosophy of Language, it has been almost universally accepted that truth is an attribute of judgments, grounded in the correspondence between a proposition and the real world. Since AI computers are merely complex algorithms, they can never assign a truth value to what exists and what does not exist in the real world, as this task will always be subject to what has been programmed. AI engineers have created a good solution to this problem: reinforcement training. But note, dear reader, that the truth value is obtained through the statistical extrapolation of a truth value given by a human being.

The problem with removing the human component from assigning truth value is the same as believing that a book can read itself and teach itself what it has learned by reading itself. Science uses double-blind tests so that it is not the same author who proposes the status of scientific truth to a study. The person analyzing data cannot be the same agent who proposes the data. And this limitation of current generative AIs is what produces an abysmal amount of machine hallucinations. It is a violation of the scientific method: a truth value assigned by itself to itself through complex statistical calculations.

The logical, unavoidable, and necessary consequence of this limitation is that AIs will never reach the much-desired level of General Artificial Intelligence, because an algorithm is nothing more than an algorithm, that is, the execution of a specific computational task from an input that generates a statistically predictable output.

So far we have established two key points of this hybrid AI model: 1) that the size of the LLM will not give rise to superhuman intelligence and 2) that the cause of machine hallucinations lies in the machine learning model employed by Big Tech companies, that is, reinforcement learning is what generates machine hallucinations, because if predictive statistical calculations of the next word of a proposition are made large enough, any proposition can be reached.

If the reader has already executed the same prompt several times, they may have noticed that each time there is a slightly different response. This happens because current AIs are programmed so that in their matrix calculations there is a certain degree of uncertainty in the output. It’s an attempt to emulate human creativity. A solution that seems good, but is another factor generating machine hallucinations. Inevitably, this degree of uncertainty intentionally added to matrix calculations generates logical explosions not predicted by mere mathematical prediction. The problem, in technical terms, is believing that binary Boolean logic can accommodate degrees of uncertainty beyond the mere assignment of the two truth values “true” and “false”.
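To make this concrete, here is a minimal sketch of temperature sampling, the standard way this decoding randomness is introduced; the vocabulary and next-token scores below are invented for illustration, not taken from any real model.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample a token index from scores softened by a temperature.

    Higher temperatures flatten the distribution, which is why the
    same prompt can yield different continuations on different runs.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical next-token scores for the prompt "The water is ..."
vocab = ["hot", "cold", "lukewarm", "wet"]
logits = [2.1, 1.9, 1.5, 0.3]
for _ in range(3):
    print(vocab[sample_next_token(logits)])  # may differ from run to run
```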

The solution proposed by Big Tech engineers, which aims to emulate human creativity, is flawed from its very premise. Language is an inconsistent system, that is, non-linear. In this sense, all the programming logic of an AI must be based on paraconsistent logic.

The central point of programming AIs with paraconsistent logic lies in preventing the statistical calculation that predicts the proposition from collapsing into triviality – that is, into a trivial and inconsistent system.

In short, the problem with using Boolean logic in a language system lies in the logical explosion where everything can be inferred from everything else through a mere non-scientific Aristotelian syllogism.
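For readers unfamiliar with the principle of explosion (ex contradictione quodlibet), this is the standard classical derivation; from a contradiction, an arbitrary proposition $B$ follows:

```latex
\begin{align*}
1.\;& A \land \lnot A && \text{assumed contradiction} \\
2.\;& A               && \text{from 1, conjunction elimination} \\
3.\;& A \lor B        && \text{from 2, disjunction introduction, } B \text{ arbitrary} \\
4.\;& \lnot A         && \text{from 1, conjunction elimination} \\
5.\;& B               && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```

Paraconsistent logics block exactly this derivation, typically by restricting disjunctive syllogism when contradictions are present.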

In a practical example, consider the prompt: “Is water at 35 degrees hot or cold?”. In the Boolean matrix calculation, the algorithm compares the propositions stating whether the water is hot or cold against its database and, based on this, generates an answer that considers only the statistical value given by a combinatorial analysis of whether 35 degrees is equivalent to “being hot” or “being cold”. This calculation does not allow the answer to be “at 35 degrees the water is lukewarm”, because the concept of lukewarm is a contradictory concept that, in the logical systems currently employed by AIs, generates a logical explosion (that is, an invitation to hallucination).

Something being “warm” is a truth value not allowed by Boolean binary logic, because it is something intermediate between the truth values “absolutely true” and “absolutely false,” 0 or 1. The paraconsistent approach, on the other hand, allows intermediate values between “absolutely true” and “absolutely false.” In programming, it is possible to say that water at 35 degrees has a truth value of 0.5 hot and 0.5 cold.
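As a minimal sketch of such graded truth values, here are invented membership functions for “hot” and “cold”; the thresholds are arbitrary and chosen only so that 35 degrees comes out exactly 0.5/0.5, matching the example above. (Strictly, graded membership is fuzzy rather than paraconsistent machinery, a distinction the reply below takes up.)

```python
def cold(temp_c):
    """Degree to which a temperature counts as 'cold' (invented ramp)."""
    if temp_c <= 20:
        return 1.0
    if temp_c >= 50:
        return 0.0
    return (50 - temp_c) / 30  # linear ramp between 20 and 50 degrees

def hot(temp_c):
    """Degree to which a temperature counts as 'hot'."""
    return 1.0 - cold(temp_c)

t = 35
print(f"hot({t}) = {hot(t):.2f}, cold({t}) = {cold(t):.2f}")
# Both degrees are 0.50: intermediate values license "lukewarm"
# instead of forcing a binary hot/cold verdict.
```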

Unlike current AIs, a paraconsistent AI allows assigning non-absolute truth values to a proposition. Paraconsistent logic is a model of logic that does not collapse when given an explosive proposition in the face of a contradiction that is only a contradiction in a Boolean binary system.

In the example given above, of lukewarm water, a Boolean system collapses when trying to assign an absolute truth value (necessarily true or necessarily false) to something defined as lukewarm. It’s worth repeating that, in the current AI system, given the limitations of binary logic, water is only allowed to be hot or cold. The concept of lukewarm is unthinkable within this binary computing logic.

An even more serious example of binary logic programming is found in the proposition: “Why is water cold at 20 degrees, lukewarm at 35, and hot at 50 degrees?” The Boolean model of matrix statistical calculation lacks sufficient mathematical tools to answer “why” questions, or questions whose answer requires an analysis of subjective perception. What current AIs do is search their database for similar propositions and infer an answer from them through a statistical calculation of equivalence between the prompt and the database.

This limitation is another invitation to hallucinations, since the classical logic adopted by AIs presupposes consistent systems free from contradiction. This prompt comparing different temperatures, which has a hidden premise (namely, a subjective judgment), is a contradiction that generates a logical explosion in the statistical matrix calculation of the generative AI’s predictive algorithm.

Another factor that invites the machine hallucinations generated by Boolean logic in statistical matrix calculations lies in the fact that “reasoning,” or what big tech companies sometimes call “deep thinking” or “think harder,” is based merely on creating hypothetical answers from inference models focused on what Aristotle called the “copula” of a premise. The central problem is that in classical logic systems, a contradiction trivializes the argument.

The problem with Boolean logic is that, through a contradiction (A = 1 and “not-A” = 0), trivialization occurs; that is, for current AIs, every subsequent premise is deducible from the previous premise. In other words, the model in which the AIs are programmed defines that the “next word” is determined by a statistical calculation of relevance comparison between the “previous word” and the database. The recurring hallucinations of current AI models lie in the fact that mere statistical calculation makes any proposition liable to be true. Therefore, different prompts can generate opposite and contradictory answers. It is the logical explosion of the purely statistical inference system.

It’s worth noting that paraconsistent logic does not consist in negating the principle of non-contradiction, but rather in refining contradictory propositions so that they don’t trivialize the system and lead to absurd inferences resulting from mere Boolean matrix statistical calculations predicting the next word in the output processed by the algorithm given a specific prompt. What I mean is that, in an LLM (Large Language Model) governed by binary logic, hallucinations are recurrent because language is a non-trivial logical system in which inconsistency must be able to exist without generating an explosion, and therefore, a hallucination.

The problem of machine hallucinations in current AI models is simple to understand from a logical point of view. The problem is that language is an inconsistent system, which generates triviality where everything is probable. And since current AI models boil down to calculating the statistical probability of the next word by comparing it to a database, hallucination in an AI that uses the Boolean model of matrix statistical calculation is inevitable and unavoidable. Or, in other words, the current AI model hallucinates because, in trying to assign absolute truth values to propositions, it does not allow for their refinement, so that logical explosion becomes inevitable. In short, the classical logic systems used in programming current AIs, when confronted with a contradiction, trivialize the system in such a way that every proposition becomes true – that is, a hallucination.

The advantage of using paraconsistent logic in an AI or an LLM is that it’s possible to compute “local contradictions” without generating “global trivialities.” In the example of lukewarm water, paraconsistent logic sees no contradiction in assigning the truth value “true” to both the concept of hot and the concept of cold. In other words, there is a local contradiction (hot and cold are opposite concepts), but no global triviality: the concept of lukewarm is a refinement of the proposition, such that the uncertainty about what counts as lukewarm (given that there is subjectivity in this concept) does not generate a logical explosion, insofar as an intermediate truth value is assigned between what is “absolutely true,” or 1, and what is “absolutely false,” or 0. In this sense, in the earlier example comparing three temperatures, intermediate truth values are assigned to each of the temperatures in the original proposition without any logical explosion, producing instead what paraconsistent logic calls a gentle explosion.
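One way to make “local contradiction without global triviality” concrete is to track support for and against a proposition separately, in the spirit of Belnap’s four-valued logic; the class and the numeric values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    """Independent degrees of support for and against a proposition."""
    pro: float
    con: float

    def status(self):
        if self.pro > 0.5 and self.con > 0.5:
            return "both (local contradiction, contained)"
        if self.pro > 0.5:
            return "supported"
        if self.con > 0.5:
            return "countered"
        return "neither (unknown)"

# Invented assessment of "the 35-degree water is hot":
water_is_hot = Evidence(pro=0.6, con=0.6)
print(water_is_hot.status())  # both (local contradiction, contained)
# The conflict is recorded on this one proposition; no inference rule
# lets an unrelated proposition follow from it, so nothing explodes.
```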

Critique of Pure Artificial Intelligence

As I argued above, the problem with current AI models lies in their way of thinking, where all inference is possible (the triviality problem) given their operating model, which is a mere statistical calculation predicting the next word in a proposition. The universality of word prediction through statistical inferences is a necessary and fundamental cause of machine hallucinations.

The first step in avoiding machine hallucinations lies in what paraconsistent logic calls proposition refinement. An AI, when processing a prompt, must refine it in such a way that it is possible to seek an inference that is not merely a statistical equivalence calculation comparing the prompt with the database. The way to avoid trivialization is to refine the prompt into the form of what Aristotle called a scientific syllogism, whose rules follow (a rough rule checker is sketched after the list):

  • Rule of Terms: It must contain only three terms (major, minor, and middle), each used in the same sense.

  • Middle Term Rule: The middle term should never appear in the conclusion.

  • Rule of Extension: The terms in the conclusion cannot have a greater extension than those in the premises.

  • Universal Middle Term Rule: The middle term must be universal (distributed) at least once in the premises.

  • Rule of Negatives: From two negative premises, nothing can be concluded.

  • Rule of Affirmations: Two affirmative premises must result in an affirmative conclusion.

  • Rule of Particulars: From two particular premises, nothing can be concluded.

  • Rule of the “Weak Part”: The conclusion always follows the weakest premise; that is, if there is a negative premise, the conclusion is negative; if there is a particular premise, the conclusion is particular.
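As a rough illustration of mechanical rule-checking, here is a sketch that tests a few of the structural rules above; the representation of statements is invented, and the distribution rules (the Rule of Extension, for instance) are deliberately left out to keep it short.

```python
from dataclasses import dataclass

@dataclass
class Statement:
    quantity: str   # "universal" or "particular"
    quality: str    # "affirmative" or "negative"
    subject: str
    predicate: str

def check_syllogism(major, minor, conclusion):
    """Return the list of violated rules (empty means these checks pass)."""
    violations = []
    terms = {major.subject, major.predicate, minor.subject, minor.predicate}
    if len(terms) != 3:
        violations.append("Rule of Terms: exactly three terms required")
    middle = {major.subject, major.predicate} & {minor.subject, minor.predicate}
    if middle & {conclusion.subject, conclusion.predicate}:
        violations.append("Middle Term Rule: middle term in conclusion")
    if major.quality == minor.quality == "negative":
        violations.append("Rule of Negatives: two negative premises")
    if major.quantity == minor.quantity == "particular":
        violations.append("Rule of Particulars: two particular premises")
    if (major.quality == minor.quality == "affirmative"
            and conclusion.quality != "affirmative"):
        violations.append("Rule of Affirmations: conclusion must be affirmative")
    return violations

# Barbara: all M are P; all S are M; therefore all S are P.
print(check_syllogism(
    Statement("universal", "affirmative", "M", "P"),
    Statement("universal", "affirmative", "S", "M"),
    Statement("universal", "affirmative", "S", "P"),
))  # -> []
```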

To a certain extent, prompts are already refined in a form close to the Aristotelian scientific syllogism; however, the central point for achieving, at most, a gentle explosion in the sense of paraconsistent logic lies in the application of Hempel’s paradox, whose core equivalence is formalized after the list below. The paradox states:

Inductive Logic: The principle that seeing black crows confirms that “all crows are black” is intuitive.

The Equivalence: Logically, the phrase “All crows are black” is identical to “Anything that is not black is not a crow.”

The Problem (Counter-intuitive): Following the logic above, when you see a red apple (which is neither black nor a crow), you are confirming the second statement and, consequently, the first.

Conclusion: The paradox demonstrates that inductive confirmation, based strictly on formal logic, can lead to absurd conclusions, since irrelevant objects could validate a scientific theory.
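The equivalence driving the paradox, written out formally (using “crow” to match the article’s wording; Hempel’s original uses ravens):

```latex
\forall x\,\bigl(\mathrm{Crow}(x) \rightarrow \mathrm{Black}(x)\bigr)
\;\equiv\;
\forall x\,\bigl(\lnot \mathrm{Black}(x) \rightarrow \lnot \mathrm{Crow}(x)\bigr)
```

A red apple satisfies the right-hand side (it is not black and not a crow), so by the equivalence it “confirms” the left-hand side as well.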

That is, in the logic of an LLM, not every subsequent word can be inferred from the preceding word. When an LLM is based merely on predictive statistical calculations, it is falling into the trap of intuition. The inconsistency of a purely statistical language system allows inferences that are nothing more than hallucinations generated by a combinatorial analysis calculation of word probability given by the database used in the AI training.

If we recall the paradox of white and black swans, we find a guiding thread to avoid this trap of trivial systems in a practical LLM context. When seeking the equivalent of the Aristotelian scientific syllogism, AI must assume that every conclusion is false until a true example is found. That is, the universal proposition that every swan is white is false, since black swans can exist. And why should every premise be taken as false until a true example is found? Due to Karl Popper’s principle of falsifiability.

By refining the prompt into an Aristotelian scientific syllogism, therefore, the AI, when seeking an equivalence with its database, must look for equivalences that are falsifiable and not absolute truth values.

If Kant, in *Critique of Pure Reason*, limited the scope of reason—that is, what is knowable by reason—a “Critique of Pure Artificial Intelligence” would have no problem limiting the applicability of artificial intelligence. Just as human reason is limited by sensory experience and by the very mental structures that organize the knowable reality for humans, AI will always be limited by its own algorithm and the necessary consequence of limiting what are computable and non-computable operations. An AI will always be limited, it is worth emphasizing, by its algorithm and its database. And it is due to this very limitation that General Artificial Intelligence is a logical impossibility and a theoretical oxymoron. It will always be limited to its own algorit hm and the knowledge used for its training in its database.

AGI is a logical impossibility and a theoretical oxymoron because imagination is a non-computable problem. Let us remember Sartre’s concept of imagination: a fundamental expression of human freedom, that is, the process by which humans, faced with nothingness (the absence of a pre-defined destiny), act inventively in their interaction with the world. Imagination is the human capacity for derealization, that is, the glimpsing of possibilities beyond phenomenal reality (a Hegelian concept that defines consciousness as the self-expression of reality in its historical process).

If an AI relies on predictive statistical calculations, it will only be able to compute new forms of past knowledge. The originality of new human knowledge is intangible to an AI. An AI will never have a “Eureka!” moment in which a scientist glimpses new knowledge. Its limitations, imposed by its own algorithm, will allow it, at most, to recreate what has already been thought and is stored in its database. And even if an AI, like the monkey in my earlier example that randomly types on a typewriter and eventually reproduces the Iliad, stumbled upon something new, it is not endowed with the consciousness, a human attribute, and all too human, that would let it know it had produced an innovation in some area of knowledge.

In short, it is worth emphasizing and concluding that, just as human reason is limited by sensory experience and mental structures themselves, an AI is limited by its own algorithm and database.

Another central point for understanding the limitations of artificial intelligence and the impossibility of General Artificial Intelligence lies in the debate between Russell and Frege about what constitutes “truth.” The only truth knowable to an AI is found in the process of equivalence, as defended by Russell. AI can only assign truth value to propositions according to their equivalence with the real world. Although an AI can assign truth value to a logical structure, the real world is an unattainable phenomenon for a digital machine. The truth that an AI is capable of achieving will always depend on the mediation of human reason expressed in the programming of its algorithm and its training based on a database of human knowledge.

To summarize my argument in the language of exact sciences, so that the engineers who populate the Passárgada of Silicon Valley can understand why AI will never be able to surpass human intelligence: an AI algorithm is a limit function, where the limit is a non-computable cognitive problem between the digital world and the real world. And the real world is only knowable to a digital machine through mediation from the real world itself. Proof of this lies in the need for machine learning, even in its initial stages, to be done by a person. An AI that programs itself is a logical contradiction. It’s the same as, in the example used earlier, a book that writes itself before being read by itself so that it can teach itself what it has written about itself.

You can find the full article here, due to character limitations: (DOC) A true Epistemology for Artificial Intelligence


Hmm… for now:


You are onto something. But the strongest version of your case is narrower and more precise than the article in its current form.

My overall judgment

I would not defend the article exactly as written.

I would defend a revised version built around this claim:

Current LLMs are not reliable epistemic agents because they do not cleanly distinguish among vagueness, contradiction, uncertainty, formal validity, and empirical grounding. A better architecture would be plural and layered, not purely generative.

That thesis is serious. It matches real technical work. It also fits your philosophical instincts much better than the article’s larger claims about AGI being a logical impossibility.

Why I think you are onto something

1. You correctly see that fluency is not knowledge

Modern LLMs are excellent at producing plausible language. That is not the same as having a built-in theory of when a claim is justified, when evidence is missing, when two sources conflict, or when the right answer is “I do not know.” Current research on hallucinations treats this as a major open problem. OpenAI’s 2025 paper argues that standard training and evaluation often reward guessing over acknowledging uncertainty, and AbstentionBench reports that abstention on unanswerable questions remains unsolved even for frontier models. (OpenAI)

That means your core complaint is real. The field has built very strong generators. It has not yet built a universally strong epistemic discipline on top of them. That is a fair criticism. (OpenAI)

2. Your hybrid instinct is right

The strongest technical work near your thesis does not say that raw next-token generation is enough. Logic-LM combines LLMs with symbolic solvers. LINC uses the LLM as a semantic parser into first-order logic, then hands deduction to an external theorem prover. There is even a 2025 paper that directly integrates an LLM into the interpretation function of a paraconsistent logic while aiming to preserve soundness and completeness. So your instinct that reasoning should be split across different layers is strongly aligned with current neurosymbolic research. (ACL Anthology)

That is the part of your article I find most promising. You are not merely complaining that models hallucinate. You are saying that the architecture is missing distinct components for distinct epistemic tasks. That is a good insight.

3. Your concern about contradiction is legitimate

There is real evidence that LLMs can contradict themselves or generate unstable factual claims. SelfCheckGPT is built around the idea that hallucinated facts often vary or conflict across multiple samples. Chain-of-Verification tries to reduce hallucinations by forcing a draft, then generating verification questions, answering those independently, and only then producing a revised answer. These methods do not prove your philosophical thesis, but they do support your intuition that consistency management and verification matter. (ACL Anthology)
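The sampling-consistency idea behind SelfCheckGPT can be sketched in a few lines. The `generate` function below is a hypothetical stand-in for a stochastic LLM call, and the agreement measure is deliberately crude; the real method compares sampled passages in more sophisticated ways.

```python
from collections import Counter

def generate(prompt, seed):
    """Hypothetical stand-in for a stochastic LLM call."""
    canned = ["Paris", "Paris", "Lyon"]  # invented sample outputs
    return canned[seed % len(canned)]

def consistency_score(prompt, n_samples=3):
    """Fraction of samples that agree with the most common answer.

    Low scores flag claims that vary across samples, which the
    SelfCheckGPT idea treats as evidence of possible hallucination.
    """
    samples = [generate(prompt, seed=i) for i in range(n_samples)]
    top_count = Counter(samples).most_common(1)[0][1]
    return top_count / n_samples

print(consistency_score("What is the capital of France?"))  # ~0.67 here
```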

So on the big picture, yes: you are pointing at a real weakness in current systems.

Where your article is strongest

Your best move is not the attack on “Big Tech” or the rhetoric about Pinocchio. Your best move is the underlying structure:

  1. A generative model can be useful without being a full knower.
  2. Language contains vagueness, conflict, ambiguity, and context dependence.
  3. A single undifferentiated text generator is not well suited to all of those at once.
  4. A better system should separate the tasks.

That is a much stronger case than “AI can never be intelligent.”

The article is also strongest when it pushes for a pre-output judgment layer. In engineering terms, that would mean the model does not go straight from prompt to final answer. It first classifies the kind of problem it faces. Then it decides which tools or reasoning regime should apply. That general shape is already visible in Logic-LM, LINC, verification pipelines, retrieval systems, and abstention benchmarks. (ACL Anthology)

Where your article weakens itself

This is the decisive part.

1. You conflate paraconsistent logic and fuzzy logic

This is the biggest conceptual problem in the draft.

Paraconsistent logic is about non-explosion under inconsistency. In plain language, it asks how a system can contain contradictions without collapsing into “everything follows.” Fuzzy logic is about degrees of truth for vague or imprecise predicates. In plain language, it is designed for cases like tall, young, rich, warm, likely, or near, where sharp boundaries are unnatural. (Stanford Encyclopedia of Philosophy)

Your “35 degrees water is lukewarm” example is not mainly a paraconsistency case. It is a vagueness case. “Hot” and “cold” are context-sensitive predicates with blurry thresholds. That fits fuzzy logic or many-valued semantics much more naturally than paraconsistency. If you keep using “lukewarm” as your main example of paraconsistency, technically informed readers will attack that immediately, and they will be right to do so. (Stanford Encyclopedia of Philosophy)

So the clean distinction is:

  • Fuzzy logic for borderline predicates and graded truth.
  • Paraconsistent logic for inconsistent evidence or conflicting commitments that should not trivialize the whole system. (Stanford Encyclopedia of Philosophy)

That single correction would improve your article more than anything else.

2. LLMs are not basically Boolean syllogism machines

Your article often reads as if current models work by applying binary Boolean logic internally and then exploding under contradiction. That is not how transformer LLMs are built. Transformers are neural architectures based on attention mechanisms, and the dominant autoregressive paradigm for LLMs is next-token prediction. They are continuous, probabilistic systems, not classical deduction engines in disguise. (arXiv)
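For reference, the attention operation at the heart of the transformer, where $Q$, $K$, $V$ are query, key, and value matrices and $d_k$ is the key dimension; it is a continuous, differentiable computation with no Boolean truth tables anywhere in it:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```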

This matters because your current diagnosis mislocates the failure. The main technical problem is not that the model has secretly committed itself to Aristotelian syllogism and Boolean truth tables. The problem is that a probabilistic text generator is being asked to do too many epistemically different jobs without enough explicit structure.

So I would replace this claim:

LLMs hallucinate because they are based on Boolean logic.

with this claim:

LLMs hallucinate because probabilistic generation alone is being used where tasks also require grounding, conflict management, calibration, and abstention.

That version is much stronger.

3. Your causal story about hallucinations is too simple

You often write as if hallucinations mainly come from reinforcement learning, stochasticity, or the attempt to simulate creativity. The current literature is broader than that. Hallucination research now treats the problem as multi-causal. It spans pretraining data, fine-tuning, alignment, prompting, retrieval quality, decoding choices, evaluation methods, and weak incentives for uncertainty. OpenAI’s paper emphasizes “rewarding guessing,” and surveys explicitly treat hallucination as a broad taxonomy with multiple causes and mitigation strategies. (OpenAI)

So I would not say RLHF is the cause. I would say it is one factor in a larger system that still does not sufficiently reward truthfulness, calibration, and refusal.

4. Your AGI impossibility claims are much too strong

This is where the article moves from “provocative and interesting” to “overreaching.”

Saying that AGI is a logical impossibility, that imagination is non-computable, or that AI can never produce genuinely original knowledge is not something your article actually proves. Those are very large philosophical claims. There is an active field of computational creativity, with serious surveys on machine creativity and creative systems. That does not prove machines will equal or exceed human minds. But it does show that the impossibility claim is not established just by saying “algorithms are algorithms.” (ACM Digital Library)

Here I would advise restraint. You do not need those impossibility claims to make your best argument. They make the piece sound grander, but they make it less defensible.

5. Your theory of truth is presented too narrowly

The article often writes as if the relevant account of truth is basically settled and then applies that framework directly to AI. Philosophically, that is too quick. Even leaving aside deeper debates, the practical problem in AI is not only “what is truth?” It is also:

  • what counts as evidence,
  • what counts as support,
  • what counts as conflict,
  • what counts as insufficient information,
  • and when the system should abstain.

Those are epistemic and operational questions as much as metaphysical ones. That is why I think the article is stronger as a paper about epistemic architecture than as a paper about the essence of truth.

The key distinction your paper needs

Your article currently treats several different phenomena as if they had one root. They do not.

You are really discussing at least four different things.

A. Vagueness

Examples: hot, cold, lukewarm, old, likely, near, safe.
Best fit: fuzzy logic, many-valued semantics, context-sensitive semantics. (Stanford Encyclopedia of Philosophy)

B. Inconsistency

Examples: one source says X, another says not-X; the model has conflicting retrieved facts; two commitments cannot both be maintained.
Best fit: paraconsistent logic or other non-explosive approaches. (Stanford Encyclopedia of Philosophy)

C. Uncertainty

Examples: not enough information, stale information, false-premise question, underspecified question.
Best fit: calibration, confidence estimation, abstention. (OpenAI)

D. Formal deduction

Examples: if-then structure, quantifiers, consistency-sensitive inference, proof tasks.
Best fit: symbolic logic, theorem provers, constraint solvers. (ACL Anthology)

Once you split the problem this way, your article becomes much more powerful. Instead of saying “paraconsistent logic solves LLM hallucinations,” you can say:

LLM reliability requires different formal treatments for different failure modes.

That is a serious claim.

The philosophical frame that fits your case best

I do not think your best frame is “replace current AI with paraconsistent logic.”

I think your best frame is logical pluralism.

Logical pluralism is the view that more than one logic can be correct, depending on what notion of validity or consequence relation is at issue. For your project, that is ideal. It lets you say that no single formal regime should be expected to handle all linguistic and epistemic situations equally well. Different subproblems call for different formal treatments. (Stanford Encyclopedia of Philosophy)

That gives you a much stronger architecture:

  • use symbolic logic for strict deduction,
  • use fuzzy logic for vague predicates,
  • use paraconsistent logic for conflicting evidence,
  • use retrieval and verification for empirical questions,
  • use abstention for underdetermined questions. (Stanford Encyclopedia of Philosophy)

This is the version of your idea that I think experts would take seriously.

What I would keep from your article

I would keep these points.

Keep 1

“Mere generation is not enough.”
Yes. Strong point.

Keep 2

“Current systems need an explicit judgment layer.”
Yes. Very good point.

Keep 3

“Contradictions should not globally trivialize a system.”
Yes. Good and technically meaningful.

Keep 4

“A useful AI architecture should distinguish kinds of questions before answering.”
Yes. Very strong idea.

Keep 5

“Human-like fluency should not be confused with warranted knowledge.”
Yes. Central and correct.

What I would cut or weaken

Cut 1

“AI is based on Boolean logic, therefore hallucination is inevitable.”

Too simple. Technically inaccurate. (arXiv)

Cut 2

“Paraconsistent logic or fuzzy logic” as if they were interchangeable.

They are not. (Stanford Encyclopedia of Philosophy)

Cut 3

“RLHF is the cause of hallucinations.”

Too strong. The evidence points to multiple causes. (OpenAI)

Cut 4

“AGI is a logical impossibility.”

You have not shown that. It invites objections that distract from your best argument. (ACM Digital Library)

Cut 5

The stronger rhetorical attacks on engineers and companies.

They add heat, but they reduce credibility.

What your strongest revised thesis would be

Here is the version I think is best:

Current LLMs should not be treated as self-sufficient epistemic agents. They are probabilistic language generators that can be highly useful, but they need a plural architecture around them. Formal deduction should be delegated to symbolic tools. Vague predicates should be handled with graded or context-sensitive semantics. Conflicting information should be managed by non-explosive reasoning. Empirical claims should be grounded in retrieval and verification. Unanswerable questions should trigger abstention rather than confident guessing.

That is clear. It is defensible. It matches actual technical directions in the field. (ACL Anthology)

What a concrete architecture could look like

If you want your article to move from philosophical essay to serious proposal, I would sketch something like this (a code sketch of the whole pipeline follows Step 4):

Step 1. Classify the prompt

Is this:

  • factual,
  • deductive,
  • vague,
  • conflicting,
  • normative,
  • or underdetermined?

Step 2. Route by type

  • Deductive → theorem prover or symbolic solver
  • Factual → retrieval plus reference checking
  • Vague → fuzzy or context-sensitive interpretation
  • Conflicting → paraconsistent handling of local inconsistency
  • Underdetermined → abstain or request missing context

Step 3. Verify before final output

Use something like a verification pass. The general pattern behind Chain-of-Verification is good here. SelfCheckGPT is also relevant as a cheap instability detector. (ACL Anthology)

Step 4. Return not just an answer, but an epistemic status

For example:

  • supported,
  • weakly supported,
  • conflicting sources,
  • context-dependent,
  • unknown.
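Here is a minimal sketch of how Steps 1 through 4 could hang together; every function is a hypothetical stub (a real system would replace each with a model call, a retriever, or a solver), so this shows the shape of the pipeline, not an implementation.

```python
from typing import NamedTuple

class Answer(NamedTuple):
    text: str
    status: str  # "supported", "conflicting sources", "unknown", ...

def classify(prompt):
    """Step 1 (stub): label the prompt type. A real system would use a
    trained classifier or an LLM call, not a keyword test."""
    return "underdetermined" if "why" in prompt.lower() else "factual"

def route(prompt, kind):
    """Step 2 (stub): dispatch to the regime suited to the prompt type."""
    handlers = {
        "deductive": lambda p: Answer("proved by symbolic solver", "supported"),
        "factual": lambda p: Answer("grounded in retrieval", "supported"),
        "vague": lambda p: Answer("graded interpretation", "context-dependent"),
        "conflicting": lambda p: Answer("sources disagree", "conflicting sources"),
        "underdetermined": lambda p: Answer("cannot answer as asked", "unknown"),
    }
    return handlers.get(kind, handlers["underdetermined"])(prompt)

def verify(answer):
    """Step 3 (stub): a Chain-of-Verification-style pass would re-derive
    the claims independently and downgrade the status on disagreement."""
    return answer

def respond(prompt):
    """Step 4: return not just an answer but its epistemic status."""
    return verify(route(prompt, classify(prompt)))

print(respond("Why is water cold at 20 degrees and hot at 50?"))
# Answer(text='cannot answer as asked', status='unknown')
```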

This is what your article is really aiming for. Not just “better answers,” but better judgment.

My view on your use of Kant, Aristotle, Popper, and philosophy of language

Your philosophical direction is better than your engineering description.

  • Kant is useful if you use him to talk about the conditions under which a system can judge, not as a general anti-AI slogan.
  • Aristotle is useful if you use him for distinctions among forms of inference and for the need to avoid careless equivocation.
  • Popper is useful for empirical claims and scientific conjectures, but not every user prompt is a falsifiable scientific hypothesis.
  • Philosophy of language is very useful here because many AI failures are failures of context, reference, vagueness, scope, presupposition, and support.

So yes, philosophy belongs in this debate. But it works best when it sharpens distinctions, not when it substitutes for technical description.

My blunt conclusion

You are not completely wrong. You are identifying a real problem. In fact, you are identifying one of the most important problems in current AI: the gap between language generation and epistemic responsibility.

But your article currently overstates and conflates too much.

The three biggest fixes are:

  1. Separate fuzzy logic from paraconsistent logic.
  2. Stop describing LLMs as if they were basically Boolean engines.
  3. Replace the “AGI is impossible” conclusion with a more precise claim about the limits of current LLM-style epistemic architecture.

If you make those changes, your paper becomes much stronger.

My final assessment is this:

  • Your intuition is good.
  • Your main categories need tightening.
  • Your best contribution is architectural and epistemological, not metaphysical.
  • Your strongest frame is pluralism, not one replacement logic.
  • Your most defensible thesis is that AI needs a layered theory of judgment.

That is a paper worth writing.

Hey john6666, I really appreciate your feedback. You gave me much to think about and a path to improve the article.

As I said, programming and computer science are not my area of study, but as a pet project I tried to model an AI based on what I stated in my article. I will be posting the model here really soon, so I invite you to join that debate too.


If you are interested, john, please join this discussion too: 0danielfonseca/Doninha · Doninha is a proof of concept of a new kind of AI